Looking Back, Moving Forward
Sida Evaluation Manual


Authors: Stefan Molund and Göran Schill

Published by Sida 2004
Art. no: SIDA3753en
ISBN: 91-586-8462-X
Design: Hemma Annonsbyrå
Production: Jupiter Reklam
Illustrations: Gaute Hanssen
Photo: Bildhuset, eyeQnet, ibl bildbyrå, IFAD, Lars Hansen, Phoenix, Pressens bild, Tiofoto
Printed by Edita, Stockholm


Part One: Concepts and Issues

Chapter 1: What is Evaluation?
1.1 The concept of evaluation
1.2 Monitoring and evaluation
1.3 Accountability and learning
1.4 Use and abuse
1.5 Partnership
1.6 External and internal evaluation
1.7 Participatory evaluation
1.8 Quality standards

Chapter 2: Evaluation Criteria
2.1 Principal evaluation criteria
2.2 Further criteria
2.3 Standards of performance
2.4 Effectiveness
2.5 Impact
2.6 Relevance
2.7 Sustainability
2.8 Efficiency

Chapter 3: Evaluating Poverty Reduction
3.1 Evaluating poverty reduction
3.2 Implications of multidimensionality
3.3 The poor are not a homogenous category
3.4 Direct and indirect intervention strategies
3.5 Integrating donor support with partner country strategies
3.6 Participation and empowerment
3.7 Gender mainstreaming
3.8 The environmental perspective

Boxes
Box 1: Monitoring and evaluation
Box 2: Levels of monitoring
Box 3: Process accountability questions
Box 4: Varieties of use
Box 5: Joint evaluation
Box 6: Sida evaluations
Box 7: Expert evaluation and participatory evaluation
Box 8: Criteria for evaluation of humanitarian assistance
Box 9: What is a baseline study?
Box 10: Types of impact
Box 11: Types of negative unintended effects
Box 12: Evaluation and HIV/AIDS
Box 13: Assessing development agency poverty reduction efforts
Box 14: Gender issues in evaluation
Box 15: Evaluating environmental consequences

Part Two: The Evaluation Process Step by Step

Step 1: Initial Considerations
1.1 Involving stakeholders
1.2 Defining the evaluation purpose
1.3 Establishing an organisation for evaluation management

Step 2: Preparation of Evaluation Tasks
2.1 Reviewing the intervention
2.2 Formulating evaluation questions
2.3 Assessing evaluability
2.4 Making a pre-evaluation study
2.5 Estimating evaluation costs
2.6 Writing terms of reference
2.7 Recruiting evaluators

Step 3: The Evaluation Research Phase
3.1 The inception phase
3.2 Supporting the evaluators during the research phase

Step 4: Reporting and Dissemination
4.1 Checking the draft report
4.2 Accommodating stakeholder comments
4.3 Dissemination of evaluation results
4.4 Publishing the evaluation report

Step 5: Management Response
5.1 Assisting in the formulation of Sida's management response
5.2 Facilitating dialogue with co-operation partners

Annex A: Format for Terms of Reference
Annex B: Format for Sida Evaluation Reports
Annex C: Glossary of Key Terms in Evaluation and Results-Based Management


Useful Evaluation Sites on the Internet

www.oecd.org Website of the OECD/DAC Network on Development Evaluation. Contains useful downloads as well as links to the websites of the evaluation departments of multilateral and bilateral development organisations.

www.worldbank.org Website of the World Bank with its Operations Evaluation Department (OED). Contains a wealth of useful publications.

www.eldis.org Gateway for information on development issues hosted by the Institute of Development Studies (IDS), University of Sussex. Supported by Sida and other development agencies. A well stocked database with many useful online publications relevant to evaluators.

www.alnap.org The Active Learning Network for Accountability and Performance in Humanitarian Action – an international interagency forum working to improve learning, accountability and quality across the humanitarian sector. Features a standard evaluation manual for humanitarian assistance.

www.policy-evaluation.org WWW Virtual Library: Evaluation. Worldwide gateway for evaluation information with links to evaluation societies, national courts of audit, and evaluation departments in international organisations and the EU. Plenty of educational materials for downloading.

www.mande.co.uk MandE News. News service focusing on the development of methods for monitoring and evaluation in development projects and programmes. Much information about publications and events.

Instructions for Use

This is a manual for evaluation of development interventions. It is intended primarily for Sida staff, but may also be useful to Sida's co-operation partners and independent evaluators engaged to evaluate activities supported by Sida.

It consists of two main parts. The first deals with the concept of evaluation, roles and relationships in evaluation, and the evaluation criteria and standards of performance employed in development co-operation. The second is a step-by-step guide for Sida programme officers and others involved in the management of evaluations initiated by Sida or its partners.

Although it can be read in its entirety all at once, most readers will probably use it in a piecemeal fashion. A reader who wants a rapid overview of basic concepts in the evaluation of development interventions should consult part one. A reader who is engaged in the management of an evaluation may turn directly to part two and return to part one as need arises.

Some familiarity with basic development co-operation terminology is helpful, but not essential. What is required, on the other hand, is a readiness to engage with new concepts, some of which may seem complicated at first glance. A certain tolerance of ambiguity is also useful. Although evaluation is a professional field of its own, many of its key terms do not have standard definitions.

The manual adheres to the terminological conventions recommended by the OECD/DAC Glossary of Key Terms in Evaluation and Results-Based Management, which is included as an annex. Yet, in some cases it also takes notice of alternative usage. For Sida evaluation managers who must agree on conceptual matters with co-operation partners and professional evaluators, some knowledge of the variability of international evaluation terminology can be very useful.

Two limitations should be noted. One is that this is a handbook for evaluation managers rather than evaluators. While it deals extensively with conceptual and organisational matters, it has little to say about the technicalities of the evaluation research process. Still, evaluators may find it useful as a reference point for discussions about evaluation with Sida and its partners.

A second limitation is that it focuses on matters that are common to all or most kinds of development evaluations. It does not deal with the peculiarities of the different sub-fields of development evaluation. Readers seeking information about matters that are unique to the evaluation of humanitarian assistance, research co-operation, programme support, and so on, must turn to other manuals.


Part One: Concepts and Issues


Chapter 1: What is Evaluation?

This chapter explains the concept of evaluation and deals with some basic issues concerning evaluation in development co-operation. More specifically, it answers the following questions:

■ What is evaluation?

■ What is monitoring and how does it differ from evaluation?

■ What are the purposes and uses of evaluation?

■ How does evaluation fit into a framework of development co-operation partnership?

■ How can stakeholders participate in evaluations?

■ What is an external evaluation?

■ What is an internal evaluation?

■ What is a participatory evaluation?

■ What is a good evaluation?


1.1 The concept of evaluation

The word evaluation has many meanings. In the widest sense it is:

"…the process of determining the merit, worth, or value of something…"[1]

So defined, evaluation covers a wide range of activities, many of which can also be described by other words, such as appraise, assess, examine, judge, rate, review, and test. In development co-operation, however, the word evaluation is understood more narrowly. According to the standard definition of the Development Assistance Committee (DAC) of the OECD an evaluation is:

"…a systematic and objective assessment of an ongoing or completed project, program or policy, its design, implementation and results."[2]

The definition in Sida's Evaluation Policy[3] is much the same:

"…an evaluation is a careful and systematic retrospective assessment of the design, implementation, and results of development activities."

These definitions differ from the broader one in two important respects:

■ An evaluation is an assessment of ongoing or completed activities, not activities that are still at the planning stage. Project appraisals, feasibility studies, and other assessments of proposed future activities are sometimes called ex ante evaluations or prospective evaluations. Without further specification, however, the term evaluation refers solely to retrospectively based assessments of ongoing or completed activities.

■ No assessment is an evaluation unless it satisfies certain quality standards. To be considered an evaluation, an assessment must be carried out systematically and with due concern for factual accuracy and impartiality.

Evaluation, as we use the term, is first and foremost a reality test, a learning mechanism that provides feedback on the results of action in relation to prior objectives, plans, expectations or standards of performance. The requirement that evaluations should be systematic and objective derives directly from the function of evaluation as a reality test. To serve as a probe on the realism and accuracy of plans and expectations, an evaluation must use sound and transparent methods of observation and analysis.

[1] Scriven, M. Evaluation Thesaurus (4th ed.). Newbury Park, CA: Sage, 1991, p. 139.
[2] OECD/DAC. Development Assistance Manual. Paris, 2002.
[3] www.sida.se/publications

The term evaluation is sometimes used exclusively for assessments made at the very end of an activity or later. Common in project cycle management, this understanding of the term reflects the idea of an evaluation as a comprehensive stocktaking when an activity has run its entire course. In this manual, however, the term covers assessments of both ongoing and completed activities.

An evaluation of activities in progress is referred to as an interim evaluation. An evaluation that is carried out when an activity is completed is an end-of-project or end-of-programme evaluation. An evaluation at some later point in time is known as an ex-post evaluation. The term review is often used to describe an assessment that is less ambitious in scope or depth than an evaluation.

With regard to focus, there are two broad types of evaluation, process evaluation and impact evaluation. A process evaluation deals with the planning and implementation of an activity as well as with outputs and other intermediary results. An impact evaluation, by contrast, is mainly concerned with the effects – outcomes and impacts – brought about through the use of outputs. Whereas interim evaluations tend to be mainly process evaluations, end-of-programme evaluations and ex-post evaluations are more likely to focus on effects.

The object of evaluation – what textbooks refer to as the evaluand – provides a further ground for classification. In development co-operation we talk about project evaluations, programme evaluations, sector evaluations, country evaluations, and so forth. Definitions can be found in the glossary. Development intervention evaluation is a blanket term for all of them.

1.2 Monitoring and evaluation

Evaluation should be distinguished from monitoring, the continuous follow-up of activities and results in relation to pre-set targets and objectives. Monitoring is sometimes regarded as a type or component of evaluation, but in development co-operation they are separated. The distinction is primarily one of analytical depth. While monitoring may be nothing more than a simple recording of activities and results against plans and budgets, evaluation probes deeper.

“Monitoring and evaluation are interdependent”

While monitoring provides records of activities and results, and signals problems to be remedied along the way, it may not be able to explain why a particular problem has arisen, or why a particular outcome has occurred or failed to occur. To deal with such questions of cause and effect, an evaluation may be required. An evaluation may also help us gain a better understanding of how a development intervention relates to its social and cultural environment, or it can be used to examine the relevance of a particular intervention to broader development concerns. Furthermore, while the objectives of a development activity are unchallenged in monitoring, and progress indicators are assumed to be valid and relevant, evaluations may question both. Box 1 presents a more extensive list of contrasts.

Box 1: Monitoring and evaluation (MONITORING / EVALUATION)

■ Continuous or periodic / Episodic, ad hoc
■ Programme objectives taken as given / Programme objectives assessed in relation to higher-level goals or to the development problem to be solved
■ Pre-defined indicators of progress assumed to be appropriate / Validity and relevance of pre-defined indicators open to question
■ Tracks progress against a small number of pre-defined indicators / Deals with a wide range of issues
■ Focus on intended results / Identifies both unintended and intended results
■ Quantitative methods / Qualitative and quantitative methods
■ Data routinely collected / Multiple sources of data
■ Does not answer causal questions / Provides answers to causal questions
■ Usually an internal management function / Often done by external evaluators and often initiated by external agents

Sida regards evaluation as a complement to monitoring that should be used selectively to deal with problems that monitoring cannot adequately handle.[4] For an evaluation to be feasible, however, monitoring data may be necessary. If an intervention has not been properly monitored from the start, it may not be possible to evaluate it satisfactorily afterwards. Just as monitoring needs evaluation as its complement, evaluation requires support from monitoring.

[4] Sida at Work – A Guide to Principles, Procedures, and Working Methods. Stockholm, 2003.

As shown in Box 2, an intervention can be monitored at different levels. The monitoring of inputs and outputs, input-output monitoring for short, keeps track of the transformation of financial resources and other inputs into goods and services. The monitoring of outcomes and impacts – outcomes more often than impacts – seeks to register the intended effects of delivered goods and services on targeted groups or systems. Outcome monitoring, sometimes referred to as beneficiary monitoring, measures the extent to which intended beneficiaries have access to outputs and are able to put them to good use.

Box 2: Levels of monitoring

IMPACTS Effects on life chances and living standards (inter alia)
– infant and child mortality
– prevalence of a specific disease

OUTCOMES Access, usage, and satisfaction of users
– number of children vaccinated
– percentage of vaccinated children within 5 km of health centre

OUTPUTS Goods and services produced
– number of nurses
– availability of medicine

INPUTS Financial and other resources
– spending in primary health care

Source: Africa Forum on Poverty Reduction Strategy. June 5–8, 2000, Côte d'Ivoire.

In development co-operation, more effort is usually devoted to input-output monitoring than to the monitoring of outcomes and impacts. But both types of monitoring are important. Without carefully tracking the processes leading from inputs to outputs, a programme may fail to reach its production targets. Without outcome-impact monitoring, it runs the risk of producing outputs that have little or no developmental effect. For the purpose of evaluation, information about outcomes and impacts is crucially important, of course.



1.3 Accountability and learning

In section 1.1, evaluation is described in general terms as a research tool providing feedback on the results of action with regard to prior objectives, plans, expectations or standards of performance, but nothing further is said about the practical purpose for which it can be employed.

In development co-operation, the standard answer to the question of the purpose of evaluation is that it serves two broad types of ends: accountability and learning. As briefly discussed in section 1.4 below, evaluation can also be used for other purposes, but these two are usually taken to be the main ones.

Accountability is a relationship which exists where one party – the principal – has delegated tasks to a second party – the agent – and the latter is required to report back to the former about the implementation and results of those tasks. As commonly understood, accountability consists of two separate components. One is the agent's answerability vis-à-vis the principal, the other the power of the principal to initiate remedial action or impose sanctions in case the agent fails to carry out his obligations as they have been agreed.

The content of the reporting varies from case to case. An important distinction is that between financial accountability, which is answerability for the allocation, disbursement and utilisation of funds, and performance accountability, which concerns results. Evaluation, in so far as it serves as a tool for accountability, provides information for reporting about performance and results. It is less concerned with financial accountability, which is mainly the province of auditors and accountants. As explained in section 1.2, evaluation is particularly useful when results must be analysed in depth. For routine reporting of outputs and easily measured outcomes, monitoring may serve equally well.

In general terms, what an evaluation for accountability seeks to find out is whether the organisations that are responsible for the evaluated intervention have done as good a job as possible under the circumstances. This means trying to find out if and to what extent the intervention has achieved the results that it was intended to achieve or that it could reasonably have been expected to achieve. It is also likely to involve an assessment of the quality of the processes of planning and implementation. When results are difficult to measure – a common situation in development co-operation – an evaluation initiated for purposes of accountability may focus entirely on questions about process. Box 3 lists some of the issues that a process-oriented accountability assessment might consider.

Box 3: Process accountability questions

DOES (OR DID) THE INTERVENTION:

■ Ask the difficult questions?
■ Maintain a focus on outcomes and impacts?
■ Identify problems and limitations as well as satisfactory performance?
■ Take risks rather than play safe?
■ Actively seek evaluation feedback?
■ Actively challenge assumptions?
■ Identify mistakes, and how these can be rectified?
■ Act on the implications of evaluations?
■ Generate lessons that can be used by others?
■ Have effective routines for planning, management, and use of monitoring data?

Adapted from Burt Perrin, Towards a New View of Accountability. Ms. European Evaluation Society, Seville, 2002.

When the purpose of evaluation is learning, on the other hand, the study is expected to produce substantive ideas on how to improve the reviewed activity or similar activities. Although learning, in itself, may be regarded as valuable, its real importance lies in the translation of new knowledge into better practice. If we did not expect learning to enhance organisational performance or contribute to practical improvement in some other way, we would hardly see it as a justification for spending money and effort on evaluations. Evaluations that are primarily meant to contribute to learning are often called formative evaluations, whereas evaluations for accountability are described as summative evaluations.

Note that the distinction between accountability and learning, summative and formative, refers to the purpose and use of evaluations rather than to their contents. An evaluation that is meant to satisfy a requirement for accountability may of course raise very different questions than an evaluation intended for learning. Still, there are many evaluation questions that are likely to be important for both purposes. For example, most of the questions in Box 3 could appear in an evaluation intended for learning as well as in an accountability evaluation. Indeed, in many cases different audiences may use one and the same evaluation for different practical ends. It is not unusual that an evaluation, used by those who are responsible for the evaluated activity for improvement and learning, serves a purpose of accountability in relation to principals and the general public. As evaluator Robert Stake is reported to have said[5]:

"When the cook tastes the soup, that's formative; when the guests taste the soup, that's summative"

[5] Scriven, M. Evaluation Thesaurus (4th ed.). Newbury Park, CA: Sage, 1991, p. 169.

1.4 Use and abuse

Note also that evaluations can be used for purposes other than those of accountability and learning. Box 4 contains a classification of actual uses of evaluations adapted from a standard textbook. While the first two of the uses – the instrumental and the conceptual – are readily subsumed under accountability and learning, the remaining ones do not fit these concepts equally well. The ritual use is clearly the most divergent, but using an evaluation to seek support for a decision that has already been made on other grounds, as in the case of legitimisation, or using it for tactical ends, could also be inconsistent with both accountability and learning.

Box 4: Varieties of use

INSTRUMENTAL USE
The findings and recommendations of the evaluation are directly used as an input to decision-making concerning the evaluated intervention or some related intervention or set of interventions. The decision may follow the conclusions or recommendations of the evaluation, but even if it does not, the evaluation is seriously taken into account in the deliberations leading to the decision.

CONCEPTUAL USE
The use of evaluation to obtain a deeper understanding of an activity or type of activity. It is expected that the evaluation will feed into ongoing processes of organisational learning and eventually have a useful impact on practice, but the evaluation itself is not directly used in decision-making.

LEGITIMISATION
The evaluation is used to mobilise authoritative support for views that are held regardless of the evaluation. It is intended to justify a particular interest, policy, or point of view, rather than to find answers to unresolved questions or provide solutions to outstanding problems.

TACTICAL USE
The evaluation is used to gain time, evade responsibility, or, perhaps, create an opportunity for dialogue among stakeholders. It is intended to convince users that matters are under control; that the programme is responsibly administered; and so on. The impact of the evaluation, if any, derives from the fact that it is carried out, rather than from its findings and conclusions.

RITUAL USE
There are two kinds of ritual use. In the one case, where ritual is understood as nothing more than prescribed formal behaviour – 'empty ritual', as it were – evaluations are carried out for no other reason than that they should be done. In the other case, where rituals are understood more anthropologically as symbolic expressions of collective beliefs and values, a ritual use of evaluation is one whereby the participants remind themselves of the larger meaning of the evaluated activities. The evaluation gives them an opportunity to focus on the 'big picture'.

PROCESS USE
Ritual use and tactical use can be seen as special cases of a wider category of use called process use. When we talk about process use, we refer to the use of the evaluation process itself rather than its products. Evaluation processes can be used to create shared understanding or boost confidence and morale. Although evaluations may be designed with such purposes in view, process use is often an unintended by-product of an evaluation serving another purpose.

Adapted from Evert Vedung. Public Policy and Program Evaluation. New Brunswick: Transaction Publishers, 1997.

The box may serve as a reminder that not every reason for conducting an evaluation is equally legitimate. Evaluation is based on principles of impartiality, transparency, and open discussion, but these values can be subverted by other concerns. As evaluations may have important implications for stakeholders' reputations and resources, this is understandable. Still, we should remember that there is a politics of evaluation, and that evaluation is something more than just a useful tool for social engineering.

We should also consider the possibility that evaluations may be initiated for purely formal reasons, or because it is vaguely felt that they ought to be done. Although it is sometimes claimed that many evaluations are conducted for such reasons, we should be sceptical of this proposition. In most cases where such a claim is made, the purpose of the evaluation could probably be more accurately described as an instance of tactical use, legitimisation or some of the other types of use discussed in this chapter. Still, the weaker proposition that many evaluations are carried out routinely and without a sufficiently considered purpose is all the more plausible. In this manual we stress the point that clearly and transparently defining the purpose of the study is the first and most important of all the steps in a process of planning and conducting an evaluation.

For a development co-operation agency such as Sida, clearly articulating the purpose of the evaluations that it initiates is always advisable. What Sida's co-operation partners and other stakeholders will be particularly concerned about is not whether Sida's purpose is accountability, learning or something else, but, more precisely, how the evaluation is intended to be used, and how this use is likely to affect them. Transparency on this point is essential. For development agencies the best way of promoting transparency is to invite partners and other key stakeholders to participate in the making of the evaluation.

1.5 Partnership

Evaluations of development interventions are carried out in a context of partnership. The concept of partnership includes notions of shared objectives, mutual transparency, and dialogue. As understood in the development community, it is closely related to the idea of ownership. In a development partnership the government and people of the developing country are the effective owners of the activities encompassed by the relationship, whereas the role of external donors is a more restricted one of providing financial and technical support.

"Evaluation provides opportunities to stand back and reflect"

The concepts of partnership and ownership have important implications for evaluation. Among other things, they mean that evaluations should serve the interests of all the parties to the relationship. As evaluations provide opportunities to stand back and reflect, they can be usefully employed to further the partnership dialogue. Evaluations can also provide the general public in the respective countries with information about the results of the projects and programmes covered by the relationship. Even when one of the partners wishes to make an evaluation that is of little interest to the other, there are partnership issues to be considered.

It is inherent in these concepts that the recipient country should assume a major responsibility for monitoring and evaluation of projects and programmes. In many countries, however, the idea of partner country ownership is more of a goal than actual reality. This is true at all stages of the development co-operation process, but perhaps especially at the stage of evaluation. While donors tend to use evaluations primarily for their own purposes of accountability and learning, their developing country partners often lack the capacity and incentives to undertake their own evaluations. Like donor agencies, organisations in developing countries often see evaluations mainly as instruments of donor control.

To achieve a real partnership in evaluation, developing countries must strengthen their capacities for monitoring and evaluation and seek to create political and administrative cultures that are conducive to evaluation and results-based styles of management. Donors, on the other hand, must support those efforts, and at the same time seek to reform existing practices of monitoring and evaluation so that they better serve the interests of their partners. On both sides, learning new ways of organising and conducting evaluations goes hand in hand with unlearning practices that have become dysfunctional and obsolete.

In many respects the process of change is already underway. An important development is the partial shift in the architecture of development co-operation from a fragmented system of project support to a more integrated system of programme support where donors align their contributions with partner country policies and seek to harmonise procedures between themselves. This change affects monitoring and evaluation along with everything else. In sector-based programmes, where the inputs of several donors cannot be separately evaluated, various forms of joint evaluation are becoming the norm (Box 5). For the time being, joint evaluations tend to be dominated by donors, but as the evaluation capacity of partner countries grows the balance will change.

Box 5: Joint evaluation

An evaluation that is sponsored by several donor agencies and/or partner organisations is known as a joint evaluation. It is useful to distinguish between joint multi-donor evaluations and joint donor-recipient evaluations. An evaluation of the latter type can involve one or several donor organisations along with one or more organisations from partner countries. A joint evaluation can be methodologically conventional, and is not the same thing as a so-called participatory evaluation, where primary stakeholders and others are actively involved in the evaluation research process.

Understandably, joint evaluations tend to focus on matters of common interest to their sponsors. A joint evaluation, including both donors and recipients, is a suitable format for assessing sector-wide approaches (SWAps) and other programmes where the contributions of different participating organisations cannot or should not be separated from each other. Interventions that are supported by a single donor organisation are normally best evaluated by that organisation in collaboration with the concerned partner country organisation.

Joint evaluations are used for learning as well as for accountability, although they are perhaps especially useful when the evaluation purpose is to identify good practice and draw useful lessons for the future. Since a joint evaluation builds on the experiences of several organisations, it is likely to have a wider and, in some cases, more powerful impact than an evaluation commissioned by a single organisation.

A useful discussion about joint multi-donor evaluations is provided in the OECD/DAC paper Effective Practices in Conducting a Multi-Donor Evaluation. Paris, 2001.

While new types of evaluation are emerging, established ones will remain in use, although perhaps in modified forms. Like many development agencies, Sida to a large extent is still a project-based organisation, and despite the ongoing shift towards programme support, this is not likely to change radically in the short run. Thus, Sida will continue to make project-level evaluations, in collaboration with its partners or alone. Such evaluations provide inputs to decisions about resource allocation and processes of organisational learning. They also produce some of the information that Sida needs for its own accountability purposes.

"Evaluations should be carried out in a spirit of partnership"

It is Sida policy that evaluations should be carried out in a spirit of partnership. When Sida initiates an evaluation, the evaluation manager should invite partner country organisations to participate in the evaluation process. In some cases, the partner organisation will prefer not to become actively involved; in other cases it will welcome the invitation. Questions that tend to be important to donors are often just as important to their co-operation partners, and vice versa. The results of jointly supported interventions are clearly a matter of common concern. Questions about aid effectiveness – the effects of aid as opposed to the effects of the interventions that aid helps finance – and the development co-operation relationship itself should also be important to both the parties.

Note that donors should avoid embarking on evaluations without first assessing the utility of those evaluations in relation to their cost to partner country organisations. Where there are many donors, each with its own set of projects or programmes, evaluations commissioned by donors can be a burden on the weak administrative systems of developing countries. Among donors pre-occupied with the success of 'their' projects, this is easily overlooked. One of the arguments in favour of closer co-operation in evaluation – between donors as well as between donors and their developing country partners – is that it would reduce the transaction costs of aid and give overburdened partner country organisations more time to deal with other important matters.


1.6 External and internal evaluation

There are other questions about roles and relationships in evaluations besides those about partnership between donor agencies and their co-operation partners. The following two are particularly important:

➔ Who should formulate the evaluation questions?

➔ Who should design and implement the evaluation research process and take responsibility for its outputs?


The first of these questions may seem to be included in the partnership issue discussed in the previous section, but here we are looking beyond the donor-recipient partnership to the broader question of stakeholder participation in general. In every development initiative there is a whole set of stakeholders, including the ordinary citizens who benefit from it or in some other way are affected by it, the people referred to as primary stakeholders in this manual. Identifying important stakeholder groups and considering if and how they should participate in the formulation of the evaluation questions and subsequent stages of the evaluation process is a key step in every evaluation.

Depending on how the questions above are answered, an evaluation will approximate one of three alternative models of evaluation:

■ External evaluation

■ Internal evaluation

■ Participatory evaluation

External and internal evaluation are discussed in this section, participatory evaluation in the next.

When an evaluation is said to be external or independent, what is usually meant is that the evaluators stand outside the evaluated activities and have no stake in the outcome of the evaluation. In a stronger sense of the term, an evaluation is regarded as independent when the organisation that formulates the evaluation questions and recruits the evaluators is also independent of the evaluated activity. An evaluation of a Sida-supported development intervention commissioned by an independent institution of oversight and accountability, such as the Swedish Supreme Audit Institution, would be an example.

The distinction between the two types of independent evaluation is not unimportant. An evaluation where the prerogative of asking the questions rests with persons or organisations that are external to the intervention under review will clearly carry more weight as an independent assessment than one where it is vested with organisations who are responsible for the intervention. In many cases, the perspectives and interests influencing the formulation of the evaluation questions can be a more important source of bias than the methods used in answering them.

Stakeholder participation tends to be limited in both types of external evaluation. Indeed, limited participation is an essential part of what we mean when we speak about independent evaluation. Independent evaluations are based on a clear and categorical line of demarcation between those who conduct the evaluation and those who are the object of evaluation. If stakeholders interfere with the evaluators' working methods or seek to prescribe how findings should be interpreted, the independence of the evaluation is put in jeopardy.

Internal evaluations differ from external evaluations in that the evaluators are organisationally attached to the evaluated activities. In other respects external and internal evaluations tend to be similar. Both are based on a sharp distinction between the evaluators and the evaluated during research and both rely on conventional social science research methods. In many cases there are institutional safeguards protecting the independence and integrity of internal evaluators.

"The perspectives influencing the questions can be a more important source of bias than the methods used in answering them"

Using an internal evaluator has advantages as well as disadvantages. On the positive side is the fact that internal evaluators tend to have a better understanding of the organisation to be evaluated than their external counterparts. Since they are part of the organisations that they examine, they are usually also in a better position to facilitate processes of use, learning, and follow-up than external evaluators. The most important drawback is that they have less credibility in relation to external audiences. When accountability is the purpose of the evaluation they cannot replace external evaluators.

Box 6: Sida evaluations

Sida commissions a considerable number of evaluations. Thirty to forty evaluations are completed every year. Most of these evaluations are project-level assessments initiated by Sida's operative departments, including the Swedish embassies in partner countries. The rest are studies produced by the Department for Evaluation and Internal Audit (UTV), an independent unit reporting directly to Sida's Board. UTV has a mandate to produce larger evaluations of strategic importance for Sida.

All evaluations commissioned by Sida are published in the series Sida Evaluations. Reports can be downloaded from Sida's homepage on the Internet (www.sida.se). Paper copies can be ordered from Sida, SE-105 25 Stockholm, Sweden.


1.7 Participatory evaluation

Participatory evaluation represents a further and more radical step away from the model of independent evaluation. In fact, it is a challenge to both internal and external evaluation. While the latter two are forms of expert evaluation, participatory evaluation is a form of evaluation where the distinction between expert and layperson, researcher and researched, is de-emphasised and redefined. Participatory evaluations are led by professionals, just as more conventional evaluations are. Still, there is an important difference. In the one case, evaluators are hired to assess the merits of the intervention, in the other they are mainly facilitators and instructors helping others to make the assessment. A more detailed overview of the differences between expert evaluations and participatory evaluations is provided in Box 7.

Box 7: Expert evaluation and participatory evaluation

WHY
Expert evaluation: Information required by funding agencies and other external stakeholders.
Participatory evaluation: To empower participants to initiate, control and take corrective action.

WHO
Expert evaluation: External expert evaluators in consultation with stakeholders.
Participatory evaluation: Community members in collaboration with project staff and external facilitators.

WHAT
Expert evaluation: Standards of performance externally defined, often with reference to formal goals and objectives.
Participatory evaluation: Community members and other participants set their own standards of success.

HOW
Expert evaluation: External evaluators control data gathering and analysis. Scientific criteria of objectivity. Outsider perspective. Long feedback loops.
Participatory evaluation: Self-evaluation. Collaborative processes of data collection and analysis. Simple qualitative and quantitative methods. Immediate sharing of results.

WHEN
Expert evaluation: Mid-term, completion, ex-post.
Participatory evaluation: Continuous and iterative. Not sharply distinguished from monitoring.

Adapted from Deepa Narayan, Participatory Evaluation. World Bank, 1993.

The case for participation in evaluation is much the same as the case for participation in development co-operation generally. Participation means putting ordinary people first, and redefining the roles of experts and laypersons. Participation can be seen as an end in itself, as the right for people to have a voice in matters that significantly affect their interests. It can also be justified in more instrumental terms. Participation in this case helps mobilise local knowledge and makes development efforts more relevant and effective.

In evaluation both arguments are used. Where participatory evaluation gives stakeholders an opportunity to assess the actions of local authorities, it serves as an instrument of downward accountability and popular empowerment. In other cases, it increases the effectiveness of development efforts by mobilising popular knowledge and strengthening the participants' sense of ownership with regard to the evaluated activities. As it allows participants to engage in open and disciplined reflection on questions concerning the public good, it is sometimes also seen as a kind of self-education in democratic governance.

It is often useful to make a distinction between a general concept of stakeholder participation and a more narrow concept of popular participation. When talking about popular participation, we refer to the participation of primary stakeholders. Popular participation can be direct or it can be organised through some system of representation. As the term is often used, a participatory evaluation is not just any evaluation where stakeholders actively participate, but specifically one where primary stakeholders participate directly and in depth.

The best way to promote popular participation in evaluation is probably to strengthen the element of participation in the preceding stages of the intervention process. The following are some of the necessary conditions for the successful use of participatory approaches to evaluation in community-based projects and programmes:

■ Shared understanding among beneficiaries, programme staff and other stakeholders of programme goals, objectives and methods.

■ Willingness among programme partners to allocate sufficient time and resources to participatory monitoring and evaluation.

■ Participatory approach to programme management and learning. Adopting a participatory approach to evaluation can be difficult if the intervention has been planned and managed in a top-down fashion.

■ A reasonably open and egalitarian social structure. Where local populations are internally divided by distinctions of class, power and status, a participatory approach is not likely to be successful.

Evaluations initiated directly by donor organisations are usually not participatory evaluations, as the concept is defined here. While participatory methods may be used in such evaluations, the overall form of the evaluation is normally that of an external expert study.

Evaluations initiated by Sida are usually external or independent in the weaker of the two meanings of the term noted above. Thus, while Sida formulates the mandate – usually in consultation with its co-operation partners – consulting firms or academic institutions in Sweden and abroad carry out the research work and write reports. The evaluators are independent of Sida's co-operation partners as well as of Sida itself and they are fully responsible for the contents of their reports. Although the evaluators may be instructed to consult closely with target groups and other primary stakeholders, the evaluation agenda is defined by the evaluators – or by the terms of reference for the evaluation – rather than by the primary stakeholders, as would have been the case in a fully participatory study.

Still, although Sida rarely initiates participatory evaluations, it has an interest in promoting such evaluations in some of its supported interventions. Every development programme supported by donors is expected to have a system of monitoring and evaluation, and for the donor the design of that system is not unimportant. While not seeking to impose on its partners any particular format for monitoring and evaluation, Sida would normally wish to ensure that the adopted format is consistent with agreed principles of stakeholder participation.


1.8 Quality standards

Evaluation is a tool for quality assurance and quality control, and as such must satisfy its own quality requirements. In this manual we suggest that the quality of any particular evaluation can be assessed in relation to four broad sets of quality standards, namely those of propriety, feasibility, accuracy and utility.[6]

The propriety standards are ethical standards meant to ensure that evaluations are conducted with due regard for the rights and welfare of affected people. The most basic of the propriety standards is that evaluations should never violate or endanger human rights. Evaluators should respect human dignity and worth in their interaction with all persons encountered during the evaluation, and do all in their power to ensure that they are not wronged.

The principle of informed consent, which is centrally important to the ethics of scientific research, is also relevant to evaluation. The twofold requirement of this principle is that a) people should not be engaged as respondents, informants or participants in evaluations without their consent, and b) people should be given adequate information about the evaluation, its purposes and possible consequences, before they are actively involved.

It is an important aspect of propriety that evaluations should be balanced and fair, both in the research phase and in the final stage of reporting. All relevant stakeholder groups should be allowed to speak out and their views should be correctly reported. People have a right not to be misrepresented. Furthermore, evaluations should normally be concerned with the strengths and weaknesses of systems, structures, and forms of organisation, rather than with the strengths and weaknesses of particular individuals or groups of individuals.

The feasibility standards are intended to ensure that evaluations are realistic and efficient. To satisfy these requirements, an evaluation must be based on practical procedures, not unduly disrupting normal activities, and be planned and conducted in such a way that the co-operation of key stakeholders can be obtained. Evaluations should also be efficient. If the cost of an evaluation cannot be justified by the usefulness of the results to intended users, it should not be undertaken.

The accuracy standards are meant to ensure that the information produced by evaluations is factually correct, free of distorting bias, and appropriate to the evaluation issues at hand. By setting high standards for accuracy, we protect the very function of evaluation as a means of making sure that plans and expectations are based on reality and not the result of prejudice or wishful thinking.

"The quality of any particular evaluation can be assessed in relation to four broad quality standards: those of propriety, feasibility, accuracy and utility"

However, it must be recognised that accuracy is not an end in itself. As evaluations can be costly and time-consuming for all parties involved, efforts to achieve a high level of accuracy should be tempered by a pragmatic principle of "optimal ignorance." In other words, we should not strive for the highest degree of accuracy possible, but for one that is good enough for the purpose at hand and can be accepted as such by the users of the evaluation.

[6] These are the main standards of the Program Evaluation Standards of the American Joint Committee on Standards for Educational Evaluation (1994). A summary of these standards can be downloaded from www.wmich.edu/evalctr/jc/.


The utility standards, finally, are meant to ensure that evaluations serve the information needs of their intended users. An evaluation that users consider irrelevant is hardly a success, regardless of its other merits. To be useful, evaluations must be responsive to the interests, perspectives and values of stakeholders. It is important that evaluations are timely in relation to stakeholders' practical agendas, and that stakeholders regard them as credible.

The credibility of an evaluation depends on several factors. First, the evaluators must be accepted as impartial and unbiased. If the evaluation appears to be influenced by partisan interests, stakeholders will reject it. Second, the evaluators must be technically and culturally competent to deal with the questions raised by the evaluation. If they are thought to lack necessary qualifications, the credibility of the evaluation is again compromised. Third and finally, methods and resources for data collection and analysis must be regarded as appropriate. An evaluation that passes the other tests, but fails in this one, may still be rejected.


Chapter 2: Evaluation Criteria

An evaluation is an assessment of the merits and worth of a project, programme or policy in relation to a particular set of evaluation criteria and standards of performance. Defining criteria and standards is a key step in every evaluation. This chapter explains the distinction between criteria and standards and presents five criteria that are particularly important in assessments of development interventions:

■ Effectiveness

■ Impact

■ Relevance

■ Sustainability

■ Efficiency

Used by development organisations around the world, these criteria are essential components of development co-operation evaluation. The chapter has a separate section for each one of them. Each section concludes with a set of standard evaluation questions.


2.1 Principal evaluation criteria

Every evaluation involves one or several criteria by which the merit or worth of the evaluated intervention is assessed, explicitly or implicitly. The following five have been recommended by the OECD/DAC and adopted by Sida as standard yardsticks for the evaluation of development interventions:

EFFECTIVENESS

The extent to which a development intervention hasachieved its objectives, taking their relative importanceinto account.

IMPACT

The totality of the effects of a development interven-tion, positive and negative, intended and unintended.

RELEVANCE

The extent to which a development intervention conforms to the needs and priorities of target groupsand the policies of recipient countries and donors.

EVALUATION CRITERIA 25

SUSTAINABILITY

The continuation or longevity of benefits from a development intervention after the cessation of development assistance.

EFFICIENCY

The extent to which the costs of a development intervention can be justified by its results, taking alternatives into account.

Each one of these criteria can be applied to every development intervention, and each one of them represents something important that needs to be considered before it can be decided if a particular intervention should be regarded as a success.

To understand how the five criteria complement each other, it is necessary to clarify the distinction between effectiveness and efficiency. What we should bear in mind is simply that these two are fundamentally different. Effectiveness refers to the extent to which an evaluated intervention has achieved its objectives. Efficiency, by contrast, refers to the extent to which the costs of an intervention can be justified by its results. An analysis of efficiency makes little sense without a prior assessment of effects.

A point about language
Swedish-speakers often find the distinction between effectiveness and efficiency awkward and confusing. The reason for this is that the Swedish word effektivitet covers both efficiency and effectiveness. In many situations it refers to efficiency alone. In technical language, the word kostnadseffektivitet – cost-effectiveness – serves as an equivalent of efficiency. Still, for Swedes consistently separating efficiency from effectiveness may require a special effort.




The remaining criteria – impact, relevance, and sustainability – complement the criterion of effectiveness. Like the latter, they are used to assess results independently of costs. With impact, we transcend the managerial bias inherent in assessments of effectiveness and attempt instead to examine the evaluated intervention from the point of view of target groups and other primary stakeholders. The concept of unintended consequences is centrally important in impact studies. When we focus on relevance, we look at the evaluated intervention in relation to larger contexts of needs, priorities, and policies, and we examine the objectives of the intervention as well as the means of achieving them. Sustainability, finally, is a criterion for assessing the likelihood that the benefits produced by an intervention will be maintained beyond the cessation of external support.

We should not be confused by the fact that the five criteria tend to overlap at several points. For example, in a study of impact we will encounter some of the same effects as we may already have dealt with in an assessment of effectiveness. Similarly, there may be overlaps between a study of relevance and a study of effectiveness. However, this becomes a problem only if we expect the five criteria to be concerned with completely different sets of facts, which is not the case. Each criterion represents a particular way of looking at an intervention, nothing else.

The five criteria are discussed at length in sections 2.4–2.8.

2.2 Further criteria

As stated in Sida’s Evaluation Policy7, all the five standard criteria should be considered when Sida initiates an evaluation. An evaluation using all of them would give us much of the information needed in order to form an overall opinion of an intervention’s value. Still, the policy does not require all five to be adopted in each and every case. The policy requirement is rather that none of them should be put aside without a prior assessment of their relevance.

Furthermore, although these criteria have been accorded a special status by Sida’s Evaluation Policy, we are not prevented from using additional criteria. In many evaluations procedural values and principles are used as evaluation criteria. Participation, partnership, human rights, gender equality, and environmental sustainability are prominent examples. They are all values and principles governing the design and implementation of interventions supported by Sida, as well as major policy goals. It would be surprising if they were not also used as criteria for evaluation. Participation is discussed in 3.6, the gender equality criterion in 3.7, and the environmental perspective in 3.8.

Note also the existence of evaluation criteria related to particular areas of development co-operation. In research co-operation, for example, the term relevance can refer both to the scientific usefulness of a research project and to the relevance of such a project in a general development perspective. Other examples are the criteria of appropriateness, coverage, connectedness, and coherence that have been designed to fit the special conditions of humanitarian assistance. Sponsored by the OECD/DAC, these criteria are regarded as subcriteria to the five principal criteria discussed in this chapter. Definitions and brief explanatory notes are given in Box 8.

7 www.sida.se/publications



2.3 Standards of performance

It is useful to make a distinction between evaluation criteria and standards of performance related to those criteria. While evaluation criteria are variables in terms of which performance is measured or assessed, standards of performance are values on those variables representing acceptable levels of achievement. For example, if we are dealing with a road construction intervention and effectiveness is the criterion of evaluation, one kilometre of asphalt road completed every three months could be a standard of performance.
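To make the distinction concrete, here is a minimal sketch in Python (illustrative only; the criterion, standard and measured value are all invented for the example) in which the criterion is the variable being measured and the standard is the target value on that variable:

# Criterion: the variable on which performance is measured.
# Standard: the value on that variable that counts as acceptable achievement.
criterion = "kilometres of asphalt road completed per quarter"
standard = 1.0      # target: one kilometre every three months
measured = 0.8      # hypothetical figure recorded by the evaluators

if measured >= standard:
    print(f"Meets the standard for '{criterion}'.")
else:
    shortfall = standard - measured
    print(f"Falls short of the standard for '{criterion}' by {shortfall:.1f} km.")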

While standards of performance are often taken directly from the definition of goals and targets in funding proposals and other documents relating to the evaluated intervention, there are also other possibilities. Interventions may be assessed against policy-level goals and principles, or they may be evaluated against standards derived from comparisons with similar interventions elsewhere or from models of best practice. An evaluation of an intervention in relation to Sida’s gender policy would be an example of the first type of assessment. An evaluation of the same intervention in relation to what is internationally regarded as best practice for promoting gender equality would be an example of the second.

In some cases, standards of performance must be defined during the evaluation process itself, rather than during planning. Furthermore, even if the standards of performance are well defined in advance, the evaluators may have to decide whether they are reasonable or not, given the problem addressed by the intervention and the amount of resources invested. When looking at the facts of the case, the evaluators may come to the conclusion that the performance standards were too ambitious or not ambitious enough. They might even propose an alternative set of standards. Other complications regarding standards of performance are due to the fact that the goals and objectives of development interventions may be vague, incomplete or both. Problems of this kind are briefly discussed in 2.4.

Box 8

Criteria for evaluation of humanitarian assistance

APPROPRIATENESS

The extent to which humanitarian inputs and activities are tailored to local needs, and the requirements of ownership, accountability, and cost-effectiveness. How well did the humanitarian activities respond to the changing demands of the situation?

COVERAGE

The extent to which the entire group in need had access to benefits and were given necessary support. Key questions: Did the benefits reach the target group as intended, or did too large a portion of the benefits leak to outsiders? Were benefits distributed fairly between gender and age groups and across social and cultural barriers?

CONNECTEDNESS

The extent to which short-term emergency activities take into account longer-term needs and the interconnectedness of humanitarian problems. Examples of problems to be dealt with are environmental effects of refugee camps, damage to roads through food transports, and damage to local institutions as a result of international NGOs taking over central government functions and recruiting skilled staff from local government institutions.

COHERENCE

Consistency between development, security, trade, military and humanitarian policies, and the extent to which human rights were taken into account. Important questions: Were policies mutually consistent? Did all actors pull in the same direction? Were human rights consistently respected?

These criteria are extensively discussed in Evaluating Humanitarian Action. An Alnap Guidance Booklet. www.alnap.org

Regardless of how standards of performance are defined, however, it is essential that they reflect the goals, commitments and formal obligations of those who are involved in the evaluated activity, and that they are not regarded as arbitrary impositions. Insiders may regard as unfair or irrelevant standards of performance that evaluators or other outsiders define according to their own predilections. The issue of fairness is particularly important when the evaluation serves a purpose of accountability, but it can also be a concern in evaluations that are intended for learning.

2.4 Effectiveness

The term effectiveness refers to the extent to which the objectives of an intervention have been achieved as a result of the implementation of planned activities. Effectiveness can be measured at the level of outputs as well as at the levels of outcome and impact. In the first case we are concerned with the achievement of targets for the production of goods and services, in the second with the achievement of the further effects that we intend to bring about through these goods and services.

Note that effectiveness at the output level is no guarantee of effectiveness in terms of outcomes and impacts. An intervention may achieve all its targets with regard to goods and services, and still not be effective at the outcome and impact levels. Indeed, an intervention that is effective in terms of outputs may be quite ineffective with regard to outcomes and impact. An important implication of this is that every question about effectiveness in evaluations should be carefully specified in relation to the intended level of objectives. Effectiveness in general is an empty concept.

Assessing effectiveness is usually more difficult at the level of outcomes and impact than at the output level. At the output level, the job can be regarded as completed when we have measured the extent to which the goods and services produced by the intervention match pre-defined targets for quantity and quality. Assessing the quality of outputs can be difficult, especially where clear quality standards are lacking. In most cases, however, there are solutions to such problems.

At the outcome level an assessment of effectiveness is made in two steps. First, the achievement of objectives is measured. How have the conditions of the target group changed since the intervention was launched, and how do identified changes compare with intended changes? Second, the issue of causal attribution is addressed. To what extent have the identified changes been caused by the intervention rather than by factors outside the intervention?
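Schematically, and only as a sketch of the logic rather than a prescribed formula, the two steps can be written as follows, with $Y_{t_0}$ and $Y_{t_1}$ denoting the condition of the target group before and after the intervention:

$$\Delta = Y_{t_1} - Y_{t_0} \quad \text{(step 1: measured change)}, \qquad E = \Delta - \Delta_0 \quad \text{(step 2: attribution)},$$

where $\Delta$ is compared with the intended change, and $\Delta_0$ stands for the change that would have occurred without the intervention. Only the net effect $E$ can be credited to the intervention itself.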

Both these steps are important. For a government or development agency that wishes to invest its resources where they make a difference, knowing that things have changed as expected is not good enough. They also want to know that the evaluated intervention has significantly contributed to the recorded change. In most cases there are many factors at play in addition to the evaluated intervention. Therefore, the possibility that recorded changes would have occurred even without the intervention must be carefully considered, and as far as possible ruled out.

Measuring goal achievement is, in principle, a fairly uncomplicated affair. In practice, however, it may be difficult or even impossible. The reason for this is that one or several of the following types of information may be missing:

■ Baseline information about the situation before the intervention (see Box 9),

■ Records of the changes that have occurred relative to the baseline during the implementation of the intervention and later,

■ An empirically verifiable description of the state of affairs that the intervention was intended to bring about.

Missing baseline information is a common problem. In some cases a baseline can be reconstructed with the help of written documents and interviews with members of target groups and others. Human memory being what it is, however, a baseline reconstructed through memory recall is usually much less precise and much less reliable than a baseline assembled before the intervention started.

Vagueness in the description of the state of affairs that the intervention is intended to bring about can also cause problems. If goals and objectives are unclear it may be difficult to decide if and to what extent recorded changes represent an achievement in relation to them. In many interventions goals and objectives are further specified through empirical indicators, but indicators may raise questions of their own. We should never take for granted that the indicators are valid and relevant.

Formulations of objectives tend to become increasingly vague as we move upwards in the hierarchy of objectives. At the level of outputs it is often quite clear what the intervention intends to achieve, and at the level of immediate outcomes the objectives may also be reasonably precise. In the higher reaches of the goal hierarchy, however, the descriptions of objectives are often extremely vague. Often no stronger or more precise claim is made than that the intervention is expected “to contribute” to the achievement of a distant higher goal, or that it will help “create pre-conditions” for the higher goal to be achieved. Measuring goal achievement in relation to objectives that are formulated in such terms is difficult, if not altogether impossible.

“The possibility that recorded changes would have occurred even without the intervention must be carefully considered”

Despite these difficulties, however, measuring goal achievement is often a lesser problem than attributing causality. There are cases where evaluators do not hesitate to attribute the recorded change to the intervention. In other cases, however, causal attributions are much less certain. Usually, short-term changes are more strongly influenced by the evaluated intervention than longer-term changes. Conversely, the more distant the change, the more likely extraneous factors are to intervene. In many cases, we can make no stronger claim regarding causality than that the intervention appears to have had some influence on the recorded outcomes.

Section 2.5.2 contains some further remarks on the problem of inferring causality.

Effectiveness is an important criterion for the evaluation of development interventions. It is important to principals, financiers and others who for reasons of accountability want to know that results have been delivered as promised. It is also important to intervention managers who need the same type of information for management and learning. Setting goals and defining objectives that are both challenging and realistic is a basic management skill. If we never reach our goals and never achieve our objectives, something is clearly wrong, even if we can rightly claim to have produced results that are valuable and worthwhile.

By itself, however, an assessment of effectiveness has only limited value. What we can learn from it is whether the evaluated intervention has achieved its goals and objectives, nothing else. As already noted, an assessment of this kind says nothing about the relevance and value of the achieved results; neither does it provide any information concerning the important question of unintended effects.

Standard questions about effectiveness:

■ To what extent do development changes in the target area accord with the planned outputs, purpose and goal of the evaluated intervention?

■ To what extent is the identified development the result of the intervention rather than extraneous factors?

■ What are the reasons for the achievement or non-achievement of objectives?

■ What can be done to make the intervention more effective?


2.5 Impact

The word impact has several meanings. It often refers to the totality of the effects brought about by an intervention, but it is also used more narrowly to refer to effects in the longer term or to effects at the scale of societies, communities, or systems. When it is used in the narrow sense – as in logframe analysis, for example – it is complemented by the word outcome, which refers to short and medium term effects on the attitudes, skills, knowledge, or behaviour of groups or individuals. In other contexts, the term outcome may refer to the totality of effects produced by an intervention, just like impact.

Box 9

What is a baseline study?
A baseline study is a description of conditions in a locality or site prior to a development intervention. A baseline study provides benchmarks against which change and progress can be measured and evaluated. Without baseline information, assessments of effectiveness and impact are impossible. Baseline information can often be assembled retrospectively, but, as a rule, a reconstructed baseline is much inferior to baseline information assembled ex ante.

The scope and focus of a baseline study reflect the purpose of the intervention and the anticipated future need for baseline data in monitoring and evaluation. If the intervention is a project in support of entrepreneurship among small farmers, for example, the baseline study will describe the nature and extent of rural entrepreneurship before the intervention. If it has a subsidiary goal of increasing the number of female entrepreneurs, the baseline data should be disaggregated by gender. If it is expected that the project may affect the physical environment, the baseline will contain benchmarks for the monitoring and evaluation of environmental effects.

Useful reading: Solveig Freudenthal and Judith Narrowe. Baseline Study Handbook. Focus on the Field. Sida, Stockholm, 1993.



In this manual the term impact is used in both senses. This should not create any problems. When used in conjunction with the term outcome, it refers to effects in the longer term. Elsewhere it means the totality of effects brought about through a development intervention. In the latter sense it encompasses expected and unexpected, positive and negative, as well as short-term and long-term effects on people, organisations, societies and the physical environment.

A study of impact in the wide sense covers partly the same ground as a study of effectiveness. However, it differs from such a study in two important respects. First, while assessments of effectiveness may deal with outputs as well as effects, an impact study is limited to effects. Second, while studies of effectiveness tend to focus on planned positive effects in the short or medium term, a study of impact is concerned with the entire range of effects, including those that were unforeseen, those that occur in the longer term, and those that affect people outside the target group.

Note that impact is not a criterion of the same kind as effectiveness or efficiency. While the latter are normative criteria – effectiveness and efficiency are by definition desirable – the impact criterion is primarily a recommendation that all significant consequences of an intervention, negative as well as positive, should be taken into account. To be able to distinguish between positive and negative impacts we have to employ additional normative criteria, such as increased well-being of primary stakeholders, utility for poverty reduction, or something of that kind.

The impact criterion provides an important corrective to what could otherwise become an overly narrow preoccupation with the intentions of those who plan and manage development interventions and a corresponding neglect of the perspectives of target groups and other primary stakeholders. This is a key point about impact. When applying the impact criterion, we turn to target groups and other stakeholders to find out if and how the evaluated activities have affected their situation, positively or negatively. Measuring change in relation to stated intervention objectives is less important.

Evaluations that give precedence to the perspectives of target groups and other primary stakeholders over those of planners and managers are sometimes referred to as user-oriented evaluations (in contrast to goal-oriented evaluations focusing on effectiveness). The term goal-free evaluation has a similar meaning. In a goal-free evaluation the evaluators deliberately try to disregard the established goals and purposes of an intervention in order that they may better appreciate the value and significance of the intervention to those who are affected by it.

2.5.1 Types of impact

As shown in Box 10, a study of impact deals with four types of effect. In the left-hand column there are the intended positive effects justifying the intervention and the negative effects that those who are responsible for the intervention anticipate and accept as affordable costs or necessary evils. The right-hand column contains the unexpected effects, positive as well as negative.

Box 10

Types of impact

EXPECTED POSITIVE      UNEXPECTED POSITIVE
EXPECTED NEGATIVE      UNEXPECTED NEGATIVE


Evaluations should cover both positive and negative effects. In the planning of development interventions possible negative effects are easily overlooked or taken too lightly. Truly unexpected negative effects are caused by ignorance, bad planning or wishful thinking. Normally, stakeholders and outside expertise are extensively consulted when development interventions are planned, but even so things may not turn out quite as expected. In some cases, the actual results may even be the opposite of the intended results. Identifying unintended effects and analysing their causes is one of the main tasks of evaluation.

Evaluators have the advantage of hindsight. In many cases, they can identify unintended consequences simply by asking primary stakeholders to describe how their situation has changed since the intervention started. Although identifying impact in this way can be time-consuming and may require a good deal of local knowledge, it need not be technically complicated. Still, interviewing representatives of target groups and other stakeholders is not a universal recipe for impact study. Interventions may have adverse effects that only experts can identify. Adverse effects that are difficult for laypersons to identify can be found in all areas of development co-operation. Box 11 lists a few common types of negative unintended effects.

2.5.2 Measuring change and inferring causality

When studying impact we face the same technical problems of measuring change and inferring causality as in studies of effectiveness.

However, the first of the two tasks, that of measuring change, can be rather more difficult in impact studies than in assessments of effectiveness. As noted in 2.4, change cannot be measured without a baseline describing the situation before the intervention (Box 9). Since baseline information assembled before the intervention is likely to refer to those features of the situation that are expected to change as a result of the intervention, it is not always useful for measuring change that was not planned and not foreseen. To be able to measure unexpected change, we may have to construct a new baseline ex post. However, in many cases constructing a baseline after the fact may be difficult or even impossible.

The second main task is to decide, with as much certainty as required or possible, whether the changes that have occurred since the beginning of the intervention were caused by the intervention, or if they would have occurred anyway. Impact, in the strict sense, is the difference between the changes that have actually occurred and the changes that would have occurred without the intervention. The hypothetical state of affairs to which we compare real changes is known as the counterfactual.

With the help of control groups that have not been exposed to the intervention it is sometimes possible to get a good idea of how the target group would have fared without the intervention. When the counterfactual cannot be estimated in this way – a common situation in development co-operation – statements about impact rest on weaker foundations. The intervention is often taken to be the cause of the identified changes if such a conclusion appears to be consistent with expert knowledge and there seems to be no better explanation around. Although in most cases less compelling than an explanation based on control group methodology, an argument of this type can be good enough for the purpose of the evaluation.
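Where before-and-after measurements exist for both the target group and a comparable control group, the counterfactual can be approximated with a simple difference-in-differences calculation. The sketch below is illustrative only: all figures are invented, and the approach assumes that the two groups would have developed alike in the absence of the intervention.

# Difference-in-differences: the change in the target group minus the
# change in a control group not exposed to the intervention.
# All figures below are hypothetical.
target_before, target_after = 42.0, 55.0     # e.g. a household income index
control_before, control_after = 41.0, 48.0

change_target = target_after - target_before       # gross change: 13.0
change_control = control_after - control_before    # counterfactual change: 7.0

impact_estimate = change_target - change_control   # attributable change: 6.0
print(f"Estimated impact: {impact_estimate:.1f} index points")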

For a deeper discussion about causal attribution in relation to different types of research design, the reader should consult a textbook on evaluation research methods.8

8 See, for example:
Bamberger, M. and Valadez, J. (eds.). Monitoring and Evaluating Social Programs in Developing Countries. Washington, D.C.: The World Bank, 1994.
Rossi, P. H. et al. Evaluation. A Systematic Approach. Seventh Edition. Thousand Oaks: Sage, 2004.



Box 11

Types of negative unintended effects

TARGETING ERRORS

There are two main types, often referred to as Type I and Type II errors. The first type is concerned with coverage: Have benefits from the intervention reached all subgroups of the targeted category, or is there a coverage bias such that some of the intended beneficiaries – women, children, the elderly, or the disabled, for example – have been excluded? The second type is concerned with leakage of benefits to people outside the targeted category. In other words, the question is whether outsiders have seized benefits intended for the target group. Is there a problem of overinclusion?

SUBSTITUTION AND DISPLACEMENT

In both cases the intended positive effects for a particular target group are realised, but only at the expense of another group or category that is equally deserving of support. A case of substitution would be that of subsidised workers replacing unsubsidised workers who would otherwise have been employed. Displacement, on the other hand, would occur if the creation of subsidised jobs in one firm led to a reduction of workers in other firms.

RECOIL EFFECTS

These are unintended effects of an intervention on the organisations responsible for its implementation and management. The aim of development assistance is not just to help solve immediate development problems of a pressing kind, but to strengthen the capacity of developing countries and their organisations to deal with such problems with their own resources. In many cases, however, recipient country organisations are overburdened by externally financed development interventions. Evaluations can be used to assess the extent to which development initiatives produce such negative recoil effects.

FUNGIBILITY

By lending support to a particular activity the donor makes it possible for the recipient to shift scarce resources to other activities, all or some of which may be inconsistent with the donor’s mandate. Thus, while the donor focuses on the activities earmarked for support, the main impact of aid is perhaps to make possible activities that the donor regards as undesirable. Important as it may be, however, the phenomenon of fungibility can usually not be handled within an evaluation of a single development intervention. To find out if and how donor support is fungible, it must be examined in relation to the national budget and wider patterns of public expenditure in the partner country. Note that it is only when the recipient’s priorities are inconsistent with those of the donor that fungibility becomes a problem.

PERVERSE EFFECTS

Substitution and displacement, fungibility, as well as some of the recoil effects of aid programmes can all be described as perverse effects. Effects referred to by this term are directly opposed to the objectives and goals that the intervention was intended to achieve. Donor support that is meant to strengthen the ability of people to deal with their own problems but results in increased dependency on aid is a prime example from development co-operation.


Standard questions about impact:

■ What are the intended and unintended, positive and negative, effects of the intervention on people, institutions and the physical environment? How has the intervention affected the well-being of different groups of stakeholders?

■ What do beneficiaries and other stakeholders affected by the intervention perceive to be the effects of the intervention on themselves?

■ What is the impact of the intervention on the recipient country organisation(s) that manage it? To what extent does the intervention contribute to capacity development and the strengthening of institutions?

■ To what extent can changes that have occurred during the life span of the intervention or the period covered by the evaluation be identified and measured?

■ To what extent can identified changes be attributed to the intervention? What would have occurred without the intervention?

■ Have plausible alternative explanations for identified changes been considered and convincingly ruled out?

2.6 Relevance

When we talk about relevance as a criterion of evaluation, we are concerned with the value and usefulness of the evaluated intervention in the perspectives of key stakeholders. More precisely, a development co-operation intervention is considered relevant if it matches the needs and priorities of its target group, as well as the policies of partner country governments and donor organisations.

The question of needs comes first. If an intervention does not help satisfy important development needs, directly or indirectly, it can obviously not be regarded as relevant. However, the fact that it addresses important needs is not enough. To be considered relevant a development intervention should also be consistent with the effective policies and priorities of target groups and others. In addition, it should be technically adequate to the problem at hand – an effective and inexpensive cure without harmful side effects, as it were. This is implied by the definition. If the intervention is out of tune with stakeholder priorities, or if it is technically inadequate in some way, it will probably not achieve its aims, and, again, is not relevant as a solution to the development problem at hand.

In many evaluations, the objectives of the evaluated intervention are taken as given. When the relevance of an intervention is assessed, however, the objectives as well as the means of achieving them are carefully examined. The perspective is holistic. At one level, we try to ascertain if the intervention is well adapted to the livelihood patterns and the social and political conditions of its intended end-users and other primary stakeholders. At another level, we wish to establish that it is well in line with government policies and systems of administration as well as with concurrent interventions supported by other development agencies.

Questions about partner country ownership are important. To what extent is the evaluated intervention an independent host country initiative; to what extent is it an adaptation to donor preferences? To what extent is it managed and controlled by host country actors; to what extent are donor agencies actively involved in the operations? Does ownership extend to the intended beneficiaries and other citizens? Are there adequate mechanisms for accountability and popular participation?



While stressing that relevance is about consistency with existing priorities and policies as well as needs, however, we should not lose sight of the fact that many development interventions are conceived as social experiments. The point is obviously not that interventions that challenge established interests or existing ways of doing things are always irrelevant. It is rather that even interventions that go against the grain of existing practice should conform to the needs and interests of those who are intended to benefit from them. When we assess the relevance of an innovation, one of the key questions is whether it has a potential for replication.

Relevance is an important issue throughout the intervention cycle. At the stage of planning and preparation, the responsible organisations make a first assessment of the relevance of the objectives of the intervention, and they also try to make sure that the intervention strategy is sound. Later, interim or ex-post evaluations should revisit this analysis. The initial assessment may have been incorrect all along, or the situation may have changed in such a way that it has to be revised.

Standard questions about relevance:

■ Is the intervention consistent with the livelihood strategies and living conditions of its target group? How urgent is it from the point of view of the target group?

■ Is the intervention well in tune with the development policies and administrative systems of the partner country government at national and regional levels? Is it consistent with a policy of supporting partner country ownership?

■ Is the intervention a technically adequate solution to the development problem at hand? Does it eliminate the main causes of the problem?

■ Do proposed innovations have a potential for replication?


■ Is the intervention consistent with Sida policies and priorities?

■ Is the intervention consistent and complementary with activities supported by other donor organisations?

2.7 Sustainability

When we discuss sustainability we are concerned with the likelihood that the benefits from an intervention will be maintained at an appropriate level for a reasonably long period of time after the withdrawal of donor support. Strictly speaking, sustainability is part of the wider criterion of impact, but as it concerns a recurrent issue of extreme importance in development co-operation we treat it as an independent criterion.

The criterion of sustainability refers to the results obtained through development co-operation interventions, not the development co-operation interventions themselves. In some cases, sustainability means that a particular organisation or facility constructed with external assistance will remain in use, but in other cases the organisation or facility built in the course of the intervention more resembles a temporary scaffolding that is needed only in the construction phase.

Sustainability must be specified in relation to the particular intervention under review. Different types of intervention have different time frames and serve different types of function in the development process. Sustainability does not have the same meaning in regard to short-term emergency assistance as in interventions with long-term development objectives. While the evaluation criterion is the same in both cases, the performance standards will differ.

Analyses of sustainability in evaluations are forward-looking assessments made during the implementation process or when the intervention has been completed. The main question that such analyses seek to answer is usually not whether intervention benefits have in fact been sustained. This is a question that in most cases cannot be answered ahead of time. The question is rather if the intervention has a potential for being sustained, and if it is likely that its positive impact will be a lasting one.

There is a range of factors that determine whether or not the results of the evaluated intervention will be sustained into the future:

PARTNER COUNTRY PRIORITIES

Development interventions always operate in the policy environment of partner countries. The priorities of partner organisations are critical to the sustainability of their results. Interventions stand a much better chance of being sustained if they reflect partner country priorities than if donors drive them.

PARTNER COUNTRY OWNERSHIP AND PARTICIPATION

Without partner country ownership development interventions can usually not be sustained. The active participation of partner country stakeholders in the planning, implementation and follow-up of development activities helps stimulate local ownership.

INSTITUTIONAL AND CULTURAL FACTORS

Interventions should be well integrated in the local institutional and cultural context. Interventions that are out of tune with local norms and sensibilities, or lack institutional support, are unlikely to be sustained.

TECHNOLOGICAL FACTORS

The technology utilised in the intervention should be appropriate to the economic, educational and cultural conditions of the host country. If the level of technology is too advanced and spare parts scarce or too expensive, continued maintenance is unlikely.


ENVIRONMENTAL FACTORS

In developing countries the natural environment is often under pressure from population growth and poor management of natural resources. Environmental degradation may force partner organisations to discontinue otherwise positive intervention results.

FINANCIAL FACTORS

In many interventions, sustainability depends on partners’ financial capacity to maintain results. Government revenue, user fees and other income generating activities may secure such funding, and hence contribute to sustainability.

MANAGEMENT AND ORGANISATION

The governance of recipient country institutions is a key determinant of sustainability. Weak, ineffective, or “unsound” management and organisation may significantly reduce the likelihood that results will be sustainable. In many cases, a working system of accountability to citizens is likely to increase the chances that benefits will be sustained.

EXIT STRATEGY

The chances that the benefits from an intervention will be sustained are also likely to increase if there are time limits and well defined exit points. An exit strategy, including a plan for sustainability, should be part of every development intervention from the very beginning.

The above list of factors is not necessarily exhaustive. Neither is it a catalogue that must be covered in every evaluation initiated by Sida. The analysis of sustainability must always be adapted to the circumstances of each evaluation.

Standard questions about sustainability:

■ Is the intervention consistent with partners’ priorities and effective demand? Is it supported by local institutions and well integrated with local social and cultural conditions?

■ Are requirements of local ownership satisfied? Did partner country stakeholders participate in the planning and implementation of the intervention?

■ Are relevant host-country institutions characterised by good governance, including effective management and organisation?


■ Is the technology utilised in the intervention appropriate to the economic, educational and cultural conditions in the partner country?

■ Do partners have the financial capacity to maintain the benefits from the intervention when donor support has been withdrawn?

■ Is the intervention compatible with a sustainable use of natural resources? Or is it harmful to the natural environment?

2.8 Efficiency

Efficiency is a relation between means and ends. More exactly, it is the ratio of the value of the results of an intervention to the value of the resources used to produce them. An intervention is optimally efficient if its value is greater than the value of any alternative use of these resources. If the same resources could have produced better results in some other way, or if the same results could have been produced with fewer resources, it is less than fully efficient.
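In schematic terms (the notation is introduced here for illustration, not taken from any standard), this can be expressed as a ratio of the value of results $V(R)$ to the value of the resources $V(C)$ used to produce them:

$$\text{efficiency} = \frac{V(R)}{V(C)},$$

with an intervention counting as less than fully efficient whenever some feasible alternative would yield a higher ratio for the same resources.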

Economists distinguish between different types of efficiency. At its simplest, efficiency is synonymous with technical efficiency. When we say that an intervention is technically efficient we mean that it produces as much as possible of a specified result, given the available resources. Alternatively, an intervention is technically efficient when it produces a planned result with a minimum of resources. In either case, technical efficiency is a matter of avoiding waste and reducing costs. An intervention is technically efficient when its productive capacity is fully utilised.

A more complex concept of efficiency is that of allocative efficiency. To be efficient in this sense an intervention must first of all be technically efficient. But this is not enough. Optimal allocative efficiency is not achieved unless the intervention produces the highest possible level of utility or value to society. If an alternative use of the same resources would have produced a higher level of aggregate value or utility, the intervention is not optimally efficient from an allocative point of view, although it may well be highly efficient in the technical sense.

It is important to understand the difference between the two concepts. In an assessment of technical efficiency, an intervention is evaluated against other ways of achieving the same concrete objective, regardless of the value of that objective. In an assessment of allocative efficiency, by contrast, an intervention is evaluated against every alternative use of the same resources, at least in principle. Whereas an assessment of technical efficiency focuses on the relation between inputs and outcomes (or outputs) and takes the objective as given, an assessment of allocative efficiency raises the more far-reaching question of whether the intervention is economically worthwhile, given the alternatives foregone.

Assessments of efficiency are known as economic evaluations. By a standard definition, an economic evaluation is a comparative analysis of alternative courses of action in terms of both their costs and their consequences. The object of comparison is usually another intervention, but it can also be a constructed model of best practice or standard of performance. The term partial economic evaluation is sometimes used to describe forms of analysis that lack the central element of comparison between alternatives, but satisfy the definition in other respects.

Among the full economic evaluations the following are the most important: cost-effectiveness analysis (CEA), cost-utility analysis (CUA) and cost-benefit analysis (CBA). All three are assessments where an analysis of outputs and effects like those discussed in previous sections is coupled to an analysis of costs. The most important of the differences between them concerns the analysis of the outputs and effects. In CEA results are measured in terms of natural units: number of households supplied with fresh water, number of persons cured of a particular disease, etc. In CBA results are measured in a variety of ways but valued in monetary terms. In CUA, finally, which occupies a middle ground between the other two, results are reduced to a generic, non-monetary measure of utility that is applicable to a particular type of intervention. For example, in the health sector qualitatively different interventions are made comparable through such generic measures of health status as Quality Adjusted Life Years (QALYs) and Disability Adjusted Life Years (DALYs).
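As a rough illustration of how a CEA comparison works (the interventions and all figures below are invented for the example), the analysis reduces to cost per natural unit of result:

# Cost-effectiveness analysis (CEA): compare interventions pursuing the
# same objective by their cost per natural unit of result.
# All names and figures are hypothetical.
interventions = {
    "borehole drilling":  {"cost": 400_000, "households_supplied": 2_000},
    "piped water scheme": {"cost": 900_000, "households_supplied": 3_600},
}

for name, data in interventions.items():
    ratio = data["cost"] / data["households_supplied"]
    print(f"{name}: {ratio:.0f} per household supplied with fresh water")

# A lower cost per unit indicates higher technical efficiency for this
# particular objective; CEA says nothing about whether the objective
# itself is the best use of the resources (allocative efficiency).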

Which of these types of evaluation is the most appropriate in a particular study depends largely on the question to be answered. If we are interested in the technical efficiency of an intervention in relation to alternative ways of achieving the same objective, CEA could be the appropriate tool. If the question concerns allocative efficiency, on the other hand, CBA would be a much better choice. When we are dealing with allocative efficiency on a societal scale there is in fact no alternative to CBA. CUA is also a method for analysing allocative efficiency, but it is more limited in scope. It is used for answering questions about resource allocation at the level of a particular sector of the economy when measuring outcomes in monetary terms seems inappropriate. For example, by using QALYs and other generic outcome measures specific to the health sector, health economists can compare the utility of different types of health care interventions, while avoiding some of the moral problems of measuring the value of human life in monetary terms.

CBA is the most powerful but also the most debated of the different methods of economic evaluation. In a CBA the evaluator first seeks to identify all the significant costs and benefits of an intervention, direct and indirect, tangible and intangible, and then attempts to estimate the value of each one of them in monetary terms.


The costs and benefits may include factors like a healthy or aesthetically attractive environment and human life itself. Costs and benefits that are difficult to measure in monetary terms are sometimes set aside for separate consideration when the CBA proper has been completed. In the final steps of the analysis, costs and benefits are added up and compared. If the total value of the benefits exceeds the total value of the costs, the intervention is considered worthwhile, otherwise not.

CBA is used in prospective appraisals of programmes and policies as well as in evaluations of completed interventions. It is intended to facilitate an economically rational use of scarce resources; this is its whole purpose. One of its main strengths is that it provides a tool for systematically setting benefits against costs. As both critics and supporters would agree, however, CBA is not without limitations. One is that it tends to favour the interests of the present generation over the interests of future generations. Since the values of long-term benefits and costs are calculated from a standpoint in the present – a technique known as discounting – it can lead to an altogether too light-hearted treatment of issues of environmental sustainability and intergenerational equity. When the time-scale of the analysis is restricted to a period of 30–40 years, however, there are no such problems.
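The mechanics of discounting can be shown in a few lines of Python (the discount rate and cash flow are assumptions made for the example): each future value is divided by (1 + r) raised to the number of years ahead, so benefits and costs in the distant future carry little weight in the present total.

# Discounting in CBA: the present value of a benefit B received in year t
# at discount rate r is B / (1 + r)**t. All figures are hypothetical.
r = 0.05               # assumed 5 per cent annual discount rate
benefit = 100_000      # constant annual net benefit in some currency unit

for t in (1, 10, 30, 50):
    present_value = benefit / (1 + r) ** t
    print(f"year {t:>2}: present value {present_value:,.0f}")

# At 5 per cent, a benefit arriving 50 years ahead is worth less than a
# tenth of the same benefit today, which is why discounting can understate
# long-term environmental costs and benefits.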

Another limitation concerns the analysis of distributive effects. In CBA an intervention is regarded as socially useful and economically worthwhile if it increases the total amount of satisfaction in society as measured by the market or some surrogate of the market, such as consumers’ stated ‘willingness to pay’. However, simply aggregating costs and benefits can easily conceal the fact that there may be losers as well as winners. Questions of distribution are usually discussed as an additional issue in CBA. Still, the distributive effects are not a central part of the analysis of efficiency per se.

To describe this as a limitation of CBA, however, is perhaps not quite correct. What it means is that CBA is concerned with economic efficiency rather than equity or justice. Still, in development co-operation, where we are concerned with poverty reduction and human rights, this is something to bear in mind. Normally, assessments of efficiency provided by CBA, as indeed any efficiency assessment, should be complemented by considerations based on other criteria. The point to remember is that all costs and benefits are relative to particular individuals or groups.9

Standard questions about efficiency:

■ Has the evaluated intervention been managed with reasonable regard for efficiency? What measures have been taken during planning and implementation to ensure that resources are efficiently used?

■ Could the intervention have been implemented with fewer resources without reducing the quality and quantity of the results?

■ Could more of the same result have been produced with the same resources?

■ Could an altogether different type of intervention have solved the same development problem but at a lower cost?

■ Was the intervention economically worthwhile, given possible alternative uses of the available resources? Should the resources allocated to the intervention have been used for another, more worthwhile, purpose?

9 Useful reading on economic evaluation:
Belli, P. et al. Handbook on Economic Analysis of Investment Operations. Washington, D.C.: World Bank, 1998.
Drummond, M. F. et al. Methods for the Economic Evaluation of Health Care Programmes. Oxford: Oxford University Press, 1997.
Levin, H. M. Cost-Effectiveness. A Primer. Newbury Park: Sage, 1983.
Rossi, P. H. et al. Evaluation. A Systematic Approach. Seventh Edition. Thousand Oaks: Sage, 2004.


Chapter 3: Evaluating Poverty Reduction

This chapter starts with the observation that difficulties in evaluating development initiatives against the goal of poverty reduction are in some cases due to a lack of clarity regarding poverty and poverty reduction rather than to insufficient understanding of evaluation. The following topics are briefly discussed from an evaluation point of view:

■ The multidimensionality of poverty

■ The diversity of poverty within and between societies

■ Direct and indirect intervention strategies

■ The alignment of development assistance with partner country development strategies and administrative frameworks

■ Empowerment and popular participation

■ Gender mainstreaming

■ Poverty reduction and the environment



3.1 Evaluating poverty reduction

For Sweden poverty reduction is the overall goal of development co-operation. Support for human rights and democracy, economic growth, gender equality, health and education, sustainable use of natural resources and protection of the environment are all regarded as part and parcel of the struggle against poverty.10

It follows therefore that poverty reduction is also the main standard against which Swedish development co-operation efforts should be evaluated. To put it very simply, if it turns out that a strategy, programme or project supported by Sweden has not contributed to poverty reduction in some significant way, we ought to conclude that it should not have been selected for support in the first place. Similarly, if we wish to claim that one intervention is superior to another, all things considered, we should be able to put forward a good argument to the effect that it has served, or is likely to serve, the cause of poverty reduction better than the other.

When planning to evaluate a development intervention in a poverty reduction perspective, the first thing to remember is that this is no different from evaluating it in relation to any other objective or goal. All that is said in this manual about evaluation in general applies to evaluations focused on poverty reduction. When we find it difficult to evaluate an intervention in relation to its contribution to poverty reduction, it is usually not because we lack any special evaluation tool or technique, although this may also be the case. More commonly, it is because we have not fully considered what poverty is all about, or because it is unclear how the evaluated intervention is expected to help reduce poverty.



10 Directives for Swedish involvement in international development co-operation are laid down in the Policy for Global Development adopted by the Swedish Parliament on December 16, 2003. For development co-operation the main goal is to create pre-conditions for poor people to improve their living conditions. As emphatically stated by the policy, Swedish support for development should have a rights-based perspective and consistently reflect the experiences and priorities of poor people. The following components are singled out as particularly important:

• Democracy and good governance;
• Respect for human rights;
• Gender equality;
• Sustainable use of natural resources and protection of the environment;
• Economic growth;
• Social development and security;
• Conflict prevention and resolution;
• Global public goods.



3.2 Implications of multidimensionality

As understood in the development community, poverty is multidimensional. It not only involves material deprivation and lack of economic opportunity, but also vulnerability and deprivation with respect to health and education, power and influence, social status and human dignity. In Sida’s view, poverty is essentially a combination of lack of power and choice and lack of material resources.11

This has far-reaching implications for evaluation. When evaluating a development co-operation intervention in a poverty reduction perspective, we need to look at it in relation to all the main dimensions of poverty:

➔ Will it increase the assets of poor people, create better opportunities for poor people to earn a living, or otherwise improve their material standard of living?

➔ Will it have a significant impact, positively or negatively, on the health and life chances of poor people?

➔ Will it provide poor people with education and increase their access to useful and otherwise valuable information and knowledge?

➔ Will it strengthen the rights of poor people and make state organisations more responsive to their needs and interests?

➔ Will it empower poor people, individually or collectively? Will it increase their ability to assert their rights in relation to the state and more affluent citizens?

➔ Will it make poor people less vulnerable to the adversities of armed conflict, natural and humanitarian disasters, market fluctuations, and other untoward developments?

All these questions should be considered during the initial stages of an evaluation, if only in a tentative fashion. In an evaluation of an educational programme, for example, we could start with the educational impact itself. Since illiteracy and lack of knowledge are part of poverty, improving poor people’s access to education is an immediate way of reducing their poverty. But, obviously, providing poor people with access to education may not be the sole contribution to poverty reduction of an educational programme. It could also have numerous indirect effects on poverty, both positive and negative. It could make the beneficiaries of the support more competitive in the labour market, for example, or it could empower them in relation to local authorities, both of which could be more important impacts than the direct educational one. A conceivable negative impact is that the programme might benefit relatively privileged people rather than the disadvantaged, and thereby reinforce rather than reduce existing inequalities.

One would expect likely effects on poverty to be well considered in funding proposals and other documents underlying interventions, but occasionally this is not the case. In some cases, positive impacts in relation to poverty are simply taken for granted. In other cases, poverty reduction is the stated goal of the intervention, although it is not clear how the outputs from the intervention are expected to lead to this result. Questions about possible negative effects are often ignored.

Faced with such problems, the evaluators' first task is to reconstruct the intervention logic in the manner suggested in the second part of this manual. Without a clear idea of what the intervention reasonably could be expected to accomplish, formulating evaluation questions and designing a study is bound to be difficult.

11 Perspectives on Poverty. Sida. October 2002.




3.3 The poor are not a homogenous category

To be poor is always to be deprived of goods that are essential to human well-being. What this actually means, however, may vary significantly from one period, place, society, group, or person to another. As stressed in policy documents and guidelines, an intervention that is intended to contribute to poverty reduction should be based on a careful analysis of the characteristics of poverty in the society, area, or location where it is implemented. Such an analysis revolves around a few key questions:

➔ Who are the poor? What are their characteristics in terms of gender, age, household status, ethnicity, religion, occupation, and so forth?

➔ How are they poor? What is the situation of different groups of poor people in terms of income and consumption, human rights, political power, social discrimination, gender status, health and education, natural resources dependency, occupation, etc.? How do the different dimensions of poverty interact in the case of this particular group?

➔ What are the obstacles preventing different categories of poor people from moving out of poverty? Lack of economic growth in the surrounding society? Lack of secure property rights with regard to land and other natural resources? Lack of marketable skills and resources? Lack of security? Lack of political power and voice? Discrimination by gender or ethnic origin? Etc.

➔ What is there to build on? How could strategies for coping with poverty be strengthened? What are the resources and opportunities of the poor?

Raised for the first time during planning, questions such as these recur at the stage of evaluation. As evaluators or users of evaluations, we need to know if the intervention to be evaluated is built on a correct understanding of poverty in its local context. Without such knowledge there is no way of telling whether the intervention is relevant in terms of targeting, or if the support that it offers is well designed, given the aspirations and overall situation of the targeted group or category. Without a proper understanding of the local meaning of poverty and the opportunities and constraints facing the poor, a donor may easily end up targeting the wrong categories, or supporting a correctly targeted category in the wrong way.

Note that poverty varies within as well as between societies, and that this is easily overlooked. Development organisations are not immune to overly simplified notions about the characteristics of poverty. In the field of rural development, for example, stereotyped ideas of rural society as an aggregate of small-holder peasant households have sometimes stood in the way of a full appreciation of the complexities of social structures and livelihood patterns in the countryside. A development strategy that is based on such notions is obviously not going to be of much direct help to all those who are landless and depend on other sources of income.

Box 12

Evaluation and HIV/AIDS

When preparing evaluations of projects and programmes in societies severely affected by HIV/AIDS, the epidemic should always be considered. Not only are there important questions concerning the impact of development interventions on the epidemic and the problems that it creates, but there are also important questions about how the epidemic affects the interventions themselves.



Note also that poverty changes and fluctuates over time, sometimes in unpredictable ways, as when a community or society is thrown into a violent conflict or is hit by a natural catastrophe or a major epidemic. The impact of HIV/AIDS in Sub-Saharan Africa is the prime example, creating a development crisis of unprecedented scale. For the international development community as well as for national governments it is a challenge that requires a reassessment of development strategies at every level. The role of evaluation is to provide feedback to those who are responsible for this rethinking and help disseminate useful experience.

3.4 Direct and indirect intervention strategies

We should also consider the fact that strategies for combating poverty may be more or less direct and more or less inclusive. Sida makes a threefold categorisation of poverty reduction interventions:12

■ Interventions directly and predominantly focused on different categories of poor people. Examples are targeted safety nets, labour-intensive works programmes, support to refugees and internally displaced persons, and, in some cases, support to non-governmental organisations.


■ Inclusive actions where poor people are affected as members of a broader group of beneficiaries, including sector-wide approaches that are geared to sectors of importance to the poor (education, rural development, small-scale enterprises).

■ General structural approaches aimed at underpinning pro-poor policies. These give support to efforts for democracy and good governance, macro-economic stability, increased accountability, transparency and the fight against corruption.

These types tend to prompt somewhat different kinds of evaluation issues. Where direct interventions and interventions of the inclusive type are concerned, questions of targeting are likely to be prominent. If programme benefits are earmarked for a particular category of poor people, we should make sure that all subgroups of the targeted category have benefited, and that people who are not entitled to them have not captured too large a portion of the benefits. In the first case, we are concerned with the issue of coverage – are benefits evenly spread across the target group? Have some parts of the target group been favoured over others? In the second case, we are concerned with leakage of benefits to people outside the target group. Have benefits been appropriated by groups that were not entitled to them?
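Coverage and leakage lend themselves to simple ratios. The following sketch is purely illustrative – the manual prescribes no formulas, and all figures and names below are hypothetical; a real study would derive such numbers from survey or monitoring data:

```python
# Hypothetical illustration of coverage and leakage ratios for a
# targeted programme. All figures are invented.

target_population = 10_000   # poor people the programme is meant to reach
reached_in_target = 6_500    # target-group members who actually received benefits
total_beneficiaries = 8_000  # everyone who received benefits

# Coverage: share of the target group actually reached.
coverage = reached_in_target / target_population

# Leakage: share of all beneficiaries who fall outside the target group.
leakage = (total_beneficiaries - reached_in_target) / total_beneficiaries

print(f"Coverage: {coverage:.0%}")  # 65% of the target group benefited
print(f"Leakage:  {leakage:.0%}")   # 19% of beneficiaries were outside the target group
```

High coverage with high leakage, or low coverage with low leakage, point to quite different targeting problems, which is why the two questions are asked separately.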

We must also consider the economic and political costs of targeting. On the economic side, there is an important question whether a targeted programme is a more cost-effective way of reaching programme goals than a non-targeted programme. On the social and political side we have to consider the possible adverse effects of singling out a particular low-status group for support, thereby perhaps antagonising large segments of the surrounding society. As one would expect, sustainability can be a serious problem in programmes that are directly targeted on the poor and provide no benefits to other citizens.

12 Perspectives on Poverty. pp 9–10.




When dealing with interventions of the inclusive type, on the other hand, it is important to make sure that poor people have equal access to the benefits, and that the programme has not become a means for privileged groups to further strengthen their advantage. While programmes of the inclusive kind are often more likely to be sustainable than programmes intended to benefit a particular group of poor people, they may have their own characteristic weaknesses. Exclusion of poor people is often likely to be one of them. When evaluating programmes of this type it is important to make sure that they have a pro-poor profile and that the cards are not stacked against the poor.

A common problem with regard to interventions of the indirect structural type is that it may be difficult to distinguish those that may have a major poverty reducing effect from those that have only marginal effects on poverty or no such effects at all.13 It is not unusual that the claim that an intervention will benefit the poor rests on uncertain assumptions about developments in a distant future. When this is the case, evaluators need to be on their guard. The longer the expected causal chain, and the larger the number of problematic assumptions concerning developments outside programme control, the more uncertain the impact. When dealing with such complex cases the evaluators' first task is to revisit the intervention logic, and carefully assess the risk analysis against existing expert and local knowledge. Are all the assumptions in the intervention model valid? Is the list complete?

It is important that we understand what can and cannot be accomplished through such an analysis. Assessing the quality of the intervention logic itself is always possible. Is it based on clear and coherent assumptions about contextual factors? Are posited causal links well supported by research and experience? Is the risk analysis reasonable? However, verifying that intended results have been produced as expected is a further and often much more difficult step. Not infrequently, the conclusion of the analysis of the intervention logic is simply that some or all of our questions cannot be empirically investigated.

As noted in Chapter 2, the difficulties of evaluating results tend to increase as we approach the upper reaches of the hierarchy of objectives. In logframe terminology, it is when the links connecting outcomes to impacts are investigated that the real difficulties appear. This is true for all types of intervention. While the posited links from outputs to outcomes can often be verified – at least if the outcome objectives have been clearly defined in behavioural terms – the connections between outcomes and impacts are often well beyond the reach of evaluation research.

One of the problems is that top-level goals are usually set very high. Project planning manuals recommend that intervention goals should not be set higher than necessary in order to justify the intervention. If we believe that improving communications for an isolated community is good enough as a justification of a road building project, this should be the main goal of the intervention. If the construction of the road is justified by an expected increase in trade, that should be the goal.

In development co-operation, however, interventions often have to be doubly justified. Providing a valuable benefit for a particular target group is not enough. We should also be able to explain how the supported intervention is expected to contribute to the achievement of development goals at a national or regional level. Sometimes we are expected to do this even when the intervention is relatively small. How does this particular project in support of a human rights advocacy group contribute to the development of democracy and human rights in the country as a whole? How important is this particular intervention, given alternative strategies for reaching the same overall development goal?

Not much reflection is needed to realise that questions like these are very different from questions about outcomes in relation to more narrowly defined objectives. Clearly, the two types of questions may require different types of answers. When faced with questions about development impact at the highest level, the appropriate answer is sometimes that the question itself is ill-conceived. At that level, reassessing the intervention logic may be the best that we can do.

13 David Booth et al. Working with Poverty Reduction in Sida. Ms. Stockholm, 2001.



3.5 Integrating donor support with partner country strategies

A further point is that there is a growing consensus in the development community that development co-operation results will improve if the external support from donors is aligned with the poverty reduction strategies and development policies of developing countries. The constraints and opportunities facing donors differ from country to country, of course, and there are countries where the government does not have the capacity to direct the development process as required by the current development agenda. Still, the ambition is the same everywhere. The partner country government and other organisations in the partner country should increasingly direct and control the development process; programmes should increasingly replace projects as the dominant mode of aid delivery; and among donors co-ordination should replace fragmentation.

Again, there are clear implications for evaluation. The most important is simply that the efforts of donors to support partner country ownership should be systematically assessed. A second is that there must be evaluations of the larger development co-operation system as well as evaluations of programmes and projects. A third is that questions about national ownership, capacity development, donor co-ordination, integration of external support with national systems of administration and so on should be systematically addressed in project and programme evaluations. Evaluating intended intervention outcomes and impacts is not enough. Interventions should also be assessed in relation to their systems-level effects and the agenda for supporting ownership.



Box 13 contains an extensive list of possible criteria for assessing the contributions of donors to the emerging development co-operation agenda. Designed for the evaluation of agency performance, several of these criteria can easily be translated into criteria for the assessment of individual programmes and projects.

3.6 Participation and empowerment

Chapter 1 notes that promoting participatory forms of monitoring and evaluation in programmes and projects supported by Sida is part of a wider Swedish policy of supporting popular participation in development. Here we make the related but different point that the same policy requires that participation be used as a criterion or principle of assessment in evaluations of development interventions supported by Sida.




The stated rationale for supporting popular participation is twofold: participation is an end in itself and participation increases the relevance and effectiveness of development co-operation initiatives. Promoting participation as an end in itself is obviously an important part of the overall effort to reduce poverty. While poverty is characterised by exclusion and disempowerment, participation is the very opposite. Support for the participation of poor people is the same as support for their empowerment and inclusion in society on equal terms.

Thus, when assessing development interventions against principles of popular participation we are concerned not only with the ability of poor people to take part in the planning, implementation and monitoring of the interventions themselves, although this is often important. A bigger question is how development interventions affect the ability of poor people to give voice to their interests and participate on an equal basis in society. As suggested in section 3.2 above, this is one of the questions that should always be reviewed when an evaluation is being prepared. Although some interventions are likely to have a much greater impact on structures of governance and participation than others, questions of popular participation are rarely irrelevant.



Note that the issue of participation overlaps with that of ownership. When we talk about national ownership, as distinct from government ownership, we refer to the relationship between the developing country government and its citizens as well as to the patterns of interaction between the former and its external partners. Are public institutions and government policy-making processes responsive to the interests of poor people and other citizens? Can poor people express their grievances and hold public authorities to account? Without popular participation, national ownership does not exist.

Box 13

Assessing development agency poverty reduction efforts

Is the development agency's country strategy based on the partner country's own assessment and strategy for addressing poverty? Is it based on a multidimensional poverty concept?

To what extent have the agency's activities been carried out jointly or in co-ordination with other development agencies, bilateral or multilateral?

To what extent have agency administrative and financial requirements been adjusted to, or harmonised with, the partner country's existing procedures or with those of other external partners, where these procedures are deemed appropriate?

To what extent has the agency implemented its support in a manner which respects and fosters partner country ownership?

Has the agency supported and strengthened country-led planning, implementation and co-ordination processes?

Has the agency helped facilitate civil society's participation (at local, national, or international level) in debating and deciding the contents of the country's poverty reduction strategy in ways that respect government efforts and concerns?

Has there been a clear, serious commitment of resources to poverty reduction?

Has a commitment been made to provide predictable resources over a medium-term planning timeframe?

Has sufficient care been taken to avoid duplication of effort and to build on complementarities across the external development community?

Have efforts been made to improve policy coherence within the agency, and, more broadly, across the full range of DAC Member government ministries and departments, and has progress been achieved?

Source: The OECD/DAC Guidelines for Poverty Reduction. Paris. 2001, p. 59.




3.7 Gender mainstreaming

Sida has a policy of mainstreaming gender equality in all its work. In the context of evaluation this means two things:

■ Evaluations initiated by Sida should produce information that is relevant to the implementation and follow-up of Sida's policy for promoting gender equality.

■ Evaluations should be designed, implemented and reported in a manner that is consistent with a policy of gender mainstreaming.

A first elaboration of these points is simply that women and men face different types of constraints and opportunities and that this must be reflected in evaluations. When we talk about poor people without further qualification, as we tend to do, gender-based differences are easily overlooked. To prevent this, standard questions concerning relevance, impact, effectiveness, etc. should normally be further defined in gender terms. Thus, instead of asking if a project is relevant to the needs and priorities of the poor, we should ask whether it is relevant to the needs and priorities of both poor men and poor women. In some cases such a specification may not be necessary, but often it will be all the more important.



We should also bear in mind that gender equality is a relational phenomenon involving men as well as women. It will always be important to know if and how the conditions of women (or men) have improved as a result of a particular intervention. Yet, what we need to know as far as gender equality is concerned is how relationships between men and women have changed as a result of the intervention. This point may seem obvious but it is easily ignored. It is not unusual that evaluations fail to consider how recorded changes in the conditions of women (or men) may affect established patterns of gender-based inequality.

A third point is that the pursuit of gender equality is part of the struggle against poverty, and should be evaluated as such. Thus, evaluators must not only examine interventions in relation to what they intend to achieve with regard to gender equality, but should also look at them in relation to the larger goal of poverty reduction. In many cases the link between increased gender equality and poverty reduction may seem obvious. In other cases, however, it may be less apparent. Thus, everything that was said about evaluation of interventions in a perspective of poverty reduction in sections 3.2–3.6 applies in this case as well.

A fourth point concerns the implications of Sida's mainstreaming policy for the planning, implementation and reporting of evaluations. As shown in Box 14 there are important questions about gender inequalities to be considered at all stages of the evaluation process. The practical implications of gender mainstreaming vary with the nature of the evaluated activities and the purpose of the evaluation itself. If mainstreaming turns out to be time-consuming or otherwise demanding, as it may, this simply shows that gender issues are of great importance to the evaluation.

Notice, finally, that gender mainstreaming can itself be made the subject of evaluation. If an organisation has adopted a policy of gender mainstreaming, it goes without saying that it may also want to evaluate its implementation.




Box 14

Gender issues in evaluation

PREPARATION

What do the funding proposal and related documents say about the importance of the intervention with regard to poverty reduction and gender equality? What is the project expected to achieve in these terms?

According to the same sources, what is the importance of gender-related factors in the implementation of the intervention?

What significance should the evaluation assign to questions about gender equality, given what we know about the intervention and its purpose?

What is the evaluability of identified gender equality issues? Are there baseline data for assessing gender equality impact?

Do the ToR clearly specify what the analysis of gender should include?

Is the recruited evaluation team competent to assess the gender equality issues in the ToR? Is their approach to gender sufficiently described in the tender documents?

THE RESEARCH PHASE

Do the evaluators consider the fact that men and women often communicate differently, and that women may not be able to express themselves freely in all situations?

Are gender-sensitive methods of data collection used where required?

Do the evaluators consult with both men and women?

Are data properly disaggregated by gender?

REPORTING

Does the report answer the questions about gender equality in the ToR?

Does the analysis of gender issues match the importance of these issues in the ToR?

Are findings about gender equality assessed in relation to the standard evaluation criteria of effectiveness, impact, relevance, sustainability, and efficiency?

FOLLOW-UP, DISSEMINATION AND USE

Has the need for gender-sensitive methods of feedback and communication been considered?

Has the evaluation report been disseminated to all interested parties?

Are conclusions and lessons concerning gender equality fed back into operations and spread to others who may be interested in them?



3.8 The environmental perspective

Sida has a policy of integrating a concern for the environment in all its activities, and it also supports development interventions that specifically promote a sustainable use of natural resources and protection of the environment.

As explained in Sida's Guidelines for the Review of Environmental Impact Assessments,14 all Sida contributions should include an environmental impact assessment (EIA). Made at the stage of planning and appraisal, such an assessment is intended to make sure that benefits to poor people do not entail unacceptable costs in terms of environmental degradation, now or in the future. It also looks for positive linkages between a policy of environmental sustainability and poverty reduction. Measures to improve the environment can lead to poverty reduction, just as poverty reduction efforts can have positive effects on the environment.

The questions about environmental impact raised during planning and appraisal should be revisited when evaluations are initiated. The original prospective assessment should not be taken for granted. It may have been invalidated by unexpected developments during the implementation phase or it may have been incorrect all along. The main question is how the evaluated intervention affects existing linkages between poverty and the environment. In some cases, such as certain infrastructure projects, the impact may be direct and easy to identify; in other cases, such as a sector reform programme or a health care project, it may be more difficult to detect, although no less important.


The scope of the assessment varies greatly with the type of intervention being evaluated. In some cases the environmental issues occupy the centre stage of the evaluation; in other cases they can be summarily dealt with in the preparatory phase. Yet, in no case can they be entirely disregarded. Box 15 lists questions about environmental consequences and their management as they appear at different stages of the evaluation process. Further guidance can be found in Sida's Guidelines for the Review of Environmental Impact Assessments.


14 www.sida.se/publications



Box 15

Evaluating environmental consequences

PREPARATION

What was the conclusion of the ex-ante Environmental Impact Assessment (EIA)? Has a follow-up of environmental consequences been made?

Should environmental issues be assigned a central or marginal position in the evaluation, given what we know about the potential environmental consequences of the intervention and their relation to poverty reduction?

Does the EIA provide sufficient baseline data to make a satisfactory evaluation of environmental impact possible? How can we compensate for weaknesses in the EIA?

Do the ToR contain clear directives for the evaluation of environmental issues?

Do the ToR make clear that the environmental analysis should cover the actual and potential impact of the intervention on the conditions of the poor?

Do the tenders from interested evaluators reflect a proper understanding of the environmental issues at stake in the evaluation? Does the evaluation team have the competence and experience required for an assessment of the intervention in the perspective outlined in the ToR?

Have the evaluators been properly briefed about Sida's Guidelines for the Review of Environmental Impact Assessments and other relevant instructions regarding the environmental dimensions of development interventions supported by Sida?

THE RESEARCH PHASE

Have Sida and its partners provided the evaluators with the necessary support to evaluate the environmental issues as required by the ToR?

REPORTING

Does the report adequately answer the questions about environmental consequences in the ToR? Have both positive and negative consequences been considered?

Does the report answer questions about the analysis and management of environmental issues during the planning and implementation of the intervention?

Does the report contain a reassessment of the EIA carried out during planning?

Are findings regarding environmental dynamics assessed in relation to the standard evaluation criteria of effectiveness, impact, relevance, sustainability, and efficiency?

FOLLOW-UP, DISSEMINATION AND USE

What are the practical implications of findings and conclusions regarding environmental impact and the management of environmental issues by Sida and its partners? What are the lessons learned?

Are findings and conclusions regarding environmental issues properly reflected in Sida's management response to the evaluation?

Are the findings and conclusions of the evaluation properly disseminated within Sida and effectively made available to Sida's partners?




Part Two: The Evaluation Process Step by Step


The Evaluation Process Step by Step

The step-by-step guidelines presented in this part of the manual are designed to help Sida and its partners manage evaluations. The guidelines cover the main steps of the evaluation process, and provide practical advice on how evaluations can be tailored to the needs and interests of their intended users.

The guidelines are divided into five sections corresponding to the five main steps of the evaluation process. Each section begins with an overview of the main tasks during that particular step. The tasks are then further described in a series of sub-sections. At the end of each sub-section, there is a checklist that briefly summarises the tasks from a practical "how-to-do-it" perspective.

The guidelines are primarily directed to evaluation managers, i.e. Sida staff and others who are responsible for managing and co-ordinating evaluation initiatives. In many cases the evaluation manager is identical with the programme officer managing Sida's support to the evaluated intervention. As we are focusing on roles and relationships in the evaluation process, however, we consistently refer to the person administrating the evaluation as the evaluation manager.

The guidelines are relevant for most types of evaluation. The described tasks are essentially the same, irrespective of whether the object of the evaluation is a project, programme, intervention theme, or aid modality.


[Process diagram: Step 1 Initial considerations → Step 2 Preparation of evaluation tasks → Step 3 The evaluation research phase → Step 4 Reporting and dissemination → Step 5 Management response]


Step 1

Initial Considerations

In this first step of the evaluation process, the most important tasks of the evaluation manager are to:

1.1 Involve interested stakeholders,

1.2 Define the purpose of the evaluation,

1.3 Establish an organisation for evaluation management.

1.1 Involving stakeholders

As briefly discussed in Chapter 1.4, evaluations commissioned by Sida should be carried out in a spirit of partnership. Sida's co-operation partners and other important stakeholder groups should be encouraged to actively participate in the evaluations that Sida initiates. For Sida, stakeholder participation is an end in itself as well as a means.

The pragmatic argument for participation is that the quality of evaluations tends to improve when co-operation partners and other stakeholder groups are actively involved in the evaluation process. Among the benefits of participation are greater accuracy and depth of information, increased credibility and acceptance of findings, and better correspondence to the practical concerns of stakeholders.

Stakeholder participation may, however, increase both cost and time. Therefore, the evaluation manager and others responsible for the evaluation should discuss to what extent different stakeholder groups should participate, given their legitimate interest in the evaluation as well as costs, timing and other practical aspects.





Stakeholder groups in evaluations

CO-OPERATION PARTNERS

The parties that request donor support and that are responsible for planning, implementing and following up the evaluated intervention.

PRIMARY STAKEHOLDERS

Target groups who benefit from the results of the evaluated intervention (beneficiaries), but also those groups of people who may be adversely affected.

DONOR ORGANISATIONS

Sida and other donor organisations that provide financial, technical and other types of support to the evaluated intervention.

INTERESTED PARTIES

Groups that have other stakes in the evaluation, for example partner governments, implementing consultants, and organisations channelling donor support.

Involving co-operation partners

When Sida initiates an evaluation and starts to involve interested stakeholders, one of the first steps is to consult with co-operation partners. Through these consultations Sida, firstly, establishes what interest co-operation partners have in the evaluation. Do they wish to be actively involved in the evaluation process, and do they want the evaluation to be geared towards their own management purposes? If yes, a joint evaluation of the kind discussed in Chapter 1 could be a suitable arrangement for evaluation management. Secondly, Sida and its co-operation partners may have to discuss how other partner country stakeholders should be involved in the evaluation process.

Involving co-financing donors

When Sida is one of several development agencies supporting the intervention, it should, as far as possible, seek to co-ordinate its evaluations with those of the other agencies. To avoid unnecessary duplication of efforts, the evaluation manager should examine whether the planned evaluation can be jointly carried out or if there are other means of collaboration. In some cases, it may be possible for Sida to use information already produced by other agencies, rather than to undertake a study of its own. As a rule, however, co-operation in evaluation should be discussed well in advance of any particular study.

Involving primary stakeholders

It is important to consider how target groups and other primary stakeholders should be involved in the evaluation. This should be done as early as possible, even when target groups cannot realistically be expected to actively participate in preparatory work. It is essential that the evaluation is designed to enable target groups to constructively participate in the research process and express their point of view. Note that target groups are often neglected in evaluation processes, even when they represent a key source of knowledge and information.




Involving other interested parties

Evaluations may have impacts also on stakeholder groups that do not participate actively in the evaluation process and do not belong to the intended beneficiaries. Evaluation ethics require that such groups are informed about the evaluation and given real opportunities to express their interests.

Using stakeholder resources efficiently

Every evaluation is dependent on the co-operation of stakeholders for information and practical arrangements. For example, co-operation partners and implementing consultants are often requested to provide documentation and prepare meetings. They are almost always expected to be interviewed and share their views on intervention performance and results.

The evaluation manager must ensure that the evaluation does not overtax stakeholders' time and resources. Stakeholders should be informed as early as possible about the evaluation. This will assist them in planning their inputs to the study without straining other commitments and duties.



CHECKLIST

for involving key stakeholders

■ Carry out a preliminary stakeholder analysis, and discuss with partners as early as possible how key stakeholder groups might wish to be involved in the process.

■ Inform major stakeholder groups about the evaluation initiative without unnecessary delay.

■ Indicate, clearly and as early as possible, how different stakeholder groups are expected to contribute to the evaluation with information and practical support.

1.2 Defining the evaluation purpose

Defining the evaluation purpose is one of the most important tasks in the early stages of the evaluation process. A clear purpose helps the formulation of evaluation questions, and makes it easier for external evaluators to produce a study that Sida and its partners may find relevant and useful.


The intended use of the evaluation

The evaluation purpose should be formulated in a way that specifies how the information produced by the evaluation is to be used. "To analyse project effectiveness and impacts", "assess the relevance of the project", and the like are descriptions of the means by which evaluation purposes can be achieved; they are not descriptions of the purposes themselves. The purpose of an evaluation is always an action that the evaluation is intended to make possible.

As noted in Chapter 1, the purpose of development co-operation evaluations can be summarised in terms of accountability and learning. Below are some concrete examples of purposes for which Sida initiates evaluations.

Examples of Sida evaluation purposes:

■ Provide Sida and its partners with an input to upcoming discussions concerning the preparation of a second phase of the evaluated intervention.

■ Help Sida and its partners make sure that the intervention is well on track and is likely to reach its objectives.

■ Help Sida decide whether support to the intervention shall continue or not.

■ Provide Sida with relevant and useful background information for an annual sector review.

■ Gather data about the effectiveness and impacts of the evaluated interventions in order to help Sida elaborate a results-based country strategy.

■ Provide Sida and its partners with lessons that can be used in policy work or when designing programmes and projects elsewhere.

■ Provide information about results that Sida and its partners can use in reporting to principals and the general public.

If no similar purpose can be identified, it may signal that the evaluation is not sufficiently useful to justify the investment. In such cases, the evaluation could be postponed until it can better be fed into operative and policy-making processes. Perhaps it should not be carried out at all. Clearly, there is no point in carrying out evaluations unless they can be used productively.

Consulting with partners and other possible users

When Sida initiates an evaluation it has a tentative idea of how it intends to use the study. This, however, does not mean that the definition of the evaluation purpose is always Sida's exclusive responsibility. If Sida's co-operation partner or a co-financing donor wishes to participate actively, the definition of the purpose becomes a joint undertaking. If the information needs of the prospective partners are too diverse to be accommodated in one and the same evaluation, the idea of a joint study should be abandoned. Where accountability is the main purpose of the evaluation, a joint study may also not be the best option.




Several rounds of consultations must often be held before an acceptable definition of the purpose or purposes of the evaluation is found. This process must be allowed to take time. Most Sida evaluations are listed in Sida's annual evaluation plan, often months before details are elaborated. Therefore, Sida usually has enough time to consult with relevant stakeholders.

Note that the timing of the evaluation is likely to be important. The evaluation must be phased in such a way that the participants can make good use of it. If the evaluation cannot be completed before it is to be used, it should be redesigned or cancelled. It must be taken into account that different stakeholders may have different time frames and deadlines.



CHECKLIST

for the definition of the evaluation purpose

■ Identify possible users of the evaluation, i.e. those groups that are expected to make use of the evaluation process and its results.

■ Ensure that the evaluation purpose is defined through a participatory process that engages all interested users of the evaluation.

1.3 Establishing an organisation for evaluation management

In many cases, the necessary contacts between the evaluation manager and different stakeholder groups can be maintained without establishing a formal organisation for evaluation management. However, in complex evaluations, where several stakeholder groups have major interests, a reference group or steering group is usually formed.

While a reference group is an advisory body, a steering group is formed to give stakeholders an opportunity to participate in deciding key issues during the evaluation process. Involving representatives of major stakeholder groups is likely to enhance the credibility of the evaluation and may also create a more active interest in the results of the study among those involved.


A typical steering group would include representatives of at least one of the following groups: co-operation partners, implementing organisations and co-financing donors. These are typical tasks for a steering group:

■ Providing inputs to the terms of reference,

■ Formally accepting the terms of reference,

■ Monitoring the recruitment of external evaluators,

■ Approving the selection of evaluators,

■ Commenting on draft reports, and

■ Approving the final report.


CHECKLIST

for establishing an organisation for evaluation management

■ In consultation with stakeholders establish practical arrangements for communication and co-operation during the evaluation process.

■ Consider forming a reference group or a steering group when the evaluation is complex, involves several stakeholder groups, and the intended users include non-Sida stakeholder groups.


Step 2

Preparation of Evaluation Tasks

In this second step of the evaluation process, the most important tasks for the evaluation manager are to:

2.1 Review the intervention selected for evaluation,

2.2 Define the questions that the evaluation should answer,

2.3 Assess evaluability, i.e. the extent to which these questions are answerable,

2.4 Consider the option of dealing with the above tasks in a pre-study,

2.5 Estimate the total budget of the assignment,

2.6 Formulate terms of reference for a team of external evaluators, and

2.7 Recruit a team of evaluation consultants.

The complexity of the preparatory tasks varies from one evaluation to another. For example, if the evaluation manager is already familiar with the intervention and its objectives, and the intervention is a relatively free-standing project of limited scope, the initial review is usually a fairly simple task. Also, if the evaluation is geared towards planning and implementation processes, rather than outcomes and impacts, the assessment of evaluability may not be very complicated.

If the preparatory tasks turn out to be complex and time-consuming, the manual suggests alternative ways of handling them. First, there is always the option of including tasks 2.1–2.3 in a separate pre-evaluation study carried out by consultants. Second, there is the alternative or additional option of including some of these tasks in the inception phase of the evaluation assignment itself. Directions for these two options are given in Sections 2.4 and 3.1 respectively.




2.1 Reviewing the intervention

It is important to review, as early as possible, the main features of the intervention selected for evaluation. Without a good understanding of the intervention and its intended logic, we cannot identify which questions should be answered by the evaluators.

Time periods and intervention phases

Development interventions often consist of distinct but related phases. While a phase may cover a period of 2–3 years, the intervention itself may date back several years and even decades. The question of when individual interventions actually start and end is not always easy to answer. Still, it is important that the terms of reference for the evaluation are precise about what periods and phases the study should deal with. The decision may depend on a variety of factors, not least which information we expect from the evaluation. This must be discussed between the intended users of the evaluation.

The intervention logic

When preparing the evaluation, the evaluation manager should summarise the intervention logic in the conventional logframe format, including planned activities, expected outputs, outcomes and impacts, as well as the indicators and assumptions on which the intervention logic is based. Without a preliminary analysis of goals and objectives, indicators and assumptions, etc., it is difficult to pose the right questions. A preliminary analysis of the intervention logic will also be useful to the evaluators. Otherwise, they may waste plenty of time figuring out what they actually should be evaluating.
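By way of illustration only – the manual does not prescribe any particular format or tool, and every name and entry below is hypothetical – such a logframe summary can be thought of as a small structured record linking the levels of the goal hierarchy:

```python
# A minimal, hypothetical sketch of a logframe summary. The field names
# follow the conventional logframe levels; the example entries are invented.
from dataclasses import dataclass, field

@dataclass
class Logframe:
    activities: list[str]        # what the intervention does
    outputs: list[str]           # goods and services the activities produce
    outcomes: list[str]          # short/medium-term effects on the target group
    impacts: list[str]           # long-term development effects
    indicators: dict[str, str]   # verifiable indicator per objective
    assumptions: list[str] = field(default_factory=list)  # conditions outside programme control

rural_schools = Logframe(
    activities=["Train teachers", "Build classrooms"],
    outputs=["120 teachers trained", "15 classrooms built"],
    outcomes=["Enrolment of girls rises in target districts"],
    impacts=["Improved literacy among the rural poor"],
    indicators={"Enrolment of girls rises in target districts":
                "Net enrolment rate, disaggregated by gender"},
    assumptions=["District authorities fund recurrent costs"],
)
```

Laying the summary out this way makes it immediately visible where an indicator or assumption is missing, which is exactly the gap the evaluation questions will have to work around.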

Background information for an analysis of the intervention logic is found in project documents and Sida's rating system (SiRS). The rating system summarises the intervention logic, and describes the outputs of all interventions with a total budget exceeding SEK 3 million and an agreement period of at least three years.

When project documents and SiRS lack required information, consultations with co-operation partners and other stakeholder groups may help fill the gaps. Co-operation partners and other stakeholders may also be asked to check that the documentary analysis is correct.




Activities and outputs

Information about activities and outputs is necessary when preparing evaluations. It allows us to check if the identified intervention logic indeed is plausible and hence useful for guiding the evaluation. There is little point in carrying out an evaluation on the basis of an intervention logic that does not correspond to what actually took place during implementation.

In many cases, a good way to gather information about activities and outputs is to review the budget and activity statements that partner organisations send to Sida in the form of quarterly, annual and final progress reports. The review should identify, as far as possible, the content, budget and total expenditure of distinct activity and output categories.
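As a purely hypothetical sketch of what such a review boils down to – the output names, figures and report structure below are invented, not drawn from SiRS or any real reporting format – the task is essentially to sum budget and expenditure per output category across reports:

```python
# Hypothetical illustration: tabulating budget against expenditure per
# output category across several progress reports. All figures invented.
from collections import defaultdict

progress_reports = [
    {"output": "Teacher training", "budget": 400_000, "spent": 310_000},
    {"output": "Classroom construction", "budget": 900_000, "spent": 950_000},
    {"output": "Teacher training", "budget": 200_000, "spent": 180_000},
]

totals = defaultdict(lambda: {"budget": 0, "spent": 0})
for item in progress_reports:
    totals[item["output"]]["budget"] += item["budget"]
    totals[item["output"]]["spent"] += item["spent"]

for output, t in totals.items():
    # Disbursement rate per output flags over- and under-spending at a glance.
    print(f"{output}: budget {t['budget']}, spent {t['spent']} "
          f"({t['spent'] / t['budget']:.0%})")
```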

Note that SiRS can also be helpful in this respect. However, since it is limited to data on six outputs (also for more complex interventions) and does not automatically provide information about activities and the budgets and disbursements per output, it must be supplemented by an analysis of progress reports.

Complex interventions

Interventions supported by Sida often embrace several activity and output categories, or components, each with its own distinct set of objectives. To the extent that these components have a common objective, such interventions are normally referred to as programmes.

Complex interventions such as these represent a special challenge. If all the components of a complex intervention were to be evaluated, the evaluation process could itself become very difficult, and perhaps unacceptably time consuming and costly. It may therefore be necessary to focus the evaluation on a limited number of components. In many cases, the evaluation purpose can be achieved if the study is limited to components that are financially important or central to the main theme of the intervention.






CHECKLIST

for the review of the evaluated intervention

■ Specify what intervention phases the evaluation should deal with, and make sure that the intervention logic, and the activities and outputs during those phases, are reviewed in detail.

■ Review planning documents and SiRS data and summarise the intervention logic with the normal distinction between planned activities, outputs and effects.

■ Review SiRS data and co-operation partners' progress reports and summarise the content, budget and expenditure of the activities and outputs implemented and produced to date.

■ Analyse the extent to which the activities carried out can reasonably be expected to meet the objectives of the intervention.

■ If necessary, consult with co-operation partners to obtain more specific, plausible and up-to-date accounts of intervention objectives and assumptions.

■ Consider focusing on key components when the intervention is complex and involves several components.

■ If these tasks are likely to be complex and time-consuming, consider commissioning a pre-evaluation study from external consultants.

2.2 Formulating evaluation questions

An evaluation should not seek to answer more questions than required for its practical purpose. Limiting the focus and scope of the study helps ensure an efficient use of evaluation resources. On the other hand, a narrow focus or scope may not be fully compatible with a participatory approach. Where several stakeholder groups are involved, the list of questions to be answered by the evaluation is likely to become longer.

Regardless of the number of stakeholders, however, it is necessary to formulate evaluation questions and choose what evaluation criteria, policy issues, and standards of performance should be included in the evaluation. The key question for this step in the process can be formulated as follows: which information is required in order to satisfy the practical purpose for which the evaluation is undertaken?

Evaluation criteria

The criteria of relevance, effectiveness, impact, sustainability and efficiency discussed in Chapter 2 provide us with a useful point of departure. These criteria are basic yardsticks for evaluation and can normally be accepted as such by all stakeholders. Each one represents an important general question that is relevant to every development intervention:

EFFECTIVENESS

Has the intervention achieved its objectives or will it do so in the future?

IMPACT

What are the overall effects of the intervention, intended and unintended, long term and short term, positive and negative?

RELEVANCE

Is the intervention consistent with the needs and priorities of its target group and the policies of the partner country and donor agencies?

SUSTAINABILITY

Will the benefits produced by the intervention be maintained after the cessation of external support?

EFFICIENCY

Can the costs of the intervention be justified by the results?

As stated in Sida's Evaluation Policy, these questions should always be considered when Sida initiates an evaluation.15 They are not compulsory, but none of them should be set aside without a tentative assessment of their bearing on the evaluation. In many cases there are good reasons to include all five in the study. Chapter 2 discusses them in detail and provides examples of more specific evaluation questions related to them.

Policy issues

The Swedish policy for development co-operation, with its main goal of poverty reduction, is a second major point of reference for the formulation of evaluation questions. Since all interventions supported by Sida should contribute to poverty reduction in one way or another, questions about poverty reduction should be discussed in virtually all evaluations initiated by Sida. If nothing else, the analysis of the intervention logic should indicate how the evaluated activities are intended to serve this end. Normally, one would also expect the evaluation to make an assessment of the plausibility of the intervention logic.

Chapter 3 discusses what evaluating interventions against the poverty reduction goal may entail. It underlines the importance of assessing interventions in relation to a multi-dimensional and rights-based concept of poverty and reviews a number of issues that should always be taken into consideration when poverty reduction is the standard of evaluation. It also notes that support for gender equality and protection of the environment are important elements of the poverty reduction goal, and briefly discusses their implications for evaluation.


15 www.sida.se/publications


Suggestions from other evaluations
Reviewing evaluations of similar interventions elsewhere, or evaluations of other types of intervention in the same society or region, can be a useful way of identifying pertinent evaluation questions. It can help us identify design factors that could be important for the success of the intervention, and make us sensitive to aspects of culture and social structure that may affect its implementation and results. Building on lessons from previous evaluation work is essential if the evaluation is intended to contribute to wider discussions about good development practice.

Recommendations and lessons
Given a well-defined purpose it may be self-evident what kinds of recommendations and lessons the evaluation should produce. However, it is generally advisable to be explicit in this respect. Useful lessons are not produced automatically. If we want the evaluators to reflect on the significance of the evaluated intervention in relation to wider concerns of some kind, we must ask them to do so. Note that asking the evaluators to consider a particular issue is not the same as preventing them from raising issues that are not specifically mentioned in the terms of reference. As a rule, the evaluators should be encouraged to make useful contributions of their own.

CHECKLIST
for the definition of evaluation questions

■ Consider what information is required in order to satisfy the purpose for which the evaluation is undertaken.

■ Decide what evaluation criteria and policy issues should be used in assessing the selected intervention.

■ Review evaluations of similar interventions elsewhere and evaluations of other types of interventions in the same society or region.

■ Consider which recommendations and lessons are particularly relevant to the evaluation purpose.

2.3 Assessing evaluability
The main objective of the evaluability assessment is to find out to what extent the proposed evaluation questions can be answered. This assessment often leads to modifications of the evaluation design. Some evaluation questions will then be withdrawn as impossible, overly difficult or excessively costly to answer. Other questions will have to be further elaborated.

Evaluability is usually a matter of degree; many or most evaluation questions can be answered to some extent. Note also that in many cases definitive answers about evaluability cannot be expected at this early stage of the evaluation process. Often, questions about evaluability must be further examined during the research phase of the evaluation (Section 3.1). Still, the earlier questions about evaluability are discussed and answered, the better.

Of the many factors that determine evaluability, the following tend to be particularly important: the specificity of the intervention logic; the existence and quality of baseline and implementation data; the availability of key informants; and the timing of the evaluation in relation to the intervention cycle.

The specificity of the intervention logic
Most evaluations include an analysis of project effectiveness, i.e. an assessment of whether planned outputs, outcomes and impacts have been realised as expected. As discussed in Chapter 2, such an analysis requires that the objectives of the intervention clearly state what the intervention is expected to achieve in terms of observable developments. In some cases, there are verifiable indicators at every level of the goal hierarchy. If the planning documents do not contain such indicators, clarifications from co-operation partners and other stakeholders are necessary, as suggested in Section 2.1.

The availability of relevant data
The availability of relevant information is a key issue in every evaluation. Baseline information is necessary for outcome- and impact-oriented evaluations, as it provides benchmarks against which developments can be identified. Implementation data that clarify intervention activities and outputs are important in all types of evaluation. In studies of impact we may also need data that can be used to estimate the counterfactual. This is further discussed in Chapter 2.

Collecting existing documentation is crucial for evaluation efficiency. Even if the intervention has never been evaluated before, there may still be useful information available, for example internally produced monitoring data or information produced by external monitoring teams. Taking stock of existing information economises on stakeholder resources, as repetitive and overlapping interviews can be avoided.

Access to key informants
Successful evaluation requires that different stakeholders are willing and able to participate in the evaluation. Access to key informants may prove difficult because of, for example, staff changes, vacations, or problems of travel and communication. Stakeholders may also lack interest in participating in the evaluation process.


It is therefore imperative to check if key informants are available before the evaluation assignment starts. If co-operation partners or other stakeholders are unavailable, it may be impossible to implement the study as planned.

The timing of the evaluation in relation to the intervention cycle
This is clearly an important determinant of evaluability. If the intervention is still at the implementation stage, some of the expected outcomes and impacts have probably not yet occurred and can therefore not be subject to study. In some cases, the full effects of the intervention will not occur until many years after its completion. Evaluations may always assess the likelihood that an expected future impact will occur, of course, but a study of a planned future impact is not the same as a study of real impact.

CHECKLIST
for the evaluability assessment

■ Assess the extent to which the intervention logic provides evaluators with operational benchmarks against which outputs, outcomes, impacts and assumptions can be evaluated.

■ Check whether the necessary baseline and monitoring data are available.

■ Check the availability of key informants, such as planners, intervention staff, and target group representatives.

■ Assess the extent to which the evaluation questions can be answered, given the timing of the evaluation in relation to the current phase of the intervention cycle.

■ Keep in mind that it may not be possible to make a full evaluability assessment at this early stage. Consider the possibility of completing the evaluability assessment during the research phase.

2.4 Making a pre-evaluation study
Reviewing the intervention, formulating evaluation questions, and assessing evaluability are important tasks, all of which should be carried out before the terms of reference are finalised. In many cases, however, they require considerable effort. Where the intervention is complex, or where the scope and focus of the study must be negotiated with several different stakeholders, the possibility of commissioning a pre-evaluation study from an external consultant should be considered.

A pre-evaluation study can involve representatives of Sida and the partner organisation responsible for the intervention. Assisted by an external consultant, Sida’s evaluation manager and a representative of the co-operating organisation can jointly review the intervention and attempt to identify questions that both parties find useful. In many cases, this might be a more constructive way of promoting partner country participation in Sida evaluations than to solicit inputs to terms of reference already drafted by Sida.

While several of the tasks described in Sections 2.1–2.3 can be carried out with the assistance of an external consultant, major decisions regarding the scope and focus of the study rest with Sida and its partners in the evaluation. Note that not even a pre-evaluation study may be able to fully assess the evaluability of all the preliminary questions. In some cases, questions of evaluability cannot be answered before the evaluation has started.

CHECKLIST
for a pre-evaluation study

■ Consider if there are any major preparatory tasks that cannot be carried out without an externally commissioned pre-evaluation study.

■ Consider if a pre-evaluation study is compatible with the overall time schedule and budget of the evaluation.

■ If a pre-evaluation study is carried out, remember that key decisions about the evaluation rest with Sida and its partners, and that the pre-evaluation study is only one of several inputs when formulating the terms of reference.

2.5 Estimating evaluation costs
When the evaluation questions have been formulated, the next step is to estimate the costs of the evaluation assignment. Setting a budgetary limit for the consultant’s assignment is important for several reasons. First, it helps indicate the level of ambition for tasks associated with the evaluation. Second, it is necessary for Sida’s overall financial planning. Third, a budgetary limit is needed for decisions on correct procurement procedures.

Budgetary limits do not seriously undermine cost competition. Even if all bids are close to the given upper limit for the assignment, the bidding consultants still compete in terms of weekly fees and the number of working weeks that they propose to spend on the assignment.

The most important cost item in an evaluation budget is the consultant’s fee. The evaluation manager can arrive at a budget limit by estimating the total number of person-weeks suitable for the assignment. By multiplying the number of weeks by a competitive weekly fee, an approximate fee limit can be calculated.

The time required for evaluation assignments is often underestimated. In many cases, the information necessary to answer evaluation questions is not readily available, but has to be collected by the evaluators through interviews with stakeholders and time-consuming analyses of secondary data.


With the relatively few person-weeks used for evaluations commissioned by Sida – typically three to six person-weeks in the field – even a small number of evaluation questions may well stretch the limits of the possible. From a purely technical point of view, it is usually advisable to examine a smaller number of questions in depth rather than a larger number more superficially.

When estimating the time required for the evaluation it is a good idea to consult with more experienced colleagues and external expertise. It should be noted, however, that estimates of the time necessary for an evaluation generally tend to be somewhat arbitrary. When there is much uncertainty about the time needed for the study, the option of a flexible contract should be considered.

Notice that the budget limit shall cover reimbursable costs, such as travel, hotel and other expenses, as well as fees. A contingency item of 10% of the total reimbursable costs should also be included in the estimated budget.
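To make the budget arithmetic concrete, the following minimal sketch shows the calculation in Python. All figures and variable names are illustrative assumptions made for this example only; they are not prescribed by the manual.

# Illustrative budget estimate for an evaluation assignment.
# All amounts are hypothetical and given in SEK.
person_weeks = 5                        # estimated person-weeks for the assignment
weekly_fee = 40_000                     # assumed competitive consultancy fee per person-week
fee_limit = person_weeks * weekly_fee   # approximate fee limit

reimbursables = 60_000                  # assumed travel, hotel and other reimbursable costs
contingency = 0.10 * reimbursables      # 10% contingency item on total reimbursable costs

budget_limit = fee_limit + reimbursables + contingency
print(f"Estimated budget limit: SEK {budget_limit:,.0f}")  # prints: Estimated budget limit: SEK 266,000

With these assumed figures the budget limit comes to SEK 266,000. In practice, the person-week estimate and the fee level would be set in consultation with experienced colleagues and external expertise, as noted above.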

CHECKLIST
for estimating evaluation costs

■ Set a budget limit for the evaluation.

■ Use standards such as a competitive consultancy fee and an estimated number of person-weeks for the evaluation when setting the budget limit.

2.6 Writing terms of reference
The terms of reference (ToR) summarise the results so far of the preparatory tasks undertaken. The ToR also outline a work plan and time schedule for the assignment, the required competence and composition of the evaluation team, and the reports and other outputs that the evaluators should deliver under the contract.

The ToR may also indicate what kind of methodology should be used by the evaluators for data collection and analysis. In most cases, however, it is the responsibility of the evaluators to elaborate a suitable methodology and work plan, either in the tender document or in an inception report.

The importance of investing sufficient resources in the preparatory steps of the evaluation, and of documenting the results in the ToR, cannot be overemphasised. The ToR constitute the evaluation manager’s main instrument for instructing evaluators how the assignment should be carried out. They also serve to document the main points of the agreement between Sida and its partners in the evaluation.


Formulating the ToR must be allowed the necessary time, and be based on consultations with concerned stakeholder groups. If a reference group has been formed, its comments on the ToR may be the group’s first major input to the evaluation. If a steering group is involved in the direction of the evaluation, the stakeholder representatives must have real opportunities to influence the ToR. Normally, several rounds of consultation and redesign are needed before the ToR can be finalised.

The ToR must be formulated in a way that facilitates mutual understanding between the evaluation manager and the evaluators on all matters of importance. Ambiguous ToR, resulting in misunderstandings between the evaluation manager and the consultant, are a common problem that tends to be difficult to solve if not deliberately tackled in the preparatory phase.

Before the ToR are finalised, we must examine them from the point of view of prospective bidders. The following are some of the questions that evaluators are likely to regard as important:

■ Can the evaluation questions be answered? Are there any provisions for the possibility that some of them may not be answerable? Are the ToR flexible enough to allow for contingencies of this kind?

■ Are there any non-negotiable directives in the ToR regarding evaluation methods or other aspects of the evaluation process? Are they reasonable?

■ What expert knowledge is required for the job?

■ Have sufficient time and resources been allocated for the evaluation?

■ What practical support will the evaluators receive from the client and other stakeholders during the evaluation process?

■ Are client expectations regarding reporting clearly formulated?

■ What do the ToR imply with regard to the interaction between the evaluators and the client? Will the evaluators have a substantive role in the design of the study? Are they expected to assess the intervention from a truly independent point of view? Is their mandate more limited?

If the ToR have been well formulated, prospective bidders will be able to find answers to all of these questions.

CHECKLIST
for the formulation of the ToR

■ Consider the guidelines for the writing of ToR found in Annex A of this manual, as well as on Sida’s Intranet (Sida Templates – avdelningsspecifika mallar/UTV).

■ Use Part 1 of this handbook for explanations of evaluation concepts and as a general source of ideas for the formulation of the ToR.

■ Make sure that the ToR provide all the information necessary for prospective evaluators to decide whether the assignment matches their interests and qualifications.


2.7 Recruiting evaluators
Sida typically contracts out its evaluations. Evaluators are selected through a procurement process that is intended to identify candidates suited to deliver the expected results. The process is subject to the same rules of procurement as other Swedish development co-operation activities.

The selection of the evaluators is an important step. If the evaluators are not qualified for the assignment, the work invested in involving stakeholders and formulating the ToR will be wasted. If the evaluators are well qualified, on the other hand, we may receive good evaluations even with vaguely formulated ToR.

The skills and other qualifications needed by the evaluators vary from case to case, but the following are usually important:

EVALUATION EXPERTISE

This is a package of skills. It includes the conceptual and methodological skills required for successful evaluation research. It includes the communicative skills necessary for creating rapport with stakeholders, facilitating stakeholder participation, and effectively presenting evaluation results to diverse audiences. It also includes the organisational skills necessary for planning and managing an often complex evaluation research process involving many people.

SUBJECT MATTER EXPERTISE

This is always important, although more so in some evaluations than in others. For example, in an evaluation of an intervention to encourage police officers to alter their attitudes and behaviour towards the poor, some knowledge of the factors determining police attitudes and behaviour would be required. It is not until the evaluation questions have been formulated, however, that the need for subject matter expertise can be more precisely defined.


LOCAL KNOWLEDGE

Since the determinants of project success are often local in nature, a good understanding of local social and cultural conditions can be very important. For example, in an evaluation of efforts to reform police behaviour in South Africa, some knowledge about apartheid and South African politics of ethnicity would no doubt be useful. Normally, evaluators also need local knowledge to be able to successfully interact with stakeholders. When the evaluation involves contacts with street-level officials or representatives of target groups, language skills may be required.

INDEPENDENCE AND DETACHMENT

Independence from the object of evaluation and freedom from bias are important requirements regardless of the purpose of the evaluation. Along with the skills mentioned above, both are determinants of the credibility of the evaluation. If the evaluation team is thought to be unqualified, or its members regarded as culturally or ideologically biased or too closely involved with the client or other stakeholders, credibility will suffer.

Since the CVs attached to the proposals from bidding evaluators may not contain all the information needed for identifying the evaluation team that is best suited for the job, the tender documents should include a note that explains how the skills and experiences of the team match the requirements listed above.

The evaluation team should be recruited well before the evaluation takes place. The supply of evaluators is limited, and well-qualified evaluators must often be contracted well in advance. Recruiting evaluators at the last minute may considerably reduce the prospects of obtaining a good evaluation.


CHECKLIST

for recruiting consultants

■ Formulate required skills and qualifications of evaluators in the ToR.

■ Make sure that bidders have the required skills and that they satisfy the requirements of independence and detachment.


Step 3

The Evaluation Research Phase
In this third step of the evaluation process, the most important tasks of the evaluation manager are to:

3.1 Supervise the inception period of the assignment, and

3.2 Assist and communicate with the evaluators during the research phase.

3.1 The inception phase
In the tender document, the evaluators present their proposal for carrying out the evaluation. However, when preparing the tender document, evaluators are usually not in a position to formulate a fully operational evaluation plan. It may therefore be necessary to include an inception period in the research phase.

Provision for an inception study should be made in the ToR. In many cases, the ToR explicitly require that the inception report must be accepted by the client before the evaluation can proceed to the next phase.

It should also be noted that an inception report can be a very small part of the assignment, comprising a few days’ work resulting in a short report, or perhaps simply a fine-tuning meeting with the evaluation manager. In some cases, not even a small inception study like this may be required.

The scope and focus of the inception study vary from case to case, but the following are standard topics:

■ Remaining evaluability issues,

■ Interpretation of evaluation questions,

■ Methodology for data collection and analysis, and

■ The evaluation work plan.

These issues are briefly reviewed in the following sections.


Remaining evaluability issues
As mentioned in Section 2.4, evaluability questions cannot always be conclusively dealt with before the evaluation starts. The assessment may have to continue into the inception phase and beyond. When this is the case, such questions must be discussed in the inception report.

Interpretation of the evaluation questions
When the evaluation questions in the ToR are formulated in general terms, the evaluators must translate them into questions that are more directly researchable. For example, if the ToR ask about the efficiency of the intervention, the evaluators may have to provide external benchmarks that clarify what could or should count as an efficient use of resources in the particular case at hand. Likewise, the evaluators may have to reinterpret unclear goals and objectives to make them empirically verifiable. The inception report provides an early opportunity for evaluation managers and evaluators to ensure that their interpretations of the ToR are mutually consistent.

Methodology and work plan
If required by the evaluation manager during contract negotiations, the inception report should provide further information about the proposed methodology and work plan, beyond that presented in the tender documents. Also, if the evaluation questions are changed or further developed during the inception period, the methodology and work plan are likely to need similar adjustment. If the evaluation includes case studies and the selection of the cases is made by the evaluators, the procedures of selection should be clarified in the inception report.


CHECKLIST

for supervising the inception period

■ Ensure that the inception study deals with evaluability questions that could not be addressed before the evaluators were contracted.

■ Ensure that an operational methodology and work plan are developed during the inception period, on the basis of those evaluation questions that are considered answerable.

■ Decide on necessary changes in relation to the terms of reference and the evaluators’ technical proposal, and regulate these changes in an addendum to the contract for the assignment.


3.2 Supporting the evaluators during the research phase
The evaluation manager and the evaluators should strive to develop an effective working relationship. Both parties should emerge from the planning stage with a clear understanding of how the evaluation is to be carried out, who is to do what, what is to be produced, and when delivery is expected.

Unexpected developments may occur even after the inception phase. When this happens, the ToR and the other documents governing the evaluation process may be open to conflicting interpretations that can only be resolved through informal discussion. To be able to deal with upcoming problems, both parties must maintain an open and flexible attitude.

During the research phase the role of the evaluation manager is largely that of a broker. Providing background documents and letters of introduction, arranging site visits, and booking interviews are only a few of the activities where evaluators may require support from the evaluation manager. When the results of the inception study are discussed, the evaluation manager is advised to ask what assistance is needed.


CHECKLIST

for communication and practical assistance during the research phase

■ Agree with the evaluators, before or during the field visits, which upcoming and strategic issues may require consultation.

■ Ensure that all strategic decisions about the evaluation are taken by the evaluation manager, including during the research phase, through recurrent communication with the evaluators.

■ Ensure that the evaluators receive adequate help with practical matters during the field visits, for example providing documents, booking interviews and preparing site visits.


Step 4

Reporting and Dissemination
In this fourth step of the evaluation process, the most important tasks of the evaluation manager are to:

4.1 Examine the evaluators’ draft report against the contract, including any addenda, and formulate a written response. Make sure that other key stakeholders are also invited to respond to the draft report,

4.2 Make sure that the final report accommodates stakeholder comments on the draft report,

4.3 Disseminate the results of the evaluation to all interested parties, in particular the intended users, and

4.4 Facilitate publication of the final report in Sida’s evaluation series, as well as on Sida’s website.

4.1 Checking the draft report
This task consists of two main parts:

■ Make sure that the report meets the agreed requirements for presentation format and content, as well as language and style.

■ Make sure that the report contains a qualitatively satisfactory response to the evaluation questions in the ToR.

Virtually all evaluations are presented as written reports. The main objective of the report is to convey the results of the evaluation in a way that corresponds to the information needs of the intended users of the evaluation.

Evaluation users may have little patience with difficult and time-consuming language. An evaluation report that stimulates the readers’ interest, matches their decision-making and learning requirements, and economises on their time enhances the overall value of the evaluation.


The following is standard advice for effective reporting:

■ Present main findings and conclusions up-front and use the rest of the report for more detailed analyses and presentation of findings,

■ Focus on readers’ expectations with regard to the object of evaluation. When learning is the purpose, highlight the unexpected and the problematic and do not dwell on matters of limited value to intended readers,

■ Make sure that the overall structure of the report is clear and easy to understand. Describe the intervention logic, explain the evaluation questions, and be explicit about evaluation criteria and standards of performance,

■ Present negative findings constructively. Be frank about shortcomings and mistakes, but avoid blame,

■ As far as possible, avoid jargon and difficult technical terms,

■ In evaluations commissioned by Sida we should try to follow the terminological conventions presented in this handbook and in the OECD/DAC Evaluation Glossary (Annex C),

■ Use a consistent and conventional system for footnotes and references in the text,

■ Explain abbreviations and consider listing them in a separate glossary,

■ Use tables and figures to facilitate understanding.

Sida recommends a model format for evaluation reports (Annex B). Regulating the structure and content of seven standard chapters, this format is intended both to facilitate the writing of reports by evaluators and the checking of reports by evaluation managers and others. The format is not compulsory, but it should be used unless there is good reason for doing otherwise. The evaluators should consult with Sida’s evaluation manager before adopting a different framework.


Report structure

EXECUTIVE SUMMARY

Summary of the evaluation, with particular emphasis on main findings, conclusions, lessons learned and recommendations.

INTRODUCTION

Presentation of the evaluation’s purpose, questions and main findings.

THE EVALUATED INTERVENTION

Description of the evaluated intervention, and its purpose, logic, history, organisation and stakeholders.

FINDINGS

Factual evidence relevant to the questions asked by the evaluation, and interpretations of such evidence.

EVALUATIVE CONCLUSIONS

Assessments of intervention results and performance against given evaluation criteria and standards of performance.

LESSONS LEARNED

General conclusions with a potential for wider application and use.

RECOMMENDATIONS

Actionable proposals regarding improvements of policy or management, addressed to the client of the evaluation or other intended users.

ANNEXES

Terms of reference, methodology for data collection and analysis,references, etc.

Making sure that the evaluation report satisfies basic formal requirements is relatively simple. First, we check that the report is organised as agreed and that no part is missing. Next, we make sure that all questions raised in the ToR have been covered and that the text is clear and succinct. While more thorough than the preliminary perusal of the report, this second task is not especially complicated. The points above can serve as a checklist.

Assessing the quality of the response to the evaluation questions can be more difficult. Exactly what constitutes acceptable quality in this regard varies from case to case.

The following are general quality criteria:

CLEAR STATEMENT OF THE EVALUATION QUESTIONS

The report should contain a clear restatement of the questions raised in the ToR so that readers will understand how the information in the report should be interpreted. Revisions of the original questions made in the course of the study should be duly noted.

CLEAR PRESENTATION OF CRITERIA AND STANDARDS OF PERFORMANCE

The report should clearly present evaluation criteria and standards of performance. The grounds for value judgements made in the report should be explicitly stated.


TRANSPARENT ACCOUNT OF RESEARCH METHODS

The report should include an account of sources of data and methods of data collection to help readers assess the likely accuracy of facts and figures.

JUSTIFIED CONCLUSIONS

It should be possible for readers to follow each step of the argument leading from question to answer. Supporting evidence should be clearly presented, and alternative explanations of findings explicitly considered and eliminated. To help readers assess the quality of arguments, the report should contain a description of the logic of the intervention.

IMPARTIAL REPORTING

The perspectives of all major stakeholder groups should be impartially reflected in the report. The report should not give precedence to any particular perspective or point of view without saying so. It must cover both strengths and weaknesses, and should not be written in a manner that suggests that it is totally unbiased and represents the final truth.

CLEAR STATEMENT OF LIMITATIONS

All studies have limitations. They are limited with regard to scope and coverage as well as depth of analysis. Unless these limitations are noted in the text, however, they may be difficult for readers to detect. Therefore, an account of major limitations should normally be included in reports.

Here a warning against encroaching on the evaluators’ domain is warranted. As evaluation managers we try to make sure that the evaluation report conforms to the ToR and generally accepted quality standards for evaluations. While doing this, however, we must respect the evaluators’ independence.


CHECKLIST

for the examination of the draft report

■ Check that the report meets the formal requirements stipulated by the contract and any contractual addenda.

■ Ensure that the report conforms to the agreed format for the structure and main contents of evaluation reports. In evaluations initiated by Sida, this is usually the format recommended by Sida (see Annex B).

■ Check that the report is well written and that it provides a qualitatively satisfactory response to the evaluation questions.


4.2 Accommodating stakeholder comments
The evaluation manager should make sure that the draft report is shared with the intended users of the evaluation and representatives of other key stakeholder groups. The consulted persons should have sufficient time to read and comment. Even if the impartiality of the evaluators is not disputed, comments from stakeholders can often help the evaluators correct factual errors and add important information. Stakeholders are normally in a far better position to identify gaps and mistakes than the evaluation manager.

It is important that the participatory perspective is not lost when the research phase is over. Persons who contribute to an evaluation in which they have a stake naturally want assurances that they have not been misinterpreted. For practical reasons it is often impossible to consult each and every person who has participated in a study. Yet, representatives of the intended users of the evaluation and other key stakeholders should be given an opportunity to comment before the study is finalised. Asking stakeholders to assess the draft report is not just a moral requirement. It also ensures that the evaluation will be as useful as possible.

Sida’s Intranet, e-mail and electronic project rooms may be used to facilitate discussions about the report. It is also a good idea to arrange early face-to-face meetings between the evaluation manager, major stakeholder groups and the evaluators. In such meetings, alternative views about the evaluation can be debated and put to the test. In many cases, meetings held to discuss draft reports are more productive than meetings where the final results are presented.

When the revised draft report has been submitted, the evaluation manager should check that relevant stakeholder comments have been duly taken into account.


CHECKLIST

for accommodating stakeholder comments in the final report

■ Solicit comments from intended users and other key stakeholders.

■ When possible, consider arranging a meeting between major stakeholders and the evaluators to discuss the draft report.

■ Check if stakeholder comments are adequately reflected in the report.


4.3 Dissemination of evaluation results
The dissemination of evaluation results must always, as with any successful communication strategy, be tailored to the audiences’ needs and interests. It is important not to lose sight of the fact that a sound dissemination strategy tends to enhance the overall usefulness of the evaluation.

Some users may require information specifically targeted for decision-making with regard to the evaluated intervention. Others may seek to apply the lessons learned from the evaluation to other interventions and circumstances. Still others merely have an overall policy interest in the study.

The evaluation manager should work out a strategy for communicating the results of the evaluation in consultation with partners and other stakeholders. The discussions about communication and dissemination should not be postponed to the very end, when the study has been completed. Rather, they should be an integral part of the planning of the evaluation.

Apart from report writing, there is a wide range of options available for disseminating evaluation results. Meetings, seminars, workshops, conferences and media presentations are just some of the options. Information technology and software for special presentations may also be considered.

CHECKLIST
for dissemination of evaluation results

■ Discuss with relevant stakeholders to whom, when and how the results of the evaluation should be disseminated, and implement the dissemination plan accordingly.

■ Consider a range of dissemination options, including dialogue with partners, meetings, seminars, workshops, and any other kind of relevant and effective communications strategy.

■ Make sure that the dissemination is tailored to the specific needs, interests and information requirements of individual audiences.

■ Focus dissemination efforts on the intended users of the evaluation, and other groups that can be expected to make effective use of the evaluation.

4.4 Publishing the evaluation report
When the final report has been approved, the evaluation manager should make sure that it can be published by Sida’s Department for Evaluation and Internal Audit (UTV) in one of the two report series Sida Evaluations and Sida Studies in Evaluation, as well as on Sida’s website on the Internet. It is Sida policy that evaluations should be made accessible to the international development community as a whole.


The evaluation manager should also make sure that all statistical details of the Sida Evaluations Data Worksheet are delivered to UTV together with the full report.

The final responsibilities of the evaluation manager are to decide if the report should be translated into another language, to produce a mailing list of people and institutions that should receive the report, and to ensure that copies are dispatched in print or electronically without undue delay.


CHECKLIST

for publication of the evaluation report

■ Check the guidelines for the publication and dissemination of evaluation reports and the instructions regarding evaluation statistics in Sida Templates/UTV on Sida’s intranet.

■ Check that the final report received from the consultant is ready for publication and that it has been delivered along with the required statistical details about the assignment.

■ Consider translation of the report into relevant languages.

■ Make a list of people and institutions that should receive the report, and dispatch printed or electronic copies as soon as possible.


Step 5

Management Response
This fifth and last step of the evaluation process consists of two tasks:

5.1 Make sure that the evaluation is submitted for management response, and assist the concerned embassy or department in interpreting the evaluation report,

5.2 Facilitate the use of the evaluation as an input to the dialogue between Sida, its co-operation partners and other stakeholders.

Both these tasks may be important for the overall success of the evaluation process. An evaluation is of little value if its conclusions, lessons and recommendations are not properly understood by its intended users and other interested parties.

5.1 Assisting in the formulation of Sida’s management response
To ensure that evaluations are taken into account by concerned departments and embassies, and that their response is well documented, Sida has developed a system for management response. As stated in Sida’s Evaluation Policy, all Sida-financed evaluations should be followed up by a management response outlining the concerned embassy’s or department’s conclusions with regard to the evaluation. In most cases, the management response includes an action plan and time schedule, indicating what will be done as a result of the study and when it will be done. If the embassy or department rejects some of the recommendations of the evaluation, or plans to implement certain recommendations in ways different from those proposed by the evaluators, the reasons for such modifications should be clearly explained.


As a rule, a Sida management response consists of three main parts:

■ An overall assessment, from Sida’s point of view, of the relevance, accuracy and usefulness of the evaluation and its findings.

■ A point-by-point response to the recommendations and/or main findings. Are the recommendations accepted or rejected? Will they prompt action? Do the findings and conclusions of the evaluation have any practical implications beyond those raised in the recommendations?

■ A summary action plan with a completion date for each action. If it is decided that no measures should be taken in response to the evaluation, an action plan is obviously not necessary.

The evaluation manager is normally actively involved in the formulation of the management response. When the evaluation manager is also the programme officer in charge of Sida’s support to the evaluated intervention, which is often the case, he or she would usually draft the response. In other cases, the evaluation manager may usefully assist the responsible embassy or department in interpreting the evaluation and its findings.


CHECKLIST

for assisting the formulation of a management response

■ In consultation with the concerned embassy or department, draft a management response to the evaluation. Alternatively, assist those who are drafting the management response to interpret the evaluation report.

■ If necessary, make sure that the management response is complete and satisfies Sida’s requirements.


5.2 Facilitating dialogue with co-operation partners
This manual cannot provide instructions for how Sida’s co-operation partners and other stakeholders should respond to evaluation results. If the evaluation report contains recommendations addressed to Sida’s co-operation partner, they will be processed in accordance with the partner’s own procedures.

As a specialist on the evaluation in question, however, the evaluation manager is often well equipped to identify issues that ought to be raised in discussions between Sida and the co-operation partner. As noted elsewhere, Sida frequently uses evaluation results as inputs to the dialogue with its partners.


CHECKLIST

for facilitating the dialogue with co-operation partners

■ Remember to make relevant use of the evaluation in the dialogue with partners, also after the evaluation process has been concluded and the management response has been implemented.


Annex A

Format for Terms of Reference
The format below is intended to help guide the structure and main content of the terms of reference (ToR) for Sida-financed evaluations:

EVALUATION PURPOSE

Details about the intended use of the evaluation, including any operative decisions the evaluation is expected to feed into.

INTERVENTION BACKGROUND

Details about the intervention logic, as well as the structure and substance of the activities and outputs carried out and delivered.

STAKEHOLDER INVOLVEMENT

Account of how stakeholders are expected to participate in the research, reporting and dissemination activities of the evaluation.

EVALUATION QUESTIONS

List of evaluation questions, based on a set of criteria, policy issues and performance standards that suit the evaluation purpose.

RECOMMENDATIONS AND LESSONS

Clarification of what kinds of recommendations and lessons the evaluation is intended to provide.

METHODOLOGY

Directives regarding the methodology to be used by the evaluators for data collection and analysis.

WORK PLAN AND SCHEDULE

Indications of e.g. what sites should be visited and how the evaluators’ time should be divided between field and reporting phases.

REPORTING

Specification of what reports should be delivered and when, and the evaluators’ role in follow-up seminars and workshops.

EVALUATION TEAM

The qualifications of the evaluators, such as evaluation skills, sector knowledge and socio-cultural competence.

By following this model format, the evaluation ToR are likely to cover many aspects that are necessary for successfully guiding the evaluation process. Still, note that the evaluation manager in many cases cannot be expected to gather all the information required by this format before the ToR are finalised.


Evaluation purpose
In this first section of the ToR, the evaluation purpose is defined. Remember to specify a purpose that reflects the intended use of the evaluation. If this intended use concerns operative decisions for project cycle management and dialogue, make sure also to elaborate on the particular context of such decision-making. State who the intended users of the evaluation are. The most typical users of Sida-financed evaluations are Sida departments, Swedish embassies, and co-operation partners. See Section 2.2 of the second part of the manual for more details about the evaluation purpose.

Intervention background
In this section, the intervention selected for evaluation is described in order to give evaluators a better understanding of what they should evaluate. In particular, the logic of the intervention should be described, including: the goal structure from inputs and activities up to the overall goal of poverty reduction; the indicators that provide specificity to the intervention logic; and the internal and external assumptions on which the logic is based.

It is essential that the ToR are explicit about the time periods and intervention phases that the evaluation should focus on. Also, if the intervention is complex, and if the evaluation manager and concerned stakeholders have agreed on a particular focus on certain intervention components, the details about this focus need to be spelled out in the ToR. Note that the description of the intervention logic normally needs to deal only with those components on which the evaluation focuses.

If the evaluation manager does not have access to full information on the selected intervention when preparing the evaluation, which may often be the case, the ToR should indicate how and when additional gathering of preparatory information should be undertaken as part of the evaluation assignment. See Section 3.1 of the manual for more details in this respect.

Stakeholder involvement
This section summarises the stakeholder analysis made in the preparatory step of the evaluation process. It also identifies all agreements on how different stakeholder groups are to participate in and contribute to the research, reporting and dissemination phases of the evaluation.

In many cases, partners and other stakeholders will also participate during the implementation of the study. The ToR should specify how the evaluators are expected to interact with different groups of stakeholders during the process.


Evaluation questions
This section lists the evaluation questions that should be answered by the evaluation. The list should be based on a feasible choice of evaluation criteria and policy issues that suits the evaluation purpose. Since evaluation questions tend to be difficult and time-consuming to answer with adequate precision and reliability, it is often a good idea to limit the number of questions in order to allow them to be addressed in depth.

Often, evaluation questions cannot be finally decided before it has been determined whether they really can be answered as intended. If this evaluability assessment has not been carried out when the ToR are being finalised, the ToR need to indicate if and how this assessment will be undertaken and how, later in the process, it will be decided what questions the evaluation should address.

Recommendations and lessons
While it is impossible during the planning stage of the evaluation process to foresee the answers to specific evaluation questions, the ToR should indicate what kinds of recommendations and lessons the evaluators should provide. For example, are we interested in recommendations and lessons about a particular form of co-operation, a particular socio-economic context, or a certain policy? Even if the kinds of recommendations and lessons that are required given a specific evaluation purpose are self-evident, the evaluation manager is generally advised to be explicit in this respect.

Methodology
Normally, the evaluators are responsible for the research methods. The chosen methods should be described and justified in relation to possible alternatives in the tender documents, or in the inception report produced at an early stage of the assignment. Instructions for when and how a discussion about methodology is needed should, however, always be given in the ToR.

Work plan and schedule
In this section, the ToR give instructions for the practical aspects of the evaluation process, such as what parts of the evaluation should be carried out in Sweden and in partner countries, and what partner countries and sites should be visited by the evaluators.

As in the case of evaluation methodology, details about the evaluation work plan often need to be elaborated in the tender and the inception phase of the assignment. However, it may be necessary for the ToR to indicate when the assignment is to be concluded and, roughly, how the evaluators’ time should be divided between the inception, field and reporting phases of the evaluation.

Reporting
This final section specifies the reports that should be delivered under the evaluation contract, such as inception reports, and draft and final reports. The consultant should be instructed to adhere to the terminological conventions of the OECD/DAC Glossary on Evaluation and Results-Based Management as far as possible. The section should further specify delivery dates for the reports, and the evaluators’ roles in follow-up activities such as seminars and workshops.

It also needs to be made clear that the evaluation report should follow the report format presented in Annex B of this manual, and that a completed Sida Evaluations Data Worksheet should be presented along with the report. It should be explicitly noted that evaluation reports will be assessed against standard quality criteria for evaluation reporting, such as those described in this manual.

Evaluation team
This section defines the necessary qualifications of the evaluation team and individual team members, for example in terms of evaluation skills, country and sector experience, and social and cultural competence. When the evaluation is intended to serve a purpose of accountability, a requirement that the evaluators should be independent of the evaluated activities and have no stake in the outcome of the evaluation must be inserted in the ToR.

For some further notes on the writing of terms of reference, consult the Evaluation Manual, Section 2.6.


Annex B

Format for Sida Evaluation Reports
This format is intended to help guide the structure and main contents of evaluation reports commissioned by Sida. It is not compulsory, but should be used if there is no particular reason for doing otherwise.

Report structure

EXECUTIVE SUMMARY

Summary of the evaluation, with particular emphasis on main findings, conclusions, lessons learned and recommendations.

INTRODUCTION

Presentation of the evaluation’s purpose, questions and main findings.

THE EVALUATED INTERVENTION

Description of the evaluated intervention, and its purpose, logic, history, organisation and stakeholders.

FINDINGS

Factual evidence, data and observations that are relevant to the specific questions asked by the evaluation.

EVALUATIVE CONCLUSIONS

Assessment of the intervention and its results against given evaluation criteria, standards of performance and policy issues.

LESSONS LEARNED

General conclusions that are likely to have a potential for wider application and use.

RECOMMENDATIONS

Actionable proposals to the evaluation’s users for improved intervention cycle management and policy.

ANNEXES

Terms of reference, methodology for data gathering and analysis, references, etc.

By following a uniform format, evaluation reports tend to be easier to read and use. The format also facilitates syntheses of different reports for broader learning purposes, such as in Sida’s results analyses for the development of new country strategies.

The format may be included as an annex to the contract with the consultant, thus providing early instructions on how the report may be prepared. However, note that Sida’s Evaluation Manual contains further guidance about reporting, and that the evaluator is well advised to take a look at the manual as a whole. The present format is found in Templates/UTV on Sida’s intranet.

Executive summary
The executive summary provides a synopsis of the evaluation and its purpose, emphasising main findings, evaluative conclusions, recommendations and lessons learned. Descriptions of methodology should be kept to a minimum.

The summary should be self-contained and self-explanatory. Special care should be taken in preparing the executive summary, as it may be the only part of the report that some people have time to read.

Introduction
The introduction presents the background and overall purpose of the evaluation, including how and by whom it is intended to be used, as well as the evaluation criteria employed and the key questions addressed. It also outlines the structure of the report and provides guidance to readers.

The evaluated intervention
This chapter describes the main characteristics of the evaluated intervention and its location, history, organisation and stakeholders. It should cover the focal problem addressed by the evaluated intervention, the objectives of the intervention and its logic of cause and effect. A description of activities carried out and key outputs delivered should be included.

The chapter should also cover the policy and development context of the evaluated intervention, including the assumptions about external factors that were part of intervention planning. When preparing the chapter, the evaluators should summarise the findings and conclusions of any earlier evaluations of the same intervention.

Findings
Findings are empirical data and inferences from such data that the evaluators present as evidence relevant to the evaluation questions. They are the facts of the matter, in other words.

In the findings chapter, this body of evidence is systematically presented so that readers can form their own opinion about the strengths and weaknesses of the conclusions of the evaluation. The quality of the findings – their accuracy and relevance – should be assessed with reference to standard criteria of reliability and validity.


Evaluative conclusions
Evaluative conclusions are the evaluators' concluding assessments of the intervention against given evaluation criteria, performance standards and policy issues. They provide answers as to whether the intervention is considered good or bad, and whether the results are found positive or negative.

Note that the distinction between findings and evaluative conclusions is somewhat artificial. Evaluative conclusions are often best presented together with the underlying findings on which they are based. In many cases, it makes sense to combine the presentation of findings and evaluative conclusions in one chapter.

Lessons learned
Lessons learned are findings and conclusions that can be generalised beyond the evaluated intervention.

In formulating lessons, the evaluators are expected to examine the intervention in a wider perspective and put it in relation to current ideas about good and bad practice.

Recommendations
Recommendations indicate what actions the evaluators believe should be taken on the basis of the evaluation. Recommendations to Sida may cover the whole spectrum of aid management, including resource allocation, financing, planning, implementation, and monitoring and evaluation.

Recommendations should always identify their respective addressees and be tailored to the specific needs and interests of each addressee. They should be simply stated and geared to facilitate implementation.

Annex on methodology
The report should include an annex describing how the evaluation was carried out. The annex should cover standard methodology topics, including research design, sampling and data collection methods, and analytical procedures. It should discuss the limitations of the selected methods as well as their strengths.


Annex C

Glossary of Key Terms in Evaluation and Results-Based Management
This glossary has been developed by the OECD/DAC Network on Development Evaluation (formerly the DAC Working Party on Aid Evaluation). Completed in 2002, it is available in several languages, including French and Spanish. All the different versions can be downloaded from the OECD website (www.oecd.org).

The manual follows the terminology of the glossary on most points. The differences, less than a dozen in all, are noted below, entry by entry. All the inserted notes have been marked with an asterisk (*).

ACCOUNTABILITY Obligation to demonstrate that work has been conducted in compliance with agreed rules and standards or to report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. This may require a careful, even legally defensible, demonstration that the work is consistent with the contract terms.

Note: Accountability in development may refer to the obligations of partners to act according to clearly defined responsibilities, roles and performance expectations, often with respect to the prudent use of resources. For evaluators, it connotes the responsibility to provide accurate, fair and credible monitoring reports and performance assessments. For public sector managers and policy-makers, accountability is to taxpayers/citizens.

The Development Assistance Committee (DAC) Network on Development Evaluation is an international forum where representatives of the evaluation departments of bilateral and multilateral development organisations meet to share experience and to improve evaluation practice and strengthen its use as an instrument for development co-operation policy. It operates under the aegis of DAC and presently consists of 30 representatives from OECD member countries and multilateral development agencies (Australia, Austria, Belgium, Canada, Denmark, European Commission, Finland, France, Germany, Greece, Ireland, Italy, Japan, Luxembourg, Netherlands, New Zealand, Norway, Portugal, Spain, Sweden, Switzerland, United Kingdom, United States, World Bank, Asian Development Bank, African Development Bank, Inter-American Development Bank, European Bank for Reconstruction and Development).


ACTIVITY Actions taken or work performed through which inputs, such as funds, technical assistance and other types of resources, are mobilised to produce specific outputs.

Related term: Development intervention.

ANALYTICAL TOOLS Methods used to process and interpret information during an evaluation.

APPRAISAL An overall assessment of the relevance, feasibility and potential sustainability of a development intervention prior to a decision of funding.

Note: In development agencies, banks, etc., the purpose of appraisal is to enable decision-makers to decide whether the activity represents an appropriate use of corporate resources.

Related term: Ex-ante evaluation.

ASSUMPTIONS Hypotheses about factors or risks which could affect the progress or success of a development intervention.

Note: Assumptions can also be understood as hypothesized conditions that bear on the validity of the evaluation itself, e.g. about the characteristics of the population when designing a sampling procedure for a survey. Assumptions are made explicit in theory-based evaluations, where the evaluation systematically tracks the anticipated results chain.

ATTRIBUTION The ascription of a causal link between observed (or expected to be observed) changes and a specific intervention.

Note: Attribution refers to that which is to be credited for the observed changes or results achieved. It represents the extent to which observed development effects can be attributed to a specific intervention or to the performance of one or more partners, taking account of other interventions, (anticipated or unanticipated) confounding factors, or external shocks.

AUDIT An independent, objective assurance activity designed to add value and improve an organisation's operations. It helps an organisation accomplish its objectives by bringing a systematic, disciplined approach to assess and improve the effectiveness of risk management, control and governance processes.

Note: A distinction is made between regularity (financial) auditing, which focuses on compliance with applicable statutes and regulations; and performance auditing, which is concerned with relevance, economy, efficiency and effectiveness. Internal auditing provides an assessment of internal controls undertaken by a unit reporting to management, while external auditing is conducted by an independent organisation.


BASE-LINE STUDY An analysis describing the situation prior to a development intervention, against which progress can be assessed or comparisons made.

BENCHMARK Reference point or standard against which performance or achievements can be assessed.

Note: A benchmark refers to the performance that has been achieved in the recent past by other comparable organisations, or what can be reasonably inferred to have been achieved in the circumstances.

BENEFICIARIES The individuals, groups, or organisations, whether targeted or not, that benefit, directly or indirectly, from the development intervention.

Related terms: Reach, target group.

CLUSTER EVALUATION An evaluation of a set of related activities, projects and/or programs.

CONCLUSIONS Conclusions point out the factors of success and failure of the evaluated intervention, with special attention paid to the intended and unintended results and impacts, and more generally to any other strength or weakness. A conclusion draws on data collection and analyses undertaken, through a transparent chain of arguments.

COUNTERFACTUAL The situation or condition which hypothetically may prevail for individuals, organisations, or groups were there no development intervention.

COUNTRY PROGRAM EVALUATION/COUNTRY ASSISTANCE EVALUATION Evaluation of one or more donor's or agency's portfolio of development interventions, and the assistance strategy behind them, in a partner country.

DATA COLLECTION TOOLS Methodologies used to identify information sources and collect information during an evaluation.

Note: Examples are informal and formal surveys, direct and participatory observation, community interviews, focus groups, expert opinion, case studies, literature search.

DEVELOPMENT INTERVENTION An instrument for partner (donor and non-donor) support aimed to promote development.

Note: Examples are policy advice, projects, programs.


DEVELOPMENT OBJECTIVE Intended impact contributing to physical, financial, institutional, social, environmental, or other benefits to a society, community, or group of people via one or more development interventions.

ECONOMY Absence of waste for a given output.

Note: An activity is economical when the costs of the scarce resources used approximate the minimum needed to achieve planned objectives.

EFFECT Intended or unintended change due directly or indirectly to an intervention.

Related terms: Results, outcome.

EFFECTIVENESS The extent to which the development intervention's objectives were achieved, or are expected to be achieved, taking into account their relative importance.

Note: Also used as an aggregate measure of (or judgment about) the merit or worth of an activity, i.e. the extent to which an intervention has attained, or is expected to attain, its major relevant objectives efficiently in a sustainable fashion and with a positive institutional development impact.

Related term: Efficacy.

EFFICIENCY A measure of how economically resources/inputs (funds, expertise, time, etc.) are converted to results.

EVALUABILITY Extent to which an activity or a program can be evaluated in a reliable and credible fashion.

Note: Evaluability assessment calls for the early review of a proposed activity in order to ascertain whether its objectives are adequately defined and its results verifiable.

* Ideally, an evaluability assessment should be made when a development intervention is planned. However, evaluability must also be assessed again as a prelude to evaluation.


EVALUATION The systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors.

Evaluation also refers to the process of determining the worth or significance of an activity, policy or program; an assessment, as systematic and objective as possible, of a planned, on-going, or completed development intervention.

Note: Evaluation in some instances involves the definition of appropriate standards, the examination of performance against those standards, an assessment of actual and expected results and the identification of relevant lessons.

Related term: Review.

* The definition of evaluation in Sida's Evaluation Policy differs only slightly from the DAC definition: An evaluation is a careful and systematic retrospective assessment of the design, implementation, and results of development activities.

EX-ANTE EVALUATION An evaluation that is performed before implementation of a development intervention.

Related terms: Appraisal, quality at entry.

EX-POST EVALUATION Evaluation of a development intervention after it has been completed.

Note: It may be undertaken directly after or long after completion. The intention is to identify the factors of success or failure, to assess the sustainability of results and impacts, and to draw conclusions that may inform other interventions.

EXTERNAL EVALUATION The evaluation of a development intervention conducted by entities and/or individuals outside the donor and implementing organisations.

FEEDBACK The transmission of findings generated through the evaluation process to parties for whom it is relevant and useful so as to facilitate learning. This may involve the collection and dissemination of findings, conclusions, recommendations and lessons from experience.

FINDING A finding uses evidence from one or more evaluations to allow for a factual statement.


FORMATIVE EVALUATION Evaluation intended to improve performance, most often conducted during the implementation phase of projects or programs.

Note: Formative evaluations may also be conducted for other reasons, such as compliance, legal requirements or as part of a larger evaluation initiative.

Related term: Process evaluation.

GOAL The higher-order objective to which a development intervention is intended to contribute.

Related term: Development objective.

IMPACTS Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.

* As noted in Chapter 2, the word is widely used in a more comprehensive sense that includes both short and long-term effects. In this manual, it is used in the broader as well as in the more narrow sense defined by the Glossary.

INDEPENDENT EVALUATION An evaluation carried out by entities and persons free of the control of those responsible for the design and implementation of the development intervention.

Note: The credibility of an evaluation depends in part on how independently it has been carried out. Independence implies freedom from political influence and organisational pressure. It is characterised by full access to information and by full autonomy in carrying out investigations and reporting findings.

* This manual distinguishes between two types of independent evaluation. In the one case the evaluators are independent of the evaluated activities and have no stake in the outcome of the study. In the other case, there is a further requirement that the evaluation is also commissioned by an organisation that is independent of the evaluated activities.

INDICATOR Quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of a development actor.

INPUTS The financial, human, and material resources used for the development intervention.


INSTITUTIONAL DEVELOPMENT IMPACT The extent to which an intervention improves or weakens the ability of a country or region to make more efficient, equitable, and sustainable use of its human, financial, and natural resources, for example through: (a) better definition, stability, transparency, enforceability and predictability of institutional arrangements and/or (b) better alignment of the mission and capacity of an organisation with its mandate, which derives from these institutional arrangements. Such impacts can include intended and unintended effects of an action.

INTERNAL EVALUATION Evaluation of a development intervention conducted by a unit and/or individuals reporting to the management of the donor, partner, or implementing organisation.

Related term: Self-evaluation.

JOINT EVALUATION An evaluation in which different donor agencies and/or partners participate.

Note: There are various degrees of “jointness” depending on the extent to which individual partners co-operate in the evaluation process, merge their evaluation resources and combine their evaluation reporting. Joint evaluations can help overcome attribution problems in assessing the effectiveness of programs and strategies, the complementarity of efforts supported by different partners, the quality of aid co-ordination, etc.

LESSONS LEARNED Generalisations based on evaluation experiences with projects, programs, or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design, and implementation that affect performance, outcome, and impact.

* As the term is understood in this manual, the degree of generalisation of a lesson varies from case to case. As the conditions for development co-operation vary, illuminating attempts at generalisation are often restricted to a particular type of context or mode of intervention.


LOGICAL FRAMEWORK (LOGFRAME) Management tool used to improve the design of interventions, most often at the project level. It involves identifying strategic elements (inputs, outputs, outcomes, impact) and their causal relationships, indicators, and the assumptions or risks that may influence success and failure. It thus facilitates planning, execution and evaluation of a development intervention.

Related term: Results-based management.

* It should be noted that logframe analysis (LFA) is one of several closely related types of analyses that focus on the chain of cause and effect underlying the evaluated intervention. Programme logic models, theories of action, performance frameworks, project theories, and development hypotheses are all members of the same family as the logframe. In this manual, the term intervention logic serves as a blanket term.

META-EVALUATION The term is used for evaluations designed to aggregate findings from a series of evaluations. It can also be used to denote the evaluation of an evaluation to judge its quality and/or assess the performance of the evaluators.

MID-TERM EVALUATION Evaluation performed towards the middle of the period of implementation of the intervention.

Related term: Formative evaluation.

MONITORING A continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds.

Related terms: Performance monitoring, indicator.

OUTCOME The likely or achieved short-term and medium-term effects of an intervention's outputs.

Related terms: Result, output, impact, effect.

* Among evaluators the word outcome is also frequently used in a general sense where it is more or less synonymous with the word effect. When it is used in this sense, distinctions are made between short, medium, and long-term outcomes.

OUTPUTS The products, capital goods and services which result from a development intervention; may also include changes resulting from the intervention which are relevant to the achievement of outcomes.


PARTICIPATORY EVALUATION Evaluation methods through which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation.

* In this manual we distinguish between participatory evaluations and participatory evaluation methods. An evaluation may use participatory methods, and still not qualify as a fully participatory evaluation. This distinction is further clarified in Chapter 1.

PARTNERS The individuals and/or organisations that collaborate to achieve mutually agreed upon objectives.

Note: The concept of partnership connotes shared goals, common responsibility for outcomes, distinct accountabilities and reciprocal obligations. Partners may include governments, civil society, non-governmental organisations, universities, professional and business associations, multilateral organisations, private companies, etc.

PERFORMANCE The degree to which a development intervention or a development partner operates according to specific criteria/standards/guidelines or achieves results in accordance with stated goals or plans.

PERFORMANCE INDICATOR A variable that allows the verification of changes in the development intervention or shows results relative to what was planned.

Related terms: Performance monitoring, performance measurement.

PERFORMANCE MEASUREMENT A system for assessing performance of development interventions against stated goals.

Related terms: Performance monitoring, indicator.

PERFORMANCE MONITORING A continuous process of collecting and analysing data to compare how well a project, program, or policy is being implemented against expected results.

PROCESS EVALUATION An evaluation of the internal dynamics of implementing organisations, their policy instruments, their service delivery mechanisms, their management practices, and the linkages among these.

Related term: Formative evaluation.

* As the term is understood in this manual, a process evaluation may also deal with outputs and other intermediary results.


PROGRAM EVALUATION Evaluation of a set of interventions, marshaled to attain specific global, regional, country, or sector development objectives.

Note: A development program is a time-bound intervention involving multiple activities that may cut across sectors, themes and/or geographic areas.

Related term: Country program/strategy evaluation.

PROJECT EVALUATION Evaluation of an individual development intervention designed to achieve specific objectives within specified resources and implementation schedules, often within the framework of a broader program.

Note: Cost-benefit analysis is a major instrument of project evaluation for projects with measurable benefits. When benefits cannot be quantified, cost-effectiveness is a suitable approach.

* As the concept is understood in this manual, there are many approaches to project evaluation. Cost-benefit analysis and analyses of cost-effectiveness are important tools for economic evaluation focussing on questions of efficiency.

PROJECT OR PROGRAM OBJECTIVE The intended physical, financial, institutional, social, environmental, or other development results to which a project or program is expected to contribute.

PURPOSE The publicly stated objectives of the development program or project.

QUALITY ASSURANCE Quality assurance encompasses any activity that is concerned with assessing and improving the merit or the worth of a development intervention or its compliance with given standards.

Note: Examples of quality assurance activities include appraisal, results-based management, reviews during implementation, evaluations, etc. Quality assurance may also refer to the assessment of the quality of a portfolio and its development effectiveness.

REACH The beneficiaries and other stakeholders of a development intervention.

Related term: Beneficiaries.

RECOMMENDATIONS Proposals aimed at enhancing the effectiveness, quality, or efficiency of a development intervention; at redesigning the objectives; and/or at the reallocation of resources. Recommendations should be linked to conclusions.


RELEVANCE The extent to which the objectives of a development intervention are consistent with beneficiaries' requirements, country needs, global priorities and partners' and donors' policies.

Note: Retrospectively, the question of relevance often becomes a question as to whether the objectives of an intervention or its design are still appropriate given changed circumstances.

RELIABILITY Consistency or dependability of data and evaluation judgements, with reference to the quality of the instruments, procedures and analyses used to collect and interpret evaluation data.

Note: Evaluation information is reliable when repeated observations using similar instruments under similar conditions produce similar results.

RESULT The output, outcome or impact (intended or unintended, positive and/or negative) of a development intervention.

Related terms: Outcome, effect, impact.

RESULTS CHAIN The causal sequence for a development intervention that stipulates the necessary sequence to achieve desired objectives, beginning with inputs, moving through activities and outputs, and culminating in outcomes, impacts, and feedback. In some agencies, reach is part of the results chain.

Related terms: Assumption, results framework.

RESULTS FRAMEWORK The program logic that explains how the development objective is to be achieved, including causal relationships and underlying assumptions.

Related terms: Results chain, logical framework.

RESULTS-BASED MANAGEMENT (RBM) A management strategy focusing on performance and achievement of outputs, outcomes and impacts.

Related term: Logical framework.

REVIEW An assessment of the performance of an intervention, periodically or on an ad hoc basis.

Note: Frequently “evaluation” is used for a more comprehensive and/or more in-depth assessment than “review”. Reviews tend to emphasise operational aspects. Sometimes the terms “review” and “evaluation” are used as synonyms.

Related term: Evaluation.


RISK ANALYSIS An analysis or an assessment of factors (called assumptions in the logframe) that affect or are likely to affect the successful achievement of an intervention's objectives. A detailed examination of the potential unwanted and negative consequences to human life, health, property, or the environment posed by development interventions; a systematic process to provide information regarding such undesirable consequences; the process of quantification of the probabilities and expected impacts for identified risks.

SECTOR PROGRAM EVALUATION Evaluation of a cluster of development interventions within one country or across countries, all of which contribute to the achievement of a specific development goal.

Note: A sector includes development activities commonly grouped together for the purpose of public action, such as health, education, agriculture, transport, etc.

SELF-EVALUATION An evaluation by those who are entrusted with the design and delivery of a development intervention.

STAKEHOLDERS Agencies, organisations, groups or individuals who have a direct or indirect interest in the development intervention or its evaluation.

SUMMATIVE EVALUATION A study conducted at the end of an intervention (or a phase of that intervention) to determine the extent to which anticipated outcomes were produced. Summative evaluation is intended to provide information about the worth of the program.

Related term: Impact evaluation.

SUSTAINABILITY The continuation of benefits from a development intervention after major development assistance has been completed. The probability of continued long-term benefits. The resilience to risk of the net benefit flows over time.

TARGET GROUP The specific individuals or organisations for whose benefit the development intervention is undertaken.

TERMS OF REFERENCE Written document presenting the purpose and scope of the evaluation, the methods to be used, the standard against which performance is to be assessed or analyses are to be conducted, the resources and time allocated, and reporting requirements. Two other expressions sometimes used with the same meaning are “scope of work” and “evaluation mandate”.


THEMATIC EVALUATION Evaluation of a selection of development interventions, all of which address a specific development priority that cuts across countries, regions, and sectors.

TRIANGULATION The use of three or more theories, sources or types of information, or types of analysis to verify and substantiate an assessment.

Note: By combining multiple data sources, methods, analyses or theories, evaluators seek to overcome the bias that comes from single-informant, single-method, single-observer or single-theory studies.

VALIDITY The extent to which the data collection strategies and instruments measure what they purport to measure.


SWEDISH INTERNATIONAL DEVELOPMENT COOPERATION AGENCY

Address: SE-105 25 Stockholm
Visiting address: Sveavägen 20
Phone: +46 (0)8-698 50 00
Fax: +46 (0)8-698 56 15
www.sida.se
[email protected]