

    ASSUMPTIONAL ANALYSIS, LOG FRAME ANALYSIS

    AND OTHER METHODS OF RECONSTRUCTING AND

    EVALUATING PROGRAM LOGIC

    Working Paper

    Presession Thursday

    October 12th, Lausanne

    Frans L. Leeuw

    Utrecht University, The Netherlands

    & Education Review Office of the Netherlands

(please do not quote without permission) Frans L. Leeuw is Chief Review Officer of higher education of the Education Review Office and Professor of Evaluation Research, Dept. of Sociology, Utrecht University, the Netherlands. Formerly he was Dean of the Humanities Department of the Open University of the Netherlands, Director at the Netherlands National Audit Office and associate professor at Leyden University (policy research). He is president of the EES, advisor to the World Bank/OED/WBI and a member of the Impact Assessment and Evaluation Group of the CGIAR. He is also a member of several editorial (advisory) boards. Direct all correspondence to: [email protected] or: [email protected]


    1. WHY RECONSTRUCT AND EVALUATE ‘PROGRAM LOGIC’?

    1.1. Reconstructing and evaluating program logics in the policy sciences

Policy instruments like subsidies, levies, information campaigns, adoption strategies, vouchers, regulations, 'oversight activities' and auditing, as well as policy programs (i.e. 'packages of instruments'), are the starting point for any evaluative activity focused on reconstructing and assessing underlying program logics. With a history of over 50 years1, policy researchers and evaluators have paid attention to the (social and behavioral) premises or mechanisms that lie behind instruments and programs in order to understand (and predict) why programs and instruments are successful. The US General Accounting Office calls this the "reconstruction of underlying models of (proposed) programs" (GAO, 1991: 22) and distinguishes between conceptual and operational reconstruction. A conceptual reconstruction concerns the social and behavioral logic behind the program; an operational reconstruction focuses on which actors are assumed to do what, and why, in order to make the program a success.

Leeuw (1991: 74) refers to the concept of the policy theory, which is a "system of social and behavioral assumptions that underlie a public policy which have been reformulated in the form of premises (or propositions). These premises reflect beliefs of policy makers about the cognition, attitudes and behaviors of the policy's target groups (…) But they also refer to more structural factors on which policy makers have been making assumptions". He goes on to show that "there is evidence that knowledge about the content of [policy] theories helps us to understand why policies sometimes turn into failures or disasters". Vedung (1997: 138) speaks about 'intervention theories'. An intervention theory 'contains all the (assumed) empirical and normative presuppositions and assumptions embodied in the policy program by its initial framers' (ibid., p. 138)2.
In a similar vein, the European Community's Manual for Project Cycle Management (1993) pays attention to assumptions underlying interventions that are implemented either by the EU itself or by its member states. Strongly linked to this approach is the Logical Framework approach. The first core idea of all these approaches is that reconstructing and assessing the underlying logic of a program [activity] is necessary for obtaining information about the future opportunities for the program: the more sound the premises/assumptions on which a program or an instrument is based, the greater the chance that the program will succeed. A second core idea is that these underlying logics have to be reconstructed, because they usually are not spelled out by policy makers, politicians or bureaucrats. A third core idea is that there is no a priori evidence that these underlying assumptions, or, if you will, pet theories, are valid, i.e. are logically consistent and empirically correct.

1 In the 1930s the sociologist Karl Mannheim published 'Man and Society in an Age of Reconstruction', in which he made a plea for articulating the assumptions underlying what was then called 'social planning'. 2 Chen (1990) also refers to this.


To reconstruct and assess these assumptions/'theories', a methodology is required. We will come to that a little later.

1.2. Reconstructing and evaluating underlying assumptions in the organizational sciences3

Policy instruments and programs are developed and implemented by organizations. Organizations are driven by the (combined) actions of numerous persons, and underlying these actions are 'assumptions' or 'mental models'. Attention is paid to underlying assumptions not only in the policy sciences but also within organization studies, including the psychology of organizations. Here the focus is on 'managerial and organizational cognitions' (Eden & Spender, 1998). One of the central questions is what the relationships are between these cognitions and the outcomes of organizations. Managers, shareholders and workers have 'cognitions' or 'mental maps' of their organization and its environment. These mental maps, mental schemes or cognitive structures4 of what is going on in their organization partly determine their behavior. Their content concerns the organization's strategies, its chances of success, the role power plays, their own roles and the relationships with the outside world. Following Weick and Bougon (1986: 131), these schemes 'assimilate uncertain aspects to existing structures' or, as Donaldson and Lorsch (1984) say, 'they translate a world that can be overwhelmingly ambiguous into comprehensible and familiar terms'. Empirical studies reported by Eden and Spender (1998) show how crucial these mental maps are when trying to explain organizational outcomes. They also show how important the articulation of these mental maps is for organizations to become 'learners'. This is partly reflected in the idea that double-loop learning always involves a critical assessment of the assumptions (i.e. 'mental maps') underlying organizational activities. Following Argyris and Schön, it is wise to distinguish between theories-in-use and espoused theories. Eden and Spender (1998: 15) rightly argue that parts of these 'maps' or 'theories' are implicit and almost amount to 'tacit knowledge', on an individual as well as on a collective level.
By articulating them, it becomes possible to compare them with evidence from scientific organization studies5. In this field, too, attention is paid to the methodology of how to reconstruct (or 'elicit') and assess mental maps (Van der Heiden & Eden, 1998: 66). 'Mapping strategic thought', cognitive mapping techniques and the Self-Q method are referred to.

3 We will not deal with similar approaches in marketing research (where "rules-of-thumb" of marketeers are reconstructed in terms of if-then propositions) or in the field of strategy development/strategic planning, where the "oval mapping process" is used to identify strategic issues and develop effective strategies (Bryson, 1995: 257 ff). 4 Other terms are 'cognitive models', 'scripts', 'beliefs', or 'assumptions'.

5 Between the policy sciences' approach and the psychologists' approach there are studies articulating 'institutional logics'. These logics are sets of 'material practices and symbolic constructions that constitute an institutional order's organizing principles' (DiMaggio, 1997: 277). There is a link with what in neo-institutional economics is referred to as (QWERTY-)path-dependency.


2. THREE EXAMPLES OF RECONSTRUCTED PROGRAM LOGICS

2.1. Anticorruption initiatives of the World Bank Institute

The World Bank Institute (WBI) has developed the concept of national integrity systems as a means to identify and strengthen those institutions with a mandate to fight corruption. These institutions, known as the "pillars of integrity," include the executive branch, watchdog agencies, Parliament, civil society, the media and the judiciary. Central to this anticorruption initiative are the following types of actions: integrity workshops and media workshops.

The main purpose of integrity workshops is to formulate and agree upon an anti-corruption program and, in the process, raise awareness of the costs of corruption and discuss the roles the various pillars of integrity play in the fight against corruption. Workshops are also intended to create a partnership between members of the integrity pillars and to develop an outline of a national integrity system geared to helping curb corruption. Workshops are meant to become forums for establishing policy dialogues focused on developing programs and activities to fight corruption. As a result, participants' awareness is believed to be enhanced, which facilitates the further development and implementation of the program, which basically consists of public awareness campaigns, following up cases where corruption is reported (enforcement), and implementing legal and institutional change.

With regard to the media workshops, the media are seen as key players in informing the public about corruption and exposing corrupt practices. The workshops focus on awareness raising and on discussions of the media's role in the fight against corruption, but also on improving professional techniques. Later, more advanced workshops offer investigative journalism courses and also concentrate on the identification of key repositories of public information in the country.
These activities have a programmatic impact when a program for fighting corruption and building a national integrity system is discussed and agreed upon; the need to curb corruption and to build a national integrity system is put on the agenda of policy officials of governments and of representatives of civil society. The social impact to be achieved is to foster informed public discussion and continuing political debate on the issue of integrity within society at large, and particularly among the political leadership. The organizational impact is to foster a sense of ownership and commitment to a national integrity program within government and civil society organizations.

The reconstructed program logic goes as follows (Leeuw et al, 1999).

FIGURE 1: Schematic representation of core elements of EDI's underlying program logic

An anticorruption program emphasizing (participatory) workshops

• will foster policy dialogues;
• will help establish a 'sharing and learning' process of 'best practices' and 'good examples' that will have behavioral impacts (like signing integrity pledges);
• this learning process will be more than ad hoc or single-shot, while it will also help steer 'action research';
• will empower participants;
• will involve partnerships and networks with different stakeholders within civil society and will therefore establish (or strengthen) 'social capital' between partners fighting corruption;
• will disclose knowledge about who is to be trusted in fighting corruption and who is not;

when these activities help realize 'quick wins', that will encourage others to also become involved in the fight against corruption; when these activities also help to establish 'islands of integrity' that can have an exemplary function, they will indeed have such a function;

developing 'local ownership' when dealing with anti-corruption activities,

a trickle-down effect from these workshops to other segments of society will take place;


    then this will lead to

• increased public awareness of the cons of corruption;
• increased awareness of the cons of corruption within civil society;
• institution building through establishing or strengthening the different pillars of integrity.

    Together with:

• a transparent society and a transparent and accountable state;
• an exit strategy for the World Bank,

    this will help establish (or strengthen) a national integrity system

    which will help establish Good Governance

    WHICH WILL REDUCE

    CORRUPTION
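A reconstructed logic like the one above can be written down as a set of chained if-then propositions and given a minimal mechanical check. The Python sketch below illustrates one way to do this; the three propositions are simplified paraphrases of the chain in Figure 1, not the actual 26.

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    """One 'if-then' link in a reconstructed program theory."""
    antecedent: str
    consequent: str

# Hypothetical, heavily simplified fragment of the anticorruption chain.
chain = [
    Proposition("integrity workshops are held",
                "policy dialogues are fostered"),
    Proposition("policy dialogues are fostered",
                "public awareness of the costs of corruption increases"),
    Proposition("public awareness of the costs of corruption increases",
                "a national integrity system is strengthened"),
]

def is_connected(chain):
    """Minimal logical-consistency check: each proposition's antecedent
    must be the consequent of its predecessor, so the links form a chain."""
    return all(prev.consequent == nxt.antecedent
               for prev, nxt in zip(chain, chain[1:]))

print(is_connected(chain))
```

Such a representation does not assess empirical content, but it makes gaps in the chain (missing links that need warrants) visible immediately.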

This theory consists of 26 if-then propositions that were reviewed on the basis of existing (social, economic and behavioral) research findings, field interviews and document analysis (Leeuw et al, 1998; 1999).

2.2. Quality assessment and evaluation in higher education

Harvey (1999) recently described the general way in which (institutional) evaluations within higher education take place. "Despite the very varied objects of evaluation and the array of different types of agencies [in many different countries, fll], there is a surprising conformance in the methods that are adopted. Approaches to evaluation in higher education, as has frequently been pointed out, are heavily dependent on three basic elements:

• self-assessment (or submission);
• peer evaluation;
• statistical or performance indicators.


The results are prepared as a report that usually becomes a public document, albeit that a more detailed version may remain confidential. Typically, the procedure is for the institution or programme of study (or subject area) to produce a self-evaluation report or some other form of submission for assessment, such as a research profile. The qualitative self-evaluation is often complemented by statistical data. The report (and the appropriate statistical data) are scrutinised by an external body. Sometimes more information is requested, either by the co-ordinating body or by the team of 'respected' peers who will subsequently visit. This additional material may be received in advance or be available during the visit. The peer-review panel visits the institution. Usually such a visit lasts between one and four days. They attempt to relate the self-assessment document to what they see or, in practice, hear. Often, they see relatively little, as they spend most time closeted in a room having discussions with group after group of 'selected' discussants. In some cases the peers may observe facilities or even the teaching and learning process itself, although the latter is rare" (Harvey, 1999).

He goes on to say the following: "In the UK, millions of pounds are spent every year to discover that, on the basis of the teaching quality assessments, 0.6% of courses are failing. Similarly, in many UK institutions the institutional Quality Audit process (formerly undertaken by HEQC) is entirely orchestrated. Typically, auditors 'hold court' in the University Senate Room and see a stream of visitors, usually in small groups. These groups are summoned early by the university senior managers, briefed before they go in to see the auditors and de-briefed when they come out. The auditors hear a story that reflects the formal organisational process. Formal structures, though, are significantly removed from the reality of the living and dynamic organisation that is the university.
If the audit process wants to know what really goes on, then an entirely different approach is necessary: one that involves grubbing about in departments."

Goodlad (2000: 71 ff) raises some similar issues in an essay on 'benchmarks and templates: some notes and queries from a sceptic'. He puts forward the point that "huge quantities of people's time and effort have been devoted to quality assurance mechanisms, often with no reference to what exactly is to be assured". Whether these allegations are right or wrong, they point to a need to articulate underlying assumptions. What are some of these assumptions? (Leeuw, 2001)

• Peer review ('visitation') and accreditation will have an impact on quality, primarily because of the (media) publicity involved in making reports public;

• when visitation and accreditation take place, institutes of higher education learn much more about themselves; this facilitates the development and implementation of more efficient policies and programs;

• when some (regions of) countries are more of a frontrunner in higher education evaluation and accreditation than other parts/countries, a very important explanatory factor is the role of certain individuals as 'leaders in the field';

• quality control mechanisms will stimulate both single- and double-loop organizational learning by the actors and organizations involved and will not have unintended side-effects;

• higher education institutions will be compared with one another, but these comparisons are at such a level of generality that they have little or no meaning. Careers advisers, parents, partners and students will make generalizations about institutions based on memory or perceived reputation or direct experience6;

• the process of 'visitation'/peer review is getting more and more orchestrated; the more that is the case, the less the checklists of the reviewers will focus on quality7 and the more they will focus on procedures and templates;

• oral feedback to the institution immediately after an on-site visit is neither effective nor recommendable, because the auditors/review committee are very probably still too much under the influence of the atmosphere and the (lack of) quality of the lunch, and also because there can be debate afterwards about what was said and what was not said.

2.3. Auditing: the underlying audit feedback-theory

Barzelay (1996) has described underlying assumptions of auditing. Underlying compliance/financial audits is the 'theory' that government resembles a 'machine bureaucracy'. This bureaucracy is thought to function well when officials apply legal norms and technical standards to matters within their assigned areas of authority and responsibility. 'The role orientation of the organizations that conduct traditional audits is to be institutionally aloof, or independent, from both political authorities and bureaucracy'. Underlying performance audits is the 'theory' that government acts as an 'adaptive organism': this image 'portrays managers as agents performing important functions, including adapting organizations to shifts in their mandates and resources... The concept of performance audit is characterized primarily by the view that the public sector functions well when managerial rationality is applied to the perennial task of adjusting means to ends and, in particular, to accomplishing results with resources'. Here the role orientation is also one of independence, and of rendering judgments about the design and operation of governmental organizations. The auditor's implicit assumptions about why his work will make any difference have also been articulated (Leeuw, 1996; 1998). This is called the 'auditor's feedback-theory', and it can be stated as follows:

    • feedback from the auditor to the auditee is needed when it is shown that the standards or goals of the audited organization are not (or inefficiently) reached;

• the auditee will listen to the feedback;
• he subsequently will take follow-up action;
• which will lead to the realization of the formal goals, while
• this will not lead to unintended and undesired side-effects.

This 'theory' assumes that the auditee refrains from strategic actions (merely making himself look good), elicited by the fact that he is familiar with the standards or measurements auditors apply. The theory also assumes that the likelihood of unintended and undesired side-effects is small to zero. With regard to the empirical content or validity of this 'audit feedback theory', we refer to Barzelay (1996), Leeuw (1996) and Meyer and O'Shaughnessy (1994).

6 This one is a direct citation from Woolf (2000: 94). 7 See Van Berkel (2000).


    3. METHODOLOGY: HOW TO RECONSTRUCT AND EVALUATE UNDERLYING PROGRAM/PROJECT LOGICS

    Several approaches are available.

    A. The Log(ical) Framework approach (also referred to as the Project Cycle Management approach);

B. The Devil's advocate approach;
C. SAST: Strategic Assumption Surfacing and Testing;
D. Elicitation Methodology;
E. Reconstructing and evaluating policy theories.

AD I: THE LOGICAL FRAMEWORK APPROACH

Backgrounds

According to Jackson (1997) the logical framework approach provides a set of tools that can be used for planning, designing, implementing and evaluating projects. Logframes provide a structured, logical approach to setting priorities and determining the intended results and activities of a project.

The Logframe Approach (LFA) is currently used by, among others, the EU in its Project Cycle Management8, by USAID and other development aid organisations, and by the World Bank. The LFA tradition goes back to the late sixties (Solem, 1987; Sartorius, 1996) and in particular to the U.S. Agency for International Development (USAID). '…In 1971 USAID undertook a worldwide effort to train its field staff on the LFA. This was the First Generation of the LFA. …By the early-to-mid 1980s, with encouraging results, the Germans had begun to use the LFA as a participatory planning tool involving project beneficiaries and other key stakeholders. …This Second Generation LFA recognizes the importance of both the content or substance of the design [of the project] and the team process that is undertaken to attain it'. Later in the 1980s 'quicker and smarter' computer software packages were developed to carry out LFAs (Sartorius, 1996: 54). This is one of the characteristics of the Third Generation LFA; other characteristics are a better integration of the LFA with other project implementation tools, a better understanding of practical indicators for project performance and supporting M(onitoring) and E(valuation) methods, and a better understanding of the critical success factors required for sustainable and effective LFA use within institutions.

(Third Generation) Logframe analysis therefore is an attempt to think in an integrated, systematic and precise way about: a) the different levels of project objectives; b) the causal linkages between these different levels; c) the assumptions about the other factors that are needed for the connections between the different levels to be valid;

8 Articulating assumptions, as part of the EU project management cycle, is limited to assumptions concerning factors that are important for the success of EU projects but lie outside their scope. Assumptions are answers to the question: 'what external factors are not influenced by the project, but may affect its implementation and long-term sustainability' (EU, 1993: 29).


d) how to assess the degree of fulfilment of the various levels of targets and objectives.

Element a) (a hierarchy of objectives) is the heart of the exercise; the other elements try to operationalize and rationalize it. Elements b) and c) constitute the so-called "vertical logic" of the resulting matrix, and element d) concerns the "horizontal logic".

We describe an LFA focused on the field of international agricultural research and technology and its impact. This field is covered by the CGIAR, the Consultative Group on International Agricultural Research, a 400 million dollar consortium of 16 research institutions world-wide (focused on topics like food policy research, rice research, etc.) sponsored by, among others, the World Bank. The CGIAR, and in particular its SPIA (Standing Panel on Impact Assessment), is heavily involved in LFA activities related to evaluating the impact of the CGIAR. Within the CGIAR, the LFA is currently being implemented on three levels: system, centre and project level. What follows largely describes the system-level log frame.


FIGURE 2: CGIAR SYSTEM LOG FRAME

RELATIONS AMONG LOG FRAMES: SYSTEM, CENTRE, AND PROJECT

CGIAR System Logframe:

Mission: overall raison d'être for the CGIAR System.

Goal: overall benefits for the target population/environment (description of the development hypothesis), with indicators to specify/measure achievement.

Intermediate Goal: direct benefits for beneficiaries, with indicators to specify/measure achievement of the intermediate goal (linked to the Centre Logframes).

Purpose: utilization of outputs by direct clients of the CGIAR System, with indicators to specify/measure achievement of purpose (linked to the Project Logframes).

Outputs: products (tangible/intangible) delivered by the System, with indicators to specify/measure achievement of outputs.
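One way to make the vertical logic of a log frame concrete is to hold its rows as data, ordered from activities up to the goal, and read each adjacent pair as an "if-and-then" statement. The sketch below does this in Python; the narratives, indicators and assumptions are hypothetical illustrations in the spirit of the agricultural example, not taken from any CGIAR document.

```python
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    level: str                                       # Activity, Output, Purpose, Goal
    narrative: str                                   # what is to be achieved at this level
    indicators: list = field(default_factory=list)   # the "horizontal logic"
    assumptions: list = field(default_factory=list)  # external factors, outside project control

# Hypothetical rows, ordered bottom-up: the ordering itself encodes the vertical logic.
logframe = [
    LogframeRow("Activity", "run farmer training courses",
                ["number of courses held"], ["trainers remain available"]),
    LogframeRow("Output", "farmers trained in improved rice varieties",
                ["number of farmers trained"], ["farmers willing to attend"]),
    LogframeRow("Purpose", "improved varieties adopted by farmers",
                ["adoption rate after two seasons"], ["seed prices stay affordable"]),
    LogframeRow("Goal", "increased rural food security",
                ["household food consumption"]),
]

def vertical_logic(logframe):
    """Render the if-and-then reading: lower level plus its assumptions
    should be sufficient for the next level up."""
    steps = []
    for lower, upper in zip(logframe, logframe[1:]):
        cond = f" AND {'; '.join(lower.assumptions)}" if lower.assumptions else ""
        steps.append(f"IF {lower.narrative}{cond} THEN {upper.narrative}")
    return steps

for step in vertical_logic(logframe):
    print(step)
```

Reading the matrix this way makes explicit that every level-to-level step is itself an assumption open to assessment.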


The CGIAR has coined the following terminology. Goals refer to the overall benefits for the target population, defining the overarching goals of the CG. Intermediate Goals are direct benefits resulting from the uptake of innovations which include outputs from the CG. Purposes are the utilization of the CGIAR outputs by those who receive them, while Outputs are defined products delivered by projects, for which the CG is responsible, but which are generally produced together with partners. Indicators are performance standards with observable characteristics (which permit monitoring), Milestones are 'key intermediate targets' and Assumptions are 'conditions which strongly influence the attainment of outputs, purposes, and goals but which are outside the influence of the CGIAR' (TAC Secretariat, 1998: 4).

Steps within a logical framework analysis

The log-frame approach usually has 5 major steps.

Step 1) Analysing the problem and developing a "Problem Tree"

The analysis phase usually begins with an analysis of problems. The problem analysis is undertaken by identifying the main problems and developing a "problem tree" through an analysis of cause and effects. Brainstorming techniques are used to identify the main problems.

Example

Figure 2 A simple problem tree (from Jackson, 1997)

Effect:
  Loss of biodiversity
    - Decreasing number of elephants
        caused by: human/elephant conflicts; overpopulation by people; hunting/poaching
    - Decreasing number of varieties of maize
        caused by: no adequate legislation; monopoly of seed trade; pesticides
Cause
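A problem tree can also be held as a nested data structure; restating each problem as an objective, its positive mirror image, then yields an objectives tree mechanically. The Python sketch below uses the biodiversity example; the restatements are hypothetical illustrations.

```python
# A problem tree as a nested structure: each node is (problem, [causes]).
problem_tree = ("loss of biodiversity", [
    ("decreasing number of elephants", [
        ("human/elephant conflicts", []),
        ("overpopulation by people", []),
        ("hunting/poaching", []),
    ]),
    ("decreasing number of varieties of maize", [
        ("no adequate legislation", []),
        ("monopoly of seed trade", []),
        ("pesticides", []),
    ]),
])

# Hypothetical restatements: each problem rephrased as its positive mirror image.
restate = {
    "loss of biodiversity": "biodiversity preserved",
    "decreasing number of elephants": "elephant population stabilised",
    "human/elephant conflicts": "human/elephant conflicts reduced",
    "overpopulation by people": "population pressure managed",
    "hunting/poaching": "poaching controlled",
    "decreasing number of varieties of maize": "maize varieties maintained",
    "no adequate legislation": "adequate legislation in place",
    "monopoly of seed trade": "competitive seed trade established",
    "pesticides": "pesticide use reduced",
}

def to_objectives_tree(node):
    """Mirror a problem tree into an objectives tree, keeping its shape."""
    problem, causes = node
    return (restate[problem], [to_objectives_tree(c) for c in causes])

objectives_tree = to_objectives_tree(problem_tree)
print(objectives_tree[0])
```

The cause-effect links of the problem tree thereby become the means-ends links of the objectives tree.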

Step 2) Objectives analysis

The "problem tree" is transformed into an "objectives tree" by restating the problems as objectives. An "objectives tree" is the positive "mirror image" of the problem tree.

Step 3) Testing the logic of the Problem Tree

There are a number of tools that can be used to test the logic of the objectives tree.

Step 3.1 The intent structure analysis


Lee-Smith (1997) (cited by Jackson, 1997: 6 ff) describes this as an "ends-means" diagram that portrays the values, goals, objectives and detailed actions of components of an organisation, program or project. An example of an intent structure is shown below. The logic of the tree is tested by starting at the top of the hierarchy and asking how each level in the hierarchy is to be achieved, and/or by starting at the bottom of the hierarchy and asking why each objective/action is being undertaken.

Figure 3 The Intent Structure (adapted from Lee-Smith, 1997)

    Value or vision (end)
      |
    Overall Objective
      |
    Specific Objectives or Purposes
      |
    Expected Results
      |
    Specific Activities (means)

Moving up the hierarchy answers the question "why is this to be done?"; moving down answers "how is this to be done?". The level above is the end for which the level below is undertaken; the level below is the means by which the level above is achieved.

Step 3.2 Force Field Analysis

Force field analysis is an approach used to develop a list of the factors that may promote or inhibit reaching the goals and objectives of the project. The aim of force field analysis is to provide a model for encouraging the participants to:
• examine current characteristics of the present state or situation;
• develop a list of positive and negative forces influencing the achievement of the goals and objectives;
• discuss the means of strengthening the positive forces and overcoming the negative forces.

A graphical representation of a force field analysis is shown below.


Figure 4 Force Field Analysis

    Goals
      ^  positive forces (pushing from the current situation towards the goals)
      v  negative forces (pushing back towards the current problems)
    Current Problems
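A force field list lends itself to a very simple tally: score each force, separate the driving from the restraining ones, and sum the scores to see which side dominates. The Python sketch below uses hypothetical forces with informal scores (positive scores push towards the goals, negative scores hold them back).

```python
# Hypothetical forces, scored informally by workshop participants (-3 .. +3).
forces = {
    "donor interest in the project": +2,
    "trained local staff available": +1,
    "weak enabling legislation": -2,
    "resistance from seed-trade monopoly": -3,
}

# Separate driving (positive) from restraining (negative) forces.
positive = {name: s for name, s in forces.items() if s > 0}
negative = {name: s for name, s in forces.items() if s < 0}
net = sum(forces.values())

print(f"{len(positive)} driving forces, {len(negative)} restraining forces, net {net}")
```

A negative net score signals that, as listed, the restraining forces dominate, which is exactly the point where participants discuss how to strengthen the positive forces or weaken the negative ones.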

Step 3.3 SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis

SWOT is widely used as a tool for exploring the constraints and opportunities of a proposal. It can be used to test the completeness of a goal. Strengths and weaknesses refer to those strengths and weaknesses within the project. Opportunities and threats refer to the opportunities for, and the threats to, the project achieving the goal.

Step 4) Assumptional Analysis

The aim of specifying the assumptions is to identify the external factors that will affect the success of the project. Once assumptions have been identified, they are stated in terms of the desired situation. An assumptions algorithm is shown in figure 5.


Figure 5 The Assumption Algorithm (ITAD, 1996)

Is the external factor important?
  NO  -> do not include in logframe
  YES -> Will it be realised?
           Almost certainly -> do not include in logframe
           Likely           -> include as an assumption
           Unlikely         -> Is it possible to redesign the project in order to
                               influence the external factor?
                                 YES -> redesign the project: add activities or
                                        results, or reformulate the project purpose
                                 NO  -> the project is not technically feasible
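The decision rules of the assumption algorithm are simple enough to state as a small function. The Python sketch below follows the branches of Figure 5 directly; the returned string labels are paraphrases of the boxes in the figure.

```python
def screen_assumption(important: bool, likelihood: str, can_redesign: bool) -> str:
    """Apply the assumption algorithm (Figure 5) to one external factor.

    likelihood: 'almost certainly' | 'likely' | 'unlikely'
    Returns the action the algorithm prescribes.
    """
    if not important:
        return "do not include in logframe"
    if likelihood == "almost certainly":
        return "do not include in logframe"
    if likelihood == "likely":
        return "include as an assumption"
    # Unlikely to be realised: a potential killer assumption.
    if can_redesign:
        return "redesign the project"
    return "project not technically feasible"

print(screen_assumption(True, "likely", False))    # include as an assumption
print(screen_assumption(True, "unlikely", False))  # project not technically feasible
```

The interesting cases are the 'unlikely' branches: they force a choice between redesigning the project and concluding that it is not feasible.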

Sometimes "rules of thumb" are given regarding the way in which an LFA is done in practice (often through workshops). Suggestions are then formulated about the number of participants, the way in which problems are diagnosed and the ways in which the report is drafted and presented.


Step 5) Developing verifiable indicators

'For each output and activity indicators need to be developed' (Jackson, 1997: 9). These indicators usually have to meet criteria like measurability, feasibility, relevance and accuracy, sensitivity and timeliness. Criteria like 'gender-orientedness' and 'equity-orientedness' are also used in the literature.

    AD II: RECONSTRUCTING AND EVALUATING UNDERLYING POLICY AND PROGRAM THEORIES

Methodology

What are the rules of method that are used when articulating a policy theory?

    1. Search in (official) documents/interviews for statements that indicate: • why it is believed necessary to solve the (research/social/ organizational/policy)

    problem; • what are the goals of the policy program/instrument under review.

    2. These statements point to (social/behavioral/economic) mechanisms that are believed to be

    crucial in solving the problem. Statements that have the following form are especially relevant:

    • “it is evident that x…….will work”, • “in our opinion the best way to go about this problem is to …”, • “the only way to solve this problem is to …”, • “our institution’s x years of experience tells us that …” • Compiling a survey of these statements and linking the solutions/approaches/ policy instruments with the overall goals of the program under review

3. Reformulating these statements as conditional “if-then” propositions or propositions of a similar structure (“the more …, the more …”);

4. Searching for warrants, i.e. missing links, through argumentational analysis. According to Toulmin (1958) and Mason & Mitroff (1981), a warrant is the “because” part of an argument: it says that B follows from A because of a (universally) accepted principle. For example: “the market will grow at 7% per annum” follows from “the market has grown by at least 7% a year for each year of the last 10 years”, because of the principle “past market growth is a good indicator of future market growth”. The “because” part of such an argumentation is often not made explicit. Consequently, these warrants must be inferred by the person performing the analysis (Dunn, 1981).

5. Reformulating these warrants in terms of conditional “if-then” (or similar) propositions;


6. Assessing the validity of the propositions by looking into9:
• the logical consistency of the set of assumptions10;
• its empirical content, i.e. to what extent does the policy theory correspond with the state of the art within the social/behavioral/economic sciences? Here one compares statements that form part of the policy theory with statements that belong to explanatory theories with a high empirical content;
• the extent to which the policy theory focuses on variables/factors that can be ‘manipulated’ or ‘steered’ through policies. Currently, for example, it makes hardly any sense for a crime prevention policy theory to pay much attention to the effectiveness of genetic manipulation of potential criminals.
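A minimal sketch of steps 3 and 6 taken together: policy-theory statements recast as signed “the more X, the more/less Y” propositions, with a naive check of logical consistency (cotenability). The representation and the helper name are ours, for illustration only.

```python
def cotenability_clashes(propositions):
    """Find pairs that link the same cause to the same effect with
    opposite signs (+1: 'the more X, the more Y'; -1: 'the more X,
    the less Y'). A non-empty result signals an inconsistent set.
    """
    clashes = []
    for i, (cause, effect, sign) in enumerate(propositions):
        for cause2, effect2, sign2 in propositions[i + 1:]:
            if cause == cause2 and effect == effect2 and sign != sign2:
                clashes.append((cause, effect))
    return clashes

# A toy policy theory with one pair of rival assumptions.
policy_theory = [
    ("intensity of auditing", "compliance", +1),
    ("intensity of auditing", "strategic window-dressing", +1),
    ("intensity of auditing", "compliance", -1),  # rival assumption
]
```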

AD III: THE DEVIL’S ADVOCATE (DA) APPROACH
Starting point is Schwenk’s statement that ‘research in the field of organizational decision-making has demonstrated that conflict, if properly introduced and managed, can improve the quality of decisions’ (Schwenk, 1984: 153). He follows Janis, who recommended the use of devil’s advocates by policy makers, citing Robert Kennedy’s role in the Cuban Missile Crisis. Janis suggested that ‘in situations where there is a great deal of agreement among policy-makers [implicit or explicit, FLL] and there is a danger of premature consensus, the chief executive should assign one or more group members to the role of devil’s advocate’.
Methodology:
• The DA approach ‘should begin with the formal statement of a proposed course of action and the analysis underlying the proposal;
• He (the DA, which can be a group) should then examine the proposal for inconsistencies, inaccuracies, and irrelevancies and prepare a critique of the proposal based on this examination;
• If the proposal is found to be unsound, the DA should develop a re-analysis of the problem and alternative recommendations;
• A kind of confrontation session between an advocate of the original proposal and the DA’s proposal is then held with key organizational decision-makers and observers;
• Based on this confrontation, the decision-makers can then accept the proposal, modify it or develop a completely new proposal’ (Schwenk, 1984: 154-155).
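Schwenk’s steps can be read as a simple workflow. The sketch below is a hypothetical rendering: the function names and callback signatures are ours, while the three possible outcomes follow the quoted steps.

```python
def devils_advocate_round(proposal, critique, reanalyse, confront):
    """One DA round: critique the proposal, re-analyse if unsound,
    then let the confrontation session decide.

    critique(proposal)              -> list of objections found
    reanalyse(proposal, objections) -> alternative recommendation
    confront(proposal, alternative) -> 'accept', 'modify' or 'new'
    """
    objections = critique(proposal)
    if not objections:              # proposal found sound
        return "accept", proposal
    alternative = reanalyse(proposal, objections)
    outcome = confront(proposal, alternative)
    return outcome, (alternative if outcome == "new" else proposal)
```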

Conditions for making this approach work are:
• to rotate the role of DA among junior executives with upper management potential;
• the willingness of decision-makers to play with material from different angles and in the context of popular as well as unpopular hypotheses;
• to prevent a maldistribution of resources (power, money, status) among the proponents of the different (DA) views;
• that there is no direct involvement of top-level management in the debates; and
• that there is adequate time for the sessions.

9 In the research literature more criteria are mentioned.
10 This is also called ‘cotenability’ (Tetlock & Belkin, 1996: 19).

AD IV: STRATEGIC ASSUMPTION SURFACING AND TESTING (SAST)
SAST or ‘assumptional analysis’ can be found in a series of studies (Mitroff & Emshoff, 1979; Kilmann, 1979; Jackson, 1989) but has as its core knowledge basis Mason and Mitroff’s book Challenging Strategic Planning Assumptions (1981). Central in the approach are four major stages, including some rules of method:
• stage 1: group formation;
• stage 2: assumption surfacing;
• stage 3: dialectical debate11;
• stage 4: synthesis.
We follow Jackson (1989: 14 ff).
Stage 1: Group Formation
“The aim of this stage is to structure groups so that the productive operation of the later stages of the methodology is facilitated. As wide a cross-section of individuals as possible who have an interest in the relevant policy question should be involved. They are divided into groups, care being taken to maximize convergence of viewpoints within groups and to maximize divergence of perspectives between groups.
Stage 2: Assumption Surfacing
During this stage the different groups separately unearth the most significant assumptions that underpin their preferred policies and strategies.

11 This concept of the dialectical debate is the reason why in the literature SAST is also referred to as the DI approach: dialectical inquiry.
12 Other terms are ‘cognitive models’, ‘scripts’, ‘beliefs’, or ‘assumptions’.
13 Between the policy sciences’ approach and the psychologists’ approach there are studies articulating ‘institutional logics’. These logics are sets of ‘material practices and symbolic constructions that constitute an institutional order’s organizing principles’ (DiMaggio, 1997: 277). There is a link with what in neo-institutional economics is referred to as (QWERTY-) path-dependency.
14 Also the auditor’s implicit assumptions on why his work will make any difference have been articulated (Leeuw, 1996; 1998). This is called the ‘auditor’s feedback theory’, and it can be stated as follows:
• feedback from the auditor to the auditee is needed when it is shown that the standards or goals of the audited organization are not (or inefficiently) reached;
• the auditee will listen to the feedback;
• he subsequently will take follow-up action;
• which will lead to the realization of the formal goals, while
• this will not lead to unintended and undesired side-effects.
This ‘theory’ assumes that the auditee refrains from strategic actions (merely to make himself look good), elicited by the fact that he is familiar with the standards or measurements auditors apply. The theory also assumes that the likelihood of unintended and undesired side-effects is small to zero. With regard to the question what the empirical content or validity of this ‘audit feedback theory’ is, we refer to Barzelay (1996), Leeuw (1996) and Meyer and O’Shaughnessy (1994).


Two techniques assume particular importance in assisting this process. The first, “stakeholder analysis,” asks each group to identify the key individuals or groups on whom the success or failure of their preferred strategy would depend. This involves asking questions such as: who is affected by the strategy? Who has an interest in it? Who can affect its adoption, execution, or implementation? And who cares about it? For the stakeholders identified, each group then lists what assumptions it is making about each of them in believing that its preferred strategy will succeed. The second technique is “assumption rating”. For each of the listed assumptions each group asks itself the following: how important is this assumption in terms of its influence on the success or failure of the strategy? And: how certain are we that the assumption is justified? The results are recorded on a chart such as that shown below. Each group should now be able to identify a number of key assumptions - usually in the most important/least certain quadrant of the chart - upon which the success of its strategy rests.

Figure 7: Assumption rating chart (a 2×2 grid: the vertical axis runs from Least Certain to Most Certain, the horizontal axis from Least Important to Most Important)
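The chart can be operationalised with two scores per assumption. The 0-to-1 scale and the 0.5 split are our illustrative assumptions; SAST itself prescribes no metric.

```python
def quadrant(importance, certainty):
    """Place an assumption on the importance/certainty chart
    (both scores in [0, 1]; 0.5 splits each axis)."""
    col = "most important" if importance >= 0.5 else "least important"
    row = "most certain" if certainty >= 0.5 else "least certain"
    return col, row

def key_assumptions(rated):
    """Key assumptions sit in the most important / least certain cell."""
    return [name for name, (imp, cert) in rated.items()
            if quadrant(imp, cert) == ("most important", "least certain")]
```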

Stage 3: Dialectical Debate
The groups are brought back together and each group makes the best possible case for its preferred strategy, while identifying its key assumptions. Points of information only are allowed from other groups at this time. There is then an open, dialectical debate focusing on which assumptions are different between groups, which are rated differently, and which of the other groups’ assumptions each group finds most troubling. Each group should develop a full understanding of the preferred strategies of the others and their key assumptions.


Stage 4: Synthesis
An attempt at synthesis is then undertaken. Assumptions are negotiated and modifications to key assumptions made. Agreed assumptions are noted, and with luck, these can form the basis for consensus around a new strategy that bridges the gap between the old strategies and goes beyond them as well. If no synthesis can be achieved, points of disagreement are noted and the question of what research might be done to resolve these differences is discussed” (Jackson, 1989: 14 ff).

AD V: ELICITATION METHODOLOGY
Background
Projects and programs are always developed and implemented by organizations. Organizations are driven by the (combined) actions of numerous persons. Underlying these actions are ‘mental models’. It can therefore be expected that attention is paid to underlying assumptions not only within the policy sciences, but also within organization studies, including the psychology of organizations. Here the focus is on ‘managerial and organizational cognitions’ (Eden & Spender, 1998). One of the central questions is what the relationships are between these cognitions and the outcomes of organizations. Managers, shareholders, and workers have ‘cognitions’ (or ‘mental maps’) about their organization and its environment. These maps12 of what is going on in their organization partly determine their behavior. Their content concerns the organizational strategies, their chances of success, the role power plays, their own roles and the relationships with the outside world. Following Weick and Bougon (1986: 131), these schemes ‘assimilate uncertain aspects to existing structures’ or, as Donaldson and Lorsch (1984) say, ‘they translate a world that can be overwhelmingly ambiguous into comprehensible and familiar terms’. Empirical studies reported by Eden and Spender (1998) show how crucial these mental maps are when trying to explain organizational outcomes. It is also shown how important the articulation of these mental maps is for organizations to become ‘learners’. This is partly reflected in the idea that double-loop learning always involves a critical assessment of the assumptions (i.e. ‘mental maps’) underlying organizational activities. Following Argyris and Schön, it is wise to distinguish between theories-in-use and espoused theories. Eden and Spender (1998: 15) rightly argue that parts of these ‘maps’ or ‘theories’ are implicit and almost ‘tacit knowledge’, on an individual as well as on a collective level. By articulating them, it is possible to compare them with evidence from scientific organization studies13.
Van der Heijden and Eden (1998: 66 ff) link their way of eliciting underlying assumptions to organizational learning. “The most crucial part of developing [organizational] learning … is that of eliciting the managers’ ‘taken-for-granted’ mental models of their world… These are often kept tacit in order to avoid confrontation and to maintain flexibility in the negotiation that strategy development entails” (p. 66). The “task of strategic elicitation within the context of organizational learning is to get at deep knowledge and embedded wisdom: the theories-in-use, rather than the espoused theories or the mere rhetoric of strategy”.
Methodology


Also in this field attention is paid to the methodology of how to reconstruct (or ‘elicit’ and assess) mental maps (Van der Heijden & Eden, 1998: 66). Following insights from cognitive mapping techniques and the Self-Q method, the following rules of method are suggested:
1. look at the concrete record of strategic intentions, through for example a study of the documentation which is designed to direct behaviour;
2. look at decision-making in action; get involved in the organization (an anthropological observer approach). Watch decision-makers, listen to stories;
3. work with managers on strategic breakdown situations. Become immersed in the thinking and the social process of ‘strategic fire fighting’. Here they recommend the use of ‘Group Decision Support Systems’ such as SODA (‘Strategic Options Development and Analysis’);
4. use well-designed trigger questions in interview situations so that ‘theories-in-use’ can be detected. Follow interviews with feedback to individuals and to the team. The ‘elicitation cycle’ is built on responses to designed trigger questions. The process uses dialectic and non-conforming statements and is iterative.
Van der Heijden and Eden go on to specify five ‘techniques’:
• create an open-ended atmosphere in the interview;
• do away with formal language and create a ‘playful’ atmosphere in which it is easier to deviate from the formal phraseology and the ‘official script’;
• it can be useful to ‘set the interviewees up against themselves’;
• the objective is to create dialectical tension by asking the interviewees to adopt unusual roles;
• the interviewer should listen very carefully for internal inconsistencies in what is being said.

5. PROBLEMS TO BE SOLVED
The five approaches outlined above are the stock of knowledge for articulating and assessing underlying program and policy logics. Compared to a situation where hardly any methodologies were available in this field, this stock of knowledge can be seen as an exemplar of growth of knowledge. However, a number of problems remain to be overcome.
The first difficulty is that assumptional analysis as part of the Logical Framework Analysis (LFA) is limited to factors outside the scope of projects or programs. Though outside factors are important, the same is undoubtedly true for assumptions about ‘inside factors’ within organizations. These are not taken into account. Examples are assumptions about budget maximization mechanisms, the ‘sunk cost’ phenomenon and the importance of ‘managerial cognitions about effective strategies’ versus the cognitions of others within the organization. It is unclear why assumptions on these and other ‘inside the system’ factors are left out.
Secondly, it is unclear whether or not the articulated assumptions are indeed critically assessed, and against which criteria. One should be reminded of the critique uttered by Gasper (1997: 1) that an area of clear weakness with regard to LFA is to be found around ‘the neglect of the assumptions column [in the Framework]’.
A third difficulty concerns the possibility of turning log frameworks into ‘lock’ frameworks. A critical review by Gasper (1997) of the pros and cons of logframe analysis points to this (see also Oakley, 1998). When log-framework analysis develops into “lock”-framework analysis, there is a danger of rigidization and of the focus moving away from articulating and debating substantive assumptions to merely procedural and administrative ones. “Box-filling” is another expression pointing to this danger (Gasper, 1997: 15; 31). Gasper more generally refers to four ‘ism difficulties’:

• Objectives-ism: a strong emphasis on explicit, unified statements of project and policy/program objectives;
• Means-ends-ism: organization of these objectives into a hierarchical, pyramidal system;
• Indicator-ism: a strong emphasis on measuring the attainment of objectives; and
• Project-ism: integration of these elements in the notion of a project.

After reviewing different case studies in the field of development aid programs, Gasper (1997: 30) is of the opinion that ‘the log-frame rose, spread and declined during .. the ’70s, ’80s and ’90s ….. The ‘something is better than nothing’ criterion’ (= LFA is ‘something’, non-LFA is ‘nothing’, FLL) ‘remains valid, but we will be looking also for more than that, both in LFA performance and in the situational refinement of its assessment’. Put in a more sociological way, Gasper shows that well-articulated and well-intended activities may lead to unintended and even negative [side-]effects that run the risk of doing away with the utility of the original goal of the activities. Power (1996), Pollitt & Summa (1998) and Leeuw (1996; 1999) have shown a similar phenomenon at work in the field of auditing14. Other problems to be solved are listed in Table/Appendix 1.
TABLE / APPENDIX 1 HERE


The final paragraph focuses on some recent developments that can remedy some of these problems.
6. NEW DEVELOPMENTS: COMPUTER-SUPPORTED COLLABORATIVE ARGUMENTATION
One of the most challenging new developments that can help solve some of the above-mentioned problems comes from a relatively new field of interdisciplinary studies. I follow van Bruggen et al. (2000), who is working on a Ph.D. project in this field. The new field is “Computer-Supported Collaborative Argumentation” (CSCA). It brings together research from the areas of Computer-Supported Collaborative Learning (CSCL) and Design Argumentation, in particular Design Rationale (DR). For ‘argumentation’ one can read: assumptions underlying programs or policies. This is why this new interdisciplinary field is of great importance for analysing program logics. It is an ICT-driven approach to reconstructing the argumentations underlying decisions of people and organizations. Virtual-graphic representations are crucial; they function as (external) representations of the structure of the underlying assumptions. By using ICT, the transparency of the reconstruction process and the learning possibilities are greatly enhanced.
Bell (1997) makes a distinction between discussion-based tools and knowledge representation tools. Discussion-based tools support the dialogical argumentation of a group. Examples are CSILE (Scardamalia, Bereiter, McLean, Swallow & Woodruff, 1989; Scardamalia & Bereiter, 1994) and the Collaboratory Notebook (Edelson, Gomez, Polman, Gordin & Fishman, 1994; Edelson, O’Neill, Gomez & D’Amico). Knowledge representation tools not only support the dialogical argumentation of the participants, but also represent the argumentation of the individuals. Furthermore, the tools have to support argumentation whose structure and content correspond to, in this case, a valid scientific argumentation. Examples can be found in systems such as the Knowledge Integration Environment (KIE), SenseMaker (Bell, 1997) and Belvédère (Paolucci, Suthers & Weiner, 1995; Suthers & Weiner, 1995; Suthers, Toth & Weiner, 1997).

Knowledge Integration Environment (KIE)

The following examples are taken from the Knowledge Integration Environment (Bell, 1997), an environment in which learners investigate rival hypotheses. The KIE guide directs the learners to claims and evidence.


    Figure 1: The KIE guide (http://www.kie.berkeley.edu/KIE/software/Guidelarge.gif)

SenseMaker
In SenseMaker, hypotheses and supporting evidence are brought together in so-called ‘claim frames’. SenseMaker only uses ‘theory’; it does not use objects like hypothesis or data. Moreover, SenseMaker does not make contradictions between pieces of evidence visible.


    Figure 2: Sensemaker interface (http://www.kie.berkeley.edu/KIE/software/sensemaker_large.gif)

Belvédère
Belvédère is a synchronous system that supports collaborative inquiries. Learners are confronted with challenge problems (e.g., what caused the extinction of the dinosaurs? why does an anticorruption policy focused on the judiciary in Bolivia not work?) that need a (scientific) explanation. The Belvédère environment offers access to web-based material (which also guides learners through the steps of the inquiry), a chat window, and a shared visual workspace where learners construct scientific explanations in so-called ‘evidence maps’. The environment has a coach that comments on the structure of the evidence maps and makes suggestions for improvements. The characteristics of the notation of the evidence maps are what we concentrate on here.


    Figure 3: Belvédère version 2 interface

The ontology of the evidence maps in Belvédère is defined by the objects and relations that students may use when they create evidence maps. In the current version of Belvédère, the objects are ‘principle’, ‘hypothesis’, ‘data’ and ‘unspecified’. The relations are reduced to a basic set of ‘for’, ‘against’ and ‘and’. Participants can express how strong their beliefs in the objects and relations are. This set of objects and relations obviously limits the scope of what Belvédère can express. Suthers (1998b) mentions several possible extensions to the representations in Belvédère, such as concept mapping, plan diagrams, causal loop diagrams, mathematical models, as well as ‘language arts’ with which students can construct diagrams according to their writing goals. Some of these extensions add support to the representation of the domain (concept mapping, mathematical models, causal loops), i.e. widen the scope of the ontology, while others are added to support student activities within these domains.
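The restricted ontology described above can be sketched as a small data model. This is a reconstruction for illustration only; Belvédère’s actual internals are not documented here, and the class and function names are ours.

```python
from dataclasses import dataclass

# Object and relation types as described for the current version.
OBJECT_TYPES = {"principle", "hypothesis", "data", "unspecified"}
RELATION_TYPES = {"for", "against", "and"}

@dataclass
class EvidenceNode:
    label: str
    kind: str        # one of OBJECT_TYPES
    belief: float    # participant's belief strength

@dataclass
class EvidenceLink:
    source: EvidenceNode
    target: EvidenceNode
    kind: str        # one of RELATION_TYPES
    belief: float

def well_formed(nodes, links):
    """True iff every object and relation stays inside the ontology."""
    return (all(n.kind in OBJECT_TYPES for n in nodes)
            and all(l.kind in RELATION_TYPES for l in links))
```

The point of the restriction is visible in `well_formed`: any object outside the four allowed types (say, a Toulminian ‘warrant’) simply cannot appear in a valid map.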

    Suthers (1995) notes that learners are forced by the representational system to indicate which type of object they add (the specificity of the representation forces disambiguation). This often leads to epistemological discussions between learners that will not, however, be represented in the evidence maps. He notes that a weaker representational structure could evade the issue, but that this would also obscure the need to discuss these important points. A weaker representation would leave room for different interpretations of the representation.

There were a number of drawbacks associated with the earlier representational scheme of Belvédère. The early versions had more objects and relations and offered a very explicit Toulminian perspective on argumentation. In subsequent versions, the number of objects and especially the number of different types of relations were reduced. The reduction was motivated on logical grounds: the extra objects and relations were considered redundant. It was also noted, however, that the detailed level at which relations could be specified could interfere with the task: it seemed to cause participants to spend effort on non-goal tasks (Suthers et al., 1997).


    Figure 4: One of the earliest Belvédère interfaces

These three new approaches, which can be grouped under the heading of ‘ICT-driven knowledge management’, can make the process of reconstructing and evaluating underlying program logics more transparent, more open to ‘dialogue’ and probably more likely to have impact. The reason is that reconstructing assumptions collaboratively, with the results immediately shown on the computer screens, allows participants involved in, for example, a devil’s advocate approach or SAST to learn from each other. However, whatever the importance of information and communication technology for carrying out activities focused on solving the current problems, several other suggestions can also be made:
• commit (top) management (CG/Centres) to the activities described;
• report in time: try to predict how much time the processes will take and live up to the arrangements made;
• don’t overstress the importance of what is learned.


7. REFERENCES FOR FURTHER STUDY AND PRACTICE
Barzelay, Mike, Performance auditing and the New Public Management: changing roles and strategies of central audit institutions, in: OECD, Performance auditing and the modernisation of Government, Paris, 1996: 15-57.
Becker, Henk, Social impact assessment, UCL Press, London, 1997.
Bruggen, Jan van and Paul Kirschner, External representations of argumentation, paper, OTEC, Open University of the Netherlands & University of Maastricht.
Chen, H., Theory-driven evaluation, Sage, London, 1990.
DiMaggio, Paul J., Culture and cognition, in: Annual Review of Sociology, 23 (1997): 263-287.
Donaldson, G. and Lorsch, J.W., Decision-making at the Top, Basic Books, New York, 1984.
Eden, C. & J.-C. Spender (eds.), Managerial and organizational cognition. Theory, methods and research, Sage, London, 1998.
Gasper, Des, Logical frameworks: a critical assessment, Working Paper # 264, Institute of Social Studies, The Hague, December 1997.
Jackson, Bill, Designing projects and project evaluations using the logical framework approach, in: www.iucn.org/themes/ssp/lfa.htm (1997).
Jackson, M.C., Assumptional analysis: an elucidation and appraisal for systems practitioners, in: Systems Practice, 2 (1989): 11-28.
Kilmann, R.H., A dialectical approach to formulating and testing social science theories: assumptional analysis, in: Human Relations, 36 (1983): 1-22.
Leeuw, Frans, Aspects méthodologiques de la reconstruction et de l'évaluation des théories de comportement qui sous-tendent une politique démographique, in: Politiques de Population, 4 (1990): 5-43.
Leeuw, Frans L., Policy theories, knowledge utilization, and evaluation, in: Knowledge and Policy, 4 (1991): 73-92.
Leeuw, Frans L., Doelmatigheidsonderzoek van de Rekenkamer als regelgeleide organisatiekunde met een rechtssociologisch tintje?, in: Recht der Werkelijkheid, 14 (1998).
Mason, R.O. and I. Mitroff, Challenging strategic planning assumptions, Wiley, New York, 1981.
Meyer, K. and K. O'Shaughnessy, Organizational design and the performance paradox, in: R. Swedberg (ed.), Explorations in economic sociology, Russell Sage Foundation, New York, 1993: 249-279.
Meyer, K., Measuring performance in economic organizations, in: N.J. Smelser and R. Swedberg (eds.), The Handbook of Economic Sociology, Princeton University Press, Princeton, 1994: 556-581.
Midgley, G., Evaluating services for people with disabilities: a critical systems perspective, in: Evaluation, 2 (1996): 67-85.


Oakley, P. et al., Outcomes and impact: evaluating change in social development, Intrac NGO Management and Policy Series no. 6, Oxford, 1998.
Rossi, P., H. Freeman & M. Lipsey, Evaluation, a systematic approach, Thousand Oaks, 1999.
Sartorius, R., The third generation logical framework approach: dynamic management for agricultural research projects, in: European Journal of Agricultural Education and Extension, 2 (1996): 49-62.
Schwenk, C., Devil's advocate in managerial decision-making, in: Journal of Management Studies, 21 (1984): 153-168.
Solem, R.R., The logical framework approach to project design. Review and evaluation in A.I.D.: genesis, impact, problems, and opportunities, Working Paper # 99, A.I.D., Washington DC.
Suthers, D., Representations for scaffolding collaborative inquiry on ill-structured problems, paper, AERA Annual Meeting, San Diego, California.
Tetlock, P. and A. Belkin (eds.), Counterfactual thought experiments in world politics, Princeton University Press, 1996.
Toulmin, Stephen, The uses of argument, Cambridge University Press, Cambridge, 1958.
Tung, L.L. & A. Heminger, The effects of dialectical inquiry, devil's advocacy and consensus inquiry methods in a GSS environment, in: Information & Management, 25 (1993): 33-41.
Van der Heijden, Kees & C. Eden, The theory and praxis of reflective learning in strategy making, in: Eden, C. & J.-C. Spender (eds.), Managerial and organizational cognition. Theory, methods and research, Sage, London, 1998.
Vedung, Evert, Public policy and program evaluation, New Brunswick & London, 1997.
Weick, K.E. & Bougon, M.G., Organizations as cognitive maps. Charting ways to success and failure, in: H.P. Sims et al. (eds.), The Thinking Organization: dynamics of organizational and social cognition, Jossey-Bass, San Francisco, 1986.
