Monitoring and Evaluation (M&E) of Projects by the Ministry of Public Works, Work Services Group


Ministry of Public Works, Guyana

5th International Engineering Conference, January 2015

Presenters: Ms. Lloyda Rollins and Ms. Jennifer Rahim

Monitoring and Evaluation in Brief

Monitoring is the routine, daily assessment of ongoing activities and progress on projects of all types.

Evaluation is the periodic assessment of the overall achievements of projects.

Monitoring looks at what is being done, whereas evaluation examines what has been achieved or what impact has been made.

M&E can help develop the confidence of organizations in making decisions in the following areas:

resource allocation and uses;
programme (and project) direction; and
meeting the needs of intended recipients.

It is by means of M&E that organizations can determine the impact of their programmes (and projects), through a comprehensive analysis of the intended and unintended outcomes.

M&E also provides information about the performance of a government, of individual ministries and agencies, and of managers and their staff, as well as information on the performance of the donors that support the work of government.

Reasons for Monitoring & Evaluating Projects

With the growing number of large projects in Guyana, the cost of execution and the continuous delays of projects, donor agencies such as the Inter-American Development Bank and the Caribbean Development Bank have sanctioned the need for better control and management of projects because of the large investments they make. This has caused agencies such as the Work Services Group to increase their focus on improving the efficiency of projects and their expenditures through monitoring and control.

What is Monitoring & Evaluation

Monitoring is not policing or imposing; rather, it is the continuous collection of data and information on specified indicators to assess the implementation of a project in relation to activity schedules and the expenditure of allocated funds, and its progress and achievements in relation to its intended outcomes.

Monitoring involves day-to-day follow-up of project activities during implementation to measure progress and identify deviations. It:

- requires routine follow-up to ensure activities are proceeding as planned and are on schedule
- needs continuous assessment of activities and results
- answers the question, “what are we doing?”

Monitoring activities provide answers to the following questions:

Is the programme (or project) achieving its goal and objectives?
Is the programme (or project) being implemented as intended?
What factors are facilitating/hindering success?
What are the unintended outcomes?
What are the lessons learned up to this point?
Are stakeholders’ priorities being addressed?

Monitoring involves:

Reviewing progress towards the achievement of prescribed programme (or project) objectives
Setting up systems to collect data for each indicator and for each objective
Documenting the contextual issues which impact on programme (or project) implementation
Using real-time information to manage a programme (or project), as sketched below
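As a simple illustration of using real-time information to manage a project, the Python sketch below compares reported progress against the activity schedule and flags deviations. The activity names, percentages and tolerance are hypothetical examples, not WSG data:

```python
# Minimal illustrative sketch: flag activities that deviate from schedule.
# All activity names and figures are hypothetical, not actual project data.

planned = {  # activity -> percent complete expected by this reporting date
    "earthworks": 60,
    "drainage structures": 40,
    "surfacing": 10,
}

actual = {  # activity -> percent complete reported from the field
    "earthworks": 55,
    "drainage structures": 25,
    "surfacing": 10,
}

TOLERANCE = 5  # percentage points of slippage accepted before flagging

for activity, target in planned.items():
    progress = actual.get(activity, 0)
    deviation = target - progress
    status = "ON TRACK" if deviation <= TOLERANCE else "BEHIND"
    print(f"{activity:22s} planned {target:3d}%  actual {progress:3d}%  {status}")
```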

What is Evaluation

Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a programme (or project). It should assess the relevance and achievement of the intended outcome, implementation performance in terms of effectiveness and efficiency, and the nature, distribution, and sustainability of impact.

Evaluation:

is a systematic way of learning from experience to improve current activities and promote better planning for future action
is designed specifically with the intention to attribute changes to the project itself
answers the question, “what have we achieved and what impact have we had?”

Evaluations promote a culture of learning, which is focused on service improvement through evidence-based practices.

Evaluations promote replication of successful interventions (using evidence-based practices).

Evaluations determine the impact of programmes (and projects) by reporting on the intended, as well as unintended, outcomes.

Why Should We Monitor & Evaluate Projects

Monitor (why): review progress on set targets, indicators and objectives; identify gaps in planning and implementation; make day-to-day decisions; provide information for evaluation.
Evaluate (why): judge and value what has been achieved; assess and inform major decisions; provide information for planning.
Monitor (when): during implementation; continuous.
Evaluate (when): before or after implementation; periodic.

How Should We Monitor and Evaluate Projects

In order to monitor and evaluate, you must have performance indicators.

Indicators are realistic, specific, observable and measurable characteristics that can be used to show the changes or progress a programme or project is making towards achieving a specific outcome.

Indicators provide information/data that answer M&E questions.

Indicators provide clues, signs and markers that inform how close projects (and programmes) may be to their intended paths.

Indicators are used to assess inputs, outputs, outcomes and impacts (illustrated in the sketch after this list):

Input indicators include the financial, material, equipment, human and technical resources required for the project.
Output indicators provide information/data on project activities that are completed.
Outcome indicators provide information/data on the improvements expected from the project activities.
Impact indicators provide information/data on the longer-term, holistic improvements expected from the project (and programme) activities.
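To make the four indicator levels concrete, the following minimal Python sketch records one indicator of each type for a road project and reports progress against target. The indicator names, units and values are invented for illustration, not actual WSG indicators:

```python
# Illustrative sketch of the four indicator levels for a road project.
# The indicator names, units and targets are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Indicator:
    level: str      # "input", "output", "outcome" or "impact"
    name: str
    unit: str
    target: float
    actual: float

    def progress(self) -> float:
        """Percent of target achieved so far."""
        return 100.0 * self.actual / self.target

indicators = [
    Indicator("input", "budget disbursed", "USD millions", 20.0, 12.5),
    Indicator("output", "road rehabilitated", "km", 31.0, 18.0),
    Indicator("outcome", "travel time saved", "minutes", 15.0, 9.0),
    Indicator("impact", "road fatalities reduced", "percent", 25.0, 10.0),
]

for ind in indicators:
    print(f"{ind.level:8s} {ind.name:25s} {ind.progress():5.1f}% of target")
```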

Requirements for Monitoring & Evaluating Projects

Impacts (Goal): measurable changes over time as a result of the project. Time-frame: related to long-term outcomes.
Outcomes (Objectives): changes in behaviours or skills as a result of the implemented project; outcomes lead to impacts. Time-frame: usually mid- to long-term.
Outputs (Deliverables): activities or services that the project is providing; outputs lead to outcomes. Time-frame: milestone dates within the project duration.
Inputs: resources that are put into the project (e.g. person-months of consulting time, cost of materials, equipment, etc.); inputs lead to the achievement of outputs. Time-frame: throughout the project duration.

Logical Framework Model for Monitoring and Evaluation

The logframe links five levels, each leading to the next:

Inputs (Resources): what we use to do the work
Activities: what we do
Outputs: what we produce or deliver
Outcomes: what we wish to achieve
Impact: what we aim to change
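One way to picture this chain is as linked records, walked in causal order. The Python sketch below is purely illustrative, using made-up entries loosely modelled on a road safety education project rather than the actual project logframe:

```python
# Illustrative logframe chain: inputs -> activities -> outputs -> outcomes -> impact.
# The entries are invented examples, not the real project logframe.

logframe = {
    "inputs": ["consultant person-months", "training materials budget"],
    "activities": ["develop RSE lesson plans", "run teacher seminars"],
    "outputs": ["teachers trained in RSE", "lesson plans for 4 school types"],
    "outcomes": ["RSE established in the curriculum"],
    "impact": ["safer road use by students"],
}

# Walk the chain in causal order to show how each level leads to the next.
levels = ["inputs", "activities", "outputs", "outcomes", "impact"]
for level in levels:
    items = "; ".join(logframe[level])
    print(f"{level:10s} -> {items}")
```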

Monitoring and Evaluation Data

In order for monitoring and evaluation to be effective, there must be data to measure the desired indicators. The data required can be obtained from existing sources, or may require new sources via project-related M&E activities.

Existing data sources may not always be accessible; confidentiality of data may be an issue; and the data may be imprecise, incomplete or of poor quality. However, existing data sources are generally inexpensive (often free) and show historic trends.

New data sources may require expensive, time-consuming methods (e.g. surveys), but should provide the precise data required.

All data collected must be accurate and of the best quality, so as to remove any doubts about the reported results. There must be confidence in the data collected: quality must be monitored at every step of the data collection, analysis and reporting process, and the data must inspire confidence.

Qualities of good M&E data (a toy timeliness check follows this list):

Reliable: the data reflect stable and consistent data collection processes.
Easy to collect: feasible to access given the available resources, and routinely collected (when possible).
Relevant: the data collected are relevant to the purposes for which they are to be used.
Verifiable: the data offer confidence in the quality of the information gathered (believable and reliable).
Timely: the data are collected as quickly as possible after the activity and are available within a short period of time.
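As a toy illustration of checking the “timely” criterion, the Python sketch below flags records whose entry date falls too long after the activity date. The field names and the 14-day threshold are assumptions made for this example:

```python
# Toy timeliness check: flag records entered too long after the activity.
# Field names and the 14-day threshold are assumptions for illustration.

from datetime import date

MAX_LAG_DAYS = 14

records = [
    {"id": 1, "activity_date": date(2015, 1, 5), "entered": date(2015, 1, 9)},
    {"id": 2, "activity_date": date(2015, 1, 5), "entered": date(2015, 2, 20)},
]

for rec in records:
    lag = (rec["entered"] - rec["activity_date"]).days
    if lag > MAX_LAG_DAYS:
        print(f"record {rec['id']}: entered {lag} days late -- review for timeliness")
    else:
        print(f"record {rec['id']}: timely ({lag} days)")
```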

How Can Data Be Collected

Quantitative Methods

Surveys

Exit interviews

Record abstraction

Checklists

Observations

Qualitative Methods

Key informant interviews

Focus group discussions

Quantitative Methods

Surveys: data on a group of individuals are collected as a snapshot at a defined point in time; used to measure community satisfaction, travel times, etc. (a toy before/after example follows this list).
Exit interviews: interviews conducted with key beneficiaries following the completion of activities/services.
Record abstraction: collection of data from existing sources (e.g. Traffic Police accident data).
Checklists: lists of activities that should be performed during project implementation, including milestone dates.
Observations: watching and recording behavioural patterns and any changes as a result of project activities.
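For instance, survey data collected before and after an intervention can be turned into a simple outcome measure. The sketch below computes the change in the share of respondents aware of a road safety rule; the response lists are invented for illustration:

```python
# Toy before/after survey comparison for an awareness outcome indicator.
# The response lists are invented examples, not real survey data.

before = [True, False, False, True, False, False, True, False]   # aware?
after = [True, True, False, True, True, True, True, False]

def awareness_rate(responses):
    """Share of respondents answering 'aware' in a survey round."""
    return 100.0 * sum(responses) / len(responses)

change = awareness_rate(after) - awareness_rate(before)
print(f"before: {awareness_rate(before):.1f}%  after: {awareness_rate(after):.1f}%  "
      f"change: {change:+.1f} percentage points")
```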

Qualitative Methods

Key informant interviews: interviews with selected, knowledgeable individuals about specific aspects of the project (can be used to complement the quantitative data collected).
Focus group discussions: interviews with a small group of persons to gain an in-depth understanding of attitudes, perceptions, situations, etc.

Application of Monitoring & Evaluation of Projects by the WSG

WSG/MPW is developing and using M&E systems as part of the CDB-financed Fourth Road Project (West Coast Demerara) and the IADB-financed East Bank Demerara Four Lane Project, as well as the West Bank Demerara, Canal Polder 1 & 2, East Bank Berbice, Sheriff Street-Mandela and Grove to Timehri projects.

The 4th Road Project includes:

WCDR road improvements (31 km) from Vreed-en-Hoop to Hydronie (4 yrs)
Road Safety Education Programme in Schools (2 yrs)
Road Safety Community and Driver Education Programme (1 yr)
Road Safety Public Relations Programme (2 yrs)

Goal (Impact, long-term)

To strengthen road safety awareness in the curriculum and increase the awareness of children and young people attending schools through the development of a school RSE programme.

Objectives (Outcomes, long-term institutionalisation aimed at sustainability)

To raise awareness of road safety and RSE
To establish RSE in the curriculum
To develop the RSE capacity of teachers in the target schools through RSE curriculum seminars
To improve the content and delivery of the existing GNRSC school road safety patrol programme and the Traffic Police RSE initiatives

Deliverables (Outputs, short-term, during the 24-month project)

Completed project scoping study (review of existing RS courses, identification of needed improvements, etc.)
Report on how RSE can be integrated into the existing curriculum
Sample lesson plans and training materials for 4 types of schools
Delivery of classroom and on/off-road practical training for students in target schools
Assessment reports on recipients’ knowledge, awareness, attitudes and perception of road safety before and after the project, including any changes in behaviour
Reports on the use of RSE principles and techniques by teachers and students
A national RSE guidance document and best practices guide
Report on teacher training recommendations to ensure the sustainability of RSE in schools

Project Logical Framework (LogFrame)

Shows the Project Goal, Objectives (Outcomes) and Deliverables (Outputs/Expected Results) in four columns:

Narrative Summary
Measures of Achievement (Performance Indicators)
Means (Sources) of Verification
Critical Assumptions (Risks) for Achieving Expected Results

Example (extract):

Narrative Summary (Goal or Impact): to improve safe road use by students
Expected Results (Outputs or Deliverables): teachers trained in RSE; a new road safety syllabus agreed for teacher training qualifications; road safety included in selected schools’ curriculum
Critical Assumption: MOE supports including road safety in the Region 3 schools’ curriculum

Building Capacity

Introductory and advanced training in M&E for staff through workshops and seminars.

Conclusion

There are incentives for implementing monitoring and evaluation systems in any organisation. Such incentives can be applied through the use of “carrots, sticks, and sermons” (Mackay, 2012).

An example of a carrot is the delivery of greater autonomy to managers who demonstrate (through reliable M&E information) better performance of their programs, projects, or institutions.

An example of a stick is to set challenging (but realistic) performance goals to be met by each ministry and program manager.

An example of a sermon is a high-level declaration of support for M&E from an influential actor in government, such as the president or an important minister.

Examples of carrots, sticks and sermons:

Carrots: conduct “How are we doing?” team meetings; awards or prizes for managing for results; staff incentives, e.g. recruitment, promotion; output- or outcome-based performance triggers.
Sticks: highlight good/bad results (using M&E); set performance goals; require performance “exception reporting”; include information on results when appraising managers.
Sermons: high-level statements of endorsement; awareness-raising seminars; pilot rapid evaluations and impact evaluations; highlight examples of useful, influential M&E.

Source: Keith Mackay, How to Build M&E Systems to Support Better Government, World Bank, 2007.

With the successful implementation of M&E, WSG will have better control of scope, cost and time on each project.

The End

Thank You. No questions, please.
