Program Evaluation
Using quantitative & qualitative methods
Program evaluations measure:
Program effectiveness, efficiency, quality, and participant satisfaction with the program.
Program evaluation can also measure:
How or why a program is, or is not, effective.
Program evaluation looks at the program or a component of the program.
It is not used to measure the performance of individual workers or teams; consequently, it differs from performance evaluation.
The program’s goals & objectives serve as the starting place for program evaluations.
Objectives must:
Be measurable, time-limited, and contain an evaluation mechanism.
Be developed in relation to a specific program or intervention plan.
Specify processes and tasks to be completed.
Incorporate the program’s theory of action – describe how the program works and what it is expected to do (outcomes).
To start an evaluation, the evaluator must find out what program participants identify as the goal (evaluability assessment).
Theory of action for a hunger program might be:
Advisory Committee is formed to improve food bank services
This improves service delivery
More food is provided
Families miss fewer meals
There is less hunger
Evaluations can measure process or outcomes.
Qualitative methods are used to answer how and why questions (process).
Quantitative methods are used to answer what questions: what outcomes were produced; was the program successful, effective, or efficient? (outcome)
Differences between the two methods:
Logic – Quantitative: deductive; Qualitative: inductive.
Values/Bias – Quantitative: objective; Qualitative: subjective.
Role of the researcher – Quantitative: expert; Qualitative: partner with research subjects.
Source of research questions – Quantitative: theory/previous research; Qualitative: can be grounded in the experiences of researchers and participants.
Methodology – Quantitative: structured measurement instruments; Qualitative: semi-structured surveys, interviews, or observation.
Quantitative & Qualitative approaches include:
Experimental Designs, Quasi-Experimental Designs, Pre & Post-Test Studies, Time Series Analysis, Social Indicator Analysis, Longitudinal Study, Survey, Client Satisfaction Survey, Goal Attainment, Program Monitoring
Ethnographic Study, Feminist Research, Constructivist Evaluation, Process Analysis, Implementation Analysis, Focus Groups
Most common types:
Outcome evaluation (quantitative – may or may not use control groups to measure effectiveness).
Goal attainment (have objectives been achieved?).
Process evaluation (qualitative – looks at how or why a program works or doesn’t work).
Implementation analysis (mixed methods – was the program implemented in the manner intended?).
Program monitoring (mixed methods – is the program meeting its goals? Conducted while the program is in progress).
Outcome Evaluations can include:
Random experimental designs.
Comparisons of the pre- and post-test scores for each participant on one or more outcome indicators (illustrated in the sketch after this list).
Using all members of pre-existing groups to serve as experimental and control groups.
Using social indicator data collected by government agencies (for example, using U.S. Census data on poverty rates in a specific community to determine if an economic development program has been successful in increasing the income of neighborhood residents).
Time series analysis, using repeated measures over a number of time periods to track social indicators or caseload data.
Using statistical controls, such as cross-tabulation or regression analysis, to hold constant the effects of confounding variables (also illustrated in the sketch after this list).
Using a quasi-experimental design in which participants are separated into groups and different levels of the intervention are compared (Chambers et al., 1992; Royce & Thyer, 1996).
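The pre/post comparison and statistical-control items above can be illustrated with a short sketch. Nothing in it comes from the original slides: the file name, column names (pre_score, post_score, service_hours, baseline_income), and model specification are hypothetical, chosen only to show the general shape of these analyses in Python.

```python
# Hypothetical sketch of two outcome-evaluation analyses.
# The file name and column names are illustrative assumptions, not part of the slides.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # one row per program participant

# Pre/post comparison: paired t-test on an outcome indicator.
t_stat, p_value = stats.ttest_rel(df["post_score"], df["pre_score"])
mean_change = (df["post_score"] - df["pre_score"]).mean()
print(f"Mean change: {mean_change:.2f}, p = {p_value:.3f}")

# Statistical control: regression that holds a confounding variable
# (baseline income) constant while estimating the association between
# service hours and the outcome.
model = smf.ols("post_score ~ service_hours + baseline_income", data=df).fit()
print(model.summary())
```

In practice the outcome indicator, covariates, and statistical test would be chosen to match the program’s objectives and evaluation design.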
[Chart] Time Series Analysis Examines Data Trends: School Breakfast Program – total absences and referrals to the nurse plotted by year (Year 1 through Year 4).
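As a minimal sketch of this kind of repeated-measures tracking, the snippet below plots two indicators across several periods. The file and column names are hypothetical, and the actual School Breakfast Program figures are not reproduced.

```python
# Hypothetical sketch: plotting repeated measures of two school-level indicators
# over several years. File and column names are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

trend = pd.read_csv("breakfast_program_indicators.csv")  # one row per school year

plt.plot(trend["year"], trend["total_absences"], marker="o", label="Total absences")
plt.plot(trend["year"], trend["nurse_referrals"], marker="s", label="Referrals to nurse")
plt.xlabel("School year")
plt.ylabel("Count per year")
plt.title("School Breakfast Program: indicator trends over time")
plt.legend()
plt.show()
```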
Client satisfaction surveys are often used as one component of a program evaluation.
They can provide valuable information about how clientele perceive the program and may suggest how the program can be changed to make it more effective or accessible.
Client satisfaction surveys also have methodological limitations.
Limitations include:
It is difficult to define and measure “satisfaction.”
Few standardized satisfaction instruments that have been tested for validity and reliability exist.
Most surveys find that 80-90% of participants are satisfied with the program. Most researchers are skeptical that such levels of satisfaction exist; hence, most satisfaction surveys are believed to be unreliable.
Since agencies want to believe their programs are good, the wording may be biased.
Clients who are dependent on the program for services or who fear retaliation may not provide accurate responses.
Problems with client satisfaction surveys can be addressed by:
Pre-testing to ensure face validity and reliability.
Asking respondents to indicate their satisfaction level with various components of the program (see the sketch after this list).
Ensuring that administration of the survey is separated from service delivery and that confidentiality of clients/consumers is protected.
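A minimal sketch of the component-level reporting suggested above follows; the survey file, component names, and 1-5 rating scale are assumptions made for illustration.

```python
# Hypothetical sketch: summarizing Likert-scale (1-5) satisfaction ratings by
# program component. File and column names are illustrative assumptions.
import pandas as pd

responses = pd.read_csv("satisfaction_survey.csv")  # one row per respondent
components = ["intake_process", "staff_responsiveness", "service_quality"]

# Mean rating and share of respondents rating 4 or 5, per component.
summary = pd.DataFrame({
    "mean_rating": responses[components].mean(),
    "pct_rating_4_or_5": (responses[components] >= 4).mean() * 100,
})
print(summary.round(1))
```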
Process and most implementation evaluations:
Assume that the program is a “black box” – with input, throughput, and output.
Use some mixture of interviews, document analysis, observations, or semi-structured surveys.
Gather information from a variety of organization participants: administrators, front-line staff, and clients.
These evaluations also examine communication patterns, program policies, and the interaction among individuals, groups, programs, or organizations in the external environment.
Use the following criteria to determine the type of evaluation:
Research question to be addressed.
Amount of resources and time that can be allocated for research.
Ethics (can you reasonably construct control groups or hold confounding variables constant?).
Will the evaluation be conducted by an internal or external evaluator?
Who is the audience for the evaluation?
How will the data be used?
Who will be involved in the evaluation?
Types of evaluation approaches that involve organization constituents:
Participatory Action Research, Empowerment Evaluation, and Self-Evaluation.
Differences in approaches are:
Participatory Action Research – Role of researcher: consultant, partner with participants. Purpose: social change. Outcome: alleviate oppression.
Empowerment Evaluation – Role of researcher: consultant, works for participants. Purpose: self-determination. Outcome: increases participant skills and control.
Self-Evaluation – Role of researcher: consultant, works for agency/funder. Purpose: evaluate agency services. Outcome: improved service quality.
Advantages of these methods:
Increases feelings of participant ownership of processes/programs.
Increases likelihood that data will be used.
Increases likelihood that the resulting program or intervention will meet the needs of stakeholders and be culturally appropriate.
Participants develop skills and confidence; they gain knowledge and information and thus become empowered.
Disadvantages of these methods:
Distrust and conflict among participants.
Length of time needed to develop consensus around goals, mission, and methods.
The need for training around research methods, data collection, and analysis.
The need for skilled facilitation, coordination, and follow-up on task completion.
Money and an organizational structure are needed to do all these things.
The group must be able to apply findings in order to achieve an outcome.