
Community Development Programs

Program Evaluation Primer

By: Nicholas Galiardo, Community Development Capacity Building Analyst, VISTA

Contents

What is Program Evaluation?
Some myths regarding Evaluations
Why is Program Evaluation important?
Options for Program Evaluations
Process Evaluations
Outcomes Evaluations
Structure of Evaluations
Form Collaborative Relationships
Determine Program Components
Developing a Logic Model
Determining Evaluation questions
Methodology
Consider a Management Information System/Dashboard
Data Collection and Analysis
Writing the Report
Additional Resources

Program Evaluation overview:

What is Program Evaluation?

In short, program evaluations are a series of efforts to determine an organization's fidelity to its original intent and program design, as well as the effect its actions are having on the intended participant group. Such an assessment can be conducted in a number of ways, but usually involves some form of data acquisition, analysis, and representation in order to understand an organization's operations and convey the results of its efforts to specific audiences.

Some myths regarding Evaluations:

1. The first myth, which needs to be debunked in order to move forward in this discussion, is that an individual needs to be an expert in order to conduct a successful evaluation.

You need to understand that, for all intents and purposes, you are in fact the foremost expert with regard to what you are trying to accomplish and how to measure those efforts. An outside "expert" is not as familiar with what your program is intending to do, or with how to measure the important variable(s) associated with your program.

2. The second common myth of evaluations is that they take too much time.

Although some evaluations can take a considerable amount of effort, most do not. In fact, once you understand how to conduct an evaluation and have the basic foundation in place, most of an evaluation largely takes care of itself.

Although you may occasionally have to supplement certain information and extract data for a periodic report, the rest of the process can be conducted through a series of routine practices, such as initial applications, follow-up surveys, and so on.

3. Thirdly, it is sometimes stated that evaluations are "unnecessary" and merely produce a plethora of "inconsequential data."

Conducted properly, evaluations are never unnecessary, and they rarely produce data that cannot be made relevant and help your organization move in a positive direction.

Thomas Edison was once asked if he ever got discouraged by his ten thousand attempts to produce a functional light bulb before he finally arrived at one that worked, to which he replied, "I didn't fail ten thousand times. I successfully eliminated, ten thousand times, materials and combinations which wouldn't work."

Why is Program Evaluation important?

Program evaluation has become an essential part of the lives of nonprofits. Whether it is in an effort to better understand their progress and help ensure they are following their goals and reaching their intended audience, or in an attempt to justify their operations to current or potential future funders, nonprofits have, for all intents and purposes, been required to implement some sort of accountability effort for their organization. With most sources of funding nowadays, it is no longer enough to say that you are doing something. Funders want to know that what you are doing is actually working to reach your intended goals.

Accountability for donor funds – whether from individuals, foundations, or state agencies – is becoming a necessary part of nonprofit life. A recent GrantCraft report best described this trend when it quoted a foundation president as saying, "it is terribly helpful to be able to articulate why they are doing it this way, what they are accomplishing, and how. It is talking about not only doing right things but doing things right, being clear about what you are aiming for. It's testing your compass."1

Not only does this statement indicate a desire, and ultimately a necessity, to prove your organization's programs' worth to your current and potential funders, but it also re-emphasizes that you are doing what you're doing in order to help. Don't you want to make sure that you are actually accomplishing your intended purpose?

1 http://www.grantcraft.org/pdf_2012/guide_outcome.pdf


Options for Program Evaluations:

Depending on the organization, its reason for conducting an evaluation, and the intended audience, there are numerous types of program assessments which can be utilized to gain a better understanding of what a program is trying to accomplish, whether it is conducting its operations in such a way that those goals can be accomplished, and what the ultimate results of those efforts are.

Done correctly, evaluation is not a once-in-a-while effort made at the conclusion of a project to see how well something worked. Rather, it is a continual process throughout the life of a program or project, so you can routinely assess what is being done well and make the appropriate changes where and when need be.

For the sake of relevance to most nonprofit organizations, as well as time, we will attempt to provide a brief but thorough synopsis of process evaluation and outcomes evaluation, the two most prominent and applicable evaluation systems for nonprofits today.

Process Evaluations:

Nonprofits are formed out of the desire of an individual or a group of individuals to satisfy some societal need which appears to be unmet. As such, it is important for the organizations attempting to satisfy these needs to ensure they are operating in a way that works towards that end. This is why it is important for nonprofit organizations to conduct what are called process evaluations.

Process evaluations ask the question "what?" What are we trying to accomplish as an organization? Subsequently, are we implementing our plan as designed in order to accomplish this task?

When initiating a new process evaluation, the following three questions should be kept in mind in order to help guide the assessment:

1. What is the intended purpose of our program?

2. Are we implementing the program as designed in order to accomplish our goals?

3. If there are any gaps in design and implementation, what are they?

Outcomes Evaluations:

Outcomes Evaluations, more commonly referred to as Outcomes Based Evaluations, are very similar to Process Evaluations in their general structure. Where they differ, however, is in the main focus of the assessment.

Whereas Process Evaluations concentrate on whether an organization is doing what it had intended to do, Outcomes Evaluations focus on the overall effect of an organization's initiatives; or rather, whether an organization's actions are actually helping, and to what extent.

If Process Evaluations ask and answer the question "what?", Outcomes Evaluations answer the question "so what?".

Structure of Evaluations:

Now that we have a basic understanding of what process and outcomes evaluations are and what to look for, let's look at the different stages of the assessment. Remember that despite attempting to answer different questions, the two evaluation processes are very similar in nature and thus can benefit from a mutual outline, such as the one listed below. Any potential differences in application will be noted in individual sections.

The Georgia Department of Human Resources produced a step-by-step guide on process evaluations back in 2002. As part of that guide, they listed a sequence of stages they proposed for the implementation of an evaluation. They are as follows:2

1. Form Collaborative Relationships

2. Determine Program Components

3. Develop Logic Model

4. Determine Evaluation Questions

5. Determine Methodology

6. Consider a Management Information system

7. Implement Data Collection and Analysis

8. Write Report

2 http://health.state.ga.us/pdfs/ppe/Workbook%20for%20Designing%20a%20Process%20Evaluation.pdf

Given the general outline above, we will attempt to describe each individual step in detail, provide examples where needed, and expand on previous concepts and steps in order to provide a concise, yet comprehensive, overview of the process.

Form Collaborative Relationships:

Regardless of whether you are the lead individual on the project/program you are trying to evaluate, or you are assisting said individual, collaborative relationships are a necessary part of any assessment.

Sometimes, those involved in the day-to-day activities of a program are apprehensive about an evaluation being conducted, since they view it as potentially being a judgment on how well they are doing their job. It is important to communicate that the process is not meant to evaluate their job performance, but rather to help the organization better understand how management, board members, and others can help improve overall operations.

It is important, for the overall success of an evaluation process, to ensure that two primary groups are focused on: the intended audience (who the results will be presented to, and what they are looking for), and the source of data (who is able to provide you with the information you are looking for). Once these two groups are known, it is important to ensure that there are enough relationships up and down that chain of communication to ensure proper collection of desired information, while at the same time ensuring that unnecessary "middlemen" are not lengthening the process.

The key thing to keep in mind is the following “Good” equation:

Good relationships = Good data/information = Good results

Determine Program Components:

Once the first step of establishing good relationships is underway, the next step in the process is to utilize those collaborative relationships to understand what an organization is trying to accomplish with the evaluation.

Determining program components is oftentimes the most daunting task for those conducting an assessment, but by taking the "Who, What, When, Where, How" approach, you are able to take an otherwise intimidating process and significantly simplify it.

The following example of determining program components was taken from the Georgia Department of Human Resources manual on Process Evaluations, and provides an easy-to-understand and easy-to-visualize scenario.3

EXAMPLE:

Who: Elementary School Students

What: Fire Safety Intervention

When: 2 Times Per Year

Where: In Students’ Classroom

How: Group Administered Intervention, Small Group Practice

1. Instruct students what to do in case of fire (stop, drop and roll).

2. Educate students on calling 911 and have them practice on play telephones.

3. Educate students on how to pull a fire alarm, how to test a home fire alarm and how to change batteries in a home fire alarm. Have students practice each of these activities.

4. Provide students with written information and have them take it home to share with their parents. Request parental signature to indicate compliance and target a 75% return rate.

An important thing to keep in mind is that gathered data is only as valuable as the level of participation it yields.

The statistics discipline teaches us that we can get an adequate indicator of a population of people by taking a smaller sample and conducting our analysis on that smaller group. However, this assumption only holds under the presupposition that we are able to gather all the necessary information through active participation of the sub-group.

As such, I'd suggest an addendum to the last point in the example above: place an incentive on the "parental signature" to encourage participation and thus ensure a greater chance of achieving the desired 75% return rate.
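If it helps to see the arithmetic behind that point, here is a minimal sketch in Python of how a falling return rate widens the uncertainty around any estimate drawn from the returned forms. It uses the standard margin-of-error formula for a sample proportion; the form counts and the 60% figure are entirely hypothetical, not numbers from the example.

```python
import math

def margin_of_error(successes: int, returned: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion estimated from returned forms."""
    p_hat = successes / returned
    return z * math.sqrt(p_hat * (1 - p_hat) / returned)

# Hypothetical scenario: 120 forms are sent home and roughly 60% of the
# returned forms report the desired behavior. Lower return rates mean a
# smaller effective sample and a wider margin of error around the estimate.
for returned in (120, 90, 45):  # 100%, 75%, and 37.5% return rates
    successes = round(returned * 0.6)
    moe = margin_of_error(successes, returned)
    print(f"{returned:3d} forms returned -> estimate 60% +/- {moe:.1%}")
```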

Developing a Logic Model:

Now that we understand the first two steps (building collaborative relationships and determining program components), it's time to move on to developing a logic model.

3 http://health.state.ga.us/pdfs/ppe/Workbook%20for%20Designing%20a%20Process%20Evaluation.pdf

Logic models are a listing of the logical and sequential steps of your program, from idea conceptualization through implementation and into short-, medium-, and long-term goal formation. Most logic models consist of four distinct components: inputs, activities, outputs, and outcomes.

Many individuals get the "deer in the headlights" look at the first mention of "logic models", but understood correctly, the concept is quite easy to apply and incredibly beneficial to understanding your program and how your intentions relate to your actions in the effort to attain your goals.

Most logic models are represented visually in order to organize the different elements and better convey the steps. The example below is a basic logic model meant to express the concept in a simplified way.

Remember that these models are used to help you understand the sequence of steps in your program, and thus can be represented in an easily discernible way – such as the one below – or could include more detail for a more thorough representation. Neither way is necessarily better than the other; the choice depends on what makes things easier for those involved in the initiative.

Inputs: Food donations to mitigate child hunger.
Activities: A "backpack" program to give children food on the weekends.
Outputs: Number of children reached.
Outcomes: Better subsequent-year school-entrance health exams.
(An arrow also runs from Outcomes back to Inputs, indicating a continual feedback loop.)

You have probably gleaned from the example that this particular nonprofit organization has determined that there is an unmet need in their community with regard to the lack of available food for children on the weekends, and that it is having an adverse effect on childhood health. As a result, they have decided to implement a "backpack" program, whereby they provide bags of food for children to take home on the weekends, when government-funded school meal programs aren't available.

From the example, it is easy to see what the inputs are (or rather, what they have to give): food donations. Likewise, it is also easy to see what the organization's activities are (or rather, how they go about giving what they have to those who need it): the "backpack" program.

Where the largest amount of confusion with logic models comes from, however, is with the last two categories: outputs and outcomes.

One easy way to distinguish between the two categories is by thinking of them in the following terms: whereas outputs tell stakeholders "how much?", outcomes answer the question "so what?". The intention is to show that we are not simply intervening with a certain amount of effort and donations/services, but that the effort is also having the intended effect.

Outputs – or, again, what you produce from your programs – have long been an important way to assess your organization's productivity. However, in recent years, funders have been demanding that recipients of their philanthropic resources demonstrate that the quality of results is being maintained along with quantity. That variable is what we call outcomes. In order to understand the importance of outcomes in addition to outputs, think of it this way: if you are producing a lot of tires, but they fall apart, what benefit is there in the end?

In order to ensure clarity on the differentiation of outputs and outcomes, let us revisit the previous example, but this time phrase it in terms of a narrative.

Example:

An organization recognizes that the health of children in their community is significantly lower than the state average. They hypothesize that the lack of health is due to a lack of access to healthy food on the weekends, as quick convenience meals are provided by overworked parents who lack the time to cook wholesome meals.

As a result, the organization decides to start a "backpack" program whereby they provide a bag of healthy snack foods and healthy, quick alternative meals for the children to take home on the weekends.

Within the first year, the organization is able to distribute over 10,000 meals and reach a total of 345 children.

The following year, as children go back to school and visit the doctor for their annual physical, the rates of the previous childhood illnesses have diminished drastically.

In this example, the number of meals/children served is a quantitative output measurement, whereas the effect – lower childhood illness – is the outcome resulting from the number of meals provided.
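As a minimal illustration of that split, the sketch below records the two kinds of results from the narrative side by side. The output figures (10,000 meals, 345 children) come from the example above; the illness rates are hypothetical placeholders, since the narrative only says they "diminished drastically."

```python
# Outputs answer "how much did we do?"; outcomes answer "so what?".
outputs = {
    "meals_distributed": 10_000,  # from the narrative example
    "children_reached": 345,
}

outcomes = {
    "illness_rate_before": 0.18,  # hypothetical baseline rate
    "illness_rate_after": 0.11,   # hypothetical follow-up rate
}

change = outcomes["illness_rate_before"] - outcomes["illness_rate_after"]
print(f"Outputs: {outputs['meals_distributed']} meals reached "
      f"{outputs['children_reached']} children")
print(f"Outcome: the childhood illness rate fell by {change:.0%}")
```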

A final thing to note about Logic Models is that they are not meant to be a static, linear process, but rather – as indicated by the arrow going from Outcomes back to Inputs – a continual process to aid in program improvement. Once you have determined your outcomes and how they indicate the health of your program, the results should be used to readjust your inputs so as to make your initiative more effective.


Determining Evaluation questions:

This next step in the evaluation process is yet another example of where a good collaborative relationship pays off. Now that we understand how our project works and what steps are involved in the process, it's important to come up with a series of questions to explore for the actual analysis.

In terms of Process Evaluations:

Utilizing the "Who, What, Where, When, Why" approach, we can establish a series of general questions the evaluator can ask in order to form a basis for determining how well the program is conducting its operations in the intended way.

Examples of questions:

WHO:

1. Who are we trying to help?

2. How many people participate in our program?

3. How many have dropped out?

4. Has there been a growth in participation? If so, how much?

5. Has anyone declined participation? If so, why?

6. What are the eligibility criteria for participation?

7. Are the participants satisfied with the program?

8. What are some specific demographics of the participants?

WHAT:

1. What are we delivering to our participants?

2. How is it delivered?

3. How long has this particular program been conducted?

4. What sort of success or difficulty has the program experienced in the past?

5. What is the reputation of the program?

WHEN:

1. Are there specific times/days the program is in operation? If so, why is this the case?

2. Is only one action taken per participant, or multiple?

3. How often does the same individual participate in the program?

4. Is there a limit on the number of times an individual can participate?

5. What is the average length of time each activity requires?

WHERE:

1. Where do we conduct our program?

2. Why do we conduct our program where we do?

3. Have we conducted our program in different locations in the past?

a. If so, where?

b. Are we still there?

c. If not, why not?

d. What was the overall experience in previous locations?

WHY:

1. Why have we decided to do this particular program?

2. Why is this more beneficial than another program we could offer?

3. Why are people participating?

4. Why aren’t people participating?

In terms of Outcomes Evaluations:

Given our previous discussion of logic models, it is probably most prudent to think of this step as a working-backward approach. The first step in developing a series of questions is to determine what your intended impact will be.

In keeping with the theme of the previous examples, let's assume for a moment that your organization is attempting to ameliorate childhood illness due to hunger. As such, we've determined that our program will have been successful if yearly physical exam results improve over the previous year – our outcome.

Likewise, we're able to quantify this cause-and-effect relationship through the number of healthy meals we provide and the number of children we serve each year – our output.

Therefore, the questions we would use to determine the data for our output, and thus discern the effect of our efforts, could be the following:

1. How many children did we serve?

2. How healthy were they when they started our program?

3. How long have they participated in our program?

4. What has been the change in health over the time they have been in our program?

5. What is the demographic breakdown of the children in our program?

a. Gender?

b. Age?

c. Ethnicity?

d. Location of their home?

e. Family size?

f. Demographics of their household?

g. Family income?


Methodology:

Now that you've determined which questions need to be answered, you will need to decide on the best way to collect the data.

Some data, such as family income, gender, and age, may be available through standard pre-participation paperwork. In this case, this easily available data – commonly referred to as "low-hanging fruit" – can be quickly gathered and assessed with no further client involvement.

However, for other questions – such as why people are or aren't participating, or whether specific times and locations are ideal for our clients – alternative means of data gathering need to be employed. Depending on the type of questions which need to be asked, the following list contains options for further data acquisition:

1. Surveys

2. Interviews

3. Post-tests

4. Behavioral observation

When using a series of questions to gather data – be it through surveys, interviews, or a guideline for observation – Ellen Taylor-Powell, Program Development and Evaluation Specialist for the University of Wisconsin Cooperative Extension, recommends three main things to consider when determining questions for an evaluation in her paper Questionnaire Design: Asking questions with a purpose. They are as follows:4

Consider:

1. The particular people for whom the questionnaire is being designed;

2. The particular purpose of the questionnaire; and

3. How questions will be placed in relation to each other in the questionnaire.

(For further detail and a number of excellent examples of types of questions, please refer to Questionnaire Design: Asking questions with a purpose.)

4 http://learningstore.uwex.edu/assets/pdfs/g3658-2.pdf



Consider a Management Information System/Dashboard:

A Management Information System (MIS), in its broadest definition, is a way in which data is collected, stored, analyzed, and represented in a manner relevant to the management of an organization.

A Dashboard is a way to isolate specific data and represent it visually so as to convey a select set of information to management, the board of directors, or another stakeholder (see the example described below).6

Dashboards received their name due to the similarity they share with the instrument panel in a car. Just as an instrument panel shows only the relevant information, so too does an MIS Dashboard show only the data which management, a board of directors, and other decision makers need in order to make educated decisions.

In keeping with the vehicle theme, most Dashboards color code the data like stoplights: green typically indicates that everything is "good to go", yellow means "slow down and start to take notice", and red indicates "STOP, something might be wrong".

In the dashboard example from the source cited above, for instance, a particular nonprofit has set a target of 90% attainment of GED certificates for those enrolled in its program. As of the current date, an 82% rate has been attained for the program, which is within an acceptable pre-determined range – as indicated by a green box.
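As a rough sketch of how such color coding might be automated, the snippet below classifies an indicator against its target. The 10% and 25% tolerance bands are illustrative assumptions; a real dashboard would use whatever pre-determined acceptable ranges your organization has set.

```python
def status_color(actual: float, target: float,
                 warn_band: float = 0.10, alert_band: float = 0.25) -> str:
    """Classify an indicator the way a stoplight-style dashboard would."""
    shortfall = (target - actual) / target  # how far below target, as a fraction
    if shortfall <= warn_band:
        return "green"   # good to go
    if shortfall <= alert_band:
        return "yellow"  # slow down and start to take notice
    return "red"         # STOP, something might be wrong

# The GED example from the text: 82% attainment against a 90% target.
print(status_color(actual=0.82, target=0.90))  # -> "green"
```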

Though Dashboards can contribute significantly to the well-being of an organization by aiding those in decision-making positions in their understanding of key indicators, three things should be kept in mind when determining whether to implement such a system.

6 http://www.blueavocado.org/node/398

Consider the following acronym - T.E.A.

1. Time – Do you have the time to dedicate? Even though a lot of your data could be collected through standard forms, it still needs to be entered into a system.

2. Effort – As the old adage says, "garbage in… garbage out". Do you have the right data to make it worth your effort?

3. Affordability – Although there are a few free Dashboard systems out there, the ones that provide the most time-saving, user-friendly interfaces, deliver relevant information, and require only a small learning curve are more costly.

Data Collection and Analysis:

Having discussed how to determine evaluation questions and how to determine a methodology for data gathering, there are two more things that should be understood in order to conduct a proper evaluation: the significance of, and difference between, quantitative and qualitative information.

Quantitative information: A set of data which can be represented with a numerical value.

Such data can originate as a series of numbers, such as in response to the question "How many days does it take our N.A. (Narcotics Anonymous) members to get sober?" (answer: "25 days"), or can be altered slightly to be given a numerical equivalent, as with the question "Did our NA program help you become sober?" (answer: "Yes") and tabulating the "Yes" answers.
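Here is a minimal sketch of how such answers become quantitative results; the response lists are hypothetical stand-ins for real intake or survey data.

```python
# Numeric answers can be averaged; yes/no answers can be tabulated.
days_to_sober = [25, 30, 18, 40, 22]         # hypothetical numeric answers
helped = ["Yes", "Yes", "No", "Yes", "Yes"]  # hypothetical yes/no answers

average_days = sum(days_to_sober) / len(days_to_sober)
yes_count = sum(1 for answer in helped if answer == "Yes")

print(f"Average days to sobriety: {average_days:.0f}")
print(f"'Yes' responses: {yes_count} of {len(helped)}")
```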

Qualitative information: A series of responses which cannot easily be represented by a number, but rather are more verbal in nature.

Example: An interview setting.

Question: What can our organization do to improve the effectiveness of the domestic violence shelter?

Answers:

Employee 1 response:

"Provide job skills training to the abused to enable them to move out of an abusive relationship."

Employee 2 response:

"Partner with a substance abuse program since most abusive relationships stem from dependency on drugs or alcohol."

Client 1 response:

"Provide the couples a counselor."

Client 2 response:

“Provide legal representation.”

Manager response:

"There is a high recidivism rate amongst abuse victims – it takes an average of seven interventions before someone leaves an abusive relationship. Therefore, we should continue counseling services outside of stays in the shelter, to attempt to mitigate future relapses in behavior."

In the case of qualitative answers, it is usually helpful to organize like responses into groups. For instance, in the example above, you could probably group the Employee 2, Client 1, and Manager responses into a larger set entitled Additional Counseling.
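The sketch below shows one way to keep track of that grouping. The Additional Counseling set mirrors the grouping suggested above; the other theme labels are illustrative assumptions, since deciding which responses belong together is a judgment call rather than something code can make for you.

```python
# Qualitative responses (abridged from the shelter example above),
# grouped into themes so they can be counted and reported.
responses = {
    "Employee 1": "Provide job skills training to the abused",
    "Employee 2": "Partner with a substance abuse program",
    "Client 1": "Provide the couples a counselor",
    "Client 2": "Provide legal representation",
    "Manager": "Continue counseling services outside of shelter stays",
}

themes = {
    "Additional Counseling": ["Employee 2", "Client 1", "Manager"],
    "Job Skills Training": ["Employee 1"],  # illustrative label
    "Legal Services": ["Client 2"],         # illustrative label
}

for theme, respondents in themes.items():
    print(f"{theme} ({len(respondents)} response(s)):")
    for name in respondents:
        print(f"  {name}: {responses[name]}")
```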

Too often, organizations feel the need to provide some sort of numerical justification of their efforts. The important thing to remember is this: not all evaluation results need to have a quantitative aspect to them. What you are really trying to determine is, "What do I need to know in order to convince myself and others that we are making a meaningful impact?"

It doesn't always need to be about numbers, but you do need to be able to give good justification for your reasoning.

Additionally, not all stakeholders – be they funders, clients, or employees – are looking for the same information, and people respond to the same information in different ways. Whereas some people like to be presented with concrete numbers, others prefer to hear first-hand, first-person accounts in the form of stories or narratives. Therefore, it's best to have stories behind your numbers, and numbers behind your stories.


Writing the Report:

The contents of an evaluation write-up will differ depending upon the nature of your audience. One thing which will remain consistent amongst all reports, however, is the necessity to communicate in a common language.

The purpose of an evaluation is to answer questions your audience deems important and to convey the results in a meaningful way. As such, it's important to ensure that discipline-specific language is avoided where possible and defined where needed. For instance, if you were presenting an economic analysis to an organization-wide audience, you wouldn't use the common economic term "ceteris paribus", but would rather phrase it in more common parlance, such as "all else being kept the same".

Another important thing to keep in mind is that most people are visual learners. As such, it might be prudent to incorporate a wide array of graphics in order to convey your information. Graphs, pictures, and the like help your audience understand the point you're making in a way that verbal descriptions may lack.

Finally, it's important to be as concise as possible. Individuals – especially in younger generations – tend to lose focus with large blocks of text, so it is more effective to communicate in small snippets of information supplemented by graphical representations.

As far as the general structure of a report goes, Carter McNamara of Authenticity Consulting, LLC provides a good overall guide to coalescing the different parts of a report.7 It is important to remember that, depending on your specific needs, your report may have more or fewer parts, as well as altered versions of similar sections. However, Authenticity Consulting's guide is a good place to start. The guide is as follows:

1. Title Page (name of the organization that is being, or has a product/service/program that is being, evaluated; date)

2. Table of Contents

3. Executive Summary (one-page, concise overview of findings and recommendations)

4. Purpose of the Report (what type of evaluation(s) was/were conducted, what decisions are being aided by the findings of the evaluation, who is making the decision, etc.)

5. Background About Organization and Product/Service/Program that is being evaluated

a) Organization Description/History

b) Product/Service/Program Description (that is being evaluated)

7 http://managementhelp.org/evaluation/program-evaluation-guide.htm

i) Problem Statement (in the case of nonprofits, description of the community need that is being met by the product/service/program)

ii) Overall Goal(s) of Product/Service/Program

iii) Outcomes (or client/customer impacts) and Performance Measures (that can be measured as indicators toward the outcomes)

iv) Activities/Technologies of the Product/Service/Program (general description of how the product/service/program is developed and delivered)

v) Staffing (description of the number of personnel and roles in the organization that are relevant to developing and delivering the product/service/program)

6) Overall Evaluation Goals (eg, what questions are being answered by the evaluation)

7) Methodology

a) Types of data/information that were collected

b) How data/information were collected (what instruments were used, etc.)

c) How data/information were analyzed

d) Limitations of the evaluation (eg, cautions about findings/conclusions and how to use the findings/conclusions, etc.)

8) Interpretations and Conclusions (from analysis of the data/information)

9) Recommendations (regarding the decisions that must be made about the product/service/program)

Appendices: content of the appendices depends on the goals of the evaluation report, eg.:

a) Instruments used to collect data/information

b) Data, eg, in tabular format, etc.

c) Testimonials, comments made by users of the product/service/program

d) Case studies of users of the product/service/program

e) Any related literature


Additional Resources:

For further information/reading/examples on evaluations, please refer to the following list of resources:

Performance Evaluation Overview – Delaware Department of Education (February, 2012)

http://www.doe.k12.de.us/rttt/files/PerfEvaluationOverview.pdf

5 Tips (and Lots of Tools) to Become an Evaluation-Savvy Nonprofit Leader – NGen

http://www.independentsector.org/blog/post.cfm/5-tips-and-lots-of-tools-to-become-an-evaluation-savvy-nonprofit-leader

Using Dashboards in Training Evaluation – Predictive Evaluation Model – Dave Basarab Consulting

http://www.davebasarab.com/blog/dashboard/using-dashboards-in-training-evaluation-predictive-evaluation-model/

Evaluation Strategies for Human Services Programs – The Urban Institute

https://www.bja.gov/evaluation/guide/documents/evaluation_strategies.html

Rigorous Program Evaluations on a Budget: How Low-Cost Randomized Controlled Trials Are Possible in Many Areas of Social Policy – Coalition for Evidence-Based Policy

http://coalition4evidence.org/wp-content/uploads/Rigorous-Program-Evaluations-on-a-Budget-March-2012.pdf

Basic Guide to Program Evaluation (Including Outcomes Evaluation) – Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.

http://managementhelp.org/evaluation/program-evaluation-guide.htm

Workbook for Designing a Process Evaluation – Georgia Department of Human Resources Division of Public Health, Melanie J. Bliss, M.A. and James G. Emshoff, PhD

http://health.state.ga.us/pdfs/ppe/Workbook%20for%20Designing%20a%20Process%20Evaluation.pdf

The State of Nonprofit Data – NTEN

http://www.nten.org/research/2012-state-of-data


The Educators’ Guide to Service-Learning Program Evaluation

http://www.servicelearning.org/filemanager/download/37/EvaluationToolkit.pdf

Making Measures Work for You: Outcomes and Evaluation – GrantCraft

http://www.grantcraft.org/pdf_2012/guide_outcome.pdf

Questionnaire Design: Asking questions with a purpose – Ellen Taylor-Powell

http://learningstore.uwex.edu/assets/pdfs/g3658-2.pdf

Finally – Outcome Measurement Strategies Anyone Can Understand – Laurel A. Molloy, MPA

The Many Faces of Nonprofit Accountability – Alnoor Ebrahim, Harvard Business School

http://www.hbs.edu/faculty/Publication%20Files/10-069.pdf

Building a Common Outcome Framework to Measure Nonprofit Performance – The Urban Institute

http://www.urban.org/publications/411404.html

Where are you on your journey from Good to Great? – Jim Collins

http://www.jimcollins.com/tools/diagnostic-tool.pdf

Outcome Based Evaluation – Janet Boguch, MA, Seattle University Institute of Public Service

http://www.seattleu.edu/WorkArea/DownloadAsset.aspx?id=17716
