
Program Evaluation for Organizations under CAPC

(Community Action Program for Children)

©

EVALUATION TOOLS FOR ENHANCING

PROGRAM QUALITY


Centre de recherche sur les services communautaires

Faculté des sciences sociales

UNIVERSITÉ LAVAL

Association des centres jeunesse du Québec


PROGRAM EVALUATION FOR ORGANIZATIONS UNDER THE COMMUNITY ACTION PROGRAM FOR CHILDREN (CAPC)


EVALUATION TOOLS FOR ENHANCING PROGRAM QUALITY

Caroline Tard, André Beaudoin, Daniel Turcotte, Hector Ouellet

MARCH 1998


Secretarial Services: Jocelyne Gallant

English Translation: Translation Bureau, Public Works and Government Services Canada

The masculine used herein refers to both genders.

ISBN: 2-89497-016-1

These documents were produced as part of a contract with CLSC Les Blés d'Or evaluating Health Canada's Community Action Program for Children (CAPC).

The viewpoints herein are those of the authors and do not necessarily reflect the official policy of Health Canada or the Province of Quebec.


Table of contents

Foreword  iii
Introduction  1

Section 1  Evaluation Targeting the Improvement of Program Quality  7
  1. The Relation Between Program Evaluation and Quality Improvement  9
  2. Evaluating a Program to Improve its Quality Does Not Mean Only Evaluating its Effectiveness  10
  3. Questions Relating to Quality Improvement: Who Asks Them and Why  12

Section 2  Undertaking an Evaluation  15
  1. A Useful Process  17
  2. Designing an Evaluation  19

Section 3  Needs Assessment and Feasibility Assessment  29
  Scenario A: The Helping Hand Organization  31
  1. Needs Assessment  33
     Summary of the Needs Assessment  37
  2. Feasibility Assessment  57
     Summary of the Feasibility Assessment  59
  For Further Reading  60
  Sample Needs Assessment Questionnaire  61

Section 4  Process Evaluation or Formative Evaluation  69
  Scenario B: The Tiny Tots Organization  71
  1. Process Evaluation  73
  2. Developing a Logical Program Framework  75
  3. Formulating Objectives  81
     Summary of the Process Evaluation  89
  For Further Reading  99
  Sample Satisfaction Survey Questionnaire  100

Section 5  Outcome Evaluation or Summative Evaluation  111
  Scenario B (Cont.): The Tiny Tots Organization  112
  1. Outcome or Summative Evaluation  115
     Summary of the Outcome Evaluation  120
  2. Different Approaches to Outcome Evaluation  121
  For Further Reading  157

Section 6  The Basics of Data Analysis  159
  1. Analyzing Quantitative Data  161
  2. Analyzing Qualitative Data  175
  3. Interpreting Quantitative and Qualitative Data  188
  For Further Reading  192

Section 7  Formulating an Evaluation Plan  193
  The Evaluation Plan  195

References  213

Appendix  221



Foreword

A speaker who was to talk on the topic of planning, and who wanted to lead the listeners to have realistic expectations about the potential of the planned action, began his presentation with the following:

"When Christopher Columbus discovered America, he didn't know where he was going. He didn't know how to get there. He didn't know how long the trip would take or what resources would be needed along the way. Once he landed, he didn't know where he was. And yet he discovered America. "

To which someone interested in the field of evaluation could have responded:

"At least Columbus had an objective: finding India. Having an objective enabled Columbus to decide to attempt the trip and to pull together the required resources. The few navigational instruments (in evaluation terms, these are referred to as evaluation tools) he had indicated that he was heading westward and on course. Despite their primitiveness, his instruments allowed him to calculate the time lapsed since departure and provided a rough estimation of the distance traveled. Furthermore, these instruments (in this case, his own observations of the results, combined with a comparison between what he expected to discover to what he actually saw) indicated to him that either he hadn't reached India or that his description of India was incorrect. "

This is what evaluation is all about: a means for stakeholders and practitioners to guide their actions.

The Community Action Program for Children (CAPC) evaluation team set a number of objectives for itself, including the publication of manuals to provide groups working under the CAPC framework with support that would correspond as closely as possible to their needs and real potential for carrying out evaluation activities. Three manuals have been produced.¹ The first presents and clarifies the main ideas, topics, and notions relating to the concept of evaluation. This is the Introductory Manual.

The second manual aims at providing tools to organizations that want to evaluate their performance in order to enhance the quality of the programs that they deliver. It provides concrete examples that illustrate different ways of using evaluation methods and techniques. It also shows how to handle a certain number of ethical and methodological challenges often encountered in carrying out evaluations. The title of the second manual is Evaluation Tools for Enhancing Program Quality.

The third manual, entitled Presentation of Evaluation Guides, offers a fairly detailed presentation of the contents of certain "evaluation guides" that have been developed and are available. We have focused on highlighting the aspects of these guides that seem the most relevant or useful to organizations.

We would like to express our appreciation to the individuals who, in one way or another, have helped in preparing these manuals. We are particularly grateful to the members of the CAPC advisory committee for program evaluation, whose names and affiliations are listed below.

¹ These three manuals (in English and French versions) may be obtained from the Centre de recherche sur les services communautaires (community services research centre), whose address is provided on the back cover.


André Beaudoin - CRSC, Université Laval (Québec)
Lyne Champoux - CRSC, Université Laval (Québec)
Martine Cinq-Mars - Regroupement des projets PACE des Laurentides (Montreal)
Richard Cloutier - CRSC, Université Laval (Québec)
Danielle Couture / Lucie Lafrance - Mauricie/Bois-Francs Regional Board (Trois-Rivières)
Anne Dubé - La Débrouille (Rimouski)
Richard Foy - Le Pignon Bleu (Québec)
Michel Gaussiran - CAPC (Montreal)
Florence Isabelle - CLSC Seigneurie Beauharnois (Valleyfield)
Gisèle Laramée - ENAP (Québec)
Hector Ouellet - CRSC, Université Laval (Québec)

We sincerely hope that these manuals will be of use and that the organizations for which they have been published will find them of value in their efforts to develop services offered to children and parents in their communities.


INTRODUCTION

SUMMARY

1- The Types of Evaluations Illustrated in This Manual

2- The Techniques and Tools Illustrated in This Manual


This manual is designed to provide tools to organizations that would like to evaluate their performance in order to improve the quality of their programs. It offers concrete scenarios that illustrate various ways of using evaluation methods and techniques. The examples provided involve different levels of difficulty and required resources. Although based on reality, the examples are fictitious and partial: they do not reflect the actual situation of any particular organization. We feel that the use of case studies is a good way of understanding and learning how to use evaluation. The challenge facing the authors of this manual has been to provide examples that are close enough to the reader's concerns that he will be able to refer to them, use them, and transfer them to his own situation.

This manual also deals with a certain number of ethical and methodological choices often faced in evaluation. The manual can conveniently be used starting from the scenarios or from the index of the various types of evaluations, techniques, and tools presented.

1- The Types of Evaluations Illustrated in This Manual

Different types of evaluation are illustrated, beginning in Section 3, using scenarios.

• Needs assessment  p. 33
• Feasibility assessment  p. 57
• Process evaluation or Formative evaluation  p. 73
• Summative evaluation (outcome evaluation¹)  p. 115

¹ Other manuals sometimes use the term "impact evaluation" to refer to "outcome evaluation." In reality, these are two different types of evaluation that shouldn't be confused. Impact evaluation goes beyond the evaluation of effects. It can be used to analyze the long-term results and the effects of a program in the community as a whole. We do not deal with it here, since organizations rarely have recourse to it.


2- The Techniques and Tools Illustrated in This Manual

The case studies also serve to illustrate a certain number of evaluation techniques and tools, which, of course, can be used in contexts other than those presented here. For example, we look at the formulation of questions in the section on needs assessment. The guidelines given for formulating good questions also apply to other types of evaluation.

While there are many evaluation techniques and tools other than those mentioned in this manual, we feel that organizations should be able to readily use the ones that we have selected. Organizations can adapt these techniques and tools to suit their specific needs, but they should be used advisedly, taking into account the type of evaluation and the conditions required for their application.

Consequently, we will discuss how to handle the topics below.

• Formulating measurable objectives  p. 81
• Formulating success indicators  p. 83
• When to use questionnaires, individual interviews, or focus groups  p. 51
• Formulating questions  p. 39
• Designing a questionnaire  p. 43
• Conducting an interview  p. 44
• Organizing and leading a focus group  p. 47
• Collecting information from key informants  p. 54
• Organizing a community forum  p. 55
• Developing a logical framework  p. 75
• Selecting a standardized measurement instrument  p. 135
• Observing participant behaviour  p. 138
• Using attitude rating scales  p. 139
• Using the single-case technique  p. 145
• Coding quantitative data  p. 162


• Using descriptive statistics  p. 169
• Constructing tables and graphs  p. 171
• Analyzing qualitative data  p. 175
• Interpreting qualitative and quantitative data  p. 188
• Formulating an evaluation plan  p. 195
• Drawing up a schedule  p. 201
• Setting the budget  p. 202
• Getting participants' informed consent  p. 207
• Maintaining confidentiality of information  p. 210
• Writing the evaluation report  p. 204
• Using the evaluation results  p. 204
• What to do if the evaluation indicates that the anticipated objectives were not achieved  p. 154
• When to seek the assistance of evaluation resource persons  p. 155

The reader will also find examples of the following:

• Program logical framework  p. 87
• Needs assessment questionnaire  p. 61
• Satisfaction evaluation questionnaire  p. 100
• Focus group discussion checklist  p. 50
• Behaviour observation checklist  p. 133
• Consent form  p. 210

We have generally attempted to illustrate both the potential of and the problems with program evaluation. Some examples aim at showing how, in certain instances, simple, low-cost processes can provide answers to evaluation questions that will enable you to improve the quality of your actions. Other scenarios shed light on evaluation situations that require greater commitments of resources and time. Taken as a whole, the manual therefore presents a well-rounded, overall picture of implementing evaluation to improve a program's quality.


SECTION 1

EVALUATION TARGETING THE IMPROVEMENT OF PROGRAM QUALITY


SUMMARY

1- The Relation Between Program Evaluation and Quality Improvement

2- Evaluating a Program to Improve its Quality Does Not Mean Only Evaluating its Effectiveness

3- Questions Relating to Quality Improvement: Who Asks Them and Why


There are a number of evaluation references and guides for organizations² that want to evaluate their activities. All of these documents have their own approach to evaluation and offer different methodologies.

The approach provided in this manual differs mainly in its focus on improving the quality of programs and in offering organizations sound tools that can be used to evaluate programs in accordance with their means, capacities, and limitations.

Evaluation constitutes an integral aspect of what organizations do. It allows both program workers and participants to enrich their understanding of the program in which they are involved and to grasp its functioning, scope, and impact. It is also a means of providing an accounting to outside organizations, especially sponsors, of the program's status and some of its outcomes.

1- The Relation Between Program Evaluation and Quality Improvement

The expression "program evaluation" refers to the methods, techniques, and theories that are used to evaluate work, interventions, and projects, which are often simply called programs. Evaluation has a rather long history, which we have attempted to outline in the first manual in this series.3 At one time, the notions of quality assurance,

² The manual entitled Program Evaluation for Organizations under CAPC - Presentation of Evaluation Guides provides a critical look at a number of these guides.
³ See the appendix in Program Evaluation for Organizations under CAPC - Introductory Manual.


In the social sphere, however, we are hearing more and more about evaluation techniques as part of an approach centered on promoting and improving the quality of programs and services.

In fact, improving the quality of a program requires:

— identifying problems that affect the entire program even though they may only involve a few program elements;

— gathering appropriate data to document them; and

— using the data to find solutions to these problems and thereby improve the program.

As you will see in reading through this manual, the process for improving quality uses methods and tools normally found in program evaluation.

2- Evaluating a Program to Improve its Quality Does Not Mean Only Evaluating its Effectiveness

As we pointed out in the first manual in this series, program evaluation is too often limited to effectiveness alone. We can talk about the effectiveness of community-action programs, such as CAPC, if it can be shown that the action carried out improves the situation of children, develops the skills of parents, or prevents the appearance of all kinds of problems. You need to remember, however, that demonstrating program effectiveness, in the strict sense, is not a simple process. To demonstrate that a program is effective, you need to answer questions such as:


1. Do the people who receive the services or who participate in the activities perform or behave differently than those who do not? For example, can you show that the children who took part in an infant stimulation program experience fewer problems when they start regular classes than similar children who didn't? Or can you demonstrate that children who have attended abuse-awareness programs exhibit less violent behaviour than other children do?

2. Why do certain program participants achieve better results than individuals who do not take part? Why does one form of intervention or service produce better results than another? To what extent can we attribute outcomes to one type of intervention rather than to another?

3. Does the program produce results for all participants? If not, who benefits the most and why?

Answering these questions is very often difficult. Usually, the process is quite costly and often impossible to carry out unless researchers or expert evaluators are involved. If you limit yourself to this kind of restrictive approach, you will generally be unable to use it unless you have access to appropriate resources and means.

Fortunately, evaluation can be more limited in scope yet yield results that will be just as useful for organizations. For example, evaluation methods can be used to deal with the question of quality of services. Evaluation tools can provide an outline of what's happening in a program, shed some light on its functioning, or establish how different activities under the program converge on achieving one or more program objectives. Assessing the implementation of a new activity can ensure, from the outset, that it meets certain quality requirements.


3- Questions Relating to Quality Improvement: Who Asks Them and Why?

When used for program improvement, evaluation should take place while the program is ongoing. It should therefore be one component in continual, critical monitoring of activities: the information gathered is used to help the program better achieve its objectives. The main types of evaluation questions asked to establish quality are:

— Do we have a clear idea of the needs of the target clientele?
— Have we adequately taken into account all the needs of our clientele?
— Does our program have all the elements required to achieve its objectives?
— Does the program truly serve the target populations or groups?
— Are the activities or services tailored to the different types of clients?
— What is the proportion of clients for whom the observed outcomes indicate an improvement in their situations?
— What aspects of the program produce the most changes in clients?
— What use does the program make of different community resources?
— Do other community resources make use of the program? For what purposes?

Anyone involved in carrying out a program has the right to express their opinions during evaluation. Obviously, administrators, program coordinators, and organizations providing funding are the most likely to ask questions. In order to determine whether the program should be continued or modified, they need to know if the program is being applied as described. In addition, they are interested in seeing concrete evidence of what program participants come away with.


But evaluation questions can come from a number of other sources insofar as the evaluation focuses on quality. Program workers and volunteers may have questions about their actions, to understand why things are going well or not so well and what they can do to make the program better. The participants themselves, other community organizations, collaborators, partners from other programs, and outside researchers may also be interested.

This manual has been designed to allow anyone involved in a program to formulate questions that could improve its quality and help provide answers to these questions.


SECTION 2

UNDERTAKING AN EVALUATION



SUMMARY

1- A Useful Process

2- Designing an Evaluation

   Phase 1: Focusing the Evaluation
   Phase 2: Planning the Evaluation
   Phase 3: Conducting the Evaluation
   Phase 4: Using the Evaluation Results


The process proposed to you throughout this manual is program evaluation for improving program quality. This process has three main features: 1) it must be, first and foremost, useful to the organization and program staff; 2) it involves formulating questions tailored to the context; and 3) it involves various phases that must be followed.

1- A Useful Process

Program evaluation is both useful and positive for an organization. It must not involve attempts to judge the success or failure of the program or the organization. Rather, it should aim at understanding how to maximize program outcomes. What you can learn from the evaluation process and the lessons drawn from it provide valuable information for improving the program.

Who is in a better position to know what's going on in a program and how to improve its activities than the program workers, coordinators, volunteers, and participants? These individuals have much to contribute to the evaluation since they have unique experience with the program in question. Indeed, they implement it, give it life, and live it themselves. In this sense, the evaluation process must be participative. It must rely on the cooperation of all stakeholders and take into account their expertise. A participative evaluation⁴ has a certain number of features that we have summarized below.

⁴ The principles involved in this process have been adapted from a document entitled Keeping On Track - An Evaluation Guide for Community Groups, produced by the Women's Research Centre in Vancouver, and restated in Program Consultant's Guide to Project Evaluation, designed for the Population Health and Issues Directorate, Health Canada.


• Participative evaluation stresses learning, success, and action. Basically, you need to draw lessons from the program, about what worked well and what didn't, and whether there were outcomes. This is done in a spirit of improving the program and adding value to current and future interventions.

• The process must be useful to organization members. It allows program staff, volunteers, and anyone else involved to do their jobs better. Program evaluation must be based on the program's goals and objectives.

• It provides information to all stakeholders throughout the evaluation process. Since it is an ongoing process throughout the program, it yields information for all interested parties at different times. To illustrate, program workers could use the information to add value to their interventions during the program. It is therefore important that the information be distributed in a clear, precise, and timely manner.

• The process takes account of change. It's not a question of measuring the complete success or failure of a program — that would be misleading. Instead, it's an attempt to assess the degree of success, which means the progression of change affecting knowledge, attitudes, skills, and behaviour of program participants. This requires identifying objectives of change or outcomes for each program component or activity. It is important that the objectives take into account the organization's resource limitations.


• The process reflects the organization's evaluation questions and criteria. The organization must either formulate its own evaluation questions or participate in their formulation. It also has the task of defining the indicators for evaluating its work in a realistic manner.

• It integrates the viewpoints of all stakeholders. Participative evaluation lets you come up with questions of common interest to workers, participants, volunteers, project coordinators, and funding agencies. It is important that the evaluation integrate information provided by all stakeholders and that it reflect an overview of these viewpoints.

2- Designing an Evaluation

The evaluation process comprises four main phases:

Phase 1: Focusing the Evaluation
Phase 2: Planning the Evaluation
Phase 3: Conducting the Evaluation
Phase 4: Using the Evaluation Results

PHASE 1: Focusing the Evaluation

Determining the evaluation questions that need to be answered is the first phase in the evaluation process. It consists in selecting the issues that will be addressed (in other words, the WHAT).

Program evaluation can be looked at from different angles: the program's relevance for the community, the fit of program components, client characteristics, how activities are conducted, the services produced (which are referred to as outcomes), user satisfaction, and outcome analysis. These are the different questions on which evaluation processes, each in their own way, are built in order to critically examine a program or some of its components, to identify problems, and to find solutions. In other words, to improve program quality.



☞ These are the questions that you will encounter throughout the different sections in this manual and which will be illustrated through examples.

Evaluation Process and Types of Questions to Ask Yourself

Evaluation of Client Needs and Program Feasibility
- Does the community need this type of program?
- What are the characteristics of clients who would benefit from this type of program?
- What resources (staff, offices, etc.) are required to provide a minimum number of responses to the requests?
- Do we have the material, financial, and human resources needed to carry out the program properly?

Analysis of Program Articulation
- What are the program's targeted objectives?
- What are the main elements or components of the program?
- What is the relationship between the objectives and the elements or components?
- What is the relationship between the various program objectives, program components, and the overall mission of the organization?

Study of the Characteristics of Participants, Clienteles, and Users
- What are the characteristics of the individuals, families, and groups affected by the program?
- To what extent does the program reach the targeted individuals, families, and groups (target populations)?

Analysis of How Services Are Being Delivered and Activities Achieved
- What is the nature of the activities carried out directly or indirectly with participants? This is based on precise sub-questions such as: What is the average number of hours of participant contact? How many participants does one program component or another affect? How many workshops have been held? How often?

Analysis of Services Produced by the Program (Outcomes)
- How many participants have left the program? When and for what reason?
- How many participants stay in the program to the very end?
- How many participants are referred to other services?

Study of User Satisfaction
- What is the opinion of participants about what they found most useful? Least useful?
- To what extent do they feel that their needs have been met?
- Why did some participants leave the program?
- What do other community organizations think about the usefulness of the services delivered by the program?

Outcome Studies
- What changes did the program produce in participants?
- To what extent can the observed changes be attributed to the program?


All of these questions represent different levels of complexity. Not all organizations have the potential, means, or resources to respond to all of them. For this reason, you need to determine how far you can go by yourself and at what point you need to call on appropriate evaluation experts.

Once the process has been determined, you also need to clarify the following.

• WHY AND FOR WHOM? Why does the organization want to conduct the evaluation? Which groups are stakeholders in the evaluation and will use it: sponsors, the board of directors, staff?

• HOW AND WHO? How will the organization undertake the evaluation? Who will be in charge of it? Who is best placed to conduct the evaluation? Does the organization have the human resources required to conduct the evaluation? On what aspects of the evaluation should experts be consulted?

• WHEN AND HOW MUCH? What is the evaluation schedule, given the tasks to be undertaken? What costs are involved?

☞ We will discuss these topics in greater detail in Section 7 of this manual.


PHASE 2: Planning the Evaluation

The planning phase comprises the following operations:

• GATHER ALL THE DOCUMENTS PERTAINING TO EVALUATION Literature pertaining to the program, funding requests, program descriptions, minutes of meetings, progress reports, participant registration forms, attendance records, logbooks, and the like.

• DEFINE THE TYPE OF EVALUATION This consists in defining the type of evaluation depending on the questions that must be answered.

The evaluation can emphasize program relevance, in which case a needs assessment would be appropriate. To determine if an activity is realistic, a feasibility study is carried out.

If the focus is program articulation, implementation, or operation, formative evaluation would be used to judge implementation.

Lastly, the evaluation could emphasize program effectiveness (to what degree does the program achieve its objectives?) or efficiency (how does the program produce results? What modifications or improvements could be made to produce the targeted outcomes at lower cost or more rapidly?). In this case, summative evaluation or outcome evaluation would be used.


Occasionally, evaluation approaches involve both summative and formative evaluation.

Evaluation Process and Corresponding Types of Evaluation

Evaluation of Client Needs and Program Feasibility
⇒ Needs assessment
⇒ Feasibility assessment

Analysis of Program Articulation
⇒ Formative evaluation
⇒ Used occasionally in summative evaluation

Study of the Characteristics of Participants, Clienteles, and Users
⇒ Formative evaluation
⇒ Used almost always in summative evaluation

Analysis of How Services Are Being Delivered or Activities Achieved
⇒ Formative evaluation

Analysis of Services Produced by the Program (Outcomes)
⇒ Formative evaluation
⇒ Used occasionally in summative evaluation

Study of User Satisfaction
⇒ Formative evaluation
⇒ Used occasionally, but only in summative evaluation

Outcome Studies
⇒ Summative evaluation

☞ Ways of conducting these kinds of evaluation are illustrated starting in Section 3.


• ENSURE THE PROGRAM OBJECTIVES ARE CLEAR When undertaking a formative or summative evaluation, you need to be sure that the program objectives are clear and precise. If they aren't, then you need to try defining them in measurable, specific terms. It should be obvious that evaluation should ideally start at the program outset. Too often, however, evaluation planning is left until after the program has been implemented. In this case, if the objectives are not specific, it is impossible to assess whether or not they have been attained.

☞ Later in this manual, we explain how to define clear and measurable program objectives.

• ENSURE THE SUCCESS INDICATORS ARE CLEARLY DEFINED Outcome evaluation of a program cannot be undertaken if no success indicators have been defined. These indicators serve as objective criteria in measuring how successful the activity has been. Generally, the indicators are developed at the program outset when the objectives are defined. They need to be looked at if they haven't been designed or have been poorly designed.

☞ The process for designing success indicators is developed later in this manual and illustrated with examples.
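For organizations that keep participant records electronically, the final check of an indicator against its target is the easy part. The following minimal sketch (in Python) is an illustration only: the objective, the 70% target, and the field names are invented for the example, not drawn from any CAPC program.

    # Hypothetical indicator: "at least 70% of participating parents report
    # improved access to respite care by the end of the summer service".
    # Records, field names, and the target are invented for illustration.

    records = [
        {"parent_id": 1, "reports_improvement": True},
        {"parent_id": 2, "reports_improvement": False},
        {"parent_id": 3, "reports_improvement": True},
        {"parent_id": 4, "reports_improvement": True},
    ]

    TARGET = 0.70  # success threshold, fixed with the objective at program outset

    improved = sum(1 for r in records if r["reports_improvement"])
    rate = improved / len(records)

    print(f"{improved}/{len(records)} parents report improvement ({rate:.0%})")
    print("Objective attained" if rate >= TARGET else "Objective not attained")

The point of the sketch is the order of operations: the indicator and its target are defined before the data are collected; the computation comes last.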

• DETERMINE HOW TO GATHER INFORMATION by answering three questions:

• What information is needed? You have to determine what kind of data to collect to answer the questions corresponding to the type of evaluation.


• Who can provide the information? You need to target those groups or individuals in the program who are most likely to have the information that you need. Other sources such as minutes of meetings, attendance records, and logbooks can also be used.

• How to collect the information? This concerns selecting the most appropriate data-collection tools, based on the types of information required for successful evaluation. Of course, the organization's human, material, and financial resources must be taken into consideration.

We provide examples of data-collection tools for different evaluation types throughout this manual.

• DETERMINE HOW TO ANALYZE AND SUMMARIZE INFORMATION This involves determining how the collected data will be analyzed in order to extract information that will be useful in answering evaluation questions. You have to stick to what is necessary: the tendency is often to gather more information than is really required. Specifying from the outset how the collected data will be analyzed often allows for making adjustments and keeping the evaluation reasonable in scope.
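If the responses are entered into a computer, producing the counts and percentages used in a descriptive summary takes only a few lines. The sketch below is purely illustrative; the question and the coded answer categories are invented.

    from collections import Counter

    # Invented coded answers to the closed question "Which type of child
    # care do you use the most regularly?" -- one answer per questionnaire.
    responses = [
        "daycare", "relatives", "daycare", "at_home",
        "relatives", "daycare", "other", "daycare",
    ]

    counts = Counter(responses)
    total = len(responses)

    # Descriptive summary: frequency and percentage for each category.
    for category, n in counts.most_common():
        print(f"{category:<10} {n:>3}  ({n / total:.0%})")

Deciding in advance that this is the level of analysis you need also tells you exactly which data to collect, and nothing more.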

PHASE 3: Conducting the Evaluation

This phase comprises the following operations:

• COLLECTING THE DATA The data is collected from the targeted sources using the instruments chosen in the preceding phase.


Data collection involves ethical considerations. You must ensure that all precautions have been taken to protect the rights of the individuals affected by the evaluation.

☞ Ethical considerations are dealt with in Section 7.

• ANALYZING INFORMATION A variety of methods are used to make the data "talk." The information is organized, then the pertinent contents extracted and analyzed, keeping in mind the program objectives and success indicators as reference points. The important thing is to see how the program is moving towards its objectives in an endeavour to improve program quality. This is the standpoint from which information must be analyzed and interpreted.

☞ The various data-analysis methods are dealt with in Section 6.

• WRITING A REPORT Information analysis should make it easy to write an evaluation report that is both simple and comprehensive.

☞ Section 7 discusses the different topics that should be covered in the evaluation report.

PHASE 4: Using the Evaluation Results

As we have seen above, in order to be useful and positive, an evaluation must be able to help improve the program.

In this regard, everything must be done to ensure that everyone involved in the program has access to the contents of the evaluation and uses the results to improve program quality.


Moreover, the organization can apply the results on a larger scale in the community or neighbourhood to promote its activities. Evaluation results can also be used in applying for funding renewals or new grants. When rigorously collected, evaluation data is information that must be taken into account.


SECTION 3

NEEDS ASSESSMENT AND FEASIBILITY ASSESSMENT


SUMMARY

Scenario A: The Helping Hand Organization

1- Needs Assessment
   A- What information do we need?
   B- Who can provide the information?
   C- How to collect the information?
   Summary of the Needs Assessment
   a) Collecting information from clients
   b) Collecting information from other community organizations

2- Feasibility Assessment
   A- What information do we need?
   B- Who can provide the information?
   C- How to collect the information?
   Summary of the Feasibility Assessment

For Further Reading

Sample Needs Assessment Questionnaire

ILLUSTRATED TOOLS AND TECHNIQUES:
— Needs assessment questionnaire
— The focus group
— Interviews with key informants
— Community forum

ILLUSTRATED METHODOLOGICAL CHOICE:
— Choice between the questionnaire, interview, and focus group


SCENARIO A: THE HELPING HAND ORGANIZATION

The mission of the Helping Hand organization is to provide support to parents of mentally impaired children. The organization's services include information meetings for parents, distribution of a newsletter about available resources, and child-care services to allow parents respite on weekends.

During the information meetings, some parents mentioned to staff that it was harder to find someone to care for their children during the summer and that the situation was making it hard for working parents to organize their schedules.

As a consequence, the staff discussed the idea of offering a special service in the summer.

— Have enough parents expressed a need for the service to warrant setting it up?

— Do other organizations offer a similar service to which parents could be referred instead of setting up a new service? Could the new service complement a service offered by another organization?

— What form should the new service take? A day camp? An overnight camp? Something else?

— What resources are required to run the new service: human resources, financial resources, offices, municipal permits, etc.?


This first scenario can be used to illustrate two types of evaluation: needs assessment prior to implementation of a program or service, and feasibility assessment. They will be discussed in that order.

In our discussion of needs assessment, we will look at constructing a questionnaire, putting together and leading a focus group, and collecting data from key informants and during a community forum. The methodological choice discussed will be the choice among the questionnaire, individual interviews, and the focus group.

1- Needs Assessment

☞ Needs assessment consists in gathering information to identify the situation, objectives, and conditions required for planning an activity or program. It generally occurs in the first phase, which is planning (see the Introductory Manual).

☞ Needs assessment can also be conducted while a program is ongoing, in which case it is used to determine if the program still meets clientele needs and if there are new problems or difficulties that the program doesn't address, but which could form the basis for expanding the program.


How will Helping Hand carry out this assessment of its clients' needs? By trying to answer these three questions:

✓ What information do we need?
✓ Who can provide the information?
✓ How to collect the information?

A- WHAT INFORMATION DO WE NEED?

The first thing that the organization's practitioners and coordinators must do is ask themselves the question: What exactly do we want to know? This is the information that needs to be gathered, and nothing else.

Helping Hand's coordinators and staff must examine three aspects.⁵

The Situation

— What is causing a problem for the parents? In other words:
  • How many parents actually are having trouble finding someone to take care of their disabled child during the summer?
  • Who are these parents and children (what are their characteristics)?
  • How are they managing now?

— How do the parents view the response to their problem? In other words:
  • What types of services would they use (home child care, day camps, overnight camps, or other solutions)?

⁵ See the Introductory Manual (Section 4) in this series.


  • During what part of the summer would the need be the greatest? At what frequency?

• Are the parents prepared to pay for the service and, if so, how much?

Objectives

• What needs are you trying to meet? What objectives should be given priority?

• What is the best formula for responding to these needs? What orientation should the program take?

• Is there agreement on the objectives from all groups?

Conditions for Implementation

• How will this service respond to the identified needs?

☞ As with all other types of evaluation, needs assessment must have specific objectives. There's no use collecting records and information without knowing what they will be used for.

B- WHO CAN PROVIDE THE INFORMATION?

A number of individuals and groups can provide the needed information.

The Parents Themselves

As the people who are experiencing the need, parents are best placed to determine what response would solve their problem.


Other Community Organizations

Many different groups in the community can have an interest in this question. It is important to contact them for their input since their activities may be useful or contribute to the end solution, such as setting up a new service.

☞ Selection criteria for sources can vary depending on the situation. The populations who have the need should usually be the primary source of information. On the other hand, you have to take into account criteria such as respondents' age, how easily they can be reached, their availability, and their physical and mental capacities.

C- HOW TO COLLECT THE INFORMATION?

There are a variety of methods for collecting the information required to conduct a needs assessment.

In gathering information from parents, for example, Helping Hand could use:

• a questionnaire sent to homes; • individual interviews during which parents respond to ques-

tions asked directly by an interviewer, • a focus group.

To collect information from other community organizations, Helping Hand could:

• conduct interviews with key informants; • hold a community forum.


SUMMARY OF THE NEEDS ASSESSMENT

What You Want to Find Out
• What is the problem situation and what are the population's needs?
• What is the best formula for responding to these needs?

What Information Do You Need?

The Situation
• What is the problem situation?
• How do clients perceive their needs?

The Objectives
• What objectives should be given priority in addressing the problem?
• Is there agreement on the objectives from the other community groups?
• What orientation should the program take?

The Conditions
• How will the service respond to the identified needs?

Who Can Provide the Information?
• The clients themselves
• Other community organizations

How to Collect the Information?
• A questionnaire sent to homes
• Interviews
• Focus group
• Interviews with key informants
• Community forum


a) COLLECTING INFORMATION FROM CLIENTS

THE QUESTIONNAIRE AND INDIVIDUAL INTERVIEWS

How do you decide whether to use a questionnaire sent directly to clients or to opt for individual interviews with clients? Both methods have their strengths and weaknesses.

Questionnaires sent to the home, then filled out by clients themselves, are a low-cost way of reaching many respondents. They allow anonymity and increase the chances that the responses genuinely represent the respondents' beliefs. They also let clients work at their own speed in responding. The main disadvantage is that questionnaires are rather impersonal. For this reason, a covering letter is added to encourage people to respond.

Interviews elicit more active participation from respondents by establishing a rapport with the interviewer. Interviews are useful in obtaining information from people who do not read or write very well, or find it easier to express their ideas orally. In addition, the interviewer can explain questions that seem unclear to respondents. On the other hand, interviews demand more time and energy, and they are more costly when it comes to collecting and analyzing the data.

☞ In a structured interview, the interviewer asks specific questions in a set sequence and records the respondent's answers on the questionnaire.


☞ In a semi-structured interview, respondents are invited to develop their answers and to provide more depth. Their responses are often recorded on tape for later transcription and qualitative analysis. (See Ethical aspects, Section 7.)

What Makes a Good Question? Good questions have at least three essential qualities.

1- Clarity: Clear questions are easily understood. The wording must be simple and understandable by anyone. Avoid abbreviations, words with vague meanings, and jargon. A question should be singular, which means one question, one idea.

Were you satisfied with the length and contents of the workshop?

WARNING! This question addresses two ideas: contents and length.

2- Relevance: Questions must be designed to elicit the desired information and touch on topics for which respondents have information.

3- Neutrality: Questions are aimed at letting you find out who the respondents are, what they think, and what they do. You don't want to influence them into responding with what they think they should say, instead of what they really feel. Questions must therefore be unbiased and neutral.


Here are some examples of biased questions, which should be avoided!

Are you for a day camp that operates only in the morning?
[ ] Yes  [ ] No  [ ] Not sure

WARNING! This question is biased because it offers only one possible choice. The wording of the question must present both choices: "agree or disagree," "for or against." It is a well-known fact that if only the positive choice is presented, people will tend to answer "yes."

State whether you agree or disagree with the following statement: "Municipalities should provide some of the funding for disabled children's day camps."

[ ] Agree fully   [ ] Agree   [ ] Totally disagree

WARNING! This question is biased because it offers more positive choices than negative ones. The number of positive and negative answers must be the same; otherwise, the respondent is more likely to answer one way than the other.

Do you agree or not with the exorbitant fees charged for disabled children's day camps?

[ ] Agree

[ ] Disagree

WARNING! This question is biased because it makes a value judgment on the "exorbitant" nature of the fees.


What Kind of Question? There are two kinds of question: closed and open.

Examples of closed questions: Do you need child care for your child(ren) during the Christmas holidays?

[ ] Yes
[ ] No

Overall, to what extent are you satisfied with the quality of services provided by Helping Hand?

I am very satisfied [ ]   I am satisfied [ ]   I am unsatisfied [ ]   I am very unsatisfied [ ]

(Note: In this type of question, a maximum of four possible answers is provided; with an even number of choices, there is no middle, most "neutral" answer for people to fall back on.)

Which one of the following types of child care do you use the most regularly? (Choose only one.)

[ ] Someone comes to the house to look after the child(ren)
[ ] Relatives look after the child(ren) at their house
[ ] Neighbours or friends look after the child(ren) at their house
[ ] Daycare
[ ] Other. Please specify:

What type of child care do you prefer during the week? (Rank all that apply in order of preference, beginning with 1 for the most preferred.)

[ ] At home
[ ] At the home of relatives
[ ] At the home of neighbours or friends
[ ] Daycare
[ ] Other. Please specify:


^ You use closed questions when the question is clear and the series of possible answers is known. The important thing is to make sure that the suggested answers are plausible (possible), exhaustive (covering all the possibilities), and mutually exclusive (one answer is not contained in another). Sometimes a possible "other" answer must be offered.

Examples of open questions: Do you have trouble finding child care for your disabled child?

[ ] Yes
[ ] No

If yes, please describe the problems.

What made you decide to attend the Helping Hand information session?

If you had to organize a summer camp for disabled children, what activities would you plan?

^ Open questions do not limit the scope of the answer. They allow respondents to use their own words, structure their answers, and qualify what they say. Open questions give them the chance to say how they see things and what they feel about them. They also, however, require more effort on the part of the people you are polling, especially if you want written answers. You have to remember, too, that you have to analyze and process the material. If you have a large number of questionnaires, you have to code the answers. Coding and analysis are complex and take time. (See Section 6)


Sequence of Questions Generally, questionnaires (or interview questions) are divided into a certain number of sections that correspond to the different aspects or themes about which you want information. Sections are put in an order that encourages respondents to take part: the easier sections are placed at the start and the more difficult or delicate ones at the end. For example, the questionnaire could begin with the more factual sections on child motor or social development, while more personal themes such as the parent's difficulties, feelings, and opinions could be dealt with at the end. You should mark the transition from one section to the next with a title such as Child Social Development, or with a short introductory sentence such as The following questions deal with your child's social development.

Demographic questions (age, income, job, etc.) are generally left to the end of the questionnaire. Open-ended questions can be included in a closed-question questionnaire to enrich and complete the information.

Questionnaire Length People should be able to complete the questionnaire in about 15 minutes. They tend to lose interest in long questionnaires, which can lead them to rush through the questions in the later sections without paying much attention to their answers. A structured interview, on the other hand, can take about 30 minutes; a semi-structured interview can run from 1 hour to 1 1/2 hours.

Questionnaire Presentation Presentation is important. An attractive, well laid out, cleanly printed questionnaire will elicit a higher response rate. Pages must not be too dense. The question and the space for the answer must be on the same page, and there must be enough room to check responses or fill in answers.


Questions must be numbered, as must the pages! An introduction and instructions must come at the beginning. A wide right margin is left on the questionnaire for coding answers for electronic data processing (EDP), if required.

^ Putting together a questionnaire is not easy. Before conducting a survey, make sure your questionnaire is properly constructed. Consult people who are knowledgeable about methodology and questionnaire content.

^ The matter of electronically processing responses must be settled before the questionnaire is used: it could be very difficult to do so after the fact. It is ABSOLUTELY necessary to consult a specialist if you are not familiar with EDP and statistics.
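By way of illustration only, here is a minimal sketch of what such processing can look like once the answers have been coded, for teams comfortable with a simple script. The file name, column name, and answer codes below are invented for the example; they are not part of the Helping Hand questionnaire.

```python
import csv
from collections import Counter

# Tally the coded answers to one closed question.
# Hypothetical file: each row of responses.csv is one returned questionnaire,
# and column "q8" holds the coded answer to question 8
# (1 = yes, 2 = no, 3 = not sure).
with open("responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

tally = Counter(row["q8"] for row in rows if row["q8"])
total = sum(tally.values()) or 1  # guard against an empty file

labels = {"1": "Yes", "2": "No", "3": "Not sure"}
for code, count in tally.most_common():
    print(f"{labels.get(code, code)}: {count} ({100 * count / total:.0f}%)")
```

For anything beyond simple tallies (cross-tabulations, statistical tests), the advice above stands: consult a specialist.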

Interviews Most practitioners in community organizations are already specialists in interview techniques. One must remember, however, that although a client interview and an evaluation interview have much in common, they are different in many respects.

— The same starting point for different objectives During client interviews, practitioners try to gather objective and subjective information in order to understand what the client is going through with a view to providing the client with tools to find solutions to his own problems.


The starting point of evaluation interviews is the same: to gather objective and subjective information. The purpose, however, is not. The evaluation interview seeks to understand what clients expect of programs, how they experience them, what they think of them, and what they get out of them.

— Similar preliminaries: building trust and setting the scene

For both a client interview and an evaluation interview, the first minute is crucial. A climate of trust and empathy must be established to obtain the other person's cooperation. The interviewee must not feel pressured. On the contrary, one must take the time to explain how the evaluation works and how the interview will be conducted.

— Similar attitudes The same listening skills are required for both client and evaluation interviews: open-mindedness, empathy, acceptance of the other, non-directive attitude, etc.

— Common interview techniques Just as in a client interview, certain techniques are used in an evaluation interview: questions, explanations, restatement ("If I understand correctly, you mean..."), generalization, especially when a question is particularly delicate or threatening ("Many people think that..., what do you think? ").

— Different techniques Unlike intervention interviews, evaluation interviews should not involve techniques that can influence the respondent. Suggestions, advice, persuasion, approval, confrontation, criticism, contradiction, and help in making a decision must not be used.


Consequently, you must not put the validity of the respondent's information in doubt or contradict it. Neither should you provide hints that would suggest a particular response or quote answers given by other respondents. The interviewer must not give his personal opinion.

— Different kinds of interview support In intervention interviews, the topic of discussion is often not determined beforehand, but depends on events experienced by the client or other factors. In contrast, an evaluation interview is based on a guide established ahead of time that must be followed and completed during the interview.

In a structured evaluation interview, the questions are read slowly and in their entirety. Each question must be asked, even if you think that the respondent has answered it indirectly. In this way, you ensure that you have answers to every question and that you'll be able to process data from the respondents as a group. With closed questions, you check off the corresponding response. Responses to open-ended questions must be written down word-for-word with nothing changed.

In a semi-structured evaluation interview, the interviewee can be asked to clarify or elaborate. One can also go back to a question if the person cannot answer right away or seems uncomfortable.

Improving the Quality of the Questionnaire or Questions When designing the questionnaire or questions, you need to be able to anticipate how respondents will react by putting yourself in their place. Come up with different wordings, compare them, and then keep the one that seems best.


Before distributing questionnaires or conducting interviews, you need to check for quality by trying the questions out on people who are similar to the ultimate respondents. This is called a pretest. To illustrate, the person in charge of the evaluation for Helping Hand might give the preliminary version to other parents, record their hesitations, and note which questions give them problems. In looking at their responses, the evaluator must decide if the questions are eliciting the information being sought. Based on that, questions can be revised, eliminated, or added as required.

A Sample Questionnaire for Needs Assessment from Helping Hand Armed with this information, Helping Hand staff can now start designing a questionnaire to assess the needs of parents. Before using the questionnaire, it must be pretested on two or three parents. Afterwards, it can be distributed to parents to complete on their own or it can be used to conduct structured interviews.

You can find a copy of the complete questionnaire at the end of this section. It contains both open and closed questions, presented in a logical sequence. The use of very specific questions will yield relevant information about: 1) child characteristics; 2) the way that parents usually plan for child care and how they organized it last summer; 3) the specific needs for child care next summer; 4) how much parents would be prepared to pay for the service; and 5) family characteristics. The questionnaire is prefaced with a short text that provides some instructions to parents.

THE FOCUS GROUP The focus group is another method for obtaining information on a specific theme or topic. It's an efficient way to gather information from a fairly large number of respondents at the same time.

With this method, a number of relatively homogeneous groups are formed (generally 6 to 12 people, although 10 is the ideal number). Typically, participants are led to express their opinions by a


moderator using relatively structured instruments defining the topics to be discussed. Each participant gives his own point of view of the situation, based on his experience. Then the opinions from all the groups are analyzed and synthesized, bringing out the main areas of agreement and disagreement.

^ A focus group isn't expected to arrive at any kind of consensus. In fact, everyone must be allowed to express their own opinions. Since you're trying to reflect the different viewpoints of participants, you should use whatever means you can to gather as many opinions as possible.

To make a focus group successful, you need to:

• do a good job recruiting participants and form a sufficient number of groups. At least two groups must be formed; it's not possible to use this technique with a single group. Each group must be fairly homogeneous. For example, groups of parents could be formed based on the age of their children, the type of disability (mild or severe mental impairment; motor, hearing, or visual impairment), or geographical origin (rural area, urban area, or city neighbourhood). The key is to form as many groups as necessary to capture the heterogeneity of the population and obtain the widest possible range of opinions. Since analysis is time-consuming, however, you shouldn't create too many groups. Interviews tend to last about two hours.

• ensure good facilitation during the interviews. Facilitators play an important role in focus group interviews. They must have good skills in facilitating and group process to conduct the interview and ensure that everyone participates.


Each interview starts with the facilitator giving instructions to participants: recalling the goal of the interview, indicating what is expected of them and the time allotted for each topic, and explaining the follow-up to the program. Facilitators must assure participants that there is no need to identify themselves and that the discussion will remain anonymous. They must also obtain participants' permission to tape-record the discussions (see confidentiality and anonymity concerns, Section 7).

• construct a good interview guide. The interview guide must be well constructed to ensure that all relevant topics are covered.

The interview guide is an essential tool for facilitating and conducting the interview. It must be focused on the evaluation themes and constructed logically, starting with the general and narrowing down to the more specific. Enough time must be allotted to discuss each topic.


Sample Focus Group Interview Guide for Needs Assessment

for the Helping Hand Organization

Presentation of instructions (10 minutes), followed by discussion.

1- Part One (30 minutes): Identification of the Current Situation

How did each parent arrange child care for his disabled child last summer? Did he encounter difficulties? If so, what?

2- Part Two (30 minutes): Identification of Child-care Needs

What kind of care would each parent like to have for his disabled child next summer? When? How often?

3- Part Three (40 minutes): Discussion of Different Types of Child Care

Based on the different types of child care brought up by the parents, conduct an in-depth discussion about the strengths and weaknesses of each.

Ask parents to suggest how they think that the service should be configured: "If you had to organize the child-care service, what orientation would you give it?"

Have them provide specifics on schedules for each of the approaches, child transportation, rates that parents would be willing to pay, and so on.


• draft a systematic summary of the information. The next step is to produce a systematic summary of the discussions that took place in each group. The different opinions expressed should be culled out and categorized by theme. Once this has been done, identify what is similar and what is different from one group to the next: for example, how parents' comments break down by disability type or child age. This summary report brings out a highly specific, accurate range of parental needs.
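For organizations with some scripting experience, the categorizing step itself can be supported by a short program. The sketch below assumes the opinions have already been transcribed and tagged by theme and by group; every label in it is invented for illustration.

```python
from collections import defaultdict

# Hypothetical tagged opinions from the focus-group transcripts:
# (group label, theme) pairs. All labels are invented for illustration.
opinions = [
    ("parents, mild impairment", "need a full-day day camp"),
    ("parents, mild impairment", "transportation is a barrier"),
    ("parents, severe impairment", "need a full-day day camp"),
    ("parents, severe impairment", "prefer home-based sitting"),
    ("parents, rural area", "transportation is a barrier"),
]

# For each theme, record which groups raised it, to bring out
# the main areas of agreement and disagreement between groups.
groups_by_theme = defaultdict(set)
all_groups = {group for group, _ in opinions}
for group, theme in opinions:
    groups_by_theme[theme].add(group)

for theme, groups in sorted(groups_by_theme.items()):
    if groups == all_groups:
        print(f"- {theme}: raised in every group")
    else:
        print(f"- {theme}: raised in {', '.join(sorted(groups))}")
```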

If you've never been a group facilitator before, you shouldn't use the focus-group approach without the supervision of someone who has. If you already have facilitating skills, contact a resource person or consult the references at the end of this section to find out the specifics of focus groups.

METHODOLOGICAL CHOICES

Whether to Use Questionnaires, Individual Interviews, or Focus Groups

Questionnaires, individual interviews, and focus groups all have their particular strengths and weaknesses. A number of factors must be considered in deciding to use one instead of another: the population you want information from, its location and availability; the information needs; the available resources; and the organization's practical concerns. The comparison below shows the various aspects to take into account when deciding whether to use questionnaires, individual interviews, or focus groups.


Opt for questionnaires when:

• Population: fairly large.
• Location and availability: the population is geographically scattered; travel is not possible.
• Information needs: you need extensive quantitative data, or you want to compare subgroups (e.g., men/women).
• Available resources: you have access to resource persons who can process and analyze quantitative data.
• Practical concerns: a relatively restricted evaluation budget (an inexpensive tool, depending on scope and the number of respondents; plan for costs relating to statistical analysis, if required).

Opt for individual interviews when:

• Population: small.
• Location and availability: the population is easy to reach (if not, use telephone interviews); travel is possible.
• Information needs: you need information with depth rather than breadth, or have reason to think that people will not fill out questionnaires.
• Available resources: you have access to skilled interviewers and to resource persons who can analyze the contents of the interviews.
• Practical concerns: a larger evaluation budget (time is required to contact respondents, conduct interviews, transcribe interviews in the case of semi-structured interviews, and analyze and process the information).

Opt for focus groups when:

• Population: comprised of heterogeneous groups.
• Location and availability: it is easy to bring participants together; group meetings can be well organized.
• Information needs: you need rich descriptive data, or have reason to think that group interaction will bring out new aspects.
• Available resources: you have access to a skilled facilitator and to resource persons who can analyze the contents of the discussions.
• Practical concerns: a relatively restricted evaluation budget (costs depend on the number of groups; plan for the time required to analyze and process the information).
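The logic of this comparison can even be sketched as a rough checklist. The function below is only an illustration of the factors above; the factor names and the order of the tests are ours, and a real decision should weigh all the considerations discussed in this section.

```python
# A rough sketch of the decision factors in the comparison above.
# Factor names and test order are ours, not the manual's.
def suggest_method(population: str, scattered: bool, need_quantitative: bool,
                   need_depth: bool, can_bring_together: bool) -> str:
    if need_quantitative and population == "large":
        return "questionnaire (mailed)" if scattered else "questionnaire"
    if need_depth and population == "small":
        return "individual interviews"
    if can_bring_together:
        return "focus groups"
    return "combine methods or revisit the design"

# Example: a large, scattered population and a need for quantitative data.
print(suggest_method("large", scattered=True, need_quantitative=True,
                     need_depth=False, can_bring_together=False))
# -> questionnaire (mailed)
```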

Although our discussion has centered on needs assessment, these methods and techniques can also be applied in other circumstances.

b) COLLECTING INFORMATION FROM OTHER COMMUNITY ORGANIZATIONS

The people heading up the needs assessment for Helping Hand could also call on other interested community organizations for information. This would allow them to take stock of the environment in which the child-care service might be set up and to see to what extent the new service would be complementary to existing services. It is important that the new service doesn't conflict with programs offered by other organizations and that the other groups are in agreement with the objectives of the new service. This keeps the door open for possible cooperation with the other organizations.


One way for Helping Hand evaluators to get information is to conduct interviews with key informants in the community.

INTERVIEWS WITH KEY INFORMANTS

This consists of meeting with people who represent significant groups or subgroups and who are knowledgeable about the community. The structured or semi-structured approach can be used with the same interview guide. The contents of the interviews are then analyzed to bring out the different points of view. Generally, interviewing five or six key informants yields adequate information if the territory covered is small.

Choosing Key Informants You can start by word of mouth. After meeting with one person who is familiar with the problem, you can ask him to suggest one or more other persons who could provide the information sought and who might be able to suggest still others. This process is referred to as snowball sampling. You could also decide simply to meet with representatives of different community organizations or groups who are interested in the issue on the local level.
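If it helps to picture the process, snowball sampling amounts to following a chain of referrals until you have enough informants. The sketch below uses an invented referral network; all the names are purely illustrative.

```python
from collections import deque

# Hypothetical referral network: each key informant suggests others.
referrals = {
    "community health worker": ["parents' association president", "daycare director"],
    "parents' association president": ["special-education teacher"],
    "daycare director": [],
    "special-education teacher": ["community health worker"],  # chains can loop back
}

def snowball(start: str, max_informants: int = 6) -> list:
    """Follow referrals breadth-first until enough informants are selected."""
    selected, queue, seen = [], deque([start]), {start}
    while queue and len(selected) < max_informants:
        informant = queue.popleft()
        selected.append(informant)
        for name in referrals.get(informant, []):
            if name not in seen:
                seen.add(name)
                queue.append(name)
    return selected

print(snowball("community health worker"))
```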

The Kind of Information Sought The people heading up the evaluation for Helping Hand would probably ask key informants questions like the following in order to get the information sought:

• Based on your knowledge of the community, what are the needs of parents with respect to child care in the summer for their disabled children?

• Would setting up a new child-care service meet the needs expressed by parents?


• What direction should this new service take? What should its objectives be?

• Would this new service cause conflicts with any programs offered by their organization or another in the community?

• Could the new service be complementary to services already provided by their organization or others in the community?

HOLDING A COMMUNITY FORUM

Organizing a community forum can round out the information. This consists of inviting all concerned individuals and groups to a public meeting where they can express their opinions on the community's specific needs. For a community forum to be successful, you have to:

• try to inform as many people as possible about the location, time, and objectives of the forum;

• name a chairperson to moderate the discussions, and one or more secretaries to take notes;

• focus discussion on the selected theme in order to achieve the forum's goal.

Community forums can be rich sources of diversified information about community needs. On the other hand, they are normally used in conjunction with other methods, since they have a number of limitations. For example, some opinions can dominate forums, while others are underrepresented, and this skews the overall picture.

* * * * * *


Once Helping Hand has finalized data collection with parents and other community organizations on the issue of summer child-care services for disabled children, it is ready to build a service or program to meet the need. Thanks to the information gathered, the organization now has a clear picture of the parents' situation and is aware of the other resources available to them. This means that Helping Hand is in a position to determine the future service's direction and define its objectives. Clearly stating the program and defining its specific objectives are key tasks in ensuring program quality. This question is dealt with in Section 4.

Let's assume that Helping Hand's staff has constructed a program based on two components: a day camp operating on weekdays from 6:30 a.m. to 6:00 p.m. and a sitter service for evenings and weekends. Before implementing the program, Helping Hand has to determine if it can actually be carried out. This is called feasibility assessment.


2 - Feasibility Assessment

Feasibility assessments seek to ensure that the activity is realistic and relevant. This includes determining whether the organization has the human, financial, and material resources required to deliver the program. (see the Introductory Manual)

In carrying out its feasibility assessment for its disabled child-care service, Helping Hand has to answer the following three questions once again:

/ What information do we need? / Who can provide the information? / How to collect the information?

A - WHAT INFORMATION DO WE NEED?

Helping Hand needs to determine the human, material, and financial resources required for the program and decide if it can provide them. Based on this decision, the organization can set the scope of the program. The following issues should be looked at.

Human Resources How many educators will be needed to set up the camp? How many sitters will be needed to set up the sitter service? What qualifications should be set for both? Should any specific training be provided? How should the teams be structured? Will staff be needed for meals and transportation? Who will be responsible for program administration (work schedules, invoicing parents, paying suppliers, and so on)? Who will be responsible for facilitating the camp? And so on.


Material Resources Where can the day camp be set up? Will a significant amount of money have to be invested in modifying existing structures? Is a city permit required? What standards or norms must be met? Is special insurance needed? And so on.

Financial Resources How much will each of the program activities cost? What financial resources are available? Will the program pay for itself? Should you apply for a grant from the Disabled Persons Commission? Are there other grants for programs like this one?
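The "will the program pay for itself" question often comes down to simple break-even arithmetic: how many children must be enrolled before fee revenue covers costs? In the sketch below, all cost figures are invented for illustration; only the $225-per-week day-camp benchmark comes from the rates cited in the sample questionnaire at the end of this section.

```python
# Rough break-even arithmetic for the day camp. All cost figures are invented;
# only the $225-per-week benchmark comes from the rates cited in the sample
# questionnaire at the end of this section.
weekly_fixed_costs = 1800.00     # rent, insurance, coordination (assumed)
variable_cost_per_child = 95.00  # educator time, meals, materials (assumed)
weekly_fee = 225.00              # benchmark day-camp rate

# Each enrolled child contributes (fee - variable cost) toward fixed costs.
margin_per_child = weekly_fee - variable_cost_per_child
break_even_enrollment = weekly_fixed_costs / margin_per_child

print(f"Break-even: about {break_even_enrollment:.0f} children per week")
# If the needs assessment suggests fewer children than this, the program
# will not pay for itself and a grant should be sought.
```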

B- WHO CAN PROVIDE THE INFORMATION?

Information is available from a variety of sources: officers of similar programs, and other organizations and agencies (provincial departments, municipal governments, etc.). A cost study may require professional help. Organization coordinators and workers have to make the final decision on whether the proposed program is relevant and realistic, taking into account the resources required and those available to the organization.

C- HOW TO COLLECT THE INFORMATION?

Collecting this type of data does not involve any instruments, tools, or techniques that practitioners are not already familiar with. Rigor and judgment are musts.


SUMMARY OF THE FEASIBILITY ASSESSMENT

What You Want to Find Out

• Is the proposed program realistic and relevant?

What Information Do You Need?

• Do we have the human, material, and financial resources required for the program?

• What is the scope of the program?

Who Can Provide the Information?

• Heads of other programs
• Public agencies (municipal, provincial government, etc.)
• Specialists, if a cost study is necessary
• Organization personnel

How to Collect the Information?

• Rigorous search for information using the normal techniques and tools

Once the information has been gathered, organization coordinators and workers must review all of the data and decide if implementing the program is realistic. Based on the results of the feasibility assessment, the organization may decide that it would be reasonable to implement only part of the program.


FOR FURTHER READING (See also References at the end of the Manual)

These two references contain valuable information about questionnaires and interviews:

Social Research Methods: Qualitative and Quantitative Approaches. Chapter 10: Survey Research. W. Lawrence Neuman (1997). 3rd Edition, Boston, Allyn & Bacon.

Social Work Research & Evaluation: Quantitative and Qualitative Approaches. Chapter 9: Designing Measuring Instruments; Chapter 14: Survey Research. Richard M. Grinnell, Jr. (1997). 5th Edition, Itasca, F.E. Peacock Publishers, Inc.

On focus groups, we recommend:

Focus Groups: A Practical Guide for Applied Research. Richard A. Krueger (1994). 2nd Edition, Thousand Oaks, Sage Publications.


SAMPLE NEEDS ASSESSMENT QUESTIONNAIRE

Dear Parents:

Helping Hand kindly asks you to fill out the enclosed questionnaire about your child-care needs for next summer.

The questionnaire should take about 15 minutes to complete. Your responses will be kept confidential and anonymous. Just put a checkmark in the appropriate box or write in a short answer where requested.

After you have completed the questionnaire, please return it to us in the self-addressed envelope or drop it into the specially marked box at the office by (date).

We greatly appreciate your cooperation.

Project Coordinator, Helping Hand

Do not write in this space. For coding purposes only.

A - Information About Your Child

1. Give the month and year of birth of the child for whom you currently receive services from Helping Hand.

Month [__]  Year [__]


2. What kind of disability does your child have?

For coding purposes only.

3. Who takes care of your child now while you are at work or if you have to go out? (Put a checkmark next to each answer that applies. Rank your answers by placing a number in the parentheses next to the box, with 1 being the resource most often used.)

One of the parents, if not working [ ] ( )
Relatives or neighbours [ ] ( )
A sitter at home [ ] ( )
Child goes to a sitter [ ] ( )
Child goes to a daycare [ ] ( )
Child goes to a day center [ ] ( )
Child goes to a special school [ ] ( )
Other. Specify: [ ] ( )


4. Who took care of your child last summer, excluding your vacation? (Put a checkmark next to each answer that applies. Rank your answers by placing a number in the parentheses next to the box, with 1 being the resource most often used.)

Child's father or mother [ ] ( )
Relatives or neighbours [ ] ( )
A sitter at home [ ] ( )
Child went to a sitter [ ] ( )
Child went to a daycare [ ] ( )
Child went to a day center [ ] ( )
Other. Specify: [ ] ( )

For coding purposes only.

5. Last summer, excluding your vacation, did you have trouble arranging child care for your child?

Yes [ ]
No [ ]

6. If yes, briefly describe your trouble.


B - Information About Your Child-care Needs for Next Summer

7. Next summer, excluding your vacation, who will take care of your child? (Put a checkmark next to each answer that applies. Rank your answers by placing a number in the parentheses next to the box, with 1 being the resource most likely to be used.)

For coding purposes only.

Child's father or mother [ ] ( )
Relatives or neighbours [ ] ( )
A sitter at home [ ] ( )
Child will go to a sitter [ ] ( )
Child will go to a daycare [ ] ( )
Child will go to a day center [ ] ( )
You have no one to take care of your child [ ] ( )
Other. Specify: [ ] ( )

8. Next summer, if Helping Hand were to set up a child-care service, would you enroll your child in it?

Yes [ ]   GO ON TO QUESTION 10
No [ ]
Not sure [ ]


9. If no or not sure, please explain:

For coding purposes only.

10. If yes, what type of child care would be the most appropriate for you? (Put a checkmark next to each answer that applies. Rank your answers by placing a number in the parentheses next to the box, with 1 being the most appropriate resource.)

Home child care [ ] ( )
Day camp [ ] ( )
Overnight camp [ ] ( )
Other. Specify: [ ] ( )

11. During which weeks next summer would you like to have child care for your child? (Put a checkmark next to each answer that applies.)

Week of June 23 [ ]
Week of June 30 [ ]
Week of July 7 [ ]
Week of July 14 [ ]
Week of July 21 [ ]
Week of July 28 [ ]
Week of August 4 [ ]
Week of August 11 [ ]
Week of August 18 [ ]
Week of August 25 [ ]


For coding purposes only.

12. Next summer, on what days and at what times would you like to have child care for your child? (Put a checkmark next to each answer that applies. Fill in times for which the service is required.)

Day of the Week        Time

[ ] Monday     ______ to ______
[ ] Tuesday    ______ to ______
[ ] Wednesday  ______ to ______
[ ] Thursday   ______ to ______
[ ] Friday     ______ to ______
[ ] Saturday   ______ to ______
[ ] Sunday     ______ to ______

13. Can you provide transportation for your child?

Yes [ ]
No [ ]


14. For your information, these are the weekly rates charged for child care for disabled children in other areas:

Home child care $180 per week

Day camp $225 per week

Overnight camp (inc. room and board) $335 per week

What is the maximum amount that you would be willing to pay per week?

Home child care $______ per week

Day camp $______ per week

Overnight camp (inc. room and board) $______ per week

For coding purposes only.

C - General Information About the Family

15. In what kind of family does the child for whom you are receiving services from Helping Hand live?

With both his natural parents [ ]
With his mother [ ]
With his father [ ]
Joint custody [ ]
Foster family [ ]


16. Is the child's father employed?

Yes [ ]
No [ ]

If yes, is the employment:

Part-time [ ]
Full-time [ ]
Seasonal [ ]
Other. Specify:

17. Is the child's mother employed?

Yes [ ]
No [ ]

If yes, is the employment:

Part-time [ ]
Full-time [ ]
Seasonal [ ]
Other. Specify:

For coding purposes only.

Thank you for having taken the time to fill out this questionnaire. The information that you have provided lets us know what your needs are and helps us to plan better services for your children.


SECTION 4

PROCESS EVALUATION OR FORMATIVE EVALUATION


SUMMARY

Scenario B: The Tiny Tots Organization

1- Process Evaluation A- What information do we need? B- Who can provide the information? C- How to collect the information?

2- Developing a Logical Program Framework

3- Formulating Objectives

Summary of the Process Evaluation

a) Collecting information from program coordinators and staff

b) Collecting information from participants

For Further Reading

Sample Satisfaction Survey Questionnaire

ILLUSTRATED TOOLS AND TECHNIQUES

— Developing a Logical Framework — Formulating Measurable Objectives — Satisfaction Survey Questionnaire


SCENARIO B: THE TINY TOTS ORGANIZATION

School staff in an underprivileged area noticed that a large number of children were entering kindergarten with significant developmental delays in adjustment and learning. To deal with this problem, Tiny Tots decided to set up a child stimulation program.

The goal of the program is to prevent adjustment difficulties among neighbourhood children aged 3 and 4 years entering kindergarten and to reduce developmental delays. The practitioners determined that the program should have the following objectives:

1) Set up activities promoting child growth and development in the physical, emotional, social, and cognitive areas;
2) Develop the self-esteem of the children;
3) Develop the language skills of the children;
4) Develop the psychomotor skills of the children;
5) Make parents aware of the specific needs of their children;
6) Give children access to educational material.

To carry out the program, the workers developed three phases of activities. The first dealt with the children (daily morning stimulation workshops); the second with both parents and children (joint workshops); and the third with parents alone (talks, discussions, and meetings with other parents and with educators).

Since it wants to provide a quality program, Tiny Tots decided to carry out an implementation evaluation before the program actually gets under way.


Our second scenario illustrates process evaluation (also referred to as implementation or formative evaluation).

Through this example, we will discuss developing a logical framework for a program and formulating objectives, which are two key components. While the logical framework is not, strictly speaking, an evaluation tool, it can be extremely useful in developing or systematizing general program structure. We will also take a look at developing a questionnaire for evaluating user satisfaction.

1- Process Evaluation

^ Process or implementation evaluation (sometimes referred to as formative evaluation) is used to ensure that the organization is achieving what it planned to, to monitor how it is being achieved, and, if it is not, to account for any discrepancies. (see the Introductory Manual)

The goal of process evaluation is to improve how the program is being developed or conducted.

How should the staff of Tiny Tots go about conducting a process evaluation? They should start off by asking the same three basic questions:

/ What information do we need? / Who can provide the information? / How to collect the information?


A - WHAT INFORMATION DO WE NEED?

The organization has to look at three aspects:6

The Objectives First of all, the objectives must be well defined. There must also be a logical connection between the objectives and program activities. Specifically, you have to examine how activities allow achievement of objectives and how achievement can be measured.

Program Functioning You also need to examine the overall characteristics and functioning of the program from the viewpoint of program workers and participants. Problems and adjustments to be made are identified to help the program better achieve its objectives.

Observations An accounting must be made of the program's strengths and weaknesses. In addition, it should be noted whether the program is more suitable for certain kinds of participants or whether different workers are applying the program differently.

B- WHO CAN PROVIDE THE INFORMATION?

Program coordinators, staff, and participants comprise the main sources of information in formative evaluation.

Program Coordinators and Staff These two groups can help specify program objectives and goals and ensure that they are clearly stated. They can also provide access to the program plan and various program documents (participant sheets, attendance records, logs, and so on).

6 See the Introductory Manual (Section 5).


Coordinators and staff are also valuable sources of information on how the program is actually being conducted, on the interaction between the various resources, and on the cooperation with other community organizations.

Program Participants You will want to find out participant characteristics. In other words, who is affected by the program and what do they experience while in the program. Participant satisfaction could also be looked at.

C- HOW TO COLLECT THE INFORMATION?

A number of methods can be used to collect information:

• analysis of available documents;
• questionnaires or interviews with program coordinators, workers, and participants.

2- Developing a Logical Program Framework

When concentrating on enhancing program functioning, you need to ensure that program objectives are clear and explicit, and that they are linked to the different program components and activities. Sometimes, when objectives are poorly stated or vague, they need to be reformulated. In fact, it may be necessary to add, delete, or alter them, which is one of the possibilities with formative evaluation.7

Using a logical framework matrix is one way of taking an explicit accounting of program objectives and bringing out the linkages

7 This doesn't hold true for a program that has already been developed. Take the Nobody's Perfect program as an example: it was established with a well-defined goal, objectives, and activities, forming a coherent whole. Reformulating its objectives would therefore be inappropriate.


between the program objectives and components. This very useful tool can serve to systematically describe the program and make evaluation easier. It's neither an approach nor a model. A program logical framework is rather a diagram that shows the relationships between program processes and outcomes.

Since a program can be broken down into its components, a logical framework can be used to establish how each component contributes to achieving program objectives. A logical framework can be presented in table form, as shown below.

• Main Components (brief description of the activities or means used for each program component)

• Implementation Objectives (defined using action verbs such as "provide," "give," and "deliver")

• Processes, Services Produced, or Outcomes (indicators of services or activities, and activity characteristics)

• Specific Objectives or Short-term Effects (defined using action verbs such as "increase," "decrease," "maximize," or "prevent")

• Success Indicators (measurable indicators that show whether the anticipated change has occurred)

Each of these elements forms a column of the table, and each program component or activity (A, B, C, and so on) forms a row.


The elements of a logical framework are described below.

MAIN PROGRAM COMPONENTS: To identify the main program components, the evaluator asks the question: "What are the main activities or methods used to achieve program objectives?"

PROGRAM IMPLEMENTATION OBJECTIVES OR PROCESS OBJECTIVES: Program implementation objectives indicate what the program wants to accomplish with respect to each of its components. Objectives should normally contain an action verb, such as "provide," "give," or "deliver," which clearly describes the targeted achievement.

PROCESSES, SERVICES PRODUCED, OR OUTCOMES: These are quantitative indicators of the services delivered and the characteristics of the individuals receiving services. Examples are the number of activities in a specific program phase, main participant characteristics, and participant attendance.

SPECIFIC OBJECTIVES OR SHORT-TERM EFFECTS: These are specific objectives described in terms of results or effects to be achieved for program participants, deriving from each main program component or activity. They should also contain action verbs such as "increase," "decrease," "maximize," and "prevent."

SUCCESS INDICATORS: These are measurable indicators that enable us to determine if the anticipated outcome has actually been achieved. They provide indications of the extent to which objectives have been reached in terms of observed effects or outcomes.
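Although the manual presents the logical framework as a table, some teams may find it handy to keep the same information as a small structured record. The sketch below is one way of doing that; the field names and the Tiny Tots entries are our own illustration, not part of the framework itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    """One row of a logical framework matrix (field names are ours)."""
    name: str
    implementation_objective: str  # action verbs: "provide", "give", "deliver"
    outputs: List[str] = field(default_factory=list)              # processes / services produced
    specific_objectives: List[str] = field(default_factory=list)  # short-term effects
    success_indicators: List[str] = field(default_factory=list)   # measurable signs of change

workshops = Component(
    name="Child stimulation workshops",
    implementation_objective="Deliver daily morning stimulation workshops for 10 weeks",
    outputs=["number of workshops held", "attendance per workshop"],
    specific_objectives=["Increase the language skills of the children"],
    success_indicators=["change in scores on a language-development checklist"],
)
print(workshops.name, "->", workshops.success_indicators)
```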


EXAMPLE: TINY TOTS' CHILD STIMULATION PROGRAM

Let's put ourselves in the place of the coordinators and practitioners of the Tiny Tots organization, who are drafting a logical framework to be able to systematically describe the program and facilitate the task of implementation evaluation.

To start off, we need to identify the main program components. The program has three phases: child stimulation workshops, joint parent-child workshops, and activities for parents alone.

• Main Components (Brief description of the activities for each program component or means used)

Child Stimulation Workshops: Daily morning workshops (8:30 a.m. to 11:30 a.m.) for 10 weeks, comprising practical exercises, games, and outings.

Joint Parent-child Workshops: Joint workshops involving parent and child for 10 weeks, comprising parent-child activities and meetings with educators.

Parent Talks: Talks for parents and discussions (three per session).

The next step is to examine the program objectives. The child stimulation program is described as having three elements: program goal, objectives, and related activities.

The program goal is twofold: prevent adjustment problems upon entering kindergarten and reduce developmental delays in children.

The project personnel then discuss the objectives as formulated in the program description. Arriving at a consensus on objectives is not always easy. While this takes time and energy, it's the best way to ensure program coherency and enable evaluation. It may be necessary


to make adjustments to bring things into line with objectives imposed by granting agencies, and this doesn't always make the task any easier.

The coordinators and staff then take a close look at the interpretation that each gives to the wording and the linkages between the program goal and its objectives. The following could be the result of their discussions.

OBJECTIVE 1) Set up activities promoting child growth and development in the physical, emotional, social, and cognitive areas.

^ Setting up activities is not an objective per se, but rather a means for achieving an objective; it's a process objective. Moreover, can you really claim that the targeted objective is to promote every aspect of child growth and development?

OBJECTIVE 2) Develop the self-esteem of the children.

^ This objective states an outcome to be achieved, but it is somewhat vague and not specific to the child stimulation program.

OBJECTIVE 3) Develop the language skills of the children.
and
OBJECTIVE 4) Develop the psychomotor skills of the children.

^ These objectives state outcomes to be achieved in very specific spheres of child development. The logical link to the program is evident, but are the objectives too narrow?


OBJECTIVE 5) Make parents aware of the specific needs of their children.

^ The expression "make aware" is somewhat vague. Does it mean increasing parental knowledge about child development? If so, this should be clearly indicated.

OBJECTIVE 6) Give children access to educational material.

^ As with the first objective, this is a process objective.

Having this discussion let the program coordinators and staff see for themselves that the objectives were too vague and poorly stated. As a consequence, they decided to reduce the number of objectives and to restate them.


3- Formulating Objectives

HOW TO FORMULATE GENERAL OBJECTIVES AND SPECIFIC (OPERATIONAL) OBJECTIVES

A general objective states a targeted outcome. It gives more focus to the set goal. A single goal may have more than one general objective.

^ A specific objective (also called an operational objective) specifies the changes to be brought about in the target population. It must be measurable and verifiable; it should also be reviewed and adjusted regularly.

Formulating the general objective(s) requires that you look ahead and ask yourself the question:

What results do we want to see in the target population at the end of the program?

A general objective should give an overview of the program. It should outline the skills, knowledge, and abilities that you want to promote in the clientele.

Formulating a specific objective requires even more precision; you have to ask yourself:

What changes should we see in the target population at the end of the program if the objectives have been achieved?


A specific objective must be concrete. It provides more focus and complements the general objective. It makes the general objective meaningful in terms of intervention and evaluation. It must be formulated in terms that describe observable (and therefore measurable) behaviour.

The logical link between the goal, general objectives, and specific objectives is essential. The whole must be coherent.

GOAL
├── GENERAL OBJECTIVE
│   ├── Specific objectives
│   └── Specific objectives
└── GENERAL OBJECTIVE
    ├── Specific objectives
    └── Specific objectives

The Three Essential Rules for Formulating Objectives

Three rules must guide you in formulating objectives:

1. Design objectives as a function of the clientele, because the clientele is the reason the program exists (the Who?).

2. Clearly state the outcome that you want to see in the clientele as a result of program activities (the What?).

3. Use a verb that designates an action.

^ Some Action Verbs: Improve, maximize, increase, reduce, give, deliver, provide, support, promote, acquire, attain, develop, stimulate, equip, resolve, prevent, and create.


Writing Specific Objectives

Since specific objectives must be even more precise, they should identify concrete behaviour, actions, reactions, verbal responses, and tasks that are measurable with respect to changes due to the intervention.

The anticipated change must be specified through the addition of two other elements:

• Specify the conditions for achievement (the How?, the Where?, and the When?).

• Indicate realistic success indicators (To what extent?).

^ In order to be evaluable, an objective must have three qualities:

• it must be clear: the objective provides indications about what should be seen as an observable consequence of a program;

• it is specific: the objective indicates that once it has been achieved, the situation will be obviously different;

• it is measurable: indicators can be found that enable us to measure the difference between the initial situation and the final situation. In other words, the difference (or lack thereof) can be demonstrated. (see the Introductory Manual, Section 5)

Determining Success Indicators

Setting success indicators is not always obvious at the start. Coordinators and staff can make the task easier by asking themselves the following questions:


• What do we feel will be an indication of the success of our work?

• What reasonable results for the client can we expect from the program?

• On the basis of what changes in client attitude or behaviour should we judge our success?

• Can we define different degrees of success? How can we measure this progress? Against what can it be measured?

Even though the success indicators are determined within the logical framework in a formative evaluation process, they are actually measured when the time comes to evaluate program success. This topic will be dealt with in Section 5.
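When the time to measure does come, the comparison can be as simple as a few lines of code. Here is a minimal sketch in Python; the indicator names, targets, and scores are hypothetical examples, not part of the Tiny Tots program:

    # Minimal sketch: compare measured results against the success
    # indicators the team has set. All names and numbers are hypothetical.
    success_indicators = {
        "words used in a picture-naming task": 20,      # target minimum
        "group instructions followed per session": 3,
    }

    measured_results = {
        "words used in a picture-naming task": 24,
        "group instructions followed per session": 2,
    }

    for indicator, target in success_indicators.items():
        result = measured_results[indicator]
        status = "met" if result >= target else "not met"
        print(f"{indicator}: {result} (target {target}) -> {status}")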

EXAMPLE: FORMULATION OF OBJECTIVES FOR THE TINY TOTS' CHILD STIMULATION PROGRAM

In reformulating their program objectives, the coordinators and staff of the Tiny Tots' organization asked themselves this question:

At the end of the child stimulation program, what results should we be able to see in the children and parents who participated?

Keeping in mind the logical link that must exist between the program goal (preventing adjustment problems upon entering kindergarten and reducing developmental delays in children) and the objectives, they came up with three ideas which they feel best describe the desired results:


1) The children should exhibit greater language skills, higher psychomotor development, and more fine motor skills.

2) The children should have the background needed to go into kindergarten.

3) The parents should be involved in the development of their child.

To develop specific objectives that truly refine and round out the general objectives, the program staff must formulate them in terms that describe observable behaviour, which means that they are measurable. The coordinators and staff must also specify what they feel are reasonable achievement conditions and performance criteria. For each general objective, they should ask themselves the following question:

At the end of the child stimulation program, what changes should we be able to see in the children and parents who participated to determine whether we have attained our objectives?

Here are the results of their discussion.

With respect to General Objective 1) (The children should exhibit greater language skills, higher psychomotor development, and more fine motor skills), these are the changes that should be observed in the children if the objective has been achieved:

Specific Objectives
• At the end of the program, the child should have increased his language skills to nearly that of the norm for his age group.
• At the end of the program, the child should have developed psychomotor and fine motor skills close to the norm for his age group.


With respect to General Objective 2) (The children should have the background needed to go into kindergarten), the targeted changes should be in group behaviour and autonomy. The children taking part in the program did not attend daycare and have not developed the learning that would prepare them for sitting still, listening to instructions, tying their own shoelaces, dressing themselves, and so on.

Specific Objectives
• At the end of the program, the child should have learned to listen to instructions given to his group.
• At the end of the program, the child should have reached the level of autonomy required to attend kindergarten.

With respect to General Objective 3) (The parents should be involved in the development of their child), the staff feel that there must be a concrete change in the attitude of the parents towards the child. Therefore, the parental behaviour that they would like to see is:

Specific Objectives
• At the end of the program, the parents should show increased and better interaction with their child.

EXAMPLE: LOGICAL FRAMEWORK FOR EVALUATING THE TINY TOTS' CHILD STIMULATION PROGRAM

The Tiny Tots coordinators and staff now have all the elements they need to complete the logical framework for their child stimulation program.


CHILD STIMULATION PROGRAM

Main Components (brief description of the activities for each program component or means used):
• Child Stimulation Workshops: each morning from 8:30 to 11:30 a.m. Duration: 10 weeks.
• Joint Parent-Child Activities: once a week from 2:00 to 4:00 p.m. Duration: 10 weeks.
• Talks for Parents: 3 talks per 10-week session, from 9:00 to 11:30 a.m.

Implementation Objectives:
• Child Stimulation Workshops: set up a workshop comprising practical exercises (language, psychomotor skills, fine motor skills), educational games, and group outings.
• Joint Parent-Child Activities: set up a joint workshop comprising educational games, parent-child bonding and modeling, storytelling, outings, and parent-educator meetings.
• Talks for Parents: organize talks comprising information and exchanges on various aspects of child development.

Processes, Services Produced, or Outcomes (indicators of services or activities, participant attendance, and activity characteristics):
• Child Stimulation Workshops: 22 children enrolled in the program; 18 attend on a regular basis, the others less so. One child has left the program.
• Joint Parent-Child Activities: an average of 15 parents, mostly mothers, attend the joint workshops. All take part in at least 4 workshops. At least 16 mothers for each outing.
• Talks for Parents: talk attendance: Talk 1, 17 parents; Talk 2, 12; Talk 3, 20. The parents stay to the end of the talks. Some participate more than others. Greater number of fathers than in the workshops.

Specific Objectives or Short-term Effects:

Concerning the Child
By the end of the program, the child should have:
- increased his language skills to nearly that of the norm for his age group;
- developed psychomotor and fine motor skills close to the norm for his age group;
- started following group instructions;
- reached the level of autonomy required to attend kindergarten.

Concerning the Parents
By the end of the program, the parents should show increased and better interactions with their child.

Success Indicators (measurable indicators that show whether the anticipated change has occurred):

Concerning the Child
Modifications in the level of language development, psychomotor development, fine motor skills, capacity for interaction, and autonomy.

Concerning the Parents
More frequent and appropriate interactions with the child.


The program logical framework that the Tiny Tots coordinators and staff have put together will be very useful for them in their formative evaluation process. It will enable them to clearly see how the program is working and how the activities flow from the various objectives.

The framework can help in mapping out the program. By regularly updating the third row, which gives an accounting of processes, services produced, or outcomes, they will be able to monitor participant activities, involvement, and attendance as well as the services provided. New columns can be added to the framework if new activities are started up.
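For organizations that record attendance electronically, the bookkeeping behind that third row can be automated with a few lines of code. A minimal sketch in Python, using a hypothetical attendance log (the names and the 80% cutoff are illustrative only):

    # Minimal sketch: summarize workshop attendance from a hypothetical log.
    from collections import Counter

    TOTAL_WEEKS = 10  # length of a Tiny Tots session

    # One entry per week, listing the children present that morning.
    attendance_log = [
        {"week": 1, "present": ["Ana", "Ben", "Chloe"]},
        {"week": 2, "present": ["Ana", "Chloe"]},
    ]

    counts = Counter()
    for session in attendance_log:
        counts.update(session["present"])

    for child, weeks in counts.most_common():
        rate = weeks / TOTAL_WEEKS
        label = "regular" if rate >= 0.8 else "irregular"  # 80% cutoff is illustrative
        print(f"{child}: {weeks}/{TOTAL_WEEKS} weeks ({rate:.0%}, {label})")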

The framework is a very valuable instrument for the organization, because formative evaluation involves constantly reviewing the program throughout its life.


SUMMARY OF THE PROCESS EVALUATION

What You Want to Find Out

• Is the organization achieving what it planned to? How is the work being done? Is there a discrepancy between what was planned and what has been achieved? If yes, how can it be accounted for?

What Information Do You Need?

The Objectives
• What are the program objectives?
• How do program activities lead to attainment of the objectives?
• Can achievement be measured?

Program Functioning
• What are the program characteristics (participants, staff, activities, administration, and materials)?
• How is the program functioning?
• What can be done to ensure that the program better reaches the target population and uses all the resources at its disposal optimally?
• What problems have been encountered in the program and how were they resolved?
• What adjustments were made to better attain the objectives?
• What are the participants experiencing in the program?

Observations About the Program
• Is the program or parts of it better suited to some participants than to others?
• What are the program's strengths and weaknesses?
• What activities or combinations of activities best correspond to each objective?
• What variations are there in program application from one worker to another?

Who Can Provide the Information?

• Program coordinators and staff
• Program participants

How to Collect the Information?

• Analysis of written documents: program description, job descriptions, participant enrollment forms, logbooks, activity attendance sheets.
• Questionnaire and interview with coordinators and staff focusing on implementation, functioning, and cooperation between staff or other organizations.

• Participant questionnaire dealing with their characteristics and reasons for participating in the program.

• Participant questionnaire dealing with their satisfaction with the activities and the program.
• Collective or individual interviews dealing with what drew them to the program, what they experienced in the program, and how they feel about the program.


A) COLLECTING INFORMATION FROM COORDINATORS AND STAFF

The information to be collected from coordinators and staff must shed light on what the organization planned on doing and what it actually accomplished. This information can cover many different aspects, which need to be selected based on project nature and features.

• Planning and Statement of Activities
On what are the activities, outings, and talks based? What links program objectives to activities? Do we carry out all the planned activities and components announced? If not, why not? What is being done instead? What is the justification for the change? How will the new activities or components allow the program to achieve its objectives?

• Contents of Activities
What concretely is being done with the children each week in the workshops? Is the activity plan followed each week? If not, why not? What activities do the children take part in instead? What is the justification for selecting these activities instead of others? Do the discussions with parents focus on the planned theme or on other topics? If so, what other topics? How do the themes fit into achieving program objectives?

• Activity Functioning
How are the activities concretely carried out? Do the workshops always start and end on time? If not, why not? Do educators and parents meet as often as planned? If not, why not? Under what conditions do the meetings take place? Can the educators and parents talk quietly without interruption and without the child being present? Do educators make entries in their logbooks to follow up on the children as planned? If not, why not? How is follow-up then implemented? What problems are encountered in the program? What improvements could be made?

• The Participants
What are the characteristics of the parents and the children (age, sex, domicile, etc.)? How did they hear about the program or who referred them? Who generally accompanies the child? Does one parent take part in activities more often than the other does? How many participants have dropped out of the program? Is there a waiting list for the next session? How do the participant groups differ?

• The Program Staff
How many people work with the parents and children? How are they paid? How many are volunteers? What are their characteristics (age, sex, etc.)? What is their level of education? What kind of previous volunteer and work experience do they have? What are their responsibilities under the program and how are tasks divided up? For example, what specifically are the tasks of educators prior to, during, and after the workshops?

• Program Administration
Which organization sponsors the program? How are decisions regarding the program made? How is information shared among program staff? How often does the staff meet? Who attends staff meetings? What do they discuss? What is done to publicize the program in the community? How are participants recruited? What are the intake and registration procedures when a new parent wants to join the program?

• Cooperation with Other Community Organizations
What kind of contact is maintained with other organizations involved in the program? Are participants referred to other groups when necessary (such as a support group)? Is there contact with the referring person (nurse, CLSC social worker, or other) to get a clearer picture of the situation? If so, who does it and how is the information used?

• Material Aspects of the Program
Where does the program take place? Are facilities adequate? Are they easy to get to and get into for participants? What kind of equipment is used (furniture, educational material, etc.)? Is there adequate material or equipment? If not, what impact does that have on program functioning? What solutions could be found?

There are three ways to collect this information: analyzing available documents, using questionnaires, and conducting interviews with coordinators and staff.

⇒ Analysis of Available Documents
Most programs have a wide variety of documents that, if properly maintained and updated, can be gold mines of information for formative evaluation. These can include the official program description, the last grant application, program advertising, participant registration sheets, attendance records, logbooks, staff job descriptions, minutes of staff and Board of Directors' meetings, and so on.

These documents provide accurate, valuable information on what the program was designed to achieve and what it actually produced.

⇒ Questionnaires for Coordinators and Staff
You can ask coordinators and staff to complete a questionnaire that can round out the information collected from document analysis.


For example, you could ask all practitioners to fill out a questionnaire about their characteristics (age, education, experience, seniority in the project, etc.), about their specific duties in the program, about their involvement in different components of the program, about how they apply the program, and about cooperation with other community organizations.

⇒ Interviews with Coordinators and Staff
You can also decide to interview a certain number of coordinators and staff. You can select informants according to various criteria that are program-specific. The important thing is to collect information from a variety of sources that can provide different and complementary data about the program. To illustrate, you could interview a few staff members who have been in the program for a while, who sit on the Board of Directors or took part in drafting the last grant application, or who lead different activities or are responsible for various components. The purpose is to collect information that will yield a systematic description of how the program was developed, how the activities or program components are carried out, possible changes in activity contents, and the reasons underlying the changes. They could also be asked to talk about the difficulties they have encountered in the program and their solutions.

See Section 3 for ideas on how to construct a questionnaire and interview checklist.


B) COLLECTING INFORMATION FROM PARTICIPANTS

The information collected from participants provides data on their characteristics, what led them to participate in the program, what they are experiencing in the program, and their thoughts on the program. The information sought should cover the following aspects:

• What are the characteristics of the clientele? Does the program reach the targeted population? What draws the participants to the program?

In order to understand the clientele, you need to have access to data such as participant age, sex (parents and children), family status, employment type, income, place of residence, and other relevant, descriptive information. It would also be of value to know how the participants heard about the program, who referred them, and the reasons why they decided to register for the program.

This kind of information provides the basis for determining if the program is actually reaching the target population, that is, the age groups or family types that it was designed to help.

It is also a good idea to contact people who have dropped out of the program to collect information about their characteristics and the reasons why they left. This data can be important in adjusting the program, if required, or in recruiting new participants.

Registration sheets can yield the data about participant characteristics, provided, of course, they contain enough information. Otherwise, the parents can be asked to fill out a short questionnaire.
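Where registration data are entered into a computer, checking target-population reach becomes a simple tabulation. A minimal sketch in Python, with hypothetical registration records; only the target ages (3 and 4) come from the Tiny Tots scenario:

    # Minimal sketch: check whether the program reaches its target population.
    # The registration records below are hypothetical examples.
    from collections import Counter

    registrations = [
        {"child_age": 3, "family_type": "single parent", "referred_by": "CLSC"},
        {"child_age": 4, "family_type": "two parents", "referred_by": "word of mouth"},
        {"child_age": 5, "family_type": "two parents", "referred_by": "CLSC"},
    ]

    TARGET_AGES = {3, 4}  # Tiny Tots serves 3- and 4-year-olds

    in_target = sum(r["child_age"] in TARGET_AGES for r in registrations)
    print(f"{in_target}/{len(registrations)} children are in the target age group")

    # Frequency table of referral sources, useful for recruitment decisions
    for source, n in Counter(r["referred_by"] for r in registrations).items():
        print(f"referred by {source}: {n}")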


• What do participants experience in the program?

Information on what participants are experiencing in the program can be obtained from logbooks filled out by program staff in contact with the parents. To be useful, however, the logbooks should record signs that the parents are actively engaged when participating in activities. For example, do the parents arrive on time for joint workshops, meetings, and talks? Do they stay until the end of the activity? Do they ask questions? Do they do the suggested activities?

Some parents could also be interviewed, either individually or in groups, about their experiences in the program: how they see the various activities, what the program brings to their relationship with their children, the growth they have seen in their children since the start of the program, and so on.

• What is the level of participant satisfaction? What adjustments are needed?

Client satisfaction can be measured at different times, such as at the start of each joint workshop, at the end of the first five workshops, or at the end of the program.

What you must not do is attempt to evaluate a workshop or program by asking the individuals around the table if they are satisfied with it or not. Chances are that the responses will be biased. Furthermore, if a person directly associated with the program — a facilitator or coordinator, for example — asks the question, the participants will probably give only favourable responses.

You may be able to get enough information to judge participant satisfaction with a workshop through a few anonymous, written questions answered by parents at the end of the activity.


If you want to have feedback on the entire program, on the other hand, you could construct a questionnaire.

For more specific information about participant satisfaction, you could deal with activities on a one-by-one basis or even break activities down into components. For example, you could try to find out how often parents and/or children take part in each program activity.

Over the course of the 10-week program, what would be the average number of times that your child attended the child stimulation workshops?

Always □
4 mornings a week □
3 mornings a week □
1 or 2 times a week □
Never □

You would also like to know how useful the parents found each of the activities with respect to program objectives. To illustrate, in the case of the child stimulation program, parents could be questioned about every aspect of the child's development targeted by the workshop. The questions would deal with language, psychomotricity, fine motor skills, behaviour of the child in groups, and autonomy. The parents should be given concrete examples of behaviour on which they can make a judgment.


In your opinion, how useful were the workshops in improving your child's physical movement (psychomotricity): throw and catch a ball, hop on one foot, ride a tricycle, etc.?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □

In your opinion, how useful were the workshops in improving your child's group behaviour (follow instructions from an educator, keep his place in line, remain seated, etc.)?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □

To get a concrete idea of how an activity has been useful or why participants are satisfied, we recommend asking an open question, which will yield valuable qualitative information.
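If the anonymous forms are later transcribed, tallying closed responses like the ones above takes only a few lines of code. A minimal sketch in Python, with hypothetical response data:

    # Minimal sketch: tally anonymous responses to a 4-point usefulness
    # question. The response data are hypothetical.
    from collections import Counter

    SCALE = ["Very useful", "Fairly useful", "Not very useful", "Not at all useful"]

    responses = [
        "Very useful", "Fairly useful", "Very useful", "Not very useful",
        "Very useful", "Fairly useful",
    ]

    tally = Counter(responses)
    for option in SCALE:
        n = tally.get(option, 0)
        print(f"{option}: {n} ({n / len(responses):.0%})")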


In your opinion, how useful were the workshops in bringing you closer to your child in the games, outings, story time, etc.?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □

Describe what you liked most about the joint parent-child workshops.

Describe what you liked least about the joint parent-child workshops.

A sample satisfaction questionnaire can be found at the end of this section. You might also like to refer to the guidelines provided in Section 3 to develop your own questionnaire.


FOR FURTHER READING (See also References at the end of the Manual)

This reference contains useful guidelines and examples about writing goals and objectives:

Kettner, Peter M., Robert M. Moroney, and Lawrence L. Martin (1990). Designing and Managing Programs: An Effectiveness-Based Approach. Chapter 6: "Setting Goals and Objectives". Newbury Park: Sage Publications.


SAMPLE SATISFACTION SURVEY QUESTIONNAIRE

Tiny Tots sends this satisfaction-survey questionnaire to parents a week after the end of the program.

Dear Parents:

Over the last few weeks, you have taken part in child stimulation workshops set up by Tiny Tots. We would ask you to take a few minutes to fill out this questionnaire to provide us with feedback on your experience and that of your child in the program. We would welcome your suggestions and any improvements you think could be made to the program. Since we will be offering this program to many other parents such as yourself, we want to keep on making it better!

The questionnaire should take about 15 minutes to complete. Your responses will be kept confidential and anonymous. Just put a checkmark in the appropriate boxes or write in a short answer where requested.

After you have completed the questionnaire, please return it to us in the self-addressed envelope or drop it into the specially marked box at the office by (date).

We really appreciate your cooperation.

The Tiny Tots Project Coordinator


Do not write in this space. For coding purposes only.

A- Information about Your Participation and That of Your Child

1. Which session did you and your child register for?

The session starting in March: □

The session starting in May: □

2. How did you hear about the program?

3. Over the course of the 10-week program, what would be the average number of times that your child attended the child stimulation workshops?

Always □
4 mornings a week □
3 mornings a week □
1 or 2 times a week □
Never □

4. Did your child ever miss a whole week of activities?

Yes □

No □


For coding purposes only.

5. Over the course of the 10-week program, about how many joint parent-child workshops (afternoon workshops, once a week) did you attend?

Every one □
8 or 9 □
5, 6, or 7 □
3 or 4 □
1 or 2 □
None □

6. Over the course of the 10-week program, how many talks-discussions did you attend?

I attended _____ talks.

7. Over the course of the 10-week program, about how many individual meetings did you have with an educator?

None □
1 or 2 □
3 or 4 □
5 or more □


B- Information About Your Evaluation of the Child Stimulation Workshops

8. How satisfied are you overall with the quality of the workshops that your child attended?

Very satisfied □
Satisfied □
Dissatisfied □
Very dissatisfied □

9. How well did your child get along with the workshop leaders?

Quite well (no problems) □
Well (nearly no problems) □
Not very well (occasional problems) □
Not at all (constant problems) □

For coding purposes only.

10. In your opinion, how useful were the workshops in improving your child's language skills (the number of words used, the right choice of words, pronunciation)?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □


For coding purposes only.

11. In your opinion, how useful were the workshops in improving your child's physical movement (psychomotricity): throw and catch a ball, hop on one foot, ride a tricycle, etc.?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □

12. In your opinion, how useful were the workshops in improving your child's manual skills (fine motor skills): using scissors, threading small beads, following a line with a pencil, etc.?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □

13. In your opinion, how useful were the workshops in improving your child's behaviour in a group: follow the educator's instructions, keep his place in line, remain seated, etc.?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □


14. In your opinion, how useful were the workshops in improving your child's autonomy: tie his own shoelaces, get dressed by himself, take care of his belongings, etc.?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □

15. In your opinion, is your child ready to start kindergarten?

Yes □

No □ If not, why not?

16. Provide your comments or suggestions about the child stimulation workshops (schedules, games, outings, etc.).


C- Information About Your Evaluation of the Parent-Child Workshops

17. How satisfied are you overall with the quality of the workshops that you attended?

Very satisfied □
Satisfied □
Dissatisfied □
Very dissatisfied □

18. In your opinion, how useful were the workshops in bringing you closer to your child in the games, outings, story time, etc.?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □

19. Describe what you liked most about the joint parent-child workshops.

Describe what you liked least about the joint parent-child workshops.


20. Provide your comments or suggestions about the joint parent-child workshops (schedules, games, outings, etc.).

For coding purposes only.

D- Information About Your Evaluation of the Talks

21. Rank the talks from the one you enjoyed most (1) to the one you enjoyed least (3).

First talk ( ) Second talk ( ) Third talk ( )

22. In your opinion, how useful were the talks in helping you better understand your child's development?

Very useful □
Fairly useful □
Not very useful □
Not at all useful □


23. Describe what you found most useful about the talks.

Describe what you found least useful about the talks.

24. Provide your comments or suggestions about the talks (schedules, themes, etc.).

E- Information About Your Overall Evaluation

25. Would you recommend this program to other parents?

Yes □

No □

Please give your reasons.


26. To what extent, during the 10 weeks of the program, were you able to express your opinion on how activities were run, how the program was offered, and so on?

A lot □
Some □
Very little □
Not at all □

F- General Information

27. In what type of family does the child who participated in the program live?

With both his natural parents □
With his mother □
With his father □
Joint custody □
Foster family □

28. If the child lives with both biological parents or under joint custody, did one of the two parents participate more frequently in the program than the other?

Mother □
Father □
Equally □


29. Is the child's mother employed?

Yes □

No □ If yes, is the employment:

Part-time □
Full-time □
Seasonal □
Other. Specify:

30. Is the child's father employed?

Yes □

No □

If yes, is the employment:

Part-time □
Full-time □
Seasonal □
Other. Specify:

Thank you for having taken the time to fill out this questionnaire. The information and suggestions that you provide help us improve the program.


SECTION 5

OUTCOME EVALUATION OR SUMMATIVE EVALUATION


SUMMARY

Scenario B (Cont.): The Tiny Tots Organization

1- Outcome or Summative Evaluation
   A- What information do we need?
   B- Who can give the information?
   C- How to collect the information?
   Summary of the Outcome Evaluation

2- Different Approaches to Outcome Evaluation
   Approach 1: The Participants' Perception
   Approach 2: Collecting Workers' Observations
   Approach 3: Collecting Data from Participants on their Behaviour and Attitudes
   Approach 4: Prestructured Intervention Programs
   Approach 5: Cases of Intervention that Are More Difficult to Evaluate
   If the Evaluation Indicates That the Anticipated Outcomes Were Not Achieved
   Pushing Evaluation Further

For Further Reading

ILLUSTRATED TOOLS AND TECHNIQUES:

— Using Attitude Rating Scales
— Observing Behaviour
— Single-Case Study

ILLUSTRATED METHODOLOGICAL CHOICE

Selecting a Measurement Instrument


SCENARIO B (CONT.): THE TINY TOTS ORGANIZATION

The Tiny Tots community organization offers a child stimulation program to children ages 3 and 4 living in an underprivileged neighbourhood.

The goal of the program is to prevent adjustment difficulties for children entering kindergarten and to reduce developmental delays. The general program objectives are: By the end of the program

1) the children should exhibit greater language skills, higher psychomotor development, and more fine motor skills;

2) the children should have the background needed to go into kindergarten;

3) the parents should be involved in the development of their child.

The specific objectives are: By the end of the program,

a) the children should have increased their language skills to nearly that of the norm for their age group;

b) the children should have psychomotor and fine motor skills close to that of the norm for their age group;

c) the children should have learned to obey instructions given to their group;

d) the children should have reached the level of autonomy required to attend kindergarten;

e) the parents should show increased and better interaction with their child.

The program comprises three activity components: the first deals with the children (daily morning stimulation workshops); the second with both parents and children (joint workshops); and the third with parents alone (talks, sharing and meeting with other parents and with educators). The organization has decided to conduct a summative evaluation of its program. How should it go about it?


This scenario illustrates the evaluation of program results, which is referred to as outcome evaluation or summative evaluation.

We will use this scenario to look at different approaches to outcome evaluation that use different techniques and tools, in particular, attitude rating scales and behaviour observation. When used before and after a program has been carried out, these techniques can reveal whether there have been changes in participants' attitudes and behaviour. We will also take a look at the single-case study and other techniques for use with interventions that are more difficult to evaluate.

Selecting a rating scale is the methodological choice dealt with in this section.

1- Outcome or Summative Evaluation

^ Summative evaluation enables us to make judgments about the outcomes or effects of a program. (see the Introductory Manual, Section 6)

Evaluating program outcomes is a twofold process. First of all, it consists in determining if there has been a change in a participant from the moment that he entered the program to a given point after he has completed a number of activities. This is referred to as measuring change. Secondly, it involves determining that the change was in fact due to the program, not to a whole other series of circumstances. This is attribution of the changes to the program. (see the Introductory Manual, Section 6)


An outcome evaluation therefore looks at two different dimensions:

FIRST DIMENSION

Do participants experience observable and measurable changes after completing all or a part of the program?

SECOND DIMENSION

Can we confirm that the changes are actually due to the program?

^ ORGANIZATIONS SHOULD GENERALLY LIMIT THEMSELVES TO THE FIRST DIMENSION WHEN CONDUCTING A SUMMATIVE EVALUATION: OBSERVED AND MEASURED CHANGES IN PARTICIPANTS. The second dimension, on the other hand, is especially difficult to check. In fact, attempting to attribute changes to a program calls for evaluation models that are largely beyond the resources of organizations.

But there are strategies that enable organizations to measure changes that occur in participants. We will discuss these different strategies and their related tools and techniques in this section, illustrated with the scenario and other selected examples.
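Staying within the first dimension, measuring change usually comes down to comparing each participant's situation before and after the program. A minimal sketch in Python, with hypothetical pre/post scores and an illustrative threshold for the success indicator:

    # Minimal sketch: measure pre/post change for each participant and check
    # it against a success indicator. All scores and the threshold are
    # hypothetical; this shows measured change only, not attribution.
    pre_scores = {"Ana": 12, "Ben": 15, "Chloe": 9}    # score at program entry
    post_scores = {"Ana": 18, "Ben": 16, "Chloe": 17}  # same measure at the end

    SUCCESS_THRESHOLD = 4  # minimum gain the team considers significant

    for child, before in pre_scores.items():
        change = post_scores[child] - before
        verdict = "meets" if change >= SUCCESS_THRESHOLD else "does not meet"
        print(f"{child}: change of {change:+d} points ({verdict} the indicator)")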

To evaluate the outcomes of its action, Tiny Tots needs to ask itself three basic questions:

✓ What information do we need?
✓ Who can provide the information?
✓ How to collect the information?


A- WHAT INFORMATION DO WE NEED?

In carrying out its outcome evaluation, the organization needs to find out if it achieved its objectives.

Achievement of Objectives
You need to show what the program was able to achieve and try to determine why certain objectives, if any, were not attained. Particular attention should be paid to the outcomes generated by each of the program components or activities. Program effectiveness — that is, whether the methods used were adequate to meet the set objectives — should also be looked at. Lastly, you need to consider whether the program leads to unanticipated outcomes.

This is the stage in which the success indicators established by the program coordinators and workers become all-important (see "Determining Success Indicators", Section 4). Indeed, in order to determine if a program or program component caused the desired effect in the participants, a success indicator has to be set. To make this determination, you need to answer the question:

• How great must the difference be between the situation of the participants before and after the program in order to be deemed significant?

Summative evaluation can also lead to an analysis of program efficiency, which means establishing the ratio between costs and outcomes. This is of greater concern, however, to sponsors than to organizations, who rarely have the resources to carry it out. Consequently, we shall not deal with it in this manual.

Continuing the Program
Evaluating the outcomes can be crucial in deciding whether the program should be continued or extended, or whether certain aspects should be altered. The effectiveness of the program can also be compared to that of other, similar programs. Lastly, different results among types of participants can be brought out.

B- WHO CAN PROVIDE THE INFORMATION?

The two main sources of the information required for summative evaluations are the program participants and staff.

Program Participants
Program participants are well placed to indicate the progress they or their children have made in the program. On one level, we try to find out their perception of the change in their child since entering the program or of their own progress as a parent participating in the program. On another level, the parents can be asked to relate what changes in attitude or behaviour they have observed in themselves or in their children.

Program Staff
Whether facilitators, volunteers, or educators, program staff interact directly with participants. Because of this, they are strategically placed for supplying information on the progress of participants in the program. To a large extent, their observations of the attitudes and behaviour of the parents and children provide the basis for evaluating the changes that have occurred.

C- HOW TO COLLECT THE INFORMATION?

From Program Participants
Information can be gathered using:

• Questionnaires and instruments prepared by the organization ("in-house") to collect the perceptions of participants regarding their progress, or that of their child, in the program.


• Existing measures that the participants fill out by themselves (called self-reports), from which the progress in their child's behaviour or changes in their own attitudes can be detected. These are standardized attitude and behaviour measures.

From Program Staff
Information can be gathered using:

• Logbooks in which workers' observations about participant attitudes and behaviour have been regularly recorded.

• Systematic observation of participant behaviour by program personnel using observation instruments that may be developed by the organization itself (in-house) or may already exist. In this case, published, standardized behaviour rating scales are used.


SUMMARY OF THE OUTCOME EVALUATION

What You Want to Find Out

• Was there an observable or measurable difference in the participants after completing the program or a part of the program?

What Information Do You Need?

Achieving the Objectives
• What program objectives were achieved?
• Why were some objectives not achieved?
• What are the outcomes of different program components or activities?
• Is the program effective, that is, have the objectives been achieved with the methods used?
• Does the program produce outcomes other than those anticipated?

Continuing the Program
• Is the program worth continuing or extending?
• Should certain aspects be modified or dropped?
• How does the program's effectiveness rate against similar programs?
• What problems have been encountered in the program and how were they resolved?
• Do outcomes vary depending on participant type?

Who Can Provide the Information?

• Program participants
• Program staff

How to Collect the Information?

• Questionnaires or in-house measures that participants fill out themselves (parent and/or child), revealing their perception of the progress made in terms of their attitudes and behaviour or that of their child.
• Participant use of standardized measures of attitude or behaviour.
• Systematic worker observation of participant behaviour using in-house measures or standardized measures of behaviour and attitudes.
• Analysis of logbooks kept by staff.


2- Different Approaches to Outcome Evaluation

Outcome evaluation can be conducted using a number of different approaches. Some rely on quantitative analysis, with validated, standardized measures or with in-house measures. Others make use of qualitative analysis, which yields a more general picture of the situation of participants.

Each of these approaches has its inherent challenges and pitfalls, which will be dealt with systematically in this section. For example, it is generally believed that the qualitative approach is easier, which is not the case at all. Gathering relevant, useful information with a qualitative approach means adhering to the same rigorous scientific standards as you would with a quantitative approach. Moreover, you should view the various approaches to evaluation presented here as different but complementary means of enriching the information and better understanding how the program really affects its participants.

As the experts on what they are doing, the organization's coordinators and workers are the best placed to judge for themselves the appropriate approach for evaluating the outcomes of their program. The decision to opt for one over another is based on the following criteria:

THE TYPE OF PROGRAM TO EVALUATE
In order to take into account the complexity and unique nature of most programs, evaluators use different evaluation approaches based on various methods of collecting data and the integration of qualitative and quantitative analyses. Prestructured intervention programs, such as Nobody's Perfect, use a specific strategy. For others, built around individualized interventions that vary according to participant needs, a very different approach —involving what is referred to as single-case study— would be appropriate. In the case of one-shot, no-follow-up interventions —such as a help-line service— you need to highlight the outcomes or services produced by the program. These different situations are dealt with in more detail below.

INFORMATION AVAILABLE ON THE SITUATION BEFORE THE PROGRAM BEGINS OR WHEN THE EVALUATION IS BEING CONDUCTED
The approach to the evaluation depends to a great extent on the information available and the time when the evaluation is undertaken. In fact, certain evaluation approaches are based on comparing what the participants were experiencing prior to taking part in the program and what they were experiencing after completing some of the activities or at the end of the program. If it wasn't possible to start collecting information prior to the start of the program or if the decision to evaluate is made once the program is running, then the approach chosen must provide the means for determining —perhaps retrospectively— if there were changes in participant attitude or behaviour.

THE LEVEL OF AVAILABLE RESOURCES
You also need to take into consideration the resources on which the organization can count in collecting, analyzing, and interpreting information. Some standardized measures require a fairly high level of training if they are to be used properly. They often require significant statistical analysis; interpreting the results frequently involves appropriate supervision. Many organizations do not have staff with the skills needed to carry out a successful evaluation in its entirety. Any organization that wants to use some of the standardized measures should seek help from outside experts in the field.


APPROACH 1: THE PARTICIPANTS' PERCEPTION

In Section 4, we discussed evaluation of the level of participant satisfaction and demonstrated how it provided the point of view of program participants and what they got out of the program. On the other hand, it doesn't yield any information about program outcomes.

It could very well be that every last one of the participants is totally satisfied with the program, that they say that they and their children got a lot out of it, and that they will tell all their friends to enroll in the program. But this is all still satisfaction. We need to get beyond satisfaction to discover how the participants perceive the progress of their child and/or their own as a result of the program.

UNDER WHAT CONDITIONS CAN WE HOPE TO COLLECT INFORMATION ON THE PARTICIPANTS' PERCEPTION?

✓ In nearly every type of program. When the participants are adults, they can be contacted directly. When the participants are young children, the information is collected from their parents or guardians. Older children can also be dealt with directly.

✓ Whenever the evaluation is undertaken or whatever preprogram information is available. The information collected does not necessarily have to be compared with preprogram data. Even though having some kind of baseline data might be interesting, the main concern is determining the participants' perception of the change in themselves or in their children after completing a number of activities or at the program end.

✓ Whatever the organization's resources. Developing questionnaires or in-house measures for collecting really relevant program information concerning participant perception is fairly straightforward. The organization will be able to analyze and interpret the information in-house if the number of participants is low and the measures relatively simple.

COLLECTING INFORMATION ABOUT PARTICIPANT PERCEPTION OF THEIR PROGRESS OR THAT OF THEIR CHILD

— Interviewing Participants
The questioning during such interviews focuses on the changes that participants perceive in their own behaviour or that of their child as a result of program activities.

The following are examples of questions that Tiny Tots' interviewers could ask parents during the interviews.

Did you get something out of the parent-child workshops? If so, could you describe it for me?

Did the discussions with the educator change your relationship with your child in any way? If so, could you tell me what the changes are?

Did you learn anything new about your role as a parent by participating in the program? If so, could you tell me about it?

Can you identify one or more changes in your relationship with your child that you feel are due to the program? If so, could you describe the change or changes?

What draws you back each week to the workshops?

Have you noticed anything new in your child's behaviour (motor development, language development) since he became involved in the program? If so, could you tell me about it?


When there is no baseline data for the participant, it's a good idea to ask questions that call on him to compare his current situation to that prior to the program.

Tell me how you reacted to your child's fits of anger before you enrolled in the program. ...Now tell me how you reacted the last time your child had a fit of anger.

How would you describe your child's level of language prior to the program? ...Now that the program has ended, how would you describe his level of language?

^ Refer to Section 3 for a fuller discussion about developing interview questions.

— Using In-house Measures
You can also get data on participants' perception of their or their child's progress with in-house measures developed by the organization. They can be used alone or in conjunction with an interview or a questionnaire containing open questions, which allow respondents to give longer responses.

In-house measures can be developed as a series of closed questions or rating scales designed to elicit participants' perception of their or their child's progress due to each activity and with respect to targeted objectives in each phase of the program. We will deal with the activities and objectives systematically, one after the other. A few examples follow.


Since the start of the program, has your child learned anything new in the following areas:

                                            Nothing   Very little   Some   A lot
Language                                       □          □           □      □
Motor development (run, climb, jump)           □          □           □      □
Fine motor skills (drawing, crafts, etc.)      □          □           □      □
Group skills                                   □          □           □      □
Discipline                                     □          □           □      □

Has communication with your child improved since the start of the program?

□ Not at all   □ Very little   □ Some   □ A lot

In your role as a parent, were the meetings with the educators:

□ very helpful?
□ somewhat helpful?
□ not very helpful?
□ not helpful at all?


Do you think that the joint parent-child workshops enabled you to:

better understand your child's reactions?
□ Not at all   □ Very little   □ Some   □ A lot

better understand your reactions to your child?
□ Not at all   □ Very little   □ Some   □ A lot

better monitor your child's development?
□ Not at all   □ Very little   □ Some   □ A lot

enjoy playing games with your child?
□ Not at all   □ Very little   □ Some   □ A lot

see how other parents act with their children?
□ Not at all   □ Very little   □ Some   □ A lot

see how educators act with your child?
□ Not at all   □ Very little   □ Some   □ A lot

On a scale of 1 to 4, draw a circle around the number that best corresponds to your level of agreement with the following statement: "The talks helped me better understand my child's development."

Disagree completely     Disagree     Agree     Agree completely
        1                   2          3              4

LIMITATIONS OF THE FIRST APPROACH

This first approach to outcome evaluation doesn't let us go very far in analyzing program results. It does, however, let us gather a certain number of measurements about what the participants are experiencing. It cannot claim to cover all the changes the program may have caused. Neither does it enable us to determine whether the changes were actually due to the program.

The perception that participants have of their own progress or that of their child as a result of program activities represents their own point of view on the benefit they have derived from the program. This information must be validated using means or tools that make it possible to confirm whether the reported progress has or has not been made. This corresponds to the second evaluation approach.

APPROACH 2: COLLECTING WORKER OBSERVATIONS

In this second approach, we are no longer looking for perceptions, but rather facts. Systematic observation throughout the course of a program can yield a record of participant behaviours.

The workers or individuals in contact with participants are better placed than anyone else to observe participant behaviour and identify changes produced during the course of the program.

UNDER WHAT CONDITIONS CAN WE HOPE TO COLLECT WORKER OBSERVATIONS?

✓ In nearly every type of program. This approach can be used with any program in which workers or others are in direct, regular contact with the participants (parents or children).

✓ Baseline information on the participant's situation prior to the start of the program is needed. In order to measure the change in the participant's behaviour, you need to be able to compare his situation before and after the program or a part of the program. This is referred to as a pretest-posttest design because data is collected both before (pre) and after (post) the program. Observations must be taken on at least two different occasions: at the outset and at program end or after a significant number of activities. Obviously, collecting information on the same items each time is essential.
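For organizations that record such observations numerically, the arithmetic behind a pretest-posttest comparison is simple. The following is a minimal Python sketch of the idea; the names, items, and scores are invented for illustration and do not come from any instrument mentioned in this manual.

    # Minimal pretest-posttest comparison, assuming each participant was
    # observed on the same items before and after the program.
    pretest = {"Larry": {"language": 1, "motor": 2},
               "Lisa":  {"language": 2, "motor": 2}}
    posttest = {"Larry": {"language": 3, "motor": 2},
                "Lisa":  {"language": 3, "motor": 3}}

    for name in pretest:
        for item, before in pretest[name].items():
            after = posttest[name][item]
            change = after - before   # positive = progress on this item
            print(f"{name:6} {item:9} before={before} after={after} change={change:+d}")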

✓ Observation tools should be available. Worker observations can be collected in a variety of ways. Organizations often already have simple observation instruments that can be used in evaluation. For example, logbooks can be used, provided they have been designed for systematically recording observations and have been used consistently. You can also develop in-house observation questionnaires for use at the beginning and the end of a program, or after a significant number of activities have been completed. Furthermore, it is possible (although not always necessary) to use existing instruments for measuring language development or child motor development. Some of these are difficult to use, analyze, and interpret, often requiring expert help or supervision. Others, however, are well within the means of organizations, both financially and in terms of personnel.

COLLECTING WORKER OBSERVATIONS

— Analyzing Logbooks
Program staff must keep their logbooks systematically and rigorously if the logbooks are to provide good information about participant behaviour. You can make the staff's task easier by developing participant observation checklists for each activity. The checklist should include observable aspects that correspond to program objectives. The practitioner will just have to check off his observations and write down any comments during the activity or once it ends.


The following excerpts have been taken from the logbooks of Tiny Tots practitioners. The concepts shown here relate to the specific objectives of the child stimulation workshops.

LOGBOOK
Staff member: Christine        Date of Meeting: February 17, 1998        Workshop: 3
Children observed: Larry, Lisa, Sandy

Attendance:                               Present / Absent / Late
Cloakroom (child undresses himself):      Alone / With some help / Much help
Behaviour on arrival:                     Mixes in / Enthusiastic / Uncomfortable
Behaviour during activity:                Follows instructions / Cooperates / Distracts
Language unit (completes the activity):   With ease / With difficulty / With much difficulty
Motor unit (completes the activity):      With ease / With difficulty / With much difficulty
Comments about child:

(On the original page, a column is provided for each child and the practitioner checks off the applicable description.)


Other aspects pertaining to changes in participant behaviour could be noted in logbooks by workers or others in the organization in direct contact with the parents. These could include:

• active and regular participation of the parents
• their involvement in parent/child games
• their attitude towards their child
• the interest they show (Do they ask questions? Do they make comments? Do they offer suggestions?)
• the initiatives they take
• workshop follow-up (Do they do the recommended exercises at home?)

By analyzing the logbook, you can follow the progress of participants as it relates to different program objectives, comparing the situation prior to the start of the program to that at the end or after a significant number of activities have been completed.
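For organizations that keep such checklists on a computer, tallying the entries across workshops is straightforward. Here is a minimal Python sketch, with invented children and categories, of how checklist observations from several sessions can be counted per child.

    # Tally logbook checklist entries across workshops, assuming each
    # session's observations were recorded as one dict per child.
    # The behaviours and data below are invented for illustration.
    from collections import Counter

    sessions = [
        {"Larry": "Follows instructions", "Lisa": "Distracts"},   # workshop 1
        {"Larry": "Follows instructions", "Lisa": "Cooperates"},  # workshop 2
        {"Larry": "Cooperates",           "Lisa": "Cooperates"},  # workshop 3
    ]

    tally = {}
    for session in sessions:
        for child, behaviour in session.items():
            tally.setdefault(child, Counter())[behaviour] += 1

    for child, counts in tally.items():
        print(child, dict(counts))  # e.g. Larry {'Follows instructions': 2, 'Cooperates': 1}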

— Using In-House Observation Sheets
You can develop your own observation sheets for staff to note their observations of each participant at the program start. The same sheets are used once again at the end of the program or after a significant number of activities have been carried out. This before-and-after information can be used to assess behavioural change in participants. The important thing is not to modify the sheet and to apply it in exactly the same way each time in order to ensure uniformity.

Some organizations already apply in-house measures to their interventions to determine the level of development of children and to bring out their strengths and weaknesses at the program outset. If the instrument is fairly comprehensive, it could also serve as a tool in evaluating child progress by applying it once again at the end of the program or after a significant number of activities have been carried out.

This kind of observation sheet is fairly simple to construct. It should take into account the objectives targeted under the various program components and include aspects that are readily observable.

When the children enrolled in the child stimulation workshops met for the first time, Tiny Tots practitioners and other staff members observed the children during the activity. At the same time, they completed observation sheets, parts of which are provided below.


Observation Sheet for: William                         Date: March 24, 1998

Child's behaviour:                                     Yes   No
Cries when parent leaves
Difficult to console after parent leaves
Continues to cry during the activity
Doesn't mix with other children
Pushes others, is rude
Follows instructions
Remains indifferent to instructions
Constantly asks for educator's help
Is resourceful
etc.

Level of autonomy:
Can do up his buttons
Can tie a bow
Puts on coat by himself
Puts on boots by himself
Goes to the bathroom by himself
etc.

Level of fine motor skills:
Able to draw a complete figure
Traces a square
Draws a house
Follows a line with a pencil
Cuts along a line with scissors
Writes his first name
Writes three letters or numbers
etc.

(For each item, the observer checks Yes or No.)

The staff will use the same observation sheet for William and the other children at the end of the program or after a significant number of activities have been carried out. By comparing program-end observations with those from the program start, it will be possible to evaluate progress with respect to workshop objectives. Practitioners will have to calculate the progress of each child according to the various areas of observation in order to draw conclusions about the entire group.
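As a rough illustration of that before-and-after comparison, the following Python sketch counts, for each child, which observed skills changed between the two administrations of the sheet. The items and values are invented for the example.

    # Minimal before/after comparison of yes/no observation sheets.
    # Items and values are illustrative, not taken from a real instrument.
    before = {"William": {"ties a bow": False, "draws a house": False, "traces a square": True}}
    after  = {"William": {"ties a bow": True,  "draws a house": False, "traces a square": True}}

    for child, items in before.items():
        gained = [item for item, ok in after[child].items() if ok and not items[item]]
        lost   = [item for item, ok in after[child].items() if not ok and items[item]]
        print(f"{child}: gained {gained}, lost {lost}")
    # -> William: gained ['ties a bow'], lost []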


— Using Standardized Measurement Instruments
There are a number of different standardized and validated measurement instruments for observing behaviour and measuring the changes that have occurred. In this context, standardized means that they have been constructed so that the differences they detect are in fact real differences between individuals and not just due to chance. Accordingly, they offer good reliability. They are validated because the information collected reflects the position or reality of the respondents. When using a rating scale that has been translated, you should find out if it has been validated for your cultural context.

These kinds of measures are based on concrete behaviours that are readily recognizable. They are used like the in-house observation sheets discussed above. Observations are made and noted at the program outset and end (or after a significant number of activities have been carried out), and the results are compared to determine if change has occurred.

The Preschool Behavior Rating Scale (W. R. Barker and A. M. Doeff, 1980), for example, can be used to measure the development of children (3 to 6 years old) and follow their progress in relation to various skills. The scale is filled out by educators in daycare centers and day schools based on their observations of the children's behaviour. Five aspects are examined: Coordination (Gross Motor, Fine Motor); Expressive Language (Vocabulary, Grammar); Receptive Language (Story Listening, Memory); Environmental Adaptation (Organization, Initiative); Social Relations (Cooperation, Consideration of Others).

Each measurement scale consists of four or five descriptions ranked from the lowest to the highest skill level.


The following, for example, are descriptions relating to the child's initiative:

1) Almost never initiates any activity. Almost always at a loss for what to do.

2) May verbalize a wish to do something but can't seem to decide or needs adult support to follow through.

3) Can organize or suggest activities for self but tends to choose the same activities again and again.

4) Can usually find a variety of acceptable activities to do either by self or with others.

METHODOLOGICAL CHOICE

Choosing a Standardized Measurement Instrument

There are many evaluation instruments that can be very useful for practitioners,1 but careful consideration should go into the decision to use one scale instead of another.

• You should choose the tool that corresponds to the questions you want to ask and not shape your questions to fit an existing instrument. Having access to a rating scale doesn't mean that you have to use it.

• Each scale is designed to measure something specific: an infant development inventory won't provide information on parental stress, nor will a language development scale yield data on other aspects of child development.

• It is often necessary to request permission from the authors before using their scale.

• Sometimes you have to pay royalties to use a scale.

1 See appendix.


• It is important to follow the instructions for administering and scoring provided with the instrument. Authors often provide norms that can be used for comparison.

• Sound training is often required to use some scales.
• Summarizing and analyzing the data are easier with some scales than with others. A certain number of instruments come with interpretation software that runs on PCs and Macintosh computers.

LIMITATIONS OF THE SECOND APPROACH

The second approach basically indicates whether certain behavioural changes occurred (or did not occur) in participants over the course of the program. For example, if 23 out of the 30 children in the program evidenced language skills beyond what they should normally have had, then this should be highlighted in the program evaluation. It is relevant to compare a child's performance to the norm, that is, to what is normally expected of children belonging to a particular age group.

Comparing post-program data to the baseline data can reveal behavioural changes. While we can assume that the program played a role in these changes, we don't know for sure. It may be that the changes are simply due to maturation over the 10 weeks and that the child would have reached the same level of development even if he hadn't taken part in the workshops. Or the gains might be due to stimulation from outside the program, such as a particularly dynamic babysitter.

In addition, you have to take into consideration the limitations of the instruments and stick to what they are designed to measure. Furthermore, you cannot assume that the overall changes observed in the participants will persist once the program has ended. Unless follow-up is undertaken, there is no way of knowing if the child will apply what he learned in the program and generalize it once in kindergarten. Similarly, we cannot tell if the parents will maintain their involvement in their child's development once the program ends. Consequently, evaluation conclusions must be used cautiously.

^ Comparing baseline and program-end data (or data collected after a significant number of activities have been carried out) can indicate the progress made and behavioural changes. It cannot, however, confirm with certainty that the outcomes are due to program participation.

APPROACH 3: COLLECTING DATA FROM PARTICIPANTS ON THEIR BEHAVIOUR AND ATTITUDES

In this third approach, we aim at collecting data from participants on their own attitudes and behaviour. The instruments used are referred to as self-reports because the person reports on his own attitudes.

UNDER WHAT CONDITIONS CAN WE HOPE TO COLLECT DATA FROM PARTICIPANTS ON THEIR OWN ATTITUDES AND BEHAVIOUR?

✓ In nearly every type of program. In the case of young children, the parents should be asked to fill out the instruments. Older children and adults should fill them out for themselves.

✓ Baseline information on the participant's situation prior to the start of the program is needed. In order to measure the change in the participant's behaviour, you need to be able to compare his situation before and after the program or a part of the program. This involves a pretest-posttest design, as in the preceding approach. The information must be collected at least at two different times, specifically at the program outset and program end (or after a significant number of activities has been carried out). Using the same instruments to collect the data is essential if the data are to be compared. There are also standardized instruments, for which norms have been established for a population; they can therefore be used to compare a given group to a reference population, which yields information about the direction of the change.

✓ The organization must have access, in certain cases, to outside resources. Sometimes expert help is required to interpret the data from certain instruments or rating scales. The organization should be able to interpret the data from simpler instruments or rating scales itself just by following the instructions provided.

COLLECTING DATA FROM PARTICIPANTS ON THEIR OWN ATTITUDES AND BEHAVIOUR

— Using Behaviour Rating Scales
Parents can be asked to use standardized scales to note their child's behaviour at the beginning and the end of the program, or after a significant number of activities has been carried out. There are a number of scales available to measure aggression, withdrawal, anxiety, and attention. There are usually norms to which the child's situation can be compared.

The Child Behavior Checklist (Achenbach, 1991) is filled out by the parents. Thinking back over their child's behaviour now or within the last six months, parents are asked to respond to statements as being Very True or Often True (2), Somewhat or Sometimes True (1), or Not True (0). The following is a sampling of the statements.

0 = Not True   1 = Somewhat or Sometimes True   2 = Very True or Often True

1. Can't sit still; is restless or hyperactive
2. Demands a lot of attention
3. Likes to be alone
4. Refuses to talk
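To show the mechanics of scoring this kind of 0-1-2 checklist, here is a minimal Python sketch that sums item responses into a raw total at pretest and posttest. It is a generic illustration only, not the actual scoring procedure for the Child Behavior Checklist, which must be scored according to the author's instructions.

    # Generic sketch: summing 0/1/2 checklist responses into a raw total.
    # This is NOT the official CBCL scoring; follow the instrument's manual.
    pre_responses  = [2, 2, 1, 0]   # one value per item, program outset
    post_responses = [1, 1, 1, 0]   # same items, program end

    pre_total = sum(pre_responses)
    post_total = sum(post_responses)
    print(f"raw total: pre={pre_total}, post={post_total}, change={post_total - pre_total:+d}")
    # A lower total after the program suggests fewer reported problem behaviours.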


— Using Attitude Rating Scales
While behaviour is usually observable, measuring attitudes is much more complex. An attitude is a position or a recurrent way of seeing and reacting to events. In this context, recurrent means that the response is the same each time the individual encounters the situation or event.

Attitude rating scales are used to find out more about people's attitudes by asking many questions designed to measure the same attitude. This yields an overall score indicating the direction (for or against) and the intensity (favourable or very favourable) of the attitude being measured. If you want to construct your own attitude rating scale, we suggest you consult the specialized references listed in the bibliography. There are a number of validated, standardized rating scales available, which makes interpreting the scores easier since they can be compared to norms. Below we have included a few items from the Self Description Questionnaire (Marsh, 1988) for measuring child self-esteem.

1. In general, I like being the way I am.
   False 1   Mostly false 2   Sometimes false, sometimes true 3   Mostly true 4   True 5

2. Overall, I have a lot to be proud of.
   False 1   Mostly false 2   Sometimes false, sometimes true 3   Mostly true 4   True 5

3. When I do something, I do it well.
   False 1   Mostly false 2   Sometimes false, sometimes true 3   Mostly true 4   True 5


The Tiny Tots child stimulation workshops include a certain number of joint parent-child activities. These activities aim at involving parents in the development of their child and at putting them more at ease and making them more confident in this regard. To evaluate whether the results of the activities designed for parents had been achieved, it was decided to measure the parents' sense of parental competence.

The evaluators decided to use a subscale of the Parenting Stress Index (Abidin, 1995), which enables us to establish to what extent the parent feels that he has adequate parenting skills. The following is an excerpt.

Circle the number corresponding to the extent to which you agree or disagree with each statement.

1- Strongly agree   2- Agree   3- Not sure   4- Disagree   5- Strongly disagree

1. Being a parent is harder than I thought.                                       1  2  3  4  5
2. I feel competent and in control of the situation when I take care of my child.  1  2  3  4  5
3. I have many more problems raising my child than I thought.                     1  2  3  4  5
4. I like being a parent.                                                         1  2  3  4  5
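Note that on this excerpt the items do not all point the same way: agreeing with statements 1 and 3 signals distress, while agreeing with statements 2 and 4 signals confidence. The following Python sketch shows, hypothetically, how such reverse-keyed items can be recoded before summing. It is not the official scoring procedure for the Parenting Stress Index, which must be scored according to its own manual.

    # Hypothetical sketch of recoding reverse-keyed items on a 1-5 scale
    # before summing. NOT the official Parenting Stress Index scoring.
    responses = {1: 2, 2: 2, 3: 3, 4: 1}   # item number -> circled value
    negative_items = {1, 3}                # agreement here signals distress

    score = 0
    for item, value in responses.items():
        if item in negative_items:
            value = 6 - value              # flip 1<->5, 2<->4 so high = more stress
        score += value
    print("subscale total (higher = more reported parenting stress):", score)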


LIMITATIONS OF THE THIRD APPROACH

The third approach highlights the evolution in the behaviour and attitudes of participants, from their viewpoint, as a result of the program. As with the preceding approach, you need to be careful in drawing conclusions. Any difference between the baseline data and program-end data will be revealed, but it cannot necessarily be attributed to the program.

Moreover, the participants may have changed in ways that the instrument cannot adequately detect. Let's take the case of self-esteem. It may happen that the instrument shows no change in the participants after ten workshops. This doesn't mean that the participants made no progress; it's not realistic to expect all participants to show measurable differences in self-esteem after so short a time. For some people, judging their own worth can be a long and exacting process. On the other hand, how do we know that an increase in self-esteem will be durable? A participant's self-esteem may drop if he has a bad experience just after the program ends. So you cannot presume that program effects will be lasting.

You also need to remember that standardized instruments measure only what they were designed to measure. In limiting yourself to them, you run the risk of missing other changes that may have been unexpected but nonetheless just as meaningful for participants. This is why qualitative methods — which highlight other aspects of change — should also be used.

APPROACH 4: PRESTRUCTURED INTERVENTION PROGRAMS

Another approach is needed to evaluate prestructured intervention programs that come in kits, like Nobody's Perfect. These programs have been tested and validated. They have specific objectives and propose activities to achieve them. They have been evaluated;2 their effectiveness and efficiency are recognized. If applied according to the guidelines, they should produce the expected outcomes.

UNDER WHAT CONDITIONS CAN WE EVALUATE PRESTRUCTURED INTERVENTION PROGRAMS IN THIS WAY?

✓ Only in the case of established, validated, pre-evaluated programs. The program to be evaluated must be recognized and proven. In other words, it must have been demonstrated that the program can achieve the targeted objectives and produce the expected effects, if applied integrally and correctly.

✓ The program must be applied as designed. In order to produce the expected effects, the program must be applied as is, under the required conditions and according to the guidelines provided. You cannot omit, add, or modify aspects of the program and still hope to get the results announced by the program.

✓ Whatever the organization's resources. Most organizations should have the resources required to evaluate this kind of program. The information given for approaches 2 and 3 applies.

2 The Nobody's Perfect program has been evaluated in the Maritimes by Madine M. L. VanderPlaat (1989) and the pilot project in Quebec by Claire Brochu and Louise Denhez (UQTR, 1991).


EVALUATING PRESTRUCTURED INTERVENTION PROGRAMS

Two questions need to be asked when evaluating this kind of program:

1) Is the program being applied as designed? 2) What changes are observable?

— Is the Program Being Applied as Designed?
Properly applying the program is crucial. The first step, then, is to check that the program is being applied according to the model. This is done with formative evaluation (see Section 4), analyzing program functioning to ensure that it is proceeding as it should.

— What Changes Are Observable?
The next step is to highlight changes in the participants using a pretest-posttest design. The various methods discussed for approaches 2 and 3 above can be used for this.

LIMITATIONS OF THE FOURTH APPROACH

If the program has been properly applied, you can be fairly certain that the changes noted in the participants are due to the program. In fact, since this type of program has already been validated, it should produce the expected results if the guidelines are strictly followed.

APPROACH 5: CASES OF INTERVENTION THAT ARE MORE DIFFICULT TO EVALUATE

There are other types of interventions that do not correspond to the examples given above and which cannot be evaluated using the approaches described. To illustrate, think of a women's shelter, where intervention is flexible and individualized according to the needs of each client. Telephone hotlines and sexual-abuse awareness programs for students are two other examples. In such cases, evaluation must be tailored to the intervention in order to evaluate the outcomes.

UNDER WHAT CONDITIONS CAN WE EVALUATE SUCH INTERVENTIONS?

✓ When interventions are "one-time" or vary greatly from one situation to the next with few participants, or when no outcomes are visible in the short term. This is especially the case with telephone hotlines, awareness programs whose outcomes cannot be immediately measured, and projects delivering individualized services that differ greatly depending on client needs.

✓ Evaluation must be planned for from the outset of the intervention. Evaluating this kind of intervention requires that it be broken down into components and examined. This requires measures that must be applied throughout the course of the intervention. Accordingly, everything must be in place for the evaluation when the intervention starts.

✓ Whatever the organization's resources. Most practitioners have the resources necessary to successfully conduct these evaluations, despite the problems that must be overcome. Resource persons may be needed if rating scales and complex protocols are used.

✓ Choose the procedure that is appropriate for each intervention context. The procedure should be chosen depending on the situation. You should also understand that the evaluation cannot be extended beyond the current, specific intervention situation. On the other hand, the evaluation is still useful. In the paragraphs below, we will discuss four types of evaluation for interventions that are more difficult to evaluate: single-case studies, outcome studies, participatory studies, and satisfaction studies.

a) THE SINGLE-CASE TECHNIQUE FOR EVALUATING INTERVENTIONS THAT ARE FLEXIBLE, VARIED, AND INDIVIDUALIZED

The Open Doors women's shelter receives survivors of violence and their children. The interventions are flexible, complementary, and varied, depending on the needs of the woman and her children. The interventions can start during sheltering and continue on once the woman has left the shelter. They consist mainly of professional helping relations, sessions for sharing thoughts and relating experiences, and discussions on specific themes. Recreational activities are also organized to allow the family to share some enjoyable experiences.

How could you go about evaluating the outcomes of these interventions? The sharing sessions, recreational activities, and the like could be evaluated with the techniques mentioned above.

What's more difficult to evaluate is the impact of the individualized interventions that take into account the reality of the woman and each child. Evaluating such interventions requires what can be called the single-case technique. It can be used to evaluate an intervention with an individual, couple, or family. It can bring out the outcomes of interventions whose objectives vary according to participant and which attempt to respond to the specific and individual needs of each. The technique is used for each person individually. If there are enough similar cases, the outcomes for all the participants can be put together.

The technique allows you to answer evaluation questions such as those given below.


• Does the intervention allow achievement of the individual objectives set for each participant?
• What are the specific problems of the participants?
• With what types of problems do we most easily achieve the intervention objectives?
• With what types of problems do we have the most difficulty achieving the intervention objectives?
• Which intervention activities help most in achieving the specific objectives?
• Based on a series of interventions, can we bring out the problems most frequently encountered in the participants?

The single-case evaluation technique rests on the following points.

1) Determine the specific problem situation for each participant. The target problem must be clearly identifiable and observable. In other words, it's a question of identifying the behaviour, attitude, skill, or feeling that the participant wants to work on.

2) Set the intervention objectives with respect to the target problem. Usually, you should define with the participant himself the objectives for change that will be evaluated.

3) Specify how the change will be measured. The measures must be specific, which means that they will bear on the specific phenomenon, behaviour, attitude, or feeling that was determined beforehand. In addition, they must be observable by the practitioner, the participant himself, or other people in his immediate circle. Generally speaking, the frequency of the behaviour or attitude is measured. The values are then graphed. The method is easy to use and provides a picture of the changes.


4) Apply the measures prior to the intervention to determine the participant's situation at the outset. This is known as baseline data.

5) Record the measurements regularly (daily, at each meeting, each week, etc.) throughout the course of the intervention.

6) Analyze the change.

Family situations are discussed in the initial meetings between Open Doors' staff and each woman who seeks shelter there with her children. In addition to the crisis that must be dealt with, each woman identifies things that she would like to change in herself or her child so that life can resume a more normal course. Matthew, Sylvia, and Donald came to Open Doors at different times. Matthew, who is age 5, exhibits violent behaviour towards his mother; he screams at her, bites her, scratches her, and hits her. Eleven-year-old Sylvia shows no emotions when talking about the abuse at home. Donald, age 7, blames himself. He always says that his stepfather "was right to beat me because I'm bad and always doing dumb things." The intervention objectives would be different for each of these children. The program staff will work on these behaviours and attitudes during individual meetings with the children, while they are sheltered and afterwards, in group sessions and family activities.

This is how they will conduct the single-case evaluation of their interventions with each of these children.

1) Determine the specific situation that is causing a problem for the participant.
Matthew: violent behaviour directed at his mother.
Sylvia: doesn't express her feelings about the abuse she has experienced.
Donald: blames himself for his stepfather's abusive behaviour.

2) Set intervention objectives with respect to the target problem.
Matthew: halt the violent behaviour directed at his mother.
Sylvia: express her feelings about the abuse she has experienced.
Donald: stop blaming himself for his stepfather's abusive behaviour.

3) Specify how the change will be measured. These measures must be specific and observable. Matthew: record the frequency of occurrences of violent behaviour directed at his mother (the number of times he bites her, strikes her, scratches her, screams at her). Sylvia: record the frequency at which she expresses her feelings about abuse at home, regardless of the form of the expression (tears, angry words, etc.). Donald: record the number of times that he says that his stepfather was right to beat him because "I'm bad and always doing dumb things. "

4) Apply the measures before starting the intervention to establish baseline data. The practitioners could observe and record the daily frequency of these behaviours during the first week that the children are at Open Doors.

When the situation calls for immediate intervention or if the intervention is already under way, the participant (or the mother in the case of a child) can be asked to indicate the frequency of such behaviour over the last few weeks, which could serve as a reconstituted baseline.

5) Record the measures on a very regular basis. Program staff should record the frequency of the behaviour for each child and enter the data on the chart for every individual or group session. In the case of Matthew, his behaviour should be recorded each time he is in the presence of his mother.


6) Analyze the change. The following charts were produced by the practitioners for Matthew, Sylvia, and Donald. They show, against a baseline, the progress made with each of the problem behaviours over the weeks in which intervention was carried out. The data for each child can be combined to present an overall picture of the progress made by the participants as a whole.

[Chart: MATTHEW. Frequency of the target behaviour by day (days 1 to 15), showing the baseline phase and the start of intervention.]

[Chart: SYLVIA. Frequency of expressions of her feelings by day, showing the baseline phase, the start of intervention, and the intervention phase.]

[Chart: DONALD. Frequency of self-blaming statements by day, showing the baseline phase, the start of intervention, and the intervention phase.]
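For organizations that want to produce this kind of baseline-and-intervention chart on a computer, here is a minimal Python sketch using the matplotlib library. The daily frequencies are invented; the dashed vertical line marks the start of the intervention, separating the baseline phase from the intervention phase.

    # Minimal AB (baseline/intervention) chart, with invented daily counts.
    import matplotlib.pyplot as plt

    days = list(range(1, 16))
    # Frequency of the target behaviour each day (days 1-7 = baseline).
    frequency = [8, 7, 8, 9, 8, 7, 8, 6, 5, 5, 4, 3, 3, 2, 2]
    intervention_start = 8

    plt.plot(days, frequency, marker="o")
    plt.axvline(intervention_start - 0.5, linestyle="--")  # baseline | intervention
    plt.xlabel("Days")
    plt.ylabel("Frequency of target behaviour")
    plt.title("AB design: baseline, then intervention")
    plt.show()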


LIMITATIONS OF THE SINGLE-CASE APPROACH

There are a number of different versions of the single-case technique for evaluating at different levels. The version shown here is the first-degree AB approach, in which A represents the baseline and B the intervention. It highlights any change accompanying the intervention but cannot confirm that the change was actually due to it. Indeed, the change may be the result of extraneous factors. In order to correctly interpret the results, you need to record and take into account significant events that occurred in the lives of the participants during the intervention.

Despite its shortcomings, this approach is valuable in evidencing participant progress. It helps to clearly define the target problem, intervention objectives, and measures for observing change. It also lends itself to presenting information in a structured manner. The approach requires a degree of discipline on the part of program workers, basic honesty, and the ability to distinguish between what really occurred and what they would like to have occurred.

Other versions of this technique provide for more in-depth treatment and determine if the observed change was actually due to the intervention. These approaches, however, require a level of expertise that often exceeds in-house resources available to organizations. Any organization that wants to use them should therefore call on specialists in the field.

b) OUTCOME STUDIES FOR EVALUATING ONE-TIME INTERVENTIONS WITH NO FOLLOW-UP

Evaluating the outcomes of interventions of a one-time, no-follow-up nature, such as hotlines, calls for a strategy that is quite different from those we have seen up to this point. But on what success criteria should the evaluation be based?


Let's take the example of the SOS Suicide hotline. How would its coordinators judge the success of their interventions? It would be unthinkable to analyze the region's suicide rate and attempt to correlate changes in the rate to hotline interventions. But how then could the evaluation be carried out? To show what a service like this gives to the population, you would need to:

— document the importance of this kind of service from the literature (books, research articles, research conducted elsewhere, etc.). This would help justify the existence of the service provided;

— measure the service's outcomes, which, in this case, means the number of telephone calls. The number of calls can indicate whether the service responds to a community need. This means putting in place a way of counting calls when the hotline is set up.

c) PARTICIPATORY STUDY OR SATISFACTION STUDY FOR EVALUATING INTERVENTIONS

How can you measure outcomes in a sexual-abuse awareness program for elementary schools? This type of intervention comprises two elements: screening abused children and long-term awareness-raising about the problem of sexual abuse. Both are difficult to evaluate. Promoting awareness, on the one hand, doesn't generate immediate outcomes; it adds to many other social recommendations and influences, and aims at better equipping children to confront potential sexual-abuse situations later on. As is the case with most awareness programs, this kind of intervention is a two-edged sword since, in making children aware of the potential for sexual abuse, it increases the visibility of the problem.

In light of this, how then can we evaluate such interventions? Following up all the children enrolled in an awareness program to see if there was an effect, or to determine if the children who eventually had to face such a situation were better equipped to deal with it, would be unrealistic. And how could you determine that their reaction to the abuse was due to the program and not to other influences? To show what a service like this gives to the children, you would need to:

— document the importance of this kind of program from the literature (books, research articles, research conducted elsewhere, etc.);

— measure the participation, which, in this case, means the number of children who received the training. This information must be collected from the program outset;

— measure what they retain from awareness training by asking them to react to simulated situations;

— measure the level of satisfaction of children with respect to the different aspects of the proposed intervention (see Section 3 for more on this topic).

d) SATISFACTION STUDY FOR EVALUATING ONE-TIME INTERVENTIONS WITH FEW PARTICIPANTS

How would you go about evaluating a one-time workshop given at the request of a small group of parents?

As the result of a series of suicides among teenagers at the Hillboro high school, a number of concerned parents asked the principal to organize a suicide-prevention workshop. Six parents attended the workshop.

A number of factors make it difficult to evaluate the outcomes of an intervention like this one, particularly the small number of participants involved and the fact that the intervention was a one-shot affair. To show what a workshop like this offers, you would need to:


— document the contribution of this kind of intervention based on research involving larger groups;

— measure group satisfaction, being very careful when interpreting the outcomes owing to the small number of participants.

To take the evaluation further would involve using an experimental approach in which the group of parents would be compared to a control group not taking part in the workshop. We do not recommend that organizations take this route unless they have adequate resources to develop and use such approaches.

LIMITATIONS OF ANALYSES OF OUTCOMES, PARTICIPATION, AND SATISFACTION

The limitations of these approaches are simple and obvious: they measure what they measure...and nothing more. You therefore need to clearly indicate that you're dealing with outcomes, participation levels, or satisfaction levels, and cannot go beyond them. On the other hand, the material from the literature review could provide sound, convincing support for the action taken.

IF THE EVALUATION INDICATES THAT THE ANTICIPATED OUTCOMES WERE NOT ACHIEVED

What do you do if the evaluation process reveals that the program did not achieve its objectives? Does it mean that the program isn't any good? How can you justify its existence?

Of course, it would be totally unethical and socially irresponsible to alter or hide data to put the program in a more favourable light, particularly since the program is funded with public money.


There's only one thing to do: be honest. Accurately describe the program and explain the evaluation methods used: what measurement instruments were used, how and under what conditions they were applied, and how the data was analyzed and treated.

The simple fact that the anticipated results were not achieved doesn't mean that the program isn't producing results or that it's worthless. You have to be able to interpret the evaluation results critically. Perhaps the program is producing unexpected but beneficial outcomes for the participants. If so, this should be emphasized. Maybe the measurement instruments showed no change because the intervention wasn't intensive enough, or possibly there is no visible immediate effect. While the program may not have produced the desired outcomes, it may have prevented the situation of the participants from deteriorating. To bring out what the program gave to participants and to provide the most accurate picture of it, it is often a good idea to combine qualitative and quantitative approaches in collecting data.

PUSHING EVALUATION FURTHER

The examples given here aren't meant to represent the wide range of existing programs or provide definitive means to evaluate their outcomes. They should, however, be enough for organizations to recognize and adopt the approach that best suits their program and resources.

For organizations that want to push their evaluation further, we strongly recommend that you call on the services of specialists in the field of evaluational research. Correlating variables or conducting a program cost-benefit analysis requires a more formal approach that goes beyond the scope of organizations delivering programs.

Any organization that wants to go beyond the tools and techniques presented herein should contact the evaluational research resources in their province. In the province of Quebec they could contact:

Program Evaluation for Organizations :. Evaluation Tools for Enhancing Program Quality

Page 165: EVALUATION TOOL FOS R ENHANCING PROGRAM QUALITY · 2012. 12. 10. · one way or another, ... • Summativ evaluatio (outcom evaluationn ee p1 11) 5 . 1 Othe manual r sometimes us

156

• the Regional Health and Social Services Board in their region;

• the social research institutes related to various universities (each institute specializes in a particular field: child development, youth in difficulty, physical rehabilitation, the elderly, etc.);

• researchers with the evaluation team at the Centre de recherche sur les services communautaires (CRSC) at Université Laval (Quebec City);

• researchers with the École nationale d'administration publique (ENAP).

Elsewhere, these resources also include:
• researchers at a university in their region (especially researchers in the humanities, social services, psychology, remedial education, measurement, and evaluation);
• private consultants.

These resources are generally not prepared to offer a direct, immediate response to an organization's request for evaluation services, with the exception of certain private consultants, who would do so for a substantial fee. This is why you need to plan for your evaluation requirements ahead of time. The resources may be able to put organizations in touch with university researchers, students interested in carrying out evaluative research, or other competent individuals.


FOR FURTHER READING (See also References at the end of the Manual)

This reference contains valuable information about outcome evaluation:

Treasury Board of Canada (1991). Program Evaluation Methods: Measurement and Attribution of Program Results. Office of the Comptroller General, Program Evaluation Branch, Ottawa.

About Single-Case Design, we recommend:

Grinnell, Richard M., Jr. (1997). Social Work Research & Evaluation: Quantitative and Qualitative Approaches, 5th Edition. Chapter 23: Case-Level Evaluation. Itasca, Illinois: F.E. Peacock Publishers, Inc.


SECTION 6

THE BASICS OF DATA ANALYSIS


SUMMARY

1- Analyzing Quantitative Data
   A- Organizing the Data
   B- Summarizing the Data

2- Analyzing Qualitative Data
   A- Classifying the Data
   B- Categorizing the Data

For Further Reading

ILLUSTRATED METHODOLOGICAL CHOICE

— Requirements of Qualitative Analysis


This section aims at equipping organizations to tackle another phase in evaluation: data analysis. Throughout the evaluation process, regardless of the approach used, data has been collected from registration records, questionnaires, interviews, and logbooks. These raw data do not tell us very much. Analyzing and interpreting the data bring out what they have to tell. Depending on the type of data involved, the analysis will be either qualitative or quantitative.

Data collected through quantitative means is numerical, either as collected or after processing. Our discussion will focus on a few basics in carrying out first-level analysis, known as descriptive analysis. The analysis of qualitative data deals with the contents of individual and group interviews, open questions on questionnaires, and information from a variety of documents, such as logbooks and official records. Our discussion of the analysis of qualitative data will include a significant methodological choice: the requirements of qualitative analysis.

1- Analyzing Quantitative Data

What do you do now that you're sitting with a pile of completed questionnaires in front of you? To get the raw data contained in these questionnaires (or any other kind of quantitative data) to tell you something, you need to organize and summarize the data.

A. ORGANIZING THE DATA

As mentioned elsewhere in this manual, the question of electronic processing of the data should be settled before using the questionnaire to avoid processing hassles. Failing to design the questionnaire with analysis in mind can lead to a lot of trouble when the time comes to analyze the data.


CODING THE DATA

You should recall that we left the right-hand column blank when we designed our questionnaire to allow for data coding. Coding is a means of assigning a symbol, usually a number, to each of the responses to the questions. The code is written in the space provided on each questionnaire.

Do not write in this space. For coding purposes only.

A- Information about your Child

1. Give the month and year of birth of the child for whom you currently receive services from Helping Hand.

   Month [__|__]   Year [__|__]

In this example, the month and year of birth of the child could be converted into a total number of months (e.g., 17 months), which would then be entered into the space provided. On the other hand, it is often useful to break ages down into ranges for classification. The following categories were used for the example above:

Category               Code
0-2 years                1
3-4 years                2
5 years                  3
6-9 years                4
10 years and older       5
No response              9
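For organizations comfortable with a little scripting, the conversion and binning just described can be automated. The following is a minimal sketch in Python; the function names and reference date are our own, hypothetical choices, and the category codes come from the table above:

```python
from datetime import date

def age_in_months(birth_year: int, birth_month: int, reference: date) -> int:
    """Convert a month/year of birth into a total age in months."""
    return (reference.year - birth_year) * 12 + (reference.month - birth_month)

def age_code(months: int) -> int:
    """Map an age in months onto the category codes defined above."""
    years = months // 12
    if years <= 2:
        return 1  # 0-2 years
    if years <= 4:
        return 2  # 3-4 years
    if years == 5:
        return 3  # 5 years
    if years <= 9:
        return 4  # 6-9 years
    return 5      # 10 years and older

# A child born in October 1996 is 17 months old in March 1998 -> code 1
print(age_code(age_in_months(1996, 10, date(1998, 3, 1))))
```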


Coding is sometimes straightforward and limited to two choices, as in the case of gender. In other cases, the categories have already been defined on the questionnaire; all you have to do is enter the corresponding code.

Do not write in this space. For coding purposes only.

2. In what kind of family does the child for whom you are receiving services from Helping Hand live?

   With both his natural parents [ ]
   With his mother [ ]
   With his father [ ]
   Joint custody [ ]
   Foster family [ ]

Category                         Code
With both his natural parents      1
With his mother                    2
With his father                    3
Joint custody                      4
Foster family                      5

A codebook is developed when the questionnaire is designed, serving as a guide during the analysis phase.


The codebook should provide at least the information shown below:

• question number
• wording of the question or an identifying abbreviation
• code to be used for each category of response

Question   Wording or        Code   Category of Response
Number     Abbreviation
1          Child's age        1     0-2 years
                              2     3-4 years
                              3     5 years
                              4     6-9 years
                              5     10 years and older
                              9     No response
2          Child's sex        1     Male
                              2     Female
3          Child's family     1     With both his natural parents
                              2     With his mother
                              3     With his father
                              4     Joint custody
                              5     Foster family

The codebook must be strictly adhered to if all of the questions are to be coded properly.
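A codebook can also live alongside the data as a small structure that a script consults during coding and verification. A minimal sketch, assuming the three questions above (the dictionary layout and function name are our own):

```python
# Hypothetical machine-readable version of the codebook above:
# question number -> (abbreviation, {code: category of response}).
CODEBOOK = {
    1: ("Child's age", {1: "0-2 years", 2: "3-4 years", 3: "5 years",
                        4: "6-9 years", 5: "10 years and older",
                        9: "No response"}),
    2: ("Child's sex", {1: "Male", 2: "Female"}),
    3: ("Child's family", {1: "With both his natural parents",
                           2: "With his mother", 3: "With his father",
                           4: "Joint custody", 5: "Foster family"}),
}

def is_valid(question: int, code: int) -> bool:
    """Report whether a recorded code is legal for a given question."""
    return code in CODEBOOK[question][1]

print(is_valid(2, 3))  # False: 3 is not a legal code for child's sex
```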

⇨ Be sure to put an identification number on each questionnaire for further reference and to ensure the confidentiality/anonymity of the data.


Codes should be kept simple; the choice depends on the type of response categories.

• The question has two possible responses.

5. Last summer, excluding your vacation, did you have trouble arranging child care for your child?

   Yes [ ]
   No  [ ]

If this type of question is repeated a number of times in the questionnaire, the responses can be consistently coded as:

1 = Yes
2 = No

• The question provides for answers of varying degree.

22. In your opinion, how useful were the workshops in helping you better understand your child's development?

   Very useful       [ ]
   Fairly useful     [ ]
   Not very useful   [ ]
   Not at all useful [ ]


All the answers to questions of this type would be coded according to the same pattern, with attention given to reversing the order when necessary:

1 = Very useful
2 = Fairly useful
3 = Not very useful
4 = Not at all useful

• There is no logical flow between the categories of answers.

27. In what type of family does the child who participated in the program live?

   With both his natural parents [ ]
   With his mother [ ]
   With his father [ ]
   Joint custody [ ]
   Foster family [ ]

While there is no strict order to follow, it's always better to use similar codes for similar categories.

• The answer is already numerical.

6. Over the course of the 10-week program, how many talks did you attend?

   I attended ____ talks.


In this case, the coding can be based on the numbers in the answer:

3 = 3 talks
2 = 2 talks
1 = 1 talk
0 = none (where zero here is a mathematical zero and not missing data)

The same treatment applies to questions such as:

4. How many children do you have? ____ children

• The question contains "I don't know" as a response category.

Conventionally, this category of response is coded 8 or 98.

• The question either doesn't apply or wasn't answered.

Conventionally, such cases are coded 9 or 99.

• The question is open.

24. Provide your comments or suggestions about the talks (schedules, themes, etc.).

The qualitative analysis of the data is carried out as indicated in the second part of this section.

Once all the questions have been coded, they can be processed.


PROCESSING THE DATA

Nowadays, data is only very rarely processed manually, except when there is very little of it. Equipped with a microcomputer and software such as Excel,3 just about any organization can inexpensively and easily process a mass of raw data.

On the other hand, if the organization coordinators want more in-depth analysis than provided for in this manual, they will need to call in resource people able to process the data with more powerful programs.

Let's assume that an organization is using Excel to process its quantitative data. The first step is to transfer the data onto a worksheet. Each worksheet column is for a question; each row is for the responses from one respondent. The code corresponding to the response is entered in the appropriate cell for each question.

Once all the data have been transcribed, you need to check for errors that may have occurred during coding or when inputting the data.

The following is an example of an Excel worksheet containing the data from six respondents for the first part of a questionnaire.

3 Excel is an application available for both the Windows and Macintosh environments. It is both easy to use and able to handle large volumes of data. Excel can perform a variety of calculations and produce graphs and charts that can be "cut-and-pasted" into the final evaluation report.


Question:          1    2    3    4    5    6    7    8
Respondent 001     2    1    99   4    3    2    1    1
Respondent 002     3    2    2    1    1    2    2    2
Respondent 003     3    3    3    4    1    2    98   -
Respondent 004     -    1    2    2    2    3    1    2
Respondent 005     2    2    2    2    4    2    2    2
Respondent 006     1    1    1    1    1    1    2    1

Question 2 in our example relates to respondent gender and is therefore coded: 1 = male; 2 = female. The cell corresponding to this item for respondent 003 (shaded) contains a 3, which must be an error. There is also a data recording error for respondent 004, since no code is entered for question 1 (shaded).
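This kind of error check is easy to mechanize. A minimal sketch, assuming the worksheet has been exported as one row of codes per respondent (None marks an empty cell; only question 2 is validated here, since its legal codes are known):

```python
# Rows transcribed from the worksheet above; None marks an empty cell.
rows = {
    "001": [2, 1, 99, 4, 3, 2, 1, 1],
    "002": [3, 2, 2, 1, 1, 2, 2, 2],
    "003": [3, 3, 3, 4, 1, 2, 98, None],
    "004": [None, 1, 2, 2, 2, 3, 1, 2],
    "005": [2, 2, 2, 2, 4, 2, 2, 2],
    "006": [1, 1, 1, 1, 1, 1, 2, 1],
}
VALID = {2: {1, 2}}  # question 2 (gender): 1 = male, 2 = female

for respondent, codes in rows.items():
    for question, code in enumerate(codes, start=1):
        if code is None:
            print(f"Respondent {respondent}: question {question} has no code")
        elif question in VALID and code not in VALID[question]:
            print(f"Respondent {respondent}: code {code} is invalid "
                  f"for question {question}")
```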

B- SUMMARIZING THE DATA

Now the data can be presented in a summarized, meaningful form. This is done with statistics. Our discussion will be limited to a few of the basic notions of descriptive statistics that organizations will be most likely to use to communicate the findings of their program evaluation.

MEASURES THAT DESCRIBE AND SUMMARIZE

Data can be described and summarized in three simple forms: frequency, percentage, and average.

• Frequency: The results for each answer to each question are compiled. This gives the number of times each possible answer to the question appears, across all the respondents.

In the example above, the following is the frequency for question 4 (6 respondents):


                        Frequency
1. (Very useful)            2
2. (Fairly useful)          2
3. (Not very useful)        1
4. (Not at all useful)      1
TOTAL                       6

• Percentage (%): To get the frequency to tell us more, we can calculate the percentage, which is the frequency of each response divided by the total number of respondents. In fact, it is much more interesting to learn that 51% of the respondents considered the workshops very useful and 31% fairly useful than that 23 people considered the workshops very useful and 14 fairly useful, and so on. To illustrate the point, we have chosen an example with more respondents, because calculating percentages for a small sample is not very meaningful.

                        Frequency   Percentage
1. (Very useful)            23        51 (23/45)
2. (Fairly useful)          14        31 (14/45)
3. (Not very useful)         5        11 (5/45)
4. (Not at all useful)       3         7 (3/45)
TOTAL                       45       100

• Average: This is the most widely used measure for numbers that are comparable. We all know what a class average means: it's the sum of the grades of all the students divided by the number of students. Similarly, you could calculate the average income


of participants, the average number of children per family, the average number of participants in the workshops or talks, and so on.

1. Attendance at talk 1       23 people
2. Attendance at talk 2       38 people
3. Attendance at talk 3       19 people
Average number attending      27 people (80/3)
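All three measures can be computed in a few lines. A minimal sketch (the answer codes are hypothetical; the attendance figures come from the example above):

```python
from collections import Counter

# Frequency and percentage for one question (hypothetical codes,
# six respondents).
answers = [1, 1, 2, 2, 3, 4]
frequency = Counter(answers)
total = len(answers)
for code in sorted(frequency):
    print(f"Code {code}: frequency {frequency[code]}, "
          f"{100 * frequency[code] / total:.0f}%")

# Average attendance at the three talks: (23 + 38 + 19) / 3 = 80/3, about 27.
attendance = [23, 38, 19]
print("Average attendance:", round(sum(attendance) / len(attendance)))
```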

WAYS OF VISUALLY DEPICTING DATA

Data can be presented in tables or graphs to make them easier to understand and more attractive to read.

Of course, you wouldn't want to present every question in a table; it would take up too much room. Evaluation reports should only contain the really meaningful aspects and those that help describe the participants, activities, and the like. Visual representation should make the text easier to understand, not make it cumbersome.

The table below illustrates the gender of individuals responding to a questionnaire.

Table 1
Respondent Gender

Gender     Frequency   Percentage
Males          19          42
Females        26          58
TOTAL          45         100

Normally, the frequency and percentage for each question are indicated in separate columns. The percentage column must always total 100 or a number close to it if the decimal places are rounded off.


Remember to number the tables and give them short but descriptive titles.

Presenting groups of responses, rather than each of them, in a table can often be a good idea when there is a large body of data.

For example, the ages of children can be grouped when compiling them from their registration forms to avoid having an overly long table (a line for each age: 3 years old, 4 years old, etc.). The groups or ranges must not overlap, so you would want to use groups such as 0-2 years old and 3-4 years old, and not 0-2 years old and 2-4 years old. The table below shows results given by age group.4

Table 2
Participant Age

Age                  Frequency   Percentage
0-2 years                 5           9
3-4 years                23          38
5 years                  14          23
6-9 years                10          17
10 years and older        8          13
TOTAL                    60         100

4 The categories presented in Table 2 can be used for perinatal studies and school services because they can be compared to existing statistics. The new Quebec family policy makes it especially interesting to highlight the 3-4 age group. By keeping 5-year-olds in a category of their own, they can easily be combined with older or younger groups, depending on the purpose.


Graphs and charts can also be used to illustrate data. As with tables, these figures should be numbered (but not in the same series as the tables) and given short but meaningful titles.

Bar graphs are most often used; they readily indicate if one category of response is greater than another.

Graph 1
Respondent Gender

[Bar graph: percentage (vertical axis, 0-100%) by gender (horizontal axis: Males, Females).]
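If the data already sit in a spreadsheet, Excel's chart tools can produce this graph directly; for script-minded readers, here is a minimal sketch using Python's matplotlib library (our own assumption, not a tool the manual requires; the percentages come from Table 1):

```python
import matplotlib.pyplot as plt

genders = ["Males", "Females"]
percentages = [42, 58]  # from Table 1

plt.bar(genders, percentages)
plt.xlabel("Gender")
plt.ylabel("Percentage")
plt.ylim(0, 100)
plt.title("Graph 1: Respondent Gender")
plt.savefig("graph1_respondent_gender.png")  # or plt.show()
```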

The histogram is a variant of the bar graph in which the bars are placed immediately adjacent to one another. It is used for data in adjoining ranges (such as participant income).


Graph 2
Participant Income

[Histogram: percentage (vertical axis, 0-100%) by income range (horizontal axis: $6,000 to $12,000; $12,001 to $18,000; $18,001 to $30,000).]

Our last chart type provides a striking display of data in circular fashion. To ensure that readers can take in the information at a glance, the percentages should be written in next to every slice in the pie chart; contrasting colors or patterns should be used for the segments.

Graph 3
Participant Age

[Pie chart with four segments: 4 years and younger; 5 to 7 years old; 8 to 10 years old; 11 years and older.]


2- Analyzing Qualitative Data

The analysis of qualitative data usually turns out to be much more involved and time-consuming than generally thought. Of course, a number of factors come into play, such as the nature of the information sought, the volume of material to be analyzed, the accuracy of the questions, and the range of topics dealt with. Analyzing the responses to an open question on a written questionnaire is quite different from analyzing the contents of an informal conversational interview or the contents of a logbook.

In fact, the scope of the task can be estimated from two aspects:

• the coverage of the information collected; • the volume of data.

The coverage refers to the depth of the information, whereas the volume refers to the number of themes and informants.

The Hope organization would like to have a clearer idea of the needs of parents who have experienced the death of a child. They have considered several different approaches to this needs assessment: meetings with hospital social workers, individual interviews with parents, and focus groups comprised of parents. After carefully weighing the alternatives, the organization opted for individual interviews with parents. Obviously, the number of parents interviewed will influence the quantity of information that will need to be analyzed. The nature of the information will impact on the scope of the analysis.


Because they had limited knowledge of what the parents were living through, the evaluators opted for depth interviewing. They decided to begin with a very general item, given below:

I would like you to tell me about the loss of your child.

Obviously, this kind of statement is bound to lead to very different responses from one parent to the next and to open up a wide variety of topics. The range of subjects they bring up will be as varied as their religious beliefs, the impact the death has had on their marital life, their relations with surviving children, their grieving, neighbourhood support, and so on. You can count on the analysis being difficult if many parents are interviewed.

Faced with this prospect, the organization decided to review its position. Since Hope's mandate lay with parent-child relations, the evaluators decided to go with a semi-structured interview bearing on the influence of the loss of their child on their relations with their other children. They decided to use a more focused question:

Could you tell me how the death of your child affected your relations with your other children?

It should be obvious that this question will elicit more specific information, which will be easier to analyze, even if many parents are interviewed.

Let's take a look at another example now. The Heads Up program carries out intervention with groups of mothers and children to enhance the educational skills of mothers. The two workers conducting this activity have been taking turns writing down their observations in a logbook. They enthusiastically spent hours writing in their observations and comments. Six months and 25 meetings later, they had filled up more than 100 pages. When the workers


started reading back over their observations, they were somewhat discouraged because they couldn't see what they could get from them.

In hindsight, it's easy to see that they could have made their task much simpler had they devised questions for the log at the outset. Did they want to find out:

• the number of participants at each meeting;
• how interested the mothers and children were in the activities;
• what they, as workers, felt about the activities;
• the behaviours of the mothers towards their children;
• the level of satisfaction of the mothers with respect to the meetings; or
• the nature of the exchanges between mothers during the meetings?

Logbooks can contain extensive information; to avoid having to wade through masses of dissimilar data that are hard to analyze, start off by selecting the kind of data you want to collect.

These examples illustrate that planning for data analysis should begin when data collection is being planned; the key issue is what kind of data should be collected. Once the information you are seeking has been collected, analysis per se can begin. Analysis of qualitative data is a demanding task that, as some say, is both a systematic process and an art. The process breaks down into three main steps:

• classifying the data;
• categorizing the data;
• interpreting the data.


A- Data Classification

Classifying data generally consists of three operations: structuring the data, selecting the relevant information, and coding.

Structuring the Data
The first operation consists basically in pulling together all the information collected that pertains to the research question. This can be observation notes, logbooks, responses to questionnaires, and interview transcripts. In other words, this is the time to make sure that all the information pertinent to the evaluation is there. To illustrate, coordinators might want to use the following information to evaluate participant satisfaction with a specific program: responses to a written questionnaire filled out by the participants, comments in logbooks, and informal comments taken down by program staff during breaks.

Selecting the Relevant Information
The next step is to cull the relevant information from all the data brought together. As the examples above illustrate, data selection begins when the research questions are determined and continues as the instruments for data collection are developed. Often, however, it is necessary to refine the selection once analysis per se begins. This is the stage when the relevant information must be culled from the rest.

In the preceding example, the logbooks dealt with topics other than participant satisfaction. The same holds true with informal participant comments: some might apply to participant satisfaction; others may be on altogether different subjects. Even when the information is collected during a structured interview, the respondent can broach subjects that are off the research topic, repeat himself, or provide unnecessary details. Instead of interrupting the person's flow, the


interviewer lets them talk on a bit, even somewhat off topic. During analysis, however, the extraneous material must be put aside, which is the purpose of this part of the process.

Coding Data
The third operation in the classification phase consists in breaking the information up into groups of similar data. The normal procedure used for this is coding segments of information according to their content. A code is a symbol assigned to a group of words that allows fast recognition. To illustrate the coding process, we will look at an example consisting of an excerpt from an interview with a mother involved in a Local Community Service Center (CLSC) program.

You notice that the mother touches on a number of topics during the interview. She talks about problems with her child, the intake process at the CLSC, her participation in activities, the services delivered by the educator, the effects of the intervention on her son, her level of satisfaction with respect to the services, and improvements that she would like to see.


Q. Could you tell me how you got into this program?

It was because of my little boy. I guess it was about 18 months ago when I realized that something was wrong with him. He just threw things around and wouldn't even talk. I was really getting worried 'cause I could see he was giving me a lot of trouble. One day, when I was talking to the woman across the way, she told me about her son and that she was in that program. She said, "you gotta get yourself some help." And I said, "but where?" And she said to go to the CLSC and talk to some woman or other — I can't remember her name — it's that little nurse. She's in charge of that program. So I finally called and told her about my problem and she told me to come down to the CLSC for an appointment. When I got there, we talked about everything and the stimulation program. I didn't know what that was, so she explained everything. Then she told me to fill out a paper and said that they would see if I could be accepted. Six or eight weeks later, I got a letter saying I was accepted. That's how I got into the program.

Q. Tell me about the activities that you take part in.

Sometimes it's just us parents together. (...) We talk about all kinds of things. In our group, she wrote down what we wanted to know, what we wanted to learn about the kids, and what kind of activities we could do, parents and kids together, or just the moms by themselves. (...) I always go to the parents' meetings, every time. It's a lot of fun and we're always giving each other advice. And we learn a lot. It does me good to be with other parents with the same problem as me. We encourage each other.

During the meetings, the kids are in the other room with an educator, who takes care of them. (...) The first time that I went, there was a group of us. Just women; the kids were off in the daycare with people taking care of them. They really liked that. Mine has changed a whole lot; since he's been in the program he's loads different. He's a lot better. And I'm glad about it.

Q. Could you tell me what changes you have seen in Tommy?

Little things in the way he acts. He used to throw things all over, but now that the educator has given me some tips, he listens better. Before, when his dad wasn't there, he wouldn't listen to his mom. Now, it's not the same. (...) He talks a lot better now, too. He's lots less annoying. He'd throw things around, slam cupboards and drawers. Mess up his bed. Now he's different. Before, he'd refuse to dress himself. It'd take a half hour, but now that's better. At the daycare, he puts on his boots by himself. He's sure changed.

Q. Are there other activities that you take part in?

There's home visits. They're really good. There should be more of them (...) because she comes every two weeks. (...) It'd be better at least once a week. That'd help more. She comes to see my boy and plays games with him or tells him a story or he tells her a story. She's really good and I like her a lot. She gives me good advice. Sometimes I get so discouraged, and then she helps me and encourages me. If I need to talk, she's always got the time to listen. I really like that woman a lot. And she listens to me and tells me what I could do. It's really a good program, but once every two weeks isn't enough. There should be more often.


To help in breaking the information up into like parts, the interview could be coded as shown below. You will see that some information has been removed during coding. In fact, coding very often serves as a second cut in selecting material. You should always keep a complete copy of the interview that you can refer to if need be.

Initial situation: It was because of my little boy. I guess it was about 18 months ago when I realized that something was wrong with him. He just threw things around and wouldn't even talk. I was really getting worried 'cause I could see he was giving me a lot of trouble.

Referral: One day, when I was talking to the woman across the way (...). She said, "you gotta get yourself some help." (...) go to the CLSC and talk to (...) that little nurse.

Program intake: (...) she told me to come down to the CLSC. (...) we talked about (...) the stimulation program. Then she told me to fill out a paper and said that they would see if I could be accepted. Six or eight weeks later, I got a letter saying I was accepted. That's how I got into the program.

Activity: Sometimes it's just us parents together. (...) We talk about all kinds of things. (...) we're always giving each other advice. And we learn a lot. (...) During the meetings, the kids are in the other room with an educator, who takes care of them.

Satisfaction with activities: I always go to the parents' meetings, every time. It's a lot of fun and we're always giving each other advice. And we learn a lot. It does me good to be with other parents with the same problem as me. We encourage each other.


Child's satisfaction: [Referring to activities for children] (...) They really liked that.

Intervention outcomes: Mine has changed a whole lot; since he's been in the program he's loads different. He's a lot better. And I'm glad about it. (What changes?) Little things in the way he acts. (...) he listens better. He's lots less annoying (...) he puts on his boots by himself.

Overall satisfaction with the program: [Referring to improvements in her son's behaviour] (...) I'm glad (...) It's really a good program.

Activity: There's home visits. They're really good. There should be more of them (...) because she comes every two weeks. (...) She comes to see my boy and plays games with him or tells him a story or he tells her a story. Sometimes I get so discouraged, and then she helps me and encourages me. (...) If I need to talk, she's always got the time to listen. She gives me good advice.

Opinion about the worker: She's really good and I like her a lot. (...) she's always got the time to listen. I really like that woman a lot.

Suggestion: (visits) once every two weeks isn't enough. There should be more often.

Coding is a data organization operation. Its complexity depends on the nature of the instruments used and the degree of knowledge of the subject at hand. If a written questionnaire to elicit short responses or an interview guide with specific questions is used, then coding is easier, because the information can be divided up according to


question. For example, if the preceding interview had been carried out using a highly structured interview guide, the following could have been the transcription of what was said.

Q. Could you tell me who referred you to this program?
A neighbour who lives across the way.

Q. What activities did you participate in during the program?
Parents' meetings and home visits every two weeks.

Q. What is your opinion of the program?
I think it's a very good program. I feel good about attending. And what's more, I really like the educator.

Q. Do you have any suggestions to make about the program?
The home visits should be more frequent than once every two weeks. It's not enough.

In this example, the coding will be much simpler because the contents are directly related to the question asked. In such situations, the codes are mostly determined from the outset, depending on the themes dealt with. In this particular case, the starting codes would be:

1) Referral
2) Program activities
3) Project assessment
4) Suggestions

A more in-depth analysis could lead to the codes being refined even more. Usually, the in-depth analysis occurs during the second step in the data-analysis process:


B- Categorizing Data

Coding the information puts similar data together in groups to get an overview, which is then used to condense the information by highlighting repetitive elements: this is categorizing the data. A parallel can be established between the categories built and the choice of answers in a closed-response format. To illustrate, a written questionnaire might offer the response categories shown below for a questionnaire on how a participant was referred to the program:

Who referred you to this program? (Check off one answer.)
— A neighbour
— A family member
— CLSC staff
— Hospital staff
— Other (specify:)

In using this type of question, you assume that you know the main possible answers. In qualitative research, the responses are not specified at the start: they must be constructed out of the information collected, based on the trends that appear. Let's use the same item (the referral source) to illustrate the kind of information that a qualitative approach can lead to. The inset below shows segments from interviews that were coded "Referral." The numbers in parentheses refer to the interviewees and are used to preserve anonymity.


Referral: One fine day, I went to see my pediatrician (...) after that appointment, he told me, "You're going to have to do something because your child has got a problem" and he suggested that I come here. (1)

Referral: Saint Justine's Hospital asked me to register for this program. (2)

Referral: Well, when I was in a convenience store, Linda, the owner, told me to call that lady. (3)

Referral: A few winters ago, I took my little boy skating and met another woman who was doing the same thing. Last winter, I ran into her at the drugstore and she said, "There are some groups with children; you should come." (4)

Referral: It was kind of a snowball effect. One of my friends told me about it, so I came. Then I recruited two other ladies. (5)

Referral: I saw an ad in the local newspaper. (6)

Referral: I was taking part in a community group, and they knew I had given birth. Michelle, who is also a member, told me that I was entitled to the service, because I'm on welfare. She told me to go and ask for an appointment. (7)

Referral: The CLSC talked to us about it. Once a nurse told us about it to see if we were interested in seeing the worker. (8)

Referral: A social worker said something to me about it. The teachers at the daycare also talked to me about it. (9)

Referral: Well, it was Caroline's friends. At least, I think it was them. They were in the program and took her once. Then she signed up for herself. (10)

Referral: A friend who was already in the program told me about it. (11)

Referral: The CLSC nurse told me about it because I was in the milk-orange-eggs program. (12)

Referral: Somebody at the CLSC. One of the nurses came and asked if we were interested in taking part in the program. (13)

Referral: One of the mothers that I met at the daycare told me about the program. She had already taken it with her daughter. She really encouraged me to register because it was a really good program. (14)


In reading these different quotes, you can pick out the various sources of referrals, which can be grouped into larger categories. To begin with, we can identify the following categories:

— hospital setting (1, 2)
— natural network: family, neighbours, friends (3, 4, 5, 10, 11, 14)
— workers from community organizations (7)
— CLSC practitioners (8, 9, 12, 13)
— advertising (6)

We could also opt for even larger categories such as:

— formal services network: hospitals, CLSC, community organizations (1, 2, 7, 8, 9, 12, 13)
— natural network (3, 4, 5, 10, 11, 14)
— direct contact with the program because of advertising (6)

Categorizing consists in grouping the various data based on their similarities, to bring out trends (similar vs. dissimilar) in the situations studied. This exercise in cross-sectional analysis, that is, comparing the responses obtained from the various interviewees on a specific theme (designated by a code), is repeated with all of the material. In certain descriptive studies, data analysis stops at this point: once the analysis has been carried out, the report is written and limited to describing the observed phenomena. More often, however, categorizing the data leads to the third step: interpretation.
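When the coded segments are numerous, even this grouping step can be scripted. A minimal sketch, assuming the "Referral" segments above have been stored by interviewee number (the two mappings mirror the category schemes just described):

```python
from collections import defaultdict

# First-level categories -> interviewee numbers, as grouped above.
CATEGORIES = {
    "hospital setting": [1, 2],
    "natural network": [3, 4, 5, 10, 11, 14],
    "community organization workers": [7],
    "CLSC practitioners": [8, 9, 12, 13],
    "advertising": [6],
}

# Mapping from the first-level categories to the larger scheme.
BROADER = {
    "hospital setting": "formal services network",
    "community organization workers": "formal services network",
    "CLSC practitioners": "formal services network",
    "natural network": "natural network",
    "advertising": "direct contact because of advertising",
}

broad = defaultdict(list)
for category, interviewees in CATEGORIES.items():
    broad[BROADER[category]].extend(interviewees)

for name, members in broad.items():
    print(f"{name}: {sorted(members)}")
```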


METHODOLOGICAL CHOICE
Requirements of Qualitative Analysis

Obviously, the analysis of qualitative data must meet the same requirements of scientific rigor as any other data analysis process. As a general rule, these requirements relate to the validity of its findings and the reliability of the techniques. These two parameters pose specific problems in qualitative analysis because of the flexibility of the procedures involved. In fact, qualitative analysis must be evaluated according to different criteria than quantitative analysis. The four criteria below can be used:

• Credibility: two conditions are necessary to meet this criterion:

1) the analysis must be conducted with the credibility of the results in mind;

2) the results must be approved by individuals who took part in the evaluation.

• Transferability: to meet this condition, you have to provide enough detail on the evaluation context for readers to judge to what extent the findings can be applied to other programs.

• Reliability: this criterion requires that the methodology be described in enough detail for an outsider to make a judgment about its scientific value.

• Validity: this criterion refers to an outside reader being able to make the link between the collected data and the interpretations provided. It's a case of ensuring that the conclusions drawn are based on the data and not on the evaluator's own biases.


3- Interpreting Quantitative and Qualitative Data

Basically, interpretation consists in attaching meaning and significance to the accumulated data, usually by establishing the linkages between information relating to different aspects. Classification and categorization involve procedures that are more technical; interpretation is harder to define. It's the facet of analysis that is more of an art based on the evaluator's ability to relate patterns, his creativity, and his awareness of the focus of the study.

In the case of quantitative data, interpretation begins deductively, with observations of a phenomenon brought out by the analysis. The tables and charts used to present quantitative data, and the linkages between them, enable us to interpret the data.

The following table provides information on the level of satisfaction of parents with the talks, based on gender. How can we interpret these data?

Table 3
Level of Satisfaction of Parents with Talks, according to Gender (N = 36)

LEVEL OF SATISFACTION      WOMEN           MEN            TOTAL
                           n      %        n      %       N      %
Very satisfied             8     28        4     50       12    33
Satisfied                 18     64        2     25       20    56
Dissatisfied               1      4        2     25        3     8
Very dissatisfied          1      4        0      0        1     3
TOTAL                     28    100        8    100       36   100


Looking at the TOTAL column in Table 3, we can see that one-third (33%) of the parents (men and women combined; N = 36) were very satisfied with the talks and more than half (56%) were satisfied with them. Only 8% were dissatisfied and just 3% very dissatisfied. Now let's look at the column for women. The percentages shown apply to the breakdown of responses from women and are calculated as: 8 women out of 28 were very satisfied (28%), 18 out of 28 were satisfied (64%), and so on. The column for men is set up the same way. If you look at the columns separately, you can see that more than one-fourth (28%) of women were very satisfied with the talks, while the majority (64%) were satisfied with them, and so on. But since we're dealing with percentages, we can also compare the responses of the women to the men. Looking across the "very satisfied" row, we can see that half (50%) of the men were very satisfied, but only just over one-fourth (28%) of the women were.

But let's not lose sight of the fact that many more women responded to the item than men (28 women/8 men). So 50% of the men means only 4. This example points out that you have to be careful in interpreting data and indicate the number of people the percentages refer to.
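The within-group percentages in Table 3 can be recomputed quickly, which also guards against the trap just mentioned by keeping the group sizes in view. A minimal sketch (counts taken from Table 3):

```python
# Counts from Table 3, by gender.
counts = {
    "Women": {"Very satisfied": 8, "Satisfied": 18,
              "Dissatisfied": 1, "Very dissatisfied": 1},
    "Men":   {"Very satisfied": 4, "Satisfied": 2,
              "Dissatisfied": 2, "Very dissatisfied": 0},
}

for group, levels in counts.items():
    n = sum(levels.values())  # 28 women, 8 men: keep the base in view
    for level, count in levels.items():
        print(f"{group}, {level}: {count}/{n} = {100 * count / n:.1f}%")
```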

In the case of qualitative data, interpretation often starts as an inductive process in which the relationships and linkages between data are observed. Phenomena are examined to determine what possible relationships they might have with one another.

If we take a second look at the example given above on the referral source and program type, we could ask ourselves what relationship, if any, might there be between the two. This type of relationship can be explored by constructing a cross classification. For example, let's suppose that the persons who provided the information labeled 3, 5, 6, 7, 9, 11, and 14 took part in programs put on by community organizations, while the others (1, 2, 4, 8, 10, 12, and 13) participated in programs offered by an institution in the health and


social services network. We could then construct a table that combines these two dimensions (the numbers in the table are the identification numbers for the interviewees).

Table 4
Relationship between the Referral Source and Program Type

Referral                        Institution of the Health       Community
                                and Social Services Network     Organization
Formal social network (n=7)     1, 2, 8, 12, 13                 7, 9
Natural social network (n=6)    4, 10                           3, 5, 11, 14
Direct contact (n=1)                                            6

This table shows the relationship between referral source and program type. Five out of the 7 persons referred by practitioners (formal social network) received services in institutions. Moreover, 4 out of the 6 referred by someone in their natural social network were in programs offered by community organizations. Since the number of people involved in the sample is small, you have to be very careful in interpreting this relationship. Nonetheless, the observation is interesting. The next step consists in determining the characteristics of the individuals who deviate from the main trend, that is, respondents 4 and 10, and 7 and 9.
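A cross classification like Table 4 can be built mechanically from the two codings. A minimal sketch, using the interviewee numbers above and the hypothetical program-type assignment stated in the text:

```python
from collections import defaultdict

referral = {  # referral source -> interviewees, from the categories above
    "Formal social network": [1, 2, 7, 8, 9, 12, 13],
    "Natural social network": [3, 4, 5, 10, 11, 14],
    "Direct contact": [6],
}
community = {3, 5, 6, 7, 9, 11, 14}  # hypothetical assignment from the text;
                                     # the rest took part in institutional programs

table = defaultdict(list)
for source, people in referral.items():
    for person in people:
        ptype = "Community organization" if person in community else "Institution"
        table[(source, ptype)].append(person)

for (source, ptype), people in sorted(table.items()):
    print(f"{source} / {ptype}: {people}")
```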

Let's carry our example a little further. After careful analysis, it becomes apparent that respondents 4 and 10 were taking part in a program offering specialized remedial language services that community organizations do not provide. Respondent 7 was also referred by a community-organization worker, while respondent 9 was referred by a daycare worker. Based on this information, a number of interesting trends appear:


• practitioners (formal social network) tend to refer people to institutions in the health and social services network;

• members of the natural social network refer people to community organizations when services are available;

• workers with community organizations (formal social network) seem to refer people to community organizations.

These observations could eventually be verified more completely using quantitative methods. This kind of table combining two types of information can have multiple uses when the relationship between two variables is being investigated. For example, the program coordinators using a written questionnaire on changes produced as a result of program participation might be interested in determining the relationship between changes mentioned by participants and how participants feel about the workers.

Sometimes your mind will wander down all sorts of unexpected avenues as you look at the data from a number of angles. This can trigger new ideas that suggest different perspectives in interpreting the data.

Although you can't count on these flashes as being everyday occurrences, you have to trust in your ability to find meaning in situations and never stop asking yourself if the information in front of you could be interpreted in another way.


FOR FURTHER READING (See also References at the end of the Manual)

These two references contain valuable information:

Social Research Methods: Qualitative and Quantitative Approaches. Chapter 12: Analyzing Quantitative Data; Chapter 16: Analyzing Qualitative Data. W. Lawrence Neuman (1997). 3rd Edition, Boston: Allyn & Bacon.

Social Work Research & Evaluation: Quantitative and Qualitative Approaches. Chapter 21: Quantitative Data Analysis; Chapter 22: Qualitative Data Analysis. Richard M. Grinnell, Jr. (1997). 5th Edition, Itasca, IL: F.E. Peacock Publishers, Inc.


SECTION 7

FORMULATING AN EVALUATION PLAN


SUMMARY

The Evaluation Plan

1- Describing the Program
2- Evaluating the Program
3- Using the Results
4- Ethical Considerations

ILLUSTRATED ETHICAL CHOICE

— Informed Consent and Respecting Confidentiality


As strange as it may seem, we are going to end this manual with a section on how to formulate an evaluation plan. We're closing the loop: now that you have the tools and techniques to evaluate your programs with different approaches, you'll be able to develop an evaluation plan.

An evaluation plan is useful for a number of reasons: it can be used to show sponsors how your organization plans to evaluate its program in order to improve its quality, and within the organization, it can serve to pull all the pieces of the evaluation together and act as a guide and checklist throughout the evaluation process. In this regard, we will also touch on developing a timetable and research budget, writing the report, and possible uses of the results. Furthermore, we will discuss a major ethical consideration, namely, obtaining the informed consent of participants and respecting the confidential nature of the information collected.

The Evaluation Plan

An evaluation plan doesn't need to be a long document. When required by sponsors, it rarely exceeds 10 or 15 pages. Synthesis, conciseness, and accuracy should be your watchwords.

An evaluation plan should cover four dimensions:

1- Describing the Program and Activities: their relevance, use, objectives, and functioning.

2- Evaluating the Program: the type of evaluation selected, the strategies adopted, the methods used and their limitations, roles, and partnerships, the schedule, and the budget.


3- Using the Results.

4- Ethical Considerations and Confidentiality.

1- DESCRIBING THE PROGRAM AND ACTIVITIES

In this first section, you should describe the program succinctly, covering both the program itself and its context. The program's objectives and how the program will function must be described in a way that highlights how the objectives will be achieved.

See Section 4 for details on how to formulate objectives.

Moreover, you should provide a rundown of information on the clientele, program personnel, and how the program functions.

You should also indicate how the program is relevant and useful. Depending on the program type, you could take one of several approaches.

• In the case of prestructured programs, such as Nobody's Perfect, relevance and usefulness have already been demonstrated. These arguments should be restated, emphasizing that the organization is applying the program as designed and to the appropriate clientele.

• In the case of other programs whose methods have earned general recognition for their results, existing literature in the field can be used (books, articles in trade journals, etc.). A synthesis of the information should make the scope of the program clear.

Reference can also be made to other programs offering a similar intervention in a similar population and which have been evaluated.


• Lastly, in the case of programs that represent new ways of doing things, you need to indicate the basis on which the decision to offer the program was made, as well as how and under what conditions the intervention will be validated.

2- EVALUATING THE PROGRAM

The second part of the plan is without a doubt the most important. It must deal with the overall process to be undertaken in evaluating the program.

⇨ The various aspects that go into deciding which evaluation approach to use are discussed in Section 2.

The Evaluation Type
This section explains why a particular type of evaluation was selected, based on the evaluation questions that must be answered.


Evaluation Process                                Corresponding Types of Evaluation

Evaluation of Client Needs and                    ⇨ Needs assessment
Program Feasibility                               ⇨ Feasibility assessment

Analysis of Program Articulation                  ⇨ Formative evaluation
                                                  ⇨ Used occasionally in summative evaluation

Study of the Characteristics of                   ⇨ Formative evaluation
Participants, Clienteles, and Users               ⇨ Used almost always in summative evaluation

Analysis of How Services Are Being                ⇨ Formative evaluation
Delivered or Activities Achieved

Analysis of Services Produced by                  ⇨ Formative evaluation
the Program (Outcomes)                            ⇨ Used occasionally in summative evaluation

Study of User Satisfaction                        ⇨ Formative evaluation
                                                  ⇨ Used occasionally in summative evaluation

Outcome Studies                                   ⇨ Summative evaluation


The Evaluation Strategies Chosen
Different evaluation strategies are possible, depending on the type of evaluation chosen, as shown in the examples throughout the manual. Each organization selects its strategies based on its requirements.

There are five different evaluation approaches when evaluating the outcomes of a program, depending on the type of program to evaluate, the information available on the situation prior to the program, and the resources available to the organization.

⇨ See Section 5.

The Tiny Tots organization wants to carry out a summative evaluation of its program. Project coordinators want to use a before-and-after design (pretest/posttest) to collect the observations of workers about the behavioural changes in the parents and children participating in the program. They also want to use the XXX standardized rating scale with the parents to measure parenting skills prior to and after the program. In addition, logbooks will be analyzed. Individual interviews will be carried out with certain parents at the end of the program. The organization indicates the reasons for its choices.

The important thing is being able to justify the choice made and to provide adequate information to convince the evaluation plan's audience that the choices are relevant.

The Selected Methods and their Limitations
This section should present the concrete aspects of the evaluation process:

• When should information be gathered during the program?
• Who will provide the information?
• Exactly what information do you want to get? Perceptions, observations, satisfaction?
• How will the information be collected? Focus groups, logbook analysis, registration records, and so on?


• What instruments will be used: in-house or existing? (When possible, current practice is to annex the instruments to the evaluation plan.)

• How will the information be treated and analyzed?

The Tiny Tots organization indicates that the information will be gathered from the workers once in the first session and once in the last session of the program. Workers' observations will focus on child development (motor and language skills) and behaviour in groups. Practitioners will have to complete an in-house observation checklist (appended to the evaluation plan). The checklist will have been previously tested, and the results will be analyzed by organization coordinators using the XXX computer program.

(...) On the other hand, the organization will be using humanities students from XXX University to analyze and process the XXX standardized rating scales.

This section should also make note of the limitations of the selected methods. What type of data can be collected with them? To what degree do the data reflect reality? Do the methods reveal participant progress? What points of view do they represent?

Roles and Partnerships
This section should indicate who is in charge of the evaluation, what everyone's roles are, and who the potential partners in the process are. Setting up a committee or team (with a limited number of members) in charge of the evaluation is an efficient way of working. One person should be responsible for overall coordination. Everything else depends on the organization's resources. It is a good idea, however, to have someone who is not directly associated with the program conduct the participant interviews. Moreover, if the organization doesn't have the required resources in-house, it shouldn't hesitate to seek outside assistance to ensure that the various instruments used can be adequately processed by computer.

Tiny Tots set up a committee to head up its program evaluation. This committee comprises a permanent staff member and two workers. One of these individuals is specifically responsible for how the evaluation is carried out and for writing the report. Two students have been hired to conduct the interviews with parents. Resource persons will be used to analyze and process the data because the organization doesn't have the appropriate resources.

Schedule and Budget
The schedule and research budget must not be neglected either. They are important if the evaluation proposal is to be presented to sponsors or funding agencies. They can also serve in mapping out the work and ensuring that it is carried out on time and within budget.

Drawing up a schedule consists in:

1) identifying each of the research steps;
2) specifying the time required for each step.

You need a clear description of the research chronology and the dates for carrying out each of the steps, taking into account, of course, that some steps must be carried out before or at the same time as others.

The schedule must be clear, realistic, and, above all, consistent with the evaluation proposal. Leave a little extra time in each step for manoeuvring room. You have to plan for the unexpected. Changes in the schedule during the program are almost guaranteed.


Schedule for the Tiny Tots Evaluation

(The original presents this as a chart: each activity's duration appears as a bar of dots running across the months January, March, May, September, and the following January.)

Activities, in chronological order:
— Research begins
— Develop data-collection tools
— Collect data from participants before the program
— Collect data from participants after the program
— Analyze logbooks
— Analyze data
— Discuss results with team and write report
— Disseminate and use results
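For organizations that keep their schedule in electronic form, the following Python sketch prints a simple text timeline of the same kind as the chart above. The month spans are invented for illustration, not read from the original chart; adjust them to your own plan.

# Illustrative only: the month spans below are invented, not taken from
# the chart above; replace them with your own dates.

MONTHS = ["January", "March", "May", "September", "January"]

# (activity, index of first month column, index of last month column)
schedule = [
    ("Research begins", 0, 0),
    ("Develop data-collection tools", 0, 1),
    ("Collect data before the program", 1, 1),
    ("Collect data after the program", 3, 3),
    ("Analyze logbooks", 3, 4),
    ("Analyze data", 3, 4),
    ("Discuss results and write report", 4, 4),
    ("Disseminate and use results", 4, 4),
]

print(" " * 36 + " ".join(f"{m:<9}" for m in MONTHS))
for activity, start, end in schedule:
    bar = " ".join("#" * 9 if start <= i <= end else "." * 9 for i in range(len(MONTHS)))
    print(f"{activity:<36}{bar}")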

The budget required to conduct the evaluation is added to the schedule. Costs follow directly from the proposed schedule and the resources required for each research step. The budget comprises two elements: the resources required and the financial resources potentially available.


Costs for the duration of the evaluation should be estimated and broken down by item:

— Human Resources
Estimate the number and types of positions required (coordinator, interviewers, statistical analyst, etc.). Indicate the salary and benefits for each position. Tasks may also be described. Show the linkage between the research steps and the tasks for each position.

— Material Resources
Allow for stationery (envelopes, etc.), copying (questionnaires, etc.), stamps for mailing questionnaires, purchasing audiotape cassettes, and so on. There are also royalties for rating scales, if applicable. Don't forget costs for publishing the report (word processing, revision, layout, printing, advertising, and distribution).

— Travel
You'll need to include travel costs for interviewers to carry out the interviews, for participants to attend group discussions, and so on.

— Office Space and Equipment
Calculate the monthly costs for a telephone line, long-distance calls, and computer time, if necessary. You should also indicate if you have access to the offices, telephones, and furniture of another organization.

Lastly, you need to take into account the sponsors' regulations. Generally, they set a maximum per budget year. You'll also have to check the allowable categories of expenditure.
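As a worked illustration of cost determination, here is a short Python sketch that totals a budget by category and checks it against a sponsor's annual ceiling. All amounts, items, and the ceiling are invented for the example.

# Illustrative only: all amounts and the sponsor ceiling are invented.
# Totals the evaluation budget by category and flags an overrun.

budget = {
    "Human resources": [
        ("Coordinator (part-time)", 8000),
        ("Interviewers", 3000),
    ],
    "Material resources": [
        ("Copying and stationery", 400),
        ("Report production", 900),
    ],
    "Travel": [("Interviewer and participant travel", 600)],
    "Office space and equipment": [("Telephone and long distance", 300)],
}

SPONSOR_CEILING = 15000  # hypothetical maximum allowed per budget year

total = 0
for category, items in budget.items():
    subtotal = sum(amount for _, amount in items)
    total += subtotal
    print(f"{category}: ${subtotal:,}")

print(f"Total: ${total:,}")
if total > SPONSOR_CEILING:
    print("Warning: total exceeds the sponsor's annual ceiling; revise the plan.")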


3 - USING THE EVALUATION RESULTS

Here you should indicate how you plan on disseminating and using the evaluation results. Sponsors normally require a research report. The report, however, must be seen as a tool that can serve the group and the community, if you know how to use it. There are also a number of ways of disseminating and using the evaluation results within the organization, among participants, and in the community.

The Evaluation Report
A written report is the traditional means for making known the results of an evaluation. It conveys how the program was evaluated, presents the judgments and conclusions about the program, and specifies modifications that could be implemented to improve quality.

For a report to be useful to anyone consulting it, it should have the following:

• a clear title that indicates the name of the evaluated program, the authors of the report, and the date of publication;

• a table of contents with page numbers for the various sections of the report;

• an executive summary, at the beginning of the report, presenting the main conclusions, observations, and recommendations;

• a structure broken down into chapters or sections;

• clear, simple language;

• tables and graphs that are clearly presented with appropriate titles and labels;

• complete and full references for text or information cited in the report.


Generally speaking, the report should contain the following sections:

— Description of the evaluation goal;
— Description of the context, organization, program, clientele, and so on;
— Description of the methodology used for collecting and analyzing data;
— Specification of the evaluation boundaries;
— Presentation of the main results;
— Conclusions and recommendations.

You have to be careful not to confuse observations and recommendations.

Excerpt from the Tiny Tots summative evaluation of its child stimulation workshops.

Observation: We noted progress in the children's language development, fine motor skills, and behaviour immediately after the program. Only psychomotor development remained at the same level as before the program.

Recommendation: More activities promoting psychomotor development of children should be offered in the future.

Other Ways of Disseminating Evaluation Results
Far from being the end of the evaluation process, the results can serve as a springboard for the organization to further its work. This requires making full use of evaluation results and ensuring that they are widely disseminated. What an organization learns during an evaluation, the lessons that it draws from it, and the proposed program improvements are just as important as the program's achievements.


The participative process begun (see Section 2 herein) should be continued when disseminating and using the results so that all stakeholders can learn from the evaluation. When properly presented and used, evaluation results can promote dialogue between program coordinators, staff, volunteers, and participants. Ultimately, evaluation results must be instruments of change to add value to the program.

Evaluation results can be disseminated to and used by:

✓ all program staff, in meetings for discussing evaluation contents. If the participative evaluation was carried out properly, all the information will have been shared with the practitioners throughout the process. They will also have been asked about their ideas, concerns, and reactions. When you inform practitioners about the evaluation results, keep in mind that they play a prime role in the program and that the program (not the people) was under evaluation.5

✓ coordinators, managers, and volunteers, so that they can share in what has been learned in the program;

✓ program participants, in assemblies. Participants have been involved in the evaluation process and should be apprised of its outcomes;

✓ the organization sponsoring the program;

✓ the local community (neighbourhood, town), in order to make known the organization's program and work.

5 See the Introductory Manual for more about this topic.


The evaluation results should not be confined to a massive volume. Make them accessible in summaries or pull-out summary tables, or publish some of the most relevant items. Here are a few ways you can disseminate and use your results:

• public meetings;
• postings in the organization's offices;
• articles in the local newspaper;
• a short summary of the results sent to participants, and so on.

The important thing to remember is that the last place that the results should be found...is on the library shelf!

4 - ETHICAL CONSIDERATIONS

Lastly, you need to show how the evaluation process adheres to ethical standards.

ETHICAL CHOICES

Informed Consent and Respecting Confidentiality
In practice, practitioners are generally very careful to respect the needs and the rights of participants. They conduct interviews under conditions that ensure the inviolability of the person and maintain records so as to preserve participant privacy.

Ethical rules affecting the rights of the individual (among them, the right to receive services), freedom of choice, and inviolability must also be observed in carrying out evaluations.

This concern should be part of the entire data-collection process, evidenced by actions, attitudes, and commitments. The following are some guidelines or "golden rules" for ethics in evaluation.


Before Collecting Information
• Inform all participants and obtain their informed consent and voluntary participation without pressuring them.
• You have to accept that some people will refuse to take part or will withdraw from the process.

While Collecting Information
• Protect the inviolability and privacy of participants.
• Have consideration for the participants.
• Do not deprive participants in the evaluation of advantages that others enjoy, or vice versa (such as evaluation participants receiving services under the program that are not offered to others).

After Collecting Information
• The data must be kept anonymous and confidential.

Informing the Respondent and Obtaining his Consent
When you ask someone to participate in the program evaluation, you have to inform him about aspects that might influence his decision to participate. The individual must know the following if he is to make an informed decision:

• the evaluation subject, its goal, and its methodology;
• the advantages or benefits of taking part in the evaluation;
• any risks or consequences to physical health or psychological well-being;
• the steps for ensuring confidentiality;
• the length of the respondent's participation;
• his right to withdraw from the evaluation process at any time and without prejudice.

While you may be able to get verbal consent, it is usually better to get written consent by having the participant sign a consent form. This procedure ensures that the individual is aware of the conditions affecting the evaluation process and that his participation is voluntary.


Of course, you cannot pressure anyone to take part in the evaluation and you must allow them reasonable time to think over their possible participation. Giving the individual a copy of the signed consent form can help reassure him.

The consent form is essential:

1) if you need to obtain information contained in the person's file;
2) if you want to observe someone during the program;
3) if the evaluation involves children or the mentally handicapped. In such cases, the permission of the individual's tutor, guardian, or curator is required.


Consent Form

I agree to take part in, or agree to let my child ( ) take part in, the evaluation being carried out by the Helping Hand group. This will include an interview that should last about one hour.

I understand that I may terminate my participation in this process at any time and that I am free to answer or not answer any question that I might be asked. Withdrawing from the evaluation will in no way affect my rights to receive services from Helping Hand.

The information that I provide will remain confidential and be used solely for the purposes of this evaluation. In no event shall it be associated with either my name or that of my child.

Name: Signature: Date:

Ensuring Data Confidentiality and Anonymity
However data are received (such as recorded interviews or written questionnaires), their confidentiality must always be protected and the anonymity of respondents guaranteed.

You must ask for and get the respondent's permission before recording an interview and explain the measures that will be used to protect his anonymity and the confidentiality of the information provided.

"Whatever transpires during this interview is strictly confidential. No one will be able to identify you or whatever you say. Your name will appear nowhere in connection with this information, including in-house reports and the final report. "


Of course, you'll need names, addresses, and/or telephone numbers to get in touch with people. This kind of information must also be subject to strict security measures to protect respondent confidentiality and anonymity. Each respondent should be promptly assigned a code that can be used for identifying interviews and questionnaires. Identification codes must be kept in a separate file and protected from theft, copying, interception, and accidental dissemination.
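The following Python sketch illustrates the principle described above: each respondent is assigned a code, the code-to-identity key is written to its own file (to be stored separately and securely), and the data files carry only the code. The names, numbers, and file names are invented for illustration.

# Illustrative only: names, numbers, and file names are invented.
# The key file linking codes to identities is written separately from the
# data file, which carries only the anonymous code.
import csv

respondents = [("Jane Doe", "555-0101"), ("John Roe", "555-0102")]

# Assign each respondent a code such as R001, R002, ...
key = {f"R{i + 1:03d}": person for i, person in enumerate(respondents)}

# Store this file apart from the data, under strict security measures.
with open("identification_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "name", "telephone"])
    for code, (name, phone) in key.items():
        writer.writerow([code, name, phone])

# Interview and questionnaire records carry only the code, never the name.
with open("interview_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "response"])
    writer.writerow(["R001", "sample answer"])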


REFERENCES

Alter, C. and W. Evens (1990). Evaluating Your Practice: A Guide to Self-Assessment. New York, Springer Publishing Co.

Anderson, S.B. and S. Ball (1978). The Profession and Practice of Program Evaluation. San Francisco, Jossey-Bass Publ.

Angers, M. (1992). Initiation pratique à la méthodologie des sciences humaines. Montréal, Les éditions de la Chenelière inc.

Association des centres jeunesse du Québec (1997). Répertoire des outils cliniques standardisés utilisés dans les centres jeunesse. Montréal, ACJQ.

Austin, M.J., G. Cox, N. Gottlieb, D.J. Hawkins, J.M. Kruzich and R. Rauch (1982). Evaluating your Agency's Programs. Beverly Hills, Sage Publications.

Barnsley, J. and D. Ellis (1992). Research for Change: Participatory Action Research for Community Groups. Vancouver, The Women's Research Centre.

Bingham, R.D. and C.L. Felbinger (1989). Evaluation in Practice: A Methodological Approach. New York, Longman.

Boruch, R.F. (1976). "On Common Contentions About Randomized Field Experiments", in Evaluation Studies Review Annual, edited by Gene V. Glass, Vol. 1, Newbury Park, Sage Publications.


Brinkerhoff, R.O. et al. (1983). Program Evaluation: A Practitioner's Guide for Trainers and Educators. Boston, Kluwer-Nijhoff Publishing.

Brochu, C., L. Denhez with L. Gauvin and D. Nadeau (1991). Le projet-pilote québécois "Y'a personne de parfait". Trois-Rivières, UQTR.

Chambers, D.E., K. Wedel and M.K. Rodwell (1992). Evaluating Social Programs. Toronto, Allyn and Bacon.

Contandriopoulos, A.-P., F. Champagne, L. Potvin, J.-L. Denis and P. Boyle (1990). Savoir préparer une recherche. La définir, la structurer, la financer. Montréal, Les Presses de l'Université de Montréal.

Cook, T.D. and D.T. Campbell (1986). Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago, Rand-McNally.

Cronbach, L.J. (1982). Designing Evaluations of Educational and Social Programs. San Francisco, Jossey-Bass Publ.

Denham, D. and J. Gillespie (n.d.). Guide d'évaluation de projet à l'usage des consultants en programme. Ottawa, Santé Canada, Direction de la santé de la population.

Deslauriers, J.-P. (1991). Recherche qualitative. Guide pratique. Montréal, McGraw-Hill.

Dubé, N., F. Maltais and C. Paquet (1995). Félicitations pour votre beau programme! Guide pour bâtir un projet ou un programme. Gaspésie-Îles-de-la-Madeleine, Régie régionale de la santé et des services sociaux, Direction de la santé publique.


Ellis, D., G. Reid and J. Barnsley (1990). Keeping on Track: An Evaluation Guide for Community Groups. Vancouver, The Women's Research Centre.

Fischer, J. and K. Corcoran (1994). Measures for Clinical Practice: A Sourcebook. Volume 1: Couples, Families and Children. Volume 2: Adults. 2nd edition. New York, The Free Press, Macmillan, Inc.

Fournier, D.M. (editor) (1995). Reasoning in Evaluation: Inferential Links and Leaps. San Francisco, Jossey-Bass Publ.

Gauthier, B. (editor) (1984). Recherche sociale. De la problématique à la collecte des données. Sillery, Presses de l'Université du Québec.

Gravel, R.J. (1983). Guide méthodologique de la recherche. Sillery, Presses de l'Université du Québec.

Grinnell, R.M. Jr. (1997). Social Work Research & Evaluation. Quantitative and Qualitative Approaches. 5th edition, Itasca, F.E. Peacock Publishers, Inc.

Guba, E.G. and Y.S. Lincoln (1981). Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. San Francisco, Jossey-Bass Publ.

Guba, E.G. and Y.S. Lincoln (1989). Fourth Generation Evaluation. Newbury Park, Sage Publications.

Hébert, Y.M. (1986). "Naturalistic Evaluation in Practice: A Case Study", in D.D. Williams (ed.), Naturalistic Evaluation, New Directions for Program Evaluation. San Francisco, Jossey-Bass Publ.


Holt, J.D. (1993). How about ... Evaluation! A Handbook about Project Self Evaluation for First Nations and Inuit Communities. Ottawa, Department of National Health and Welfare's Medical Services Branch, Mental Health Advisory Services.

Kerlinger, F.N. (1979). Behavioural Research: A Conceptual Approach. New York, Holt, Rinehart and Winston.

Kettner, P.M., R.M. Moroney and L.L. Martin (1990). Designing and Managing Programs. An Effectiveness-Based Approach. Newbury Park, Sage Publications.

Krueger, R.A. (1994). Focus Groups: A Practical Guide for Applied Research. 2nd edition, Thousand Oaks, Sage Publications.

Ladouceur, R. and G. Bégin (1986). Protocoles de recherche en sciences appliquées et fondamentales. St-Hyacinthe, Edisem.

Lamoureux, A. (1995). Recherche et méthodologie en sciences humaines. Laval, Éditions Études Vivantes.

Lecomte, R. and L. Rutman (editors) (1982). Introduction aux méthodes de recherche évaluative. Ottawa, Université de Carleton. Distributed by Les Presses de l'Université Laval.

Létourneau, J. (1989). Le coffre à outils du chercheur débutant. Guide d'initiation au travail intellectuel. Toronto, Oxford University Press.

Love, A.J. (editor) (1991). Evaluation Methods Sourcebook. Ottawa, Canadian Evaluation Society.

Lusthaus, C., R. Browne, F. Clark and A. Fochs Heller (1995). Guide d'auto-évaluation de l'Institut d'études à la retraite. Montréal, McGill University.


Marsden, D. and P. Oakley (1990). Evaluating Social Development Projects. Oxfam Development Guidelines.

Massé, R. (1993). "Réflexions anthropologiques sur la fétichisation des méthodes en évaluation", in L'évaluation sociale: savoirs, éthique, méthodes. Proceedings from the 59th Congress of ACSALF, edited by Judith Légaré and Andrée Demers. Laval, Éditions du Méridien, p. 211-237.

Mayer, R. and F. Ouellet (1991). Méthodologie de recherche pour les intervenants sociaux. Boucherville, Gaétan Morin éditeur. (A second revised and improved edition is being prepared.)

Meyers, W.R. (1981). The Evaluation Enterprise. San Francisco, Jossey-Bass Publ.

Miles, M.B. and A.M. Huberman (1994). Qualitative Data Analysis: A Sourcebook of New Methods. 2nd edition. Thousand Oaks, Sage Publications.

Moreau, J. (1996). "Se donner des instruments d'évaluation", in Les nouvelles du Réseau conseil interdisciplinaire du Québec. Vol. 2, no. 1, April, p. 5-10.

Neuman, W.L. (1997). Social Research Methods. Qualitative and Quantitative Approaches. 3rd edition, Boston, Allyn & Bacon.

Pardeck, J.T. (1996). Social Work Practice. An Ecological Approach. Westport, Connecticut, London, Auburn House.

Patton, M.Q. (1980). Qualitative Evaluation and Research Methods. 2nd edition. Beverly Hills, Sage Publications.


Pietrzak, J., M. Ramler, T. Renner, L. Ford and N. Gilbert (1990). Practical Program Evaluation. Examples from Child Abuse Prevention. Newbury Park, Sage Publications.

Poisson, Y. (1991). La recherche qualitative en éducation. Sillery, Presses de l'Université du Québec.

Posavac, E.J. and R.G. Carey (1989). Program Evaluation: Methods and Case Studies. 3rd edition. Englewood Cliffs, N.J., Prentice-Hall.

Rossi, P.H. (1982). Standards for Evaluation Practice. San Francisco, Jossey-Bass Publ.

Rossi, P.H. and H.E. Freeman (1993). Evaluation: A Systematic Approach. 5th edition. Newbury Park, Sage Publications.

Seidl, F.W. (1995). "Program Evaluation", in Encyclopedia of Social Work, 19th edition, Washington, NASW.

Simard, G. (1989). La méthode du Focus Group. Laval, Mondia éditeur.

Smith, M.F. (1989). Evaluability Assessment: A Practical Approach. Boston, Kluwer Academic; Norwell, Mass., Distributors for North America Kluwer Academic Publishers.

Touliatos, J., B.F. Perlmutter and M.A. Straus (editors) (1990). Handbook of Family Measurement Techniques. Newbury Park, Sage Publications.

Tousignant, R. and D. Morissette (1990). Les principes de la mesure et de l'évaluation des apprentissages. Boucherville, Gaétan Morin éditeur.


Treasury Board of Canada (1991). Program Evaluation Methods. Measurement and Attribution of Program Results. Office of the Comptroller General, Program Evaluation Branch, Ottawa.

Tremblay, A. (1991). Sondages - histoire, pratique et analyse. Boucherville, Gaétan Morin éditeur.

VanderPlaat, M.M.L. (1989). Nobody's Perfect: Process and Impact Evaluation Report. Ottawa, Health and Welfare Canada.

Weiss, H.B. and F.H. Jacobs (editors) (1988). Evaluating family programs. New York, Aldine de Gruyter.

Yin, R.K. (1985). Case Study Research: Design and Methods. Beverly Hills, Sage Publications.


APPENDIX

Useful Measurement Instruments


A number of measurement tools can be used by practitioners to evaluate the impact of a program. Several sourcebooks describe the instruments available to social practitioners and evaluators and explain how to use them.

The following are some examples:

Fischer, Joel and Kevin Corcoran (1994). Measures for Clinical Practice: A Sourcebook. Volume 1: Couples, Families and Children. Volume 2: Adults. 2nd edition. New York, The Free Press, Macmillan, Inc.

Touliatos, John, Barry F. Perlmutter and Murray A. Straus (Editors) (1990). Handbook of Family Measurement Techniques. Sage Publications, Inc.

In French:

Répertoire des outils cliniques standardisés utilisés dans les centres jeunesse. Association des centres jeunesse du Québec, Montréal, 1997.

We have selected a few instruments to describe briefly. Some of these instruments can be used to evaluate parenting skills and social support, while others serve to measure child development.

Before we do that, we wish to issue an important warning to practitioners using these tools.


Using standardized instruments can give practitioners a false sense of security and should at no time be considered a substitute for practitioner judgment. Instruments should be seen as complements to other methods, such as interviews, direct observations, and reports to others.

Caution must therefore be exercised because:

• Standardized instruments may have good psychometric qualities but may not be appropriate for measuring change;

• Growth evaluation is not linear, and it is not always easy to get a clear, overall picture of it;

• Instruments designed for diagnostic purposes are not inherently appropriate for producing intervention plans;

• Few instruments can be used alone without yielding perspectives or hypotheses that are too narrow;

• Few instruments focus on the strengths and skills of youth and their families;

• Using standardized tools normed on a "normal" population can skew the definition of objectives for high-need clienteles;

• The professional must have the training required to administer and score the instrument properly and be able to interpret it rigorously;

• Many instruments (especially those for early childhood and parents) are based on observation. Observation requires sound standards, good skills, and proper training to yield appropriate levels of objectivity and coherence.

(Source: Répertoire des outils cliniques standardisés utilisés dans les centres jeunesse, Association des centres jeunesse du Québec, 1997)


1- EVALUATING PARENTING SKILLS

Parenting Stress Index (PSI)
The PSI, developed by Abidin, is for parents of children ages 0 to 10. It is used to determine the overall stress level of parents. The questionnaire contains 101 items broken down into two general areas of stress, covering a total of 13 subscales.

Child characteristics: Child Adaptability/Plasticity (e.g. "My child gets upset over the smallest thing."); Acceptability of Child to Parents (e.g. "My child doesn't seem to learn as quickly as most children."); Child Demandingness/Degree of Bother (e.g. "My child is always hanging on me."); Child Mood; Child Distractibility/Activity; Child Reinforces Parents.

Parent characteristics: Depression, Unhappiness, Guilt (e.g. "There are quite a few things that bother me about life."); Parents' Sense of Competence (e.g. "Being a parent is harder than I thought."); Parents' Attachment (e.g. "It takes a long time for parents to develop close, warm feelings for their children."); Restrictions Imposed by Parenting Role; Social Isolation; Relationship with Spouse; Parental Health.

The PSI is designed to be self-administered by the parent. The instrument is accompanied by a manual explaining the administration procedures and suggested interpretation of scores for each subscale.
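As a general illustration of how responses to a standardized scale are grouped into subscale scores, here is a Python sketch. The items, groupings, and answers are invented; this is not the PSI's actual content, scoring rules, or norms, which are set out in the copyrighted manual.

# Illustrative only: NOT the PSI's actual items, groupings, scoring rules,
# or norms. Shows how item responses are summed into subscale scores.

responses = {1: 4, 2: 2, 3: 5, 4: 1, 5: 3, 6: 4}  # item number -> answer (1-5)

subscales = {  # hypothetical grouping of items into subscales
    "Child Mood": [1, 3, 5],
    "Parental Health": [2, 4, 6],
}

for name, items in subscales.items():
    score = sum(responses[i] for i in items)
    print(f"{name}: {score}")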

• Reference: Abidin, Richard R. (1995). Parenting Stress Index: Professional Manual. Psychological Assessment Resources.

• Availability: Richard R. Abidin, Institute of Clinical Psychology, University of Virginia, Charlottesville, Virginia, U.S. 22903, or Pediatric Psychology Press, 320 Terrell Road West, Charlottesville, Virginia, U.S. 22901.


2- CHILD DEVELOPMENT SCALES

The Preschool Behavior Rating Scale
This instrument was developed by W.R. Barker and A.M. Doeff. Its objective is to identify children in preschool settings who show developmental problems. It also enables the monitoring of progress over time in three specific areas of behavioral skill: psychomotor, cognitive, and social.

Five dimensions are treated: Coordination (e.g. Gross Motor, Fine Motor), Expressive Language (e.g. Vocabulary, Grammar), Receptive Language (e.g. Story Listening, Memory), Environmental Adaptation (e.g. Organization, Initiative), and Social Relations (e.g. Cooperation, Consideration of Others).

The instrument is designed to be completed by childcare workers. They directly observe children's concrete behaviors and note their observations over a period of time. A written manual explains the administration and scoring of the instrument.

• Reference: Barker, William F. and Annick M. Doeff (1980). Preschool Behavior Rating Scale: Administration and Rating Manual. New York, NY: Child Welfare League of America.

• Availability: Child Welfare League of America, c/o CSSC, 300 Raritan Center Parkway, Edison, NJ, U.S. 08818.

The Child Behavior Checklist (CBCL)
The Child Behavior Checklist, developed by Achenbach, is meant to facilitate the choice of appropriate services and to measure behavioral changes due to interventions.


The instrument consists of 118 behavior-problem items in such areas as socialization, emotional state, and destructive behaviour. These items are organized into separate profiles for various sex-age groups. The instrument also contains seven social-competency items.

The CBCL is a relatively easy-to-administer instrument for parents as well as childcare workers and practitioners. A computer program exists for automated scoring.

• Reference: Achenbach, Thomas M. (1991). Manual for the Child Behavior Checklist. Department of Psychiatry, University of Vermont, Sage Publications.

• Availability: Child, Adolescent, Family and Community Psychiatry, University of Vermont, 1 South Prospect Street, Burlington, Vermont, U.S. 05401.

The Eyberg Child Behavior Inventory
Developed by Sheila Eyberg, this instrument measures conduct-problem behaviours in children between the ages of 2 and 17. It can be used to screen children with or without conduct problems and to measure change in response to interventions.

It consists of 36 items, measuring a range of problem behaviors including aggression toward others (e.g. "Verbally fights with friends own age."), noncompliance (e.g. "Refuses to go to bed on time."), temper tantrums (e.g. "Has temper tantrums"), and disruptive and annoying behaviors.

This instrument is designed to be used by parents.


• Reference: Burns, G.L. and D.R. Patterson (1990). "Conduct problem behaviors in a stratified random sample of children and adolescents: New standardization data on the Eyberg Child Behavior Inventory", Psychological Assessment, 2, 391-397.

• Availability: Dr. Sheila Eyberg, Department of Clinical and Health Psychology, University of Florida, Box J-165 HSC, Gainesville, FL, U.S. 32610.



To order:

Centre de recherche sur les services communautaires, Université Laval, Bureau 2446, Pavillon Charles-De Koninck, Québec (Québec) G1K 7T4

The series «Program Evaluation for Organizations under CAPC (Community Action Program for Children)» comprises three volumes:

1. Introductory Manual
2. Evaluation Tools for Enhancing Program Quality
3. Presentation of Evaluation Guides

The production of these three documents has been subsidized by the Community Action Program for Children, Health Canada, with the agreement of the Province of Quebec.

Telephone: (418) 656-2674
Fax: (418) 656-7787
E-mail: crsc@crsc.ulaval.ca
W3: http://www.crsc.ulaval.ca