CHILD AND FAMILY DISASTER RESEARCH TRAINING AND EDUCATION
Federal Sponsors
NIMH: National Institute of Mental Health
NINR: National Institute of Nursing Research
SAMHSA: Substance Abuse and Mental Health Services Administration
Principal Investigators
Betty Pfefferbaum, MD, JD University of Oklahoma Health Sciences Center
Alan M. Steinberg, PhD University of California, Los Angeles
Robert S. Pynoos, MD, MPH University of California, Los Angeles
John Fairbank, PhD Duke University
Evaluating Disaster Mental Health Programs
Part I
Clark Johnson, Ph.D.
Adapted and modified from materials prepared by: Fran Norris, Ph.D., Craig Rosen, Ph.D., and Helena Young, Ph.D., National Center for PTSD
Primary sources for presentation
Owen, J.M. (2007). Program Evaluation: Forms and Approaches. New York: Guilford Press.
Rosen, C., Young, H., & Norris, F. (2006). On a road paved with good intentions, you still need a compass: Monitoring and evaluating disaster mental health services. In C. Ritchie, P. Watson, & M. Friedman (Eds.), Mental health intervention following disasters or mass violence (pp. 206-223). New York: Guilford Press.
Learning Objectives
After completing this module you will be able to:
• Identify evaluation methods that support the intervention program from conception to outcome
• Engage in evaluation activities prior to a disaster
• Recognize the barriers and challenges in conducting evaluations of disaster mental health programs
• Understand the crucial role of both community and agency stakeholders as key informants and participants in all evaluation activities
Let’s start with your experience
Give an example of a program evaluation (past, present, or future), or of a program you wish would be evaluated!
Please focus on:
• What is being evaluated?
• Why? What is the objective of this evaluation, and what “product” should it generate?
• How? What method(s) will be used?
Evaluation: Traditional Perspective
Program evaluation as a “judgment of worth”
• How good is this program?
• Did the program work?
• Was the program worthwhile from a monetary perspective?
Logic of Evaluation
Establish criteria of worth
• On what dimensions must the evaluand do well?
Construct standards
• How well should the evaluand perform?
Measure performance and compare with standards
• How well did the evaluand perform?
Synthesize and integrate the evidence
Steps in Conducting Program Evaluation
1. Engage the stakeholders
2. Describe how the program works
3. Articulate evaluation questions & design
4. Gather credible evidence
5. Justify conclusions
6. Share results
Evaluation: Global Perspective
Before
• What is needed?
• How does this program meet these needs?
During
• What is happening in this program?
• How can we improve this program?
After
• How good is this program?
• Did the program work?
Categories of Evaluative Inquiry
Proactive
• Guides early planning so that it incorporates the views of stakeholders and the accumulated knowledge from previous work in the field
Clarificative
• Clarifies both the program’s processes and objectives, making program assumptions explicit
Interactive
• Think of this as evaluation designed to enable the program to make “mid-course corrections”
Impact
• The “traditional” evaluation category
Proactive Evaluation
Purpose: Synthesis
• What is already known should influence action.
Typical Issues:
• What is the “need”?
• What is known about this problem (experience, relevant literature, conventional wisdom)?
• What is recognized as best practice in this area?
• Who are the stakeholders, and how do their perspectives differ?
Engaging Stakeholders
Who are the “stakeholders?”
• program leaders and staff
• communities who are served by the program
• funding and administrative agencies
Identifying and engaging stakeholders helps to create a sense of ownership by ensuring that their perspectives are understood and that essential elements of the program are not being ignored
However, it is also important to identify the primary client at the start of the process: Who will “own” the data, and who gets to put the “spin” on results?
How Stakeholders are Engaged
Evaluators often begin by asking:
• What will this evaluation do for you?
• What is it that you want to know?
• Who do you have to answer to?
• What does that mandating authority care about?
Evaluators often invite discussion about immediate, intermediate and long-term concerns
Often evaluators explore policies the stakeholder is attempting to inform or influence and incorporate these choices into the design
Clarificative Evaluation
Purpose: Clarification
• Define (make explicit) the internal structure and functioning of an intervention or program.
Typical Issues:
• Define the program: outcomes, rationale, methods
• How is the program designed to achieve its outcomes?
• Is the program plausible?
Interactive Evaluation
Purpose: Improvement
• Assist with ongoing service provision and structural arrangements, with a strong emphasis on process
Typical Issues:
• What is this program trying to achieve?
• Is the delivery working? Is it consistent with the program plan?
• How could the delivery be changed to maximize efficiency and effectiveness?
• Is the program reaching the target population?
• Is there a site that needs attention to ensure effective delivery?
Impact Evaluation
Purpose: Learning / accountability
• Assess the effects of a completed program
• Determine what did (and did not) work, and why
Typical Issues:
• Was the program implemented as planned?
• Did the program achieve its stated goals and objectives?
• What were the unintended outcomes?
So, what is our definition of Program Evaluation?
Program Evaluation is more than a “judgment of worth”; it also contributes to:
• Planning
• Fine-tuning
• Execution
This expanded definition emphasizes the production of “useful knowledge for decision making.”
Categories of Evaluative Inquiry: Proactive, Clarificative, Interactive, Impact
Great! But how, and when, is this done?
The next series of slides will focus on the methods associated with each category.
Proactive Evaluation
Major focus: Program Context
State of Program: None
Key approaches:
• Needs assessment
• Research synthesis (evidence-based practice)
• Review of best practice (benchmarking)
• Input from stakeholders, key informants, and the target population
Needs Assessment Sidebar: Focus on Problems, Not Solutions
A sampling of “needs assessment” field notes:
• “We need to minimize psychological trauma following a disaster.”
• “For that purpose we need a new health center in the neighborhood.”
What kind of need is each of these?
1) Need as the difference between pre- and post-disaster conditions
2) Need as the solution
Always use the “need as discrepancy” definition when conducting a needs assessment; a toy illustration follows.
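A minimal sketch of the discrepancy definition in Python (all values are invented for illustration, not drawn from any survey):

```python
# "Need as discrepancy": the gap between a desired (e.g., pre-disaster)
# level and the observed post-disaster level. All numbers are hypothetical.
desired_wellbeing = 0.85   # hypothetical pre-disaster share of residents without clinical distress
observed_wellbeing = 0.60  # hypothetical post-disaster survey estimate
need = desired_wellbeing - observed_wellbeing
print(f"Need (gap to close): {need:.0%}")  # -> Need (gap to close): 25%
```

Note that the “solution” statement (a new health center) never appears here: the need is the measured gap, and the solution is chosen afterward.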
Key words for Google search (and other useful references)
Concept mapping
• Sutherland & Katz (2005). Concept mapping methodology: A catalyst for organizational learning. Evaluation and Program Planning, 28, 257-269.
Focus groups
• Strickland (1999). Conducting Focus Groups Cross-Culturally: Experiences with Pacific Northwest Indian People. Public Health Nursing, 16(3), 190-197.
Needs assessment
• Roth (1990). Needs and the needs assessment process. Evaluation Practice, 11(2), 39-44.
Clarificative Evaluation
Major focus: All elements
State of Program: Development
Key approaches:
• Evaluability assessment
• Stakeholders: identify them and determine their perceptions, concerns, and interests
• Logic development: identify assumed cause-and-effect relationships as well as the interplay of resources and activities
• Ex-ante evaluation: an investigation undertaken to estimate the impact of a future situation
Describing How the Program Works
Evaluation is grounded in an understanding of how a program operates, known as a “program theory” or “logic model.”
Example Logic Model
Event: type of disaster; estimated need
Community: density, income; age and ethnic distribution
Inputs: budget; other resources
Activities: service mix; referrals; training; diversity activities
Outputs: number of people served; number of counseling contacts; number of minorities served; number of children served
Outcomes: improved functioning of individuals and families; improved community cohesion and resilience; reduced stigma about seeking treatment; legacy of a public mental health orientation
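One informal way to make such a model explicit enough to evaluate against is to capture it as structured data. Below is a minimal Python sketch; the stages and entries simply mirror the slide above and are not prescribed by the original materials:

```python
# The slide's logic model captured as a plain data structure, so that each
# stage can later be paired with indicators and evidence. Illustrative only.
logic_model = {
    "event":      ["type of disaster", "estimated need"],
    "community":  ["density", "income", "age & ethnic distribution"],
    "inputs":     ["budget", "other resources"],
    "activities": ["service mix", "referrals", "training", "diversity activities"],
    "outputs":    ["number of people served", "number of counseling contacts",
                   "number of minorities served", "number of children served"],
    "outcomes":   ["improved functioning of individuals and families",
                   "improved community cohesion & resilience",
                   "reduced stigma about seeking treatment",
                   "legacy of a public mental health orientation"],
}

for stage, elements in logic_model.items():
    print(f"{stage}: {'; '.join(elements)}")
```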
Key words for Google search (and other useful references)
Evaluability Assessment
• Smith (1989). Evaluability Assessment: A Practical Approach. Norwell, MA: Kluwer.
Program Logic
• Patton (1997). Utilization-Focused Evaluation (3rd ed.). Thousand Oaks, CA: Sage.
Ex-ante evaluation
• Ex-ante Evaluation: A practical guide for preparing proposals for expenditure programmes (http://ec.europa.eu/budget/evaluation/pdf/ex_ante_guide_en.pdf)
Interactive Evaluation (new program)
Major focus: Delivery
State of Program: Development
Key approaches:
• Responsive
• Action research
• Developmental
• Empowerment
• Quality review
Key words for Google search (and other useful references)
Responsive
• Stake (1980). Program evaluation, particularly responsive evaluation. In Dockrell & Hamilton (Eds.), Rethinking Educational Research. London: Hodder & Stoughton.
Empowerment
• Fetterman & Wandersman (2004). Empowerment Evaluation Principles in Practice. New York: Guilford Publications.
Also read the summary overview sections in:
• Owen (2006). Program Evaluation: Forms and Approaches. New York: Guilford Publications, pp. 217-236.
Impact Evaluation
Major focus: Delivery / outcomes
State of Program: Settled
Sidebar on study design
Designs for Outcome Evaluation
Pre-experimental (pre-post)
• In the simplest case, consumers are compared with themselves before and after an intervention.
Experimental
• When people are randomly assigned to receive the intervention or not, the groups are equivalent in all ways other than receipt of the intervention, so it is reasonable to attribute differences to the intervention.
Quasi-experimental
• Sometimes it is possible to identify a reasonable comparison group even though people were not randomly assigned. When this is not possible, repeated measures are helpful.
Pre-Post Designs
• Longitudinal measures
• Change over time
• Better than retrospective estimates of “change”
[Figure: hypothetical results; symptom scores (y-axis 25-45) at Pre, Post, and Follow-up]
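To make the pre-post comparison concrete, here is a minimal sketch with simulated scores (assuming NumPy and SciPy are available; none of these numbers come from the original materials):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(42, 5, size=50)         # simulated baseline symptom scores
post = pre + rng.normal(-8, 4, size=50)  # simulated post-intervention scores

t, p = stats.ttest_rel(pre, post)        # paired t-test: same clients measured twice
print(f"mean change = {np.mean(post - pre):+.1f} points, t = {t:.2f}, p = {p:.3g}")
```

A significant paired test here shows only that scores changed; as the next slide explains, it cannot establish why they changed.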
When Are Pre-post Designs Adequate, and When Not?
Pre-post designs are adequate to assess an immediate outcome, such as knowledge gained, that normally would not change with time
Pre-post designs are typically inadequate for evaluating intermediate or long-term outcomes, because other factors not controlled for can account for the change: people receiving the intervention might have improved anyway, since symptoms normally improve over time, and people may be most likely to seek help when their distress is at its peak (so some improvement is expected from regression to the mean alone).
Pre-post designs are often used for pilot testing to justify the cost of experimental designs later
Experimental and Quasi-experimental Designs
The “gold” standard: randomized treatment vs. control
What can we infer?
• New intervention > service as usual
• Persistent effect
[Figure: hypothetical results; symptom scores (y-axis 25-45) at Pre, Post, and Follow-up for “Service as usual” vs. “New intervention” groups]
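A randomized comparison can be sketched the same way, contrasting change scores between a simulated “service as usual” group and a simulated “new intervention” group (all values invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
change_usual = rng.normal(-3, 4, size=50)  # simulated change under service as usual
change_new = rng.normal(-8, 4, size=50)    # simulated change under the new intervention

t, p = stats.ttest_ind(change_new, change_usual)  # between-group test on change scores
print(f"extra improvement = {change_new.mean() - change_usual.mean():+.1f} points, p = {p:.3g}")
```

Because assignment is random, a between-group difference in change can reasonably be attributed to the intervention itself.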
Outcome Evaluation: What Do You Do When There Is No Feasible Comparison Group?
An example: “InCourage,” the Baton Rouge Area Foundation’s Mental Health Initiative
Repeated Assessment as a Quasi-Experimental Strategy
In the BRAF initiative, each client is receiving “Treatment for Postdisaster Distress,” which requires 8-10 sessions. The first two sessions are psycho-education, very much like crisis counseling. The heart of the treatment (including cognitive restructuring or “CR”) begins at Session 3.
Each client is assessed (briefly) at five time points:
• At point of referral
• At enrollment (beginning of first session)
• At beginning of third session
• At beginning of last session
• At follow-up (3 months after completion)
The treatment effect is plausible if, on average, the data looked something like this:
[Figure: hypothetical results; total distress score (y-axis 0-30) at Referral, 1st Session, 3rd Session, Last Session, and Follow-up]
Why?
We’d have less confidence if, on average, the data looked like this:
[Figure: hypothetical results; total distress score (y-axis 0-30) at Referral, 1st Session, 3rd Session, Last Session, and Follow-up]
Why?
What are the Lessons Here?
There is middle ground between “clinical trials” and simple “pre-post” designs. It is usually true that “something is better than nothing.”
Although there is only one group, the repeated assessments will allow us to evaluate competing explanations of observed improvements (see the sketch below).
Comment: as indicated in the example, a quasi-experimental design can be used to demonstrate that no effect exists, but it usually will not provide convincing evidence (beyond plausibility) that the observed effect was “caused” by the intervention.
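As a hedged illustration of how the five assessments support that comparison of explanations, the sketch below contrasts each client’s change before active treatment with change after cognitive restructuring begins. The session labels follow the BRAF schedule described earlier; all the numbers are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 40
referral = rng.normal(25, 4, size=n)          # simulated distress at referral
third = referral + rng.normal(-1, 2, size=n)  # little change before CR begins (3rd session)
last = third + rng.normal(-8, 3, size=n)      # larger change during active treatment

pre_change = third - referral                 # within-client change before active treatment
tx_change = last - third                      # within-client change during active treatment
t, p = stats.ttest_rel(tx_change, pre_change) # compare the two phases client by client
print(f"pre-phase {pre_change.mean():+.1f} vs. treatment-phase {tx_change.mean():+.1f}, p = {p:.3g}")
```

If improvement is concentrated in the treatment phase rather than spread evenly across all five assessments, explanations such as natural recovery over time become less plausible.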
Let’s take a break
When we come back we’ll focus on moving these concepts:
From theory to practice
We will start the next session in about 10 minutes and will begin with a discussion of the following text:
“Disaster research is different from most other fields in that much of the work is motivated by a sense of urgency and concern. Disaster research has both benefited and suffered from this. It has benefited because the cadre of researchers is fluid, and new ideas are accepted and welcomed. It has benefited also because the result has been an impressively diverse database that includes samples from all different regions of the United States [...]. However, disaster research has also suffered from this situation. Scholarship is not always the best because studies often are undertaken under conditions where there simply is not time to absorb a literature that is scattered across a variety of journals and is mixed in quality. Concerns about experimental designs and scientific rigor must often take a back seat to provider beliefs, consumer demands, and clinical necessities. Most of the research is atheoretical and little of it is programmatic. On the basis of this review, we will state our opinion unequivocally that we do not need more research that establishes only that severely exposed disaster victims develop psychological disorders or, worse, that barely exposed disaster victims do not. We need carefully conceived and theory-driven studies of basic process that are longitudinal in design. [...] We need more research that addresses the needs of diverse populations. We need more complex studies of family systems and community-level processes. We need to identify and investigate novel approaches to community intervention, where the intervention itself has been designed to produce collective rather than individual improvements.”
Source: Norris, Friedman, & Watson (2002). 60,000 Disaster Victims Speak: Part II. Summary and Implications of the Disaster Mental Health Research. Psychiatry, 65(3), 240-260.