This week we’ll focus on learning about the different approaches to evaluation.
A closer look at specific forms/approaches
Objectives oriented: determining the extent to which goals and objectives are achieved
Management oriented: providing useful information to aid in making decisions
Consumer oriented: providing information about products to aid decisions about purchases or adoptions; determining product effectiveness from the user’s perspective
Expertise oriented: providing professional judgments of quality
1) Objectives-Oriented Approach
Objectives oriented: the purposes (of a program or product) are specified; evaluation focuses on the extent to which they are achieved or attained.
Purposes may be simple or complex. Among the many next-steps decisions: modify the purposes; modify the program or product itself; modify the program/product rollout.
The NSF Golden Monkey Project
Educational objectives:
To have teachers develop instructional activities
To implement these activities in their classes and improve students’ awareness of the environment
A closer look at specific forms/approaches: objectives
A logical process, “in tune” with traditional research.
The evaluator uses measurement strategies and often relies on modern statistical analyses.
While “experimental” designs are sometimes advocated, “success” is generally measured in terms of program-specific criteria rather than comparisons with control groups or other programs (see the sketch below).
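To make that criterion-referenced logic concrete, here is a minimal Python sketch (not from the original slides; the cutoff, target share, and scores are invented placeholders):

# Hypothetical sketch: "success" judged against a program-specific criterion
# (e.g., "at least 80% of participants score 70 or better"), not against a
# control group. All numbers are illustrative placeholders.

def criterion_met(scores, cutoff=70, target_share=0.80):
    """Return the share of scores at/above the cutoff and whether the
    program-specific success criterion is satisfied."""
    share = sum(score >= cutoff for score in scores) / len(scores)
    return share, share >= target_share

post_scores = [82, 74, 68, 91, 77, 85, 63, 88, 79, 72]
share, met = criterion_met(post_scores)
print(f"{share:.0%} met the cutoff; program-specific criterion met: {met}")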
A closer look at specific forms/approaches: objectives
Strengths:
Simplicity
Objectivity
Generalizability
Clear delineation of logical relationships between objectives/activities
Aligns well with the Utility standards
A closer look at specific forms/approaches: objectives
Weaknesses:
May lead to “contrived” conditions/settings -- the process doesn’t always reflect the “real world” in which programs and products operate
Not sensitive to the subtleties of human interaction
A “single focus” on goals and objectives may cause important issues to be overlooked
The evaluator may ignore the efficacy of the goals/objectives themselves
A closer look at specific forms/approaches: objectives
A twist on an objectives orientation: the discrepancy approach
A continuous improvement process -- where the focus is on the difference between program performance and the standards (a minimal sketch follows this slide)
Final decisions may be used toward several ends, including whether to improve a program (product), maintain it, or terminate it
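A minimal Python sketch of the discrepancy idea, assuming invented standards and observed figures (none of the indicators or numbers come from the slides):

# Hypothetical sketch: the discrepancy approach compares observed program
# performance against stated standards and flags the gaps that feed
# improve/maintain/terminate decisions. Indicators and values are invented.

standards   = {"attendance_rate": 0.90, "mean_post_test": 75.0, "activities_delivered": 12}
performance = {"attendance_rate": 0.84, "mean_post_test": 78.0, "activities_delivered": 9}

for indicator, standard in standards.items():
    observed = performance[indicator]
    gap = observed - standard
    status = "meets standard" if gap >= 0 else f"discrepancy of {gap:+.2f}"
    print(f"{indicator}: observed {observed} vs. standard {standard} -> {status}")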
A closer look at specific forms/approaches: objectives
Taxonomies -- help us determine at what “level” objectives aim
Bloom: cognitive domain
Krathwohl: affective domain (Receiving, Responding, Valuing, Organizing, Characterizing by a Value or Value Set)
NAEP assessments
2) Management-Oriented Approach
• Management (or decision) oriented
• Serves decision-makers
• Focus on inputs, processes, and outputs
• The idea is to provide data relative to different aspects (or phases) of a system, program, or product, so that next-steps decisions are focused/targeted
• The evaluator attends to the program development cycle, and is prepared to provide “unique” information at different points in time
A closer look at specific forms/approaches: management
• “Variations” with which you should be familiar:
• CIPP
• Discrepancy
• Logic modeling (input/process/output)
• Kirkpatrick
CIPP Model (Stufflebeam)
Examples?
• The National Science Foundation
Go to http://demo3.westat.com/graphics/nsf/archive/Module2c1a.htm
Pick one case and peruse it. Share your discoveries with the class.
Management Approach
• Open to an array of data gathering strategies:
• Document review
• Systems analysis
• Delphi technique (panel of experts; a minimal aggregation sketch follows this slide)
• Case studies
• Etc.
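As one hedged illustration, a Delphi round might be summarized by each item’s median rating and interquartile range so the panel can see where expert consensus is forming; the items and ratings below are invented:

# Hypothetical sketch: summarize one Delphi round by reporting each item's
# median rating and interquartile range (IQR). A small IQR suggests the panel
# is converging; a large one signals another round is needed.

from statistics import median, quantiles

panel_ratings = {
    "Program goals are clearly stated": [4, 5, 4, 3, 5, 4, 4],
    "Outcome measures are feasible":    [2, 4, 3, 5, 2, 3, 4],
}

for item, ratings in panel_ratings.items():
    q1, _, q3 = quantiles(ratings, n=4)
    print(f"{item}: median={median(ratings)}, IQR={q3 - q1:.1f}")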
Management Approach:
• Strengths
• Orderly and systematic
• Gives focus to a study
• Emphasizes utility … utilization
• Promotes design/use of heuristics
Management Approach:
• Weaknesses
• Can cause internal conflict for an evaluator: what if the important issues turn out not to be the ones in which decision-makers have expressed interest?
• A “top-down” framework
• Not always easy to scale down
• Assumes the uses of results are predetermined
• May not always align with real-world decision-making
3) A closer look at specific forms/approaches: consumer
Consumer (user) oriented: potential users of information (a product or program, an event, an intervention) are the focal points
The evaluator places primary emphasis on people and the way they use a product or program
An active, reactive, adaptive approach (Patton, 1996) in which the evaluator proposes ideas to the user groups, responds to their suggestions, and adapts the evaluation to their needs
Consumer-Oriented
May address both formative and summative issues -- akin to usability testing
Allows for innovative data gathering strategies: checklists, think-aloud protocols, discourse analysis
Examples: Educational Products Information Exchange; DOE’s Program Effectiveness Panel
A closer look at specific forms/approaches: consumer
State Dept. of Education -- textbook and software adoption protocols
Areas of focus: processes, content, transportability, effectiveness
Central question: What does one need to know about a product or program before deciding whether or not to adopt/implement it?
A closer look at specific forms/approaches: consumer
Strengths
Its concern with individuals who “care” about a program or product, and its attention to information meaningful to them
Weaknesses
Possible over-reliance on stable user groups
Its susceptibility to greater influence from some interests than others -- sometimes we lose sight of who the “real users” are
Can be costly
4) A closer look at specific forms/approaches: expertise
*Expertise-oriented: depends upon professional expertise (subjective?) to judge an institution, program, product, or activity
*Often conducted by a team, not an individual
*Examples with which we’re familiar:
*accreditation (e.g., the WASC and NCATE websites; check out SDSU’s WASC Reaccreditation website)
*tenure panels
*“watch-dog” organizations
*NSF review panels
*advisory boards for projects
Underlying assumption … that members of a profession are qualified to judge the activities of peers and that members are qualified to develop the standards/criteria by which judgments are made
*Some guiding questions to consider:
*Is there an existing structure for operating the review?
*Are published standards used as part of the review?
*Are reviews scheduled at specified intervals?
*Does the review include the opinions of multiple experts?
*In what ways will results be used?
*Common data gathering strategies: site visits, document examination
*To ponder:
*Can you argue that the expertise orientation aligns with each of the following purposes?
*Rendering judgments
*Facilitating improvement
*Generating knowledge
*How would one’s role be defined if s/he served
*… on an informal professional review panel?
*… on an ad hoc review panel?
*… as a connoisseur and/or critic?
*Strengths
*Allows for institutionalization of well-grounded standards/guidelines
*Fosters self-reflection and self-study
*Offers a perspective that cannot be equaled when assessment is conducted by objective outsiders
*Weaknesses
*Often stirs public concerns over credibility
*Can lead to self-interest and protectionism
*Can be financially burdensome
*May cause confidentiality to be compromised
Is it the right time to evaluate?
Owen, Weiss, and Patton (among many other evaluation theorists and practitioners) advocate a determining process called evaluability assessment (EA).
EA, they believe, can be more than a filter to screen out programs not yet worthy of outcome evaluation; it really is akin to formative evaluation, in which the focus is program/product/process improvement.
Evaluability assessment:
where evaluators work with program managers to help them get ready for evaluation
involves clarifying goals, finding out various stakeholders’ views of important issues, and specifying the model or intervention to be assessed
often includes fieldwork and interviews to determine how much consensus there is among various stakeholders about goals and other important program/product factors (a minimal sketch of one way to summarize such consensus follows this slide)
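A minimal Python sketch of one way an evaluator might summarize stakeholder consensus about candidate goals during evaluability assessment (the goals, ratings, and the 1.0 spread threshold are invented for illustration):

# Hypothetical sketch: each stakeholder rates how important a candidate goal
# is (1-5); a narrow spread suggests enough agreement for that goal to anchor
# an outcome evaluation, while a wide spread signals "clarify first."

from statistics import mean, pstdev

goal_ratings = {
    "Improve students' environmental awareness": [5, 4, 5, 5, 4],
    "Have teachers develop instructional activities": [3, 5, 2, 4, 1],
}

for goal, ratings in goal_ratings.items():
    spread = pstdev(ratings)
    verdict = "workable consensus" if spread < 1.0 else "clarify before evaluating"
    print(f"{goal}: mean={mean(ratings):.1f}, spread={spread:.2f} -> {verdict}")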
Utility: ensuring evaluation takes place only when there’s good reason for its conduct -- and that the information is useful to those who need it
Feasibility: ensuring that evaluation is feasible and reasonable to conduct
Propriety: ensuring that potential conflicts of interest have been considered and bias eliminated … and that individual rights have been fully protected
Accuracy: ensuring that the data are analyzed so that a “true” picture of the issues is presented
As evaluators, we want to avoid the following traps or pitfalls:
Making ourselves the decision-makers for the evaluation
Pursuing our own research agenda under the guise of evaluation
Identifying our audiences organizationally and anonymously (e.g., the government, the public, program staff)
Focusing on the decisions to be made instead of the people who will make them
Assuming that the funders of the evaluation are always the primary intended users (to wit: forgetting that they generally mandate the process of evaluation, but not its substance)
Waiting until the “end” of the study to attend to and plan for use
Being disengaged (behaving as though we have narrow professional responsibilities)
Evaluator credibility derives from our behavior.
For now, at least, evaluation isn’t a profession predicated on certification, credentialing, licensure, or degree.
Check out the definitions that the Oregon Network for Education provides:
Degree
Credential and certificate (see also: http://en.wikipedia.org/wiki/Professional_certification)
License
Remember: earning a certificate is NOT the same as earning or receiving certification.