Measuring the Success of Environmental Education Programs

By Gareth Thomson, Canadian Parks and Wilderness Society, and Jenn Hoffman, Sierra Club of Canada, BC Chapter

Global, Environmental, and Outdoor Education Council

Table of Contents

Executive Summary
Introduction

About Environmental Education and Evaluation
1. What is Good Environmental Education?
2. Elements of Excellent Environmental Education Programs
3. What is Evaluation?
   A. Evaluation Planning: A Background
   B. Conditions Unfavourable for Evaluation
4. The Benefits of an Evaluation Program

The Nuts and Bolts of Evaluating Environmental Education
5. Choosing an Evaluation Model
6. Outcome-Based Evaluation
   A. What is Outcome-Based Evaluation?
   B. General Steps to Outcomes-Based Evaluation
7. Building a Program That Includes Evaluation
8. Trying to Evaluate the Tough Stuff
   A. How Learners (Sometimes) Get to Action
   B. Why Are EE Programs So Difficult to Evaluate?
   C. Measuring Values Shift
   D. Measuring Behaviour Change
   E. Measuring Benefits to the Environment
   F. An Alternative Approach: What We Know About Good EE
9. Conclusion: Implications of Conducting Evaluation

Appendixes
Appendix One: An Environmental Education Tool Kit
   A. What are the Instruments Available?
   B. Pros and Cons of Each Instrument
Appendix Two: Checklist for Program Evaluation Planning
Appendix Three: Tips for Conducting an Evaluation
Appendix Four: Evaluation Samples
   A. Teacher Written Questionnaire
   B. Teacher Interview/Focus Group Questions
   C. Student Questionnaire
   D. Student Focus Group Questions
   E. Student Class Action Plans Feedback Forms

End Notes
Glossary
References
Resources

Executive Summary

Today more than ever, society needs high-quality environmental education programs that succeed in moving values and changing behaviours in the direction of sustainability and environmental conservation. Effective, relevant evaluation offers a very powerful way to improve these education programs and enables them to succeed in accomplishing more of their objectives and goals.

Funders and programmers alike strive for better techniques to evaluate the success of environmental education. Methods of evaluation are often poorly understood, particularly among professionals who deliver environmental education programs. A survey of both these professionals and academics found a scarcity of techniques to measure the more challenging outcomes such as values shift, behaviour change, and benefits to the environment. This document is an attempt at outlining and describing pertinent educational evaluation methodologies and tools. Its purpose is not to reinvent the wheel, but rather to connect environmental educators with solid, practical evaluation strategies, methods and advice.

Outcome-Based Evaluation is rapidly growing in popularity and use among both the funding and the non-governmental community, and the authors describe a program logic model and an evaluation scheme that flows from this model, using illustrative examples from existing environmental education programs. Finally, some outcome indicators are suggested that can be used to assess the hard-to-measure long-term outcomes that pertain to values, behaviour, and environmental benefits.
This report also briefly reviews the basic tenets of environmental education, reports on ten principles of excellent environmental education, and includes a glossary and written and on-line resources to assist the reader.

Acknowledgements

The authors would like to thank the several funders and contributors who made this work possible. Alberta Ecotrust and the J.W. McConnell Family Foundation both provided funds to support this work. Deep thanks to the following individuals, who took time from their busy schedules to comment on this draft:

Sue Staniforth, Staniforth & Associates Environmental Education Consulting
Andy Pool, Encana
Margaret Floyd, SPARKS Strategies
Susan Ellis, Ellis and Associates
Phillip Cox, Plan:Net
Jill Kirker, Alberta Ecotrust

Also, throughout this document we have cited the works of several individuals and organizations that have published a great deal of material on evaluation. Some individuals, such as Carter McNamara, have done extensive work on Outcomes-Based Evaluation. We gratefully acknowledge our debt to these workers, especially those we have referenced. For more information, please refer to the Resources section at the end of this document.

A Note About This Document

This is a living document, and the authors welcome any constructive feedback or suggestions that would help improve subsequent drafts. Contact the Canadian Parks and Wilderness Society's (CPAWS) Gareth Thomson with your feedback via email: [email protected].

Introduction

We live in an age in which environmental educators are increasingly being challenged by their funders and their audiences to demonstrate their results, and where accountability and performance measurement techniques are increasingly being emphasized. A good evaluation program can satisfy this need, and it can do more. Careful reflection about, and attention to, the results of a good program evaluation can provide environmental education professionals with concrete ideas on how to improve the management and ultimate performance of their programs. That is, a good evaluation program can improve the education that students receive.

This report is aimed at two audiences: professionals working in the field of environmental education who strive to improve their programs and achieve their goals and objectives; and the funding community, who wrestle with issues of accountability and results when faced with environmental education proposals. The intent of this document is to provide program designers and funders with a succinct set of recommendations on the best instruments that can be used to measure the success of education programs, all the while keeping the learner at the centre of any evaluation. One of our goals is to make a document that allows environmental education programmers of any level and organizational size to overcome their fear of evaluation, and to plan for and carry out evaluations of their programs. We have tried to create a "one-stop shopping centre" complete with discussions and examples of tools, planning strategies, and a solid example of one particular evaluation strategy that works well for programs. We hope that environmental education professionals will come to see an evaluation plan as a critical part of their program, as important as the provision of services, classroom presentations or the creation of teacher activity guides.

About Environmental Education and Evaluation

1. What is Good Environmental Education?

Environmental education has been defined and redefined over the last twenty-five years. Definitional issues are inherent in a field this broad and encompassing.
It is generally agreed that environmental education is a process that creates awareness and understanding of the relationship between humans and their many environments: natural, man-made, cultural, and technological. Environmental education is concerned with knowledge, values, and attitudes, and has as its aim responsible environmental behaviour.
NEEAC, 1996

Appreciation of, and concern for, our environment is nothing new. Since the early writings of John Muir, Aldo Leopold and Henry David Thoreau, amongst others, concern over humankind's impact on the environment has been well discussed and documented. In 1962, Rachel Carson's release of Silent Spring, a seminal work documenting the effects of pesticides in the environment, brought about a new sense of urgency in how humankind interacted with the environment. As Daniel Einstein notes, a new educational movement was born (1995). Silent Spring quickly became a catalyst for the environmental movement. From this movement, a different emphasis began to emerge, one of awareness of human complicity in environmental decline and "the involvement of public values that stressed the quality of the human experience and hence of the human environment" (NEEAC, 1996). Public concern over our effects on the world around us began to mount. Events that both celebrated the environment and called attention to the issues affecting it became increasingly popular. Earth Day was born. Those who taught about the environment called for a new type of curriculum that included an examination of the values and attitudes people used to make decisions regarding the environment (Einstein, 1995). And environmental educators began work towards a common definition for environmental education.

Much of the work on environmental education within the last quarter century has been guided by the Belgrade Charter (UNESCO-UNEP, 1975) and the Tbilisi Declaration (UNESCO, 1978). These two documents furnish an internationally accepted foundation for environmental education.

Belgrade Charter, 1975

The Belgrade Charter was developed in 1975 at the United Nations Educational, Scientific, and Cultural Organization conference in Yugoslavia, and provides a widely accepted goal statement for environmental education:

The goal of environmental education is to develop a world population that is aware of, and concerned about, the environment and its associated problems, and which has the knowledge, skills, attitudes, motivations, and commitment to work individually and collectively toward solutions of current problems and the prevention of new ones. (UNESCO, 1976)

Tbilisi Declaration, 1977

Following Belgrade, the world's first Intergovernmental Conference on Environmental Education was held in Tbilisi, Georgia. Building on the Belgrade Charter, representatives at the Tbilisi Conference adopted the Tbilisi Declaration, which challenged environmental education to create awareness and values amongst humankind in order to improve the quality of life and the environment.

A major outcome of Tbilisi was a set of detailed descriptions of the objectives of environmental education.
These objectives have since been widely adopted by environmental educators.

Awareness: to help social groups and individuals acquire an awareness of, and sensitivity to, the total environment and its allied problems.

Knowledge: to help social groups and individuals gain a variety of experience in, and acquire a basic understanding of, the environment and its associated problems.

Attitudes: to help social groups and individuals acquire a set of values and feelings of concern for the environment, and the motivation for actively participating in environmental improvement and protection.

Skills: to help social groups and individuals acquire the skills for identifying and solving environmental problems.

Participation: to provide social groups and individuals with an opportunity to be actively involved at all levels in working toward resolution of environmental problems. (UNESCO, 1978)

Characteristics of Environmental Education

The outcomes of Tbilisi and Belgrade have, in many ways, provided the basis for many environmental education programs. Certainly, having both a commonly accepted goal statement and an associated set of objectives has allowed many educators to better address the desired outcomes of their programs. Equal to the need to identify both a common goal and set of objectives is the need to consider the characteristics of environmental education.

In Environmental Education Materials: Guidelines for Excellence (1996), the North American Association for Environmental Education (NAAEE) identifies a number of specific characteristics of environmental education. According to NAAEE, environmental education:

- is learner-centred, providing students with opportunities to construct their own understandings through hands-on, minds-on investigations
- involves engaging learners in direct experiences and challenges them to use higher-order thinking skills
- is supportive of the development of an active learning community where learners share ideas and expertise, and prompt continued inquiry
- provides real-world contexts and issues from which concepts and skills can be used (NAAEE, 1996)

These characteristics, when applied in conjunction with the above-mentioned goal and objectives for environmental education, have allowed environmental educators to develop programs that lend themselves to "the formation of positive beliefs, attitudes and values concerning the environment as a basis for assuming a wise stewardship role towards the earth" (Caduto, 1985).

As a program planner or environmental educator, however, it's a lot to take in. Goals, objectives, characteristics: the question quickly arises, how do we develop a program that meets all of these components without losing sight of what we originally set out to do? One such framework offers a clear approach:

Environmental education has long been defined to include three critical components: awareness, leading to understanding, which in turn creates the potential and capacity for appropriate actions. More specifically, environmental education includes:
- developing personal awareness of the environment and one's connections to it;
- developing an understanding of environmental concepts and knowledge of ecological, scientific, social, political and economic systems;
- the capacity to act responsibly upon what a person feels and knows, in order to implement the best solutions to environmental problems. (Staniforth & Fawcett, 1994)

Ultimately, environmental education as it is practiced in the 21st century builds largely on a rich legacy of existing environmental education declarations, frameworks, definitions, and models. The field as a whole owes a great deal to those who have worked to create these documents.
Each document is based on a different set of assumptions and priorities, yet the commonalities are considerable. A detailed backgrounder that summarizes these frameworks, entitled What Is Good Environmental Education?, can be downloaded from the Canadian Parks and Wilderness Society (CPAWS) website (see Resources for more information). To further explore the diversity of this area, see also Excellence in Environmental Education: Guidelines for Learning (K-12) by the North American Association for Environmental Education (see Resources).

2. Elements of Excellent Environmental Education Programs

What constitutes an excellent environmental education program? A rigorous definition would be very difficult to give, and is beyond the scope of this document. Notwithstanding this, it is useful for practitioners and funders of environmental education to consider the answer to this question.

As an example, the following ten principles of excellent environmental education programs were drafted by a group of experienced environmental educators involved in the Green Street initiative, as part of a visioning and capacity-building exercise. Green Street is a standard of excellence for quality environmental education programs provided by prominent Canadian environmental education organizations (see Resources for more). The intention of these principles is not to be prescriptive or exclusionary, nor are they necessarily a checklist (indeed, no program can claim to exhibit 100% of these principles!).

Excellent environmental education programs:

- Are credible, reputable, and based on solid facts, traditional knowledge, or on science. Values, biases, and assumptions are made explicit.
- Create knowledge and understanding about ecological, social, economic, and political concepts, and demonstrate the interdependence between a healthy environment, human well-being, and a sound economy.
- Involve a cycle of continual improvement that includes the processes of design, delivery, evaluation, and redesign.
- Are grounded in a real-world context that is specific to age, curriculum, and place, and encourage a personal affinity with the earth through practical experiences out-of-doors and through the practice of an ethic of care.
- Like the environment itself, transcend curricular boundaries, striving to integrate traditional subject areas and disciplines.
- Provide creative learning experiences that are hands-on and learner-centred, where students teach each other and educators are mentors and facilitators. These experiences promote higher-order thinking and provide a cooperative context for learning and evaluation.
- Create exciting and enjoyable learning situations that teach to all learning styles, promote life-long learning, and celebrate the beauty of nature.
- Examine environmental problems and issues in an all-inclusive manner that includes social, moral, and ethical dimensions, promotes values clarification, and is respectful of the diversity of values that exist in our society.
- Motivate and empower students through the provision of specific action skills, allowing students to develop strategies for responsible citizenship through the application of their knowledge and skills as they work cooperatively toward the resolution of an environmental problem or issue.
- Engage the learner in a long-term mentoring relationship, transforming them as they examine their personal values, attitudes, feelings and behaviours.
- Promote an understanding of the past, a sense of the present, and a positive vision for the future, developing a sense of commitment in the learner to help create a healthier environment and a sustainable home, community, and planet.

Another approach, recently launched by NAAEE in November of 2002, is the draft document Guidelines for Excellence in Nonformal Environmental Education Program Development and Implementation. These guidelines point out six key characteristics of high-quality environmental education programs:

1. They support their parent organization's mission, purpose, and goals.
2. They are designed to fill specific needs and produce tangible benefits.
3. They function within a well-defined scope and structure.
4. They require careful planning and well-trained staff.
5. They are built on a foundation of quality instructional materials and thorough planning.
6. They define and measure results in order to improve current programs, ensure accountability, and maximize the success of future efforts.
(NAAEE, 2002)

Astute readers will note that this current document is directly aimed at helping organizations achieve Characteristic #6!

3. What is Evaluation?

Evaluation is a scary word. My experience has been that as soon as you use that word, people get their backs up and feel like they are going to be evaluated, and hence judged. It gets personal very easily.
Margaret Floyd, Sparks Strategies, 2002

We as humans evaluate all the time. Listen in on conversations and you'll hear: "I loved that movie last night." "He is a terrible cook!" "That car isn't worth the price they're charging." In more formal terms, most of us have been evaluated by teachers through the school system or by employers in the workplace, often leaving us with negative connotations about both the process and the end results.

Evaluation is a term that is used to represent judgments of many kinds. What all evaluations have in common is the notion of judging merit. Someone is examining and weighing something against an explicit or implicit yardstick. The yardsticks can vary widely, and include criteria such as aesthetics, effectiveness, economics, and justice or equity issues. One useful definition of program evaluation is provided below, with an analysis of its components:

"Evaluation is the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy." (Weiss, 1998)

Dr. Weiss breaks the definition down into several key elements, which serve to highlight the specific nature of evaluation:

Systematic assessment: this emphasizes the research nature of evaluation, stressing that it should be conducted with rigor and formality, according to accepted research canons. Therefore, an evaluation of an environmental education program should follow specific, well-planned research strategies, whether qualitative or quantitative in nature. Scientific rigor can take more time and be more costly than informal methods, yet it is an essential component of successful evaluations. This is especially so in education, where outcomes are complex, hard to observe, and made up of many elements that react in diverse ways.

The activities and outcomes of a program are the actual focus of the evaluation: some evaluations study process while others examine outcomes and effects.
An educational program evaluation would usually look at both the activities of the program (how it is delivered, by whom, etc.) and its outcomes for participants (skills, knowledge, attitudes, values change, etc.).

Standards for comparison: this is a set of expectations or criteria to which a program is compared. Sometimes it comes from the program's own goals or mission statement, as well as from the objectives of program sponsors, managers and practitioners.

Improvement of the program: the evaluation should be done not to point fingers or assign blame, but to provide a positive contribution that helps make programs work better and allocates resources to better programs.

A. Evaluation Planning: A Background

One does not plan and then try to make circumstances fit those plans. One tries to make plans fit the circumstances.
General George S. Patton, 1947

"Begin at the beginning," the King said gravely, "and go on till you come to the end: then stop."
Lewis Carroll, 1865

Evaluation has become very popular over the past two decades as an important tool for program funding and decision-making, organizational learning, accountability, and program management and improvement. How do we as environmental educators go about developing evaluation programs that work for us?

Evaluation planning can be a complex and cyclical process. One must identify the key questions for a study, decide on the best measurements and techniques to answer the questions, figure out the best way to collect the data, develop an appropriate research design, implement it, and promote appropriate use of the results. Here are some evaluation definitions and descriptions to provide some background.

Formative and Summative Evaluation

Michael Scriven introduced these two terms, formative and summative, in 1967 to describe the evaluation of educational curriculum. Formative evaluation produces information that is fed back during the course of a program to improve it. Summative evaluation is done after the program is finished, and provides information about its effectiveness. Scriven later simplified this distinction as follows: "When the cook tastes the soup, that's formative evaluation; when the guest tastes it, that's summative evaluation." (In Weiss, 1998, p. 31)

Programs are seldom finished; they continue to adapt and modify over time, in response to internal and external conditions. Therefore, formative information needs to continue to be fed back to program staff to improve the program.

Outcome and Process-Based Evaluation

Focusing on the results of a program, or its outcomes, is still a major aspect of most evaluations. Outcomes refer to the end results of a program for the people it was intended to serve: students, teachers, volunteers, whoever your audience is. The term outcome is often used interchangeably with result and effect. Some outcomes of a program are the results the program planners anticipated. Other outcomes, however, are effects that nobody expected, and sometimes that nobody wanted, yet they are important information for program improvement. Change is a key word here: what is the change that results from a particular program? Is it an increase in something, such as knowledge? Or a decrease in something, such as environmentally detrimental behaviour?

The process of a program is also important to evaluators: a systematic assessment of what is going on. Evaluators need to know what the program actually does, what is actually happening on the ground. Sometimes process is the key element of the success or failure of a program: how is it delivered, what services does it provide, is there follow-up, do students like it? Studying program process also helps one to understand outcome data.

Initially, there seems to be a lot of similarity between formative-summative and process-outcome evaluations.
However, the two sets of terms have quite different implications. Formative and summative refer to the intentions of the evaluator in doing the study: to help improve the program, or to judge it. Process and outcome have nothing to do with the evaluator's role, but relate to the phase of the program studied. Often there is a combination of evaluations going on: the study of a program's process, or what goes on during a program, in a formative sense, combined with a look at outcomes, the consequences for participants at the end.

B. Conditions Unfavourable for Evaluation

There are four circumstances in which evaluation may not be worthwhile; review your program with these in mind before beginning.

1. When the program has few routines and little stability. This might be the case with a new program that needs to be piloted and established before any systematic evaluation can occur. It can also occur if the program has no consistent activities, delivery methods, or theories behind it. However, a formative evaluation done through a pilot phase may help identify these gaps more clearly.

2. When those involved in the program can't agree as to what it is trying to achieve. If there are big discrepancies in perceived goals, staff are probably working at cross-purposes. Again, the coherence of the program is in doubt, and while a process-based evaluation might be helpful, an Outcomes-Based Evaluation would have no criteria to use.

3. When the sponsor or program manager sets limits as to what the evaluation can study, putting many important issues off limits. This occurs rarely, when the sponsor or manager wants a "whitewash" job from an evaluation. Avoid at all costs!

4. When there are not enough funds, resources, or staff expertise to conduct the evaluation. Evaluations call for time, money, energy, planning and skill: it is important to ensure these are in place for a successful product. This is beyond a doubt the single most prevalent reason that program evaluations are either not done or are not adequate.

4. The Benefits of an Evaluation Program

Now that I've written my logic model for my Education Program, I've got a single table that captures all the activities and the outputs, outcomes, and impacts that I hope to achieve. Now whenever a board member or teacher asks me "What do you do?" I can show them my plan on one sheet of paper.
Jenn Hoffman, Sierra Club of Canada, BC Chapter, 2002

Many environmental education professionals would identify one primary reason to evaluate their programs: to satisfy one or more of their funders and audiences. While it is true that a sound evaluation can strengthen accountability for the use of resources, evaluating your program can do more than just satisfy the reporting requirements of a funding agency.

Plan:Net Limited reminds us that evaluation can help organizations make wise planning and management decisions. It will help organizations:

- Know what to expect from project activities
- Identify who will benefit from the expected results
- Gather just the right information to know whether the project is achieving what you want
- Know how to improve project activities based on this information
- Know how to maximize positive influences (referred to as Enablers), and to avoid or overcome negative influences (referred to as Constraints)
- Communicate plans and achievements more clearly to people and other organizations
- Gain from the knowledge, experience and ideas of the people involved
- Provide accurate and convincing information to support applications for funding.
(Plan:Net Limited, 2002)

Seen as part of the larger process of continual improvement of environmental education programs, good evaluation has the potential to improve program quality over the long term.
As such, a good evaluation program can improve program quality, improve student learning, and ultimately assist the program to achieve its goals, which may include such things as a higher degree of student involvement and benefits to the environment.

The Nuts and Bolts of Evaluating Environmental Education

5. Choosing an Evaluation Model

Choosing a way of evaluating your environmental education program can be intimidating or even bewildering for those who have never done so before. Evaluation models come with various names: Needs Assessments, Cost/Benefit Analysis, Effectiveness, Goal-Based, Process-Based; the list goes on and on. A bewildering array of over 35 different types of evaluation is described in the literature, too many to list in this document!

At its core, program evaluation is really all about collecting information about a program, or some aspect of a program, in order to make necessary decisions about the program. It's not rocket science, and it is entirely possible to do, even for the novice evaluator with limited resources, experience and skills. Carter McNamara, author of several program evaluation-related resources, offers the following advice:

"It's better to do what might turn out to be an average effort at evaluation than to do no evaluation at all. The type of evaluation you undertake to improve your program depends on what you are doing: worry about what you need to know to make the program decisions you need to make, and worry about how you can accurately collect and understand that information." (McNamara, 1999)

Regardless of the type of model you use, there is a common set of steps that can be used to complete a program evaluation. Consider this a suggested framework that you can expand upon if necessary:

1. Decide what you want to assess.
2. Select an evaluation design to fit your program.
3. Choose methods of measurement.
4. Decide whom you will assess.
5. Determine when you will conduct the assessment.
6. Gather, analyze, and interpret the data.
(SAMHSA-CSAP-NCAP, 2000)

Obviously, there will be many questions that you will need to consider when designing your program evaluation. For example, why are you doing the evaluation? From whom will you gather information? How will you get this information? And who will be reading this evaluation in the end?

Another good resource to reference when designing an educational program evaluation is The Program Evaluation Standards by the Joint Committee on Standards for Educational Evaluation (1994). Thirty standards were developed by several international panels of researchers as a tool to improve the practice of educational program evaluation and consequently improve educational practice. The standards themselves are organized around the four important attributes of an evaluation: utility, feasibility, propriety, and accuracy. Each standard helps define one of these four attributes, and they can be used to focus, define and assess your evaluation plan. For example, the first Utility Standard is as follows:

Stakeholder Identification: Persons involved in or affected by the evaluation should be identified, so that their needs can be addressed.

The first Feasibility Standard is:

Practical Procedures: The evaluation procedures should be practical, to keep disruption to a minimum while needed information is obtained.
(Sanders, 1994)

The 30 standards can be used as a useful and well-referenced checklist to help you design an evaluation, collect information, analyze information and report, and manage and/or staff an evaluation.
Refer to the Resources section for more information on these standards. And finally, to assist you in answering many important evaluation questions, we've included a copy of Carter McNamara's Checklist for Program Evaluation Planning in Appendix Two.

6. Outcome-Based Evaluation

In this document we have focused on one particular model of evaluation, Outcomes-Based Evaluation, that is quite effective in evaluating environmental education programs, especially amongst non-profit groups. This technique is rapidly growing in popularity amongst the funding community and among non-profit groups. It should be noted that this technique is also known by other names, among them results-based management (popular with the Government of Canada), outcome or performance measurement, or even performance mapping!

As McNamara points out, Outcomes-Based Evaluation has proven particularly effective in assisting non-profits in answering whether their organization is really doing the right program activities to bring about the outcomes they believe to be needed by their clients (1999). Outcomes are usually expressed in terms of enhanced learning, such as increased knowledge or improvements in perception, attitudes or skills, or improved conditions, such as increased ecological literacy.

A. What is Outcomes-Based Evaluation?

"Outcomes-Based Evaluation is quickly becoming one of the more important means of program evaluation being used by non-profit organizations. Is your program really doing the right activities to bring about the outcomes you want? Or are you just engaging in busy activities that seem reasonable at the time? Funders are increasingly questioning whether non-profit programs are really making a difference." (McNamara, 1999)

Outcomes-Based Evaluation looks at the impacts, benefits, or changes to your clients (students, teachers, etc.) as a result of your efforts during and/or after their participation in your program. It helps you find out if you're really doing the right program activities to achieve some pre-specified outcomes. Outcome-Based Evaluation is based on a program logic model; the measurement of the success of a program relies on the measurement of several components of the logic model system.

Program Logic Model

A logic model is an approach to planning and managing projects that helps us to be clear both about what our projects are doing and about what they are changing. The word "logic" is used because of the logical link between the system components: inputs are a necessary precondition to activities; activities need to take place before outputs are possible; and so on. Think of your program as a system that has inputs, activities, outputs and outcomes:
Input: The materials and resources that the program uses in its activities. These are often easy to identify, and are common to many organizations and programs. For example: equipment, staff, facilities, etc. These are the resources you need to get the results you seek.

Activities: Activities are what you do to create the change you seek; they are what you do with the inputs you have. Under headings such as promotion, networking, advocacy, or training, you describe what the project is doing.

Outputs: Outputs are the most immediate results of your project, and each relates directly to your activities. More importantly, outputs create the potential for desired results; they create the potential for your outcomes to occur. Outputs are usually measured as statistics, and indicate hardly anything about the changes in clients. (Example: 61 students attended our Ecology Camp.)

Outcomes: Outcomes describe the true changes that occur to people, organizations and communities as a result of your program. These are the actual impacts, benefits, or changes for participants during or after your program, expressed in terms of knowledge, skills, values or behaviours. Outcomes may be expressed in terms of enhanced learning, such as increased knowledge, a positive change in perceptions or attitudes, or enhanced skills. For example, an objective of your program might be a demonstrated increase in awareness of the causes and prevention measures of climate change. Outcomes may also be expressed in terms of physical conditions, such as the development of a school-grounds garden.

Impact: This describes your vision of a preferred future and underlines why the project is important. It refers to the longer-term change that you hope your project will help create.

Inputs -> Activities -> Outputs -> Outcomes (a.k.a. Objectives) -> Impact (a.k.a. Goals)

(Note: Indicators are not part of the logic model, but they are a critical part of the evaluation process. See the text below for more information.)

Measuring Outcomes

The success of a program is measured using indicators that measure any or all of the three logic model components of output, outcome, or impact.

Outcome Indicators: These are what you can see, hear, read, etc. that suggest you are making progress toward your outcome target, or not. Indicators can be established for outputs, outcomes, and impacts. Indicators are measured using instruments such as questionnaires or surveys, and may be either quantitative or qualitative. Indicators can be envisioned as the flags that let us know we're on the correct path. They answer the question, "How will you know when you've achieved the outcome?" They are the measurable, observable ways of defining your outcomes. They define what the outcome looks like.

Outcome Targets: These specify how much of your outcome you hope to achieve.

Splash and Ripple

Here is one image to help people understand and use outcome measurement. The rock is like a material Input; the person holding the rock is like a human resource Input. The act of dropping the rock is like an Activity. When the rock reaches the water, it creates a Splash: these are your Outputs. The ripples spreading out from the splash are like your Outcomes, then later your Impacts. The edge of the pond represents the geographic and population boundaries of your project. (Plan:Net Limited, 2002)
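For program planners who want to keep their logic model in a structured, reusable form, the following minimal Python sketch shows one way the components could be recorded so that each outcome is explicitly paired with an indicator and a target. The inputs, activities, output, outcome and impact loosely echo the GBF! example presented later in this document; the indicator wording, the 20% target, and all field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A measurable, observable sign of progress toward an output, outcome, or impact."""
    description: str
    target: str      # the outcome target: how much of the outcome you hope to achieve
    instrument: str  # e.g. questionnaire, survey, focus group

@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)      # resources the program uses
    activities: list[str] = field(default_factory=list)  # what you do with the inputs
    outputs: list[str] = field(default_factory=list)     # immediate, countable results
    outcomes: list[str] = field(default_factory=list)    # changes in participants
    impact: str = ""                                      # long-term change you hope to help create
    indicators: dict[str, Indicator] = field(default_factory=dict)  # keyed by the item measured

# A hypothetical school-visit program; details are simplified and partly invented.
model = LogicModel(
    inputs=["0.4 FTE staff", "$20,000 budget"],
    activities=["Develop presentation kit", "Deliver in-class presentations"],
    outputs=["Presentations delivered to 50 classes"],
    outcomes=["Students know more about grizzly bear ecology"],
    impact="A community of citizens who act to protect ecosystems",
)
model.indicators["Students know more about grizzly bear ecology"] = Indicator(
    description="Higher post-program questionnaire scores than pre-program scores",
    target="Average score improves by at least 20%",
    instrument="Pre/post student questionnaire",
)

# List every outcome alongside its indicator, flagging gaps in the evaluation plan.
for outcome in model.outcomes:
    ind = model.indicators.get(outcome)
    print(outcome, "->", ind.description if ind else "no indicator yet")
```

Writing the plan down this way makes it easy to spot outcomes that still lack an indicator, which is exactly the gap the evaluation plan is meant to close.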
B. General Steps to Outcomes-Based Evaluation

With a basic understanding of the process and language of Outcomes-Based Evaluation, we can start to think about how one would set about conducting it. Carter McNamara has produced many handy resources on how to understand and conduct Outcomes-Based Evaluation. The following is adapted from his Basic Guide to Outcomes-Based Evaluation in Nonprofit Organizations with Very Limited Resources (1999) and provides a very succinct set of steps that environmental education organizations can easily follow.

Step 1. Get Ready!
- Start smart! Pick a program to evaluate. Choose one that has a reasonably clear group of clients and clear methods to provide services to them.
- If you have a mission statement, a strategic plan or a goal(s) statement for this program, haul it out of the filing cabinet you've jammed it into, and consider it.
- Ask for some money. Consider getting a grant to support developing your evaluation plan; while this is not vital to the process, evaluation expertise to review your plans can be extremely beneficial and insightful. Many grantors will consider assigning up to 15% of the total budget amount to an external evaluation; make sure you ask about this.

Step 2. Choose Your Outcomes.
- Choose the outcomes that you want to examine. Make a priority list for these outcomes, and if your time and resources are limited, pick the top two to four most important outcomes for now.
- It can be quite a challenge to identify outcomes for some types of programs. Remember, an outcome is the benefit that your client receives from participating in your program (not a statistic). McNamara suggests using words such as "enhanced", "increased", "more", "new", or "altered" to identify outcomes.
- Consider your time frame and what you can evaluate within it. McNamara suggests that within 0-6 months, knowledge and skills can be evaluated; within 3-9 months, behaviours; and within 6-12 months, values and attitudes. Try linking these short-term outcomes (0-3 months) with long-term outcomes (6-12 months).

Step 3. Selecting Indicators
- Specify an indicator for each outcome.
- Choose indicators based on asking questions like: What would I see, hear, or read about clients that means progress toward the outcome? For example: 30 of the 200 students who participate in CPAWS' Grizzly Bears Forever! program will demonstrate one new environmental stewardship activity within two months of the program.
- If this is the first outcomes plan you have ever done, or the program is just getting started, don't spend a lot of time trying to find the perfect numbers and percentages for your indicators.

Step 4. Gathering Data and Information
- For each indicator, identify what information you will need to collect or measure to assess that indicator. Consider the practicality of collecting data, when to collect data, and the tools/instruments available to collect data. Basically, you're trying to determine how the information you're going to need can be efficiently and realistically gathered.
- Data collection instruments could include, but are not limited to: surveys, questionnaires, focus groups, checklists, and observations. See Appendix One for more information on the pros and cons of each survey method.

Step 5. Piloting/Testing
- Most likely you don't have the resources to pilot test your complete outcomes evaluation process. Instead, think of the first year of applying your outcomes process as your pilot process. During this first year, identify any problems, improvements, etc., and document these. Then apply them to your program evaluation the following year for a new and improved process (see Step 6).

Step 6. Analyzing and Reporting
- Analyze your data. You may have collected quantitative (i.e. numerical) or qualitative (i.e. comments) data. To analyze comments and other non-numerical data: read through all the data; organize comments into similar categories, e.g., concerns, suggestions, strengths, etc.; label the categories or themes; and identify patterns, or associations and causal relationships, in the themes (a small sketch of this sorting step appears after these steps).
- Report your evaluation results. The level and scope of information in the report depends on whom the report is intended for, e.g., funders, board, staff, clients, etc.
- Remember that the most important part of an evaluation is thinking through how you're going to incorporate what you learned from it into the next round of programming. That's the real value, beyond reports to funders, accountability, etc.: it's the learning that comes through looking at what worked and what didn't, and applying that learning.
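As a rough illustration of the comment-sorting step in Step 6, the sketch below groups free-text feedback into labelled themes by keyword and counts how often each theme appears. The comments, themes, and keywords are all invented; in practice you would read every comment and refine the categories by hand rather than rely on simple keyword matching.

```python
from collections import Counter

# Invented example comments, such as might come from a teacher questionnaire.
comments = [
    "More time outdoors would help",
    "Students loved the bear activity",
    "Concerned the material is too advanced",
    "Suggest a follow-up visit in spring",
    "Loved the hands-on games",
]

# Hypothetical themes and the keywords that signal them.
themes = {
    "suggestions": ["more", "suggest", "would help"],
    "strengths": ["loved", "enjoyed", "great"],
    "concerns": ["concerned", "too advanced", "problem"],
}

def label(comment: str) -> str:
    """Assign a comment to the first theme whose keyword it contains."""
    text = comment.lower()
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            return theme
    return "uncategorized"

counts = Counter(label(c) for c in comments)
for theme, n in counts.most_common():
    print(f"{theme}: {n} comment(s)")
```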
A Note About Getting Outside Evaluation Expertise

If you have the resources to do so, bringing in an outside consultant to assist you in your evaluation process can be extremely beneficial. They can help you avoid costly mistakes, and their external role helps them challenge your assumptions and bring a valuable perspective to what you are trying to achieve.

7. Program Planning: Building a Program Plan That Includes Evaluation

The best time to plant a tree was a decade ago; the next best time to plant a tree is today.
Author Unknown

Just like planting a tree, the most logical time to build evaluation into your program plan is at the outset of your program, when you are still in the planning stages. Why? Because you'll know ahead of time what sort of data you need to collect in order to evaluate your program the way you want. However, if you are in the midst of a program but wish to create a better evaluation plan, don't despair. A tree planted today still provides benefits, and so will your evaluation plan.

To help you visualize this process, refer to the example below, from the CPAWS education program called Grizzly Bears Forever! (GBF!), in which CPAWS staff visit junior high students in their classrooms to teach about ecosystems, using the grizzly bear as an entry point. They took the following table, which summarizes a logic model, and tried to fill in the blanks.

Table One

| Component | CPAWS Planning and Evaluation Strategy |
| ACTIVITIES: What you do to create the change you seek. | |
| INPUTS: Materials and resources used. | |
| OUTPUTS: The most immediate results of your project. Each relates directly to your activities. | |
| OUTCOMES: Actual benefits/impacts/changes for participants. | |
| IMPACTS: The longer-term change you hope your project will help create. | |

To help themselves, CPAWS asked the following questions:

- What activities do we propose to do in this program? This was the easy part: they had already thought out what they wanted to do (including program marketing and delivery mechanisms).
- What will our program outputs be? Each activity has a corresponding output. These easy-to-measure items, such as "number of students contacted", were all CPAWS had ever successfully measured before they adopted an Outcome-Based planning approach. Until recently, this level of measurement was all that funders had ever asked for!
- What will our program inputs be? This is a list of things they will need to carry out their project. This information includes human and material resources: staffing, budget, etc.
- What outcomes do we hope to achieve with this program? These are the desirable changes resulting from the outputs. Some of these statements were gleaned from the objectives they had generated in their proposal. It is important to note that objectives are generally fewer in number than the outputs that create them (CPAWS decided to bend this rule, however).
- What impact do we hope to have with this program? To create this statement, they reminded themselves why CPAWS invests its resources in education, and ended up using the goal statement they had generated in the proposal to funders.

When they had answered all of these questions and filled in the table, CPAWS had an Outcome-Based Plan for their education program (Table Two: Outcomes-Based Planning Table for CPAWS GBF! School Visit Program). A couple of observations about their work:

All inputs and activities tend to focus on the CPAWS program, whereas outcomes and impacts tend to centre on changes to the learner. This is as it should be; CPAWS has attempted to keep the learner at the centre of this process, as far as important outcomes are concerned.

Also, the outcomes identified display an interesting range. Outcomes pertaining to knowledge, skills, and attitudes can generally be expected to be achieved in the days or weeks immediately following a presentation; the inculcation of values, and the occurrence of desired behaviours, will take longer (perhaps several months). Their evaluation plan must take this into consideration.

What they did not yet have was any idea of how to measure the things they hoped to achieve. They needed to create an evaluation plan. To do this, they assigned numbers to all the measurable items on this table (outputs, outcomes, and impact). They then came up with a variety of quantitative and qualitative indicators for each of these items, and, to make sure that these were reasonable and feasible indicators, they identified how these indicators would be measured, and by whom. Table Three: Evaluation Table for CPAWS GBF! School Visit Program shows these details (for brevity, only a few of the outcomes are shown).

The next step for CPAWS will be taking all these measurement techniques that need to be done (usually by the CPAWS School Programs Specialist) and inserting them into her work plan so that they occur. A student from the local university assisted with the details of conducting this survey, in the process receiving credit for an independent study. The creation of measurement instruments, such as the pre and post surveys of students, will take some time (see Implications).

Once the results are in and compiled, a report can be produced summarizing results and lessons learned. Such a report can be used by CPAWS as an accountability exercise, demonstrating to a funder the legitimacy of the program. The true potential of this process, though, is to enhance the future performance and management of the program, but CPAWS will have to take the time to process this information and permit the fruits of the evaluation process to be ploughed back in, and used to improve the effectiveness and impact of the education program.

Table Two: Outcomes-Based Planning Table for CPAWS GBF! School Visit Program

ACTIVITIES
- Develop kit: slides, graphics, visuals, game-pieces
- Development of an illustrated 80-page teacher activity guide
- E-mail and posters are distributed to market the program
- In-class presentations (80 minutes in length) by CPAWS staff

INPUTS
- Staff: 0.4 FTE (full-time equivalent) to deal with administration (marketing, phone calling, bookings) and presentations
- Budget of $20,000 to cover staff time and expenses, office space

OUTPUTS
1. Presentation kit and activity guides are completed.
2. E-mail sent to 200 addressees, and 125 posters are delivered to Calgary-area schools.
3. Deliver presentations to 50 classes (2,500 students).

OUTCOMES
4. Students know more about grizzly bear ecology and conservation.
5. Teachers enjoy the presentation and believe it to be a high-quality product.
6. Through the grizzly bear, students will learn to value and appreciate the importance of ecosystem processes and ecosystem services.
7. Students' behaviours change as a result of the education program (e.g. less consumerism, increased wilderness experiences, more political/extracurricular involvement).
8. Students will develop a sense of responsibility and a personal commitment towards grizzly bear and environmental conservation.
9. Teachers incorporate lessons on the grizzly bear in future years.

IMPACTS
10. The CPAWS Education Team creates a community of knowledgeable, empowered citizens who engage in positive environmental action to protect ecosystems, wilderness and protected areas.

Table Three: Evaluation Table for CPAWS GBF! School Visit Program

| Output or Outcome # | Measurement Indicator | Data collection method | Data source | Collected by whom? | Collected when? |
OUTPUTS
| 1 | Compare completed kit to original materials list | Comparison of kits | Kits | CPAWS staff | June 2003 |
| 2 | Check records from e-mails and mail | Review records | E-mails, mail | University student | June 2003 |
| 3 | Collate statistics from database | Simple stats review | Database | CPAWS staff | July 2003 |
OUTCOMES
| 4, 6, 7, 8 | Teachers will distribute a Pre and Post Program Student Questionnaire, which asks values-related and behaviours-related questions using a Likert scale | CPAWS will mail and analyze pre and post surveys; students fill out and return questionnaires | Surveys | CPAWS staff | By June 2003; analysis by Aug 2003 |
| 9 | Teachers develop and implement lesson plans and activities incorporating grizzly bear conservation | CPAWS will mail and analyze surveys; teachers fill out and return questionnaires. Selected teachers will participate in interviews. | Surveys, interviews | CPAWS staff | By June 2004; analysis by Aug 2004 |

8. Trying to Evaluate the Tough Stuff

A. How Learners (Sometimes) Get to Action

The goal of this section is to remind the reader of how education leading to action actually works, and to provide an explanation for why environmental education programs sometimes do not achieve the results they hope for.

As an illustrative example, consider Mary, a student in a Calgary junior high school who learns about grizzly bears through a program offered by CPAWS. This program was offered because CPAWS has the goal of "creating a community of knowledgeable, empowered citizens who engage in positive environmental action to protect ecosystems, wilderness and protected areas" (taken from the Impact section of CPAWS' evaluation plan for this program).

Anyway, back to Mary. On October 23rd she sits through an interesting, curriculum-oriented classroom presentation by a CPAWS staff member, and in the following days receives several subsequent lessons from her teacher. Through this program she becomes aware of, and understands something about, grizzly bear ecology and behaviour, and some of the conservation issues that surround this animal.

Up to this point, CPAWS and her teacher have a fairly high degree of control over what goes through Mary's mind. Notice how the locus of control starts to decrease through what follows.

At age thirteen Mary has developed a fairly widespread system of beliefs about how the world works. (Two of these beliefs are that grizzly bears exist in the mountains to the west of Calgary, and that these bears are neat.) These beliefs, once lumped and grouped and combined with some emotional tendencies, comprise attitudes (for example, Mary has the attitude that bears should be allowed to exist in the mountains). Attitudes come to light most frequently as opinions, which Mary, like most 13-year-olds, has no trouble expressing. Caduto (1985) can help us here: "Values are in turn formed by a meld of closely aligned attitudes. A value is an enduring conviction that a specific mode of conduct is preferable to an opposite mode of conduct" (p. 7).
For example, Mary values the wild areas found in the mountains, and the intrinsic right of animals to live there. Furthermore, when all of Mary's values are considered, they form a value system. As it happens, in this case CPAWS is fortunate: for the most part, Mary's values are consistent with CPAWS', and the program actually helped confirm, validate, and in a small way perhaps even shift Mary's values.

This is necessary for CPAWS to achieve its goal, but unfortunately it is not sufficient. CPAWS' next hurdle is a huge one: that of persuading Mary to make her behaviour consistent with her values. Mary may value wild nature, but she may do nothing to protect it; worse, her consumer habits and failure to act to protect nature may unintentionally harm nature. Of course, this gap between values and behaviour is not unique to Mary: she lives in a society where acceptance of this gap has reached epidemic proportions. Environmentalists and psychologists alike talk about the cognitive dissonance that results when people lead lives in which their behaviour does not mirror their values.

But again, let's get back to Mary. By a stroke of luck, it turns out that in Mary's case her values are coupled with a strong sense of motivation to make things better. She has been raised to believe that her actions can make a difference in the world, and her resolve is strengthened by a sympathetic teacher who has fuelled her generally proactive nature, two classmates who will support her, and a case history (provided, as it happens, by CPAWS) of another class's successful campaign to protect bear habitat in the Bow Valley.

It is now December 20, and Mary decides to do some things. Some of the changes to her behaviour were unanticipated and decidedly peripheral to CPAWS' goals (she buys a pair of bear slippers for her mother for Christmas). Her New Year's resolution to eat only vegetarian meals and use less electricity is a small but significant gain for the environment and, it can be argued, for CPAWS' goal.

But the truly exciting thing for CPAWS is that on January 4th, the day before she heads back to school, Mary watches a show on TV about the plight of the panda bear in China, something that CPAWS had nothing to do with. At this point, Mary becomes convinced that she must get involved. In her science class the next day she makes an announcement, inviting fellow students to a lunchtime meeting to discuss ways in which they can help the grizzly bears of the Kananaskis Valley by helping protect the bear habitat in this area.

Will Mary and her newly formed action club help CPAWS achieve its goal? Perhaps. As any environmental advocate will tell you, it is hard to be truly effective. Many hurdles lie in the way of this group: other demands on students' time, burnout, attrition, and simple ineffectiveness (for example, they may launch a letter-writing campaign, but send letters to the wrong Minister).

B. Why Are Environmental Education Programs So Difficult to Evaluate?

Clayoquot Sound is a forested watershed on the west coast of Vancouver Island in British Columbia, Canada. In 1993, Clayoquot Sound was the site of the largest civil disobedience protests in Canadian history, with almost 900 people arrested and jailed for protesting logging within this area.

As you know, what usually happens [in the evaluation of environmental education] is that we can only measure simple things, and then because that is what we can measure, we say that those simple things are the only real things.
So we count numbers, do simple pre-post treatment surveys, look for short-term changes, measure things, and then write our report. The real things, the ways in which environmental education can change someone's life, are much more subtle and difficult to measure. You can ask questions about meaning, about influence, about impacts, and look at things that aren't necessarily visible over a short time, but become apparent over the long term. This is what we have to consider as we look at the effectiveness of environmental education.

You know, when the Clayoquot trials were going on, and they took place right outside my office, my boss, the director of public affairs from our Ministry, asked me if all of that was my fault. Is people going to trial or jail a good indicator of our success?
Dr. Rick Kool, Environmental Educator, 2000

Rick Kool said it best: the real things, the ways in which environmental education can change someone's life, are much more subtle and difficult to measure.

Why is this? For one thing, many of those in the environmental education field are talented teachers with a passion for the environment and a gift for interpreting nature to students. Evaluation is not something they have received training in, nor is it something they are necessarily drawn to; frankly, they would rather be in the classroom or in the field!

Despite this, evaluation of their programs is something that eventually most environmental educators deal with, either because of the reporting requirements attached to a received grant, or because they and their program recipients have legitimate questions about the efficacy of their programs. Most programs have the ambition of, among other things, creating some changes to students' behaviours and/or causing net benefits to the environment, and there they hit a significant stumbling block. What measurement instruments are best used to measure behavioural change? How does one avoid such pitfalls as test wisdom and interviewer bias, which threaten to render results meaningless? And if, a decade after receiving an education program, a student becomes an environmental advocate, or sponsors environmental legislation in parliament (or, to use Dr. Kool's example above, is arrested in a peaceful blockade), can any reasonable claim be made that the education program contributed to this outcome? To relate this discussion to the Outcomes-Based schema described above: we are attempting to measure relatively complex outcomes and impacts, always trickier to measure than mere outputs.

A widespread survey of environmental education professionals asked them to suggest detailed indicators or instruments that they have used to measure such things as values shift, behavioural change, environmental action, or even discrete benefits to the environment that result from an environmental education program. The results were revealing: not one respondent could suggest such an indicator, and all agreed that work in this field is sadly lacking.

That said, this area has been discussed briefly in some places, usually in a context that does not necessarily fit into any particular evaluation schema. The text box below summarizes suggestions for evaluating action projects developed to complement a widely used and well-designed program known as Project WILD.
Taking time to evaluate an action project helps students understand what they have accomplished and allows recognition of how their project facilitated their personal growth.

Assessing Student Knowledge, Values, and Behaviours:
- Keep a video or photo log of project highlights.
- Collect memorabilia (articles about the project, photos, planning schedules, and so on) to create an action project scrapbook that students can sign and write comments in.
- Have students write essays and/or keep a journal about any changes in their thinking or behaviours as a result of the project.
- Have students evaluate other members of their group, as well as themselves. Give students pointers on positive, constructive feedback and focus the session on specific points, such as contribution to the project, effort, conflict resolution approach, etc.
- Have community members involved in the project assess student performances.

Assessing Project Success:
- Have students describe how well they think their project accomplished the objectives they outlined at the start.
- Have students conduct surveys, field studies, or interviews to assess the success of their completed project. What worked? What didn't, and why?
- Evaluate how students planned for ongoing maintenance and sustainability of the project.
- Have community members and others involved in the project assess the project outcome.

The writers of this document have taken two approaches. Section F: An Alternative Approach is a less direct approach that acknowledges that some of the outcomes of environmental education are very difficult or perhaps even impossible to measure. This approach simply describes good environmental education, creating a sort of checklist of program elements. The more features of positive environmental education that are present in a program, the higher the likelihood of behaviour change and positive action.

Our principal approach, though, is to review the process through which learners can sometimes get to action, and to suggest some ways in which outcomes pertaining to values, behaviours, and benefits to the environment may be measured, using some of the measurement instruments described in Appendix One.

Those using the following sections to develop an evaluation plan should keep in mind the notion of triangulation: the use of two or more techniques to measure outcomes. Two independent measurements that triangulate, or point to the same result, are mutually complementary and strengthen the case that change occurred.

It is important to note that in all the following areas we are interested in change that has occurred (a change in values, a change in behaviour, etc.). By far the best way to measure change is to use testing instruments that examine the subject at two different times, before and after a learning experience; this is referred to as pre/post testing. Any technique used to make claims about change that does not rely on pre/post testing must instead rely on reconstruction, in which subjects make claims about the way things used to be. Often, these claims remain unsubstantiated. Although we refer to both kinds of techniques below, those that rely on reconstruction tend to be of lesser validity than those based on pre/post testing.

C. Measuring Values Shift

"People may hold thousands of beliefs, which combine to form attitude orientations toward responding to an object or set of circumstances in a certain way, and attitudes are the building blocks of values. By adulthood, a person will hold hundreds, if not thousands, of beliefs, a smaller number of attitudes and only dozens of values." (Caduto, 1985)

Why measure values and value shifts?
As described above, beliefs about how the world works, once lumped and grouped and combined with emotional tendencies, comprise attitudes. Attitudes in turn combine to form a set of values, or a value system. As the larger category, values subsume the sub-concepts of attitudes and beliefs: by measuring values we can make inferences about beliefs and attitudes.

Should we strive to measure attitudes or values? Opinions vary: some workers in the field of evaluation feel that making statements about attitudes, rather than values, might be more realistic. This section concerns itself with the measurement of changes in values, or value shifts, that may take place as a result of an environmental education program or experience. Given the somewhat subjective distinction between attitudes and values, the authors believe that these techniques can be used to measure both.

Is it important to measure value shifts? Some might argue to focus instead on behaviours (see the next section) and try to measure action that attempts to help the environment; surely, this argument goes, these new behaviours come about as a result of a values shift. The advantage of measuring value shifts is that in some cases this might be all that changes: there are many good reasons why thought does not always translate into action (Section 8A). Table Four summarizes some of the instruments and associated outcome indicators for measuring a values shift. Appendix One discusses the strengths and weaknesses of each instrument.

Table Four: Measuring a Shift in Values
Measurement instrument and outcome indicator (see the notes below):
- Questionnaires (Likert scale or multiple choice): quantitative shift for the individual or group on questions pertaining to values.
- Interview: student responses reveal a higher appreciation of natural values.
- Focus group: unprompted, at least 15% of students comment that their values are more supportive of the environment.
- Review of peers: students comment on changes in the values of their peers through formal and informal interviews and assignments, at both the individual and team level.
- Journals: students make written reference to changes they feel have occurred in their own beliefs, attitudes, or values.
- Student artwork: students' drawings of their schoolyard give more emphasis (using colour and perspective) to natural objects.
- Feedback form (e.g. a letter to your organization): unsolicited, students comment on how the program influenced or changed how they feel about an aspect of nature.

Notes:
1. Pre/post: an objective measurement of a change in values often requires a baseline, pre-program measurement as well as a post-program measurement.
2. Outcome indicators: these are quantitative or qualitative statements describing the results we hope to see when the relevant measurement instrument is used.

Sample Activity: Values Shift Measurement Using "Take a Stand"

"Take a Stand" is a common activity in which students take a stand on a controversial statement by physically standing beside the sign that best reflects their opinion (the signs say one of the following: strongly disagree, somewhat disagree, can't decide, somewhat agree, or strongly agree). Repeating an identical statement before and after a program, and keeping track of the number of students who stand beside each sign, can be used to measure both values clarification and values shift. The outcome indicator for this activity might be a target indicator: at least 20% of students shift their position on the strongly disagree to strongly agree spectrum, towards a position that shows greater support for the protection of nature. (To download a CPAWS version of this activity, visit the CPAWS website listed in the Resources section of this document.)
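To make the arithmetic behind such a target indicator concrete, the short sketch below (in Python, with invented student names and positions, and the 20% threshold from the example above) shows one way the pre/post tallies from Take a Stand could be scored. It is an illustration only, not part of the CPAWS activity.

```python
# Minimal sketch (invented data) of scoring a pre/post "Take a Stand" activity.
# Positions are coded 1-5: 1 = strongly disagree ... 5 = strongly agree with a
# pro-protection statement, so a higher post-program score is a shift toward nature.

PRE = {"Mary": 3, "Raj": 2, "Lee": 4, "Ana": 1, "Sam": 3}    # hypothetical class
POST = {"Mary": 5, "Raj": 2, "Lee": 4, "Ana": 2, "Sam": 4}

def percent_shifted_toward_protection(pre, post):
    """Return the percentage of students whose position moved toward agreement."""
    students = pre.keys() & post.keys()          # only students measured both times
    moved = sum(1 for s in students if post[s] > pre[s])
    return 100.0 * moved / len(students)

TARGET = 20.0                                    # the target indicator used above
pct = percent_shifted_toward_protection(PRE, POST)
verdict = "meets" if pct >= TARGET else "does not meet"
print(f"{pct:.0f}% of students shifted toward protection ({verdict} the {TARGET:.0f}% target)")
```

In practice the same tally can be kept on paper; the point is simply that the indicator is the proportion of students who move along the spectrum, not the raw counts at each sign.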
D. Measuring Behaviour Change

It is a common ambition of education programs that some new behaviour arises as a result of their activities. In the case of Mary, described above, the program not only acted on her value system but also helped her make the jump across the very significant divide between thought and action.

It is important to remember that not all new behaviour can be classified as environmental action (for example, Mary bought bear slippers for her mother: a new behaviour, certainly, but not one that helped the environment). Environmental action involves students in tackling an environmental issue or problem, or working to improve an environmental setting. Actions can be as simple as making and maintaining a community notice board of environmental events or as complex as developing and implementing a plan for walking school buses. Environmental action is behaviour that intentionally tries to do something to help the environment, and is a subset of behaviour change more generally. Note that actual benefits to the environment do not necessarily follow; the next section deals with environmental benefits and how to evaluate them.

So which types of behaviour can be classified as environmental action? R.J. Wilke summarizes the primary methods through which a person may engage in action:
- Persuasion: educating or lobbying other members of the public.
- Consumerism: either changing one's own consumer habits or encouraging others to do so.
- Political Action: action that is aimed at influencing a decision-maker.
- Ecomanagement: actions to restore, remediate, or improve a natural area.
- Legal Action: action taken through legal avenues.
(In Hammond, 1997)

The authors suggest that any behaviour change focusing on the environment that falls into one of these five categories can be classified as environmental action.

A new difficulty presents itself when measuring behavioural outcomes: time. Whereas changes in values tend to occur during or shortly after a program, it may take longer for behaviours to manifest themselves. This not only calls for a long-term approach to evaluation that spans a number of years, but also opens the door to the possibility that some influence other than the program caused the behaviour.

Table Five: Measuring Behaviour Change
Measurement instrument and outcome indicator (see the notes below):
- Questionnaires: respondents list behaviours that they began after the program.
- Interviews: open-ended questions prompt interviewees to remark on changes to their behaviour.
- Observations: an observer tests for the presence or absence of a number of behavioural criteria (e.g. a classroom recycling program).
- Focus group: over a quarter of students agree that their behaviour has changed in a specific way.
- Student artwork: artwork created in response to an assignment identifies specific behavioural changes (e.g. a mural of human interaction with ecosystems).
- Feedback form: students describe and document action projects, changes they have made, or special events, in the form of postcards, photographs, videos, journals, and web page entries.

Notes:
1. Pre/post: in many cases, an objective measurement of a change in behaviour requires a baseline, pre-program measurement as well as a post-program measurement.
2. Outcome indicators: these are quantitative or qualitative statements describing the results we hope to see when the relevant measurement instrument is used.
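As a small illustration of the Observations row in Table Five, the sketch below (Python, with hypothetical criteria and observations) records the presence or absence of a set of behavioural criteria before and after a program and reports which behaviours are newly observed. It is offered only as an example of how such a checklist might be tallied, not as a prescribed instrument.

```python
# Minimal sketch (hypothetical criteria and observations) of a pre/post
# behavioural checklist: each criterion is marked present (True) or absent (False)
# during a classroom visit before and after the program.

CRITERIA = [
    "recycling bin in use",
    "lights off when the room is empty",
    "scrap paper reused for drafts",
    "compost collected from lunches",
]

pre_visit = {"recycling bin in use": False, "lights off when the room is empty": False,
             "scrap paper reused for drafts": True, "compost collected from lunches": False}
post_visit = {"recycling bin in use": True, "lights off when the room is empty": False,
              "scrap paper reused for drafts": True, "compost collected from lunches": True}

# Behaviours observed after the program but not before.
new_behaviours = [c for c in CRITERIA if post_visit[c] and not pre_visit[c]]
print(f"{len(new_behaviours)} of {len(CRITERIA)} criteria newly observed: {new_behaviours}")
```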
E. Measuring Benefits to the Environment

Many funders' application forms ask the following question: "Please describe in concrete terms how your project will benefit the environment." Those who have read this far will understand why environmental educators have a difficult time answering it! Even those students who hold values sympathetic to the environment, and whose new behaviours can be categorized as a kind of environmental action, may never do anything that directly benefits the environment, or at least not within the time period that we have the capacity to measure.

The following table discusses the five kinds of action and their evaluation.

Table Six: Types and Benefits of Environmental Action
Type of environmental action and ease of evaluating benefits to the environment:
- Persuasion (educating or lobbying other members of the public): benefits may never be demonstrable, and may not exist; the possibility exists that the persuasion does not in fact change anyone's behaviour.
- Consumerism (changing one's own consumer habits or encouraging others to do so): several measurement instruments can be used to identify changes in consumerism, and resources documenting the relationship between consumer habits and environmental impact are readily available.*
- Political Action (action aimed at influencing a decision-maker): decision-makers may never respond to pressure, or a pro-environmental decision they make may be due to other factors; interviews of decision-makers can be helpful in determining this.
- Ecomanagement (action to restore, remediate, or improve a natural area): this is easily documented and measured using a before-and-after scenario. Funders who emphasize easy accountability, such as EcoAction, place high emphasis on activities of this sort.
- Legal Action (action taken through legal avenues): action of this sort can be easily documented, through such things as judicial decisions.

* For example, the Union of Concerned Scientists recently published the Consumer's Guide to Effective Environmental Choices. See the Resources section for more information.

But what does "benefits to the environment" mean, you might ask? Some funding bodies come at this topic from a restoration or anti-pollution perspective, and want to know how the project will improve the environment or help it get better. Others start from a point of view that assumes the environment is healthy (e.g. intact ecosystems), and want applicants to demonstrate how greater protection of the environment will result from the project. Both these outcomes can produce benefits to the environment.

F. An Alternative Approach: What We Know About Good EE

The following approach has been validated through conversations with professionals in both funding and environmental education organizations. It acknowledges that some of the benefits and outcomes of environmental education are either so difficult to measure that measurement is impractical, or truly intangible. For these cases, our approach is simply to describe what good environmental education is. An extensive analysis of the relevant peer-reviewed professional papers is beyond the scope of this document; however, we can rely on the best judgment of practitioners. Recently, a number of environmental educators used their best professional judgment to identify elements of environmental education programs that help lead learners towards action.
This work is summarized in "How to Get Action in Your Environmental Education Program," which can be downloaded from the CPAWS website. See the Resources section for more information.

Using this approach, those trying either to design a sound environmental education program or to make decisions about funding such a program need only examine the proposed program, look for elements that contribute towards behaviour change and positive action, and make a subjective decision about the likelihood of the proposed program ever achieving these goals. It is worth reiterating, however, that a sound evaluation plan is superior in a number of ways, as it allows a far more empirical assessment of success than the theoretical approach described in this section.

Sidebar: Concerns About Outcome-Based Evaluation

Most organizations that have encountered Outcome-Based Evaluation over the past few years harbour concerns about it. Commonly emerging themes are:
- Questioning and confusion about what exactly is being asked of recipient organizations by funding bodies.
- Frustration with a perceived lack of coordination among funding bodies with regard to the introduction of these schemas.
- Concerns about definitions of key terms (output, outcome).
- Doubts about the capacity of results-based performance management schemas to capture the so-called "soft" results that are intrinsic to social development initiatives.
- Recognition that it is important to keep track of and learn from the work, but concern that expectations around information gathering and reporting exceed previous expectations and require funds, time, skills and information systems that the organization does not have.
- Expectation that funding bodies should recognize that if they desire higher levels of accountability and performance management, they should assist with capacity-building support.
(Cox, Plan:Net)

9. Conclusion: Implications of Conducting Evaluations

The authors would like to mention several trends that are relevant here. We feel that today, more than ever, society needs high-quality environmental education programs that succeed in moving values and changing behaviours in the direction of sustainability and environmental conservation. We also believe that effective, relevant evaluation offers a very powerful way to improve these education programs and enable them to succeed in accomplishing more of their objectives and goals.

For a variety of reasons, funders and environmental education professionals alike feel increased pressure to conduct better program evaluations. Further, there is considerable support in these communities for Outcome-Based Evaluation. That said, the reader should refer to the sidebar "Concerns About Outcome-Based Evaluation," which summarizes some of the concerns felt by organizations.

The following implications arise from these concerns:
- There is a real need within the environmental education community for training in the area of evaluation. Environmental education professionals are experts in program design and delivery, but not necessarily in conducting evaluations of these activities.
- Evaluation is an expensive and time-consuming activity. Practitioners need to recognize this in their budgeting and fundraising activities, and funders need to be prepared to fund good evaluation activities. Many responsible funders will now provide funds for evaluation of up to 15% of the total budget of the program.
- Practitioners and funders alike should recognize the importance and validity of external evaluators, despite the increased cost. If a thing is worth doing, it is worth doing well!
- Funders who deal with numerous education programs should consider efficiencies. These might range from giving workshops to all grantees, to keeping an external evaluator on retainer who could engage in a participatory way with grantees. Such a person or group could provide invaluable advice to groups setting up and implementing their evaluation plans, as well as acting as the external evaluator at the end of the process.
- It may take months or years for participant behaviours to change as a result of an education program. Given this, discerning whether or not behaviour change occurs requires long-term study. Practitioners need to acknowledge this in their evaluation design, and funders need to recognize this reality when considering funding requests.
- The creation of measurement instruments, such as pre and post surveys of students, takes considerable time and expertise. Funders should be aware of this, perhaps providing groups with templates developed by those with backgrounds in the psychological sciences, or at the very least referring groups to models that have been developed by others.

Appendixes

Appendix One: An Environmental Education Tool Kit

"If the only tool one has is a hammer, then one tends to treat everything as if it is a nail." (Mark Twain)

A. What Are the Instruments Available?

So now what? Well, you've picked a method for evaluating your data, but how exactly are you going to go about gathering the information necessary for doing the actual evaluation? What tools (instruments) will you use? From whom are you going to obtain information? And how will you gather it? Through a survey? An interview? A focus group meeting? There is a wide array of data collection instruments for you to choose from, which can be used to evaluate outputs and outcomes. The purpose of this section is to help you determine appropriate means of data collection for your evaluation.

Four Quick Hints Before You Begin:
- Obtain necessary clearances and permission before you begin.
- Consider the needs and sensitivities of the respondents.
- Make sure your data collectors are adequately trained and will operate in an objective, unbiased style.
- Cause as little disruption as possible to the ongoing effort.

1. Questionnaires and Surveys

Questionnaires and surveys are popular tools in conducting environmental education program evaluation. They are useful for obtaining information about the opinions and attitudes of participants, and for collecting descriptive data. They are particularly useful when quick and easy information is needed. They can be applied in the following ways:
- Self-Administered: a questionnaire is distributed in person or by mail. Examples: Teacher Written Questionnaire, used to determine teacher satisfaction post-program; Student Written Questionnaire.
- Telephone: best when answers are more numerous and more complex questions are needed. Examples: Teacher Verbal Questionnaire for telephone interviews, to investigate changes in the attitudes, values, or behaviours of students; Student Verbal Questionnaire for a pre/post survey of attitudes and values.
- Face-to-Face: best when answers are more numerous and more complex questions are needed. A face-to-face questionnaire (where a series of precisely worded questions is asked) is different from an interview (usually a more open-ended discussion).

Questionnaires and surveys can readily become quantitative measurement instruments. For example, a Likert scale (in which respondents circle a number between one and five) can be used to measure agreement with certain statements. A multiple-choice questionnaire can likewise be used in a quantitative manner. Quantitative tools lend themselves to such things as making general statements about an individual or a class of students, or measuring changes to an individual or class as a result of an educational program.
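As a small, purely illustrative sketch (Python, with invented responses), here is one way pre/post Likert questionnaire data of the kind just described might be summarized. In practice you would also want a validated instrument and, where sample sizes allow, an appropriate statistical test, as discussed under Tests below.

```python
# Minimal sketch (invented data) of summarizing pre/post Likert questionnaire results.
# Each student rates agreement with several value statements on a 1-5 scale,
# before and after the program; we compare individual and group mean scores.
from statistics import mean

pre  = {"s1": [2, 3, 3], "s2": [4, 3, 2], "s3": [1, 2, 2], "s4": [3, 3, 4]}
post = {"s1": [4, 4, 3], "s2": [4, 4, 3], "s3": [2, 3, 2], "s4": [3, 4, 4]}

def student_means(responses):
    """Average each student's item ratings into a single score."""
    return {student: mean(items) for student, items in responses.items()}

pre_scores, post_scores = student_means(pre), student_means(post)
improved = sum(1 for s in pre_scores if post_scores[s] > pre_scores[s])

print("Group mean before:", round(mean(pre_scores.values()), 2))
print("Group mean after: ", round(mean(post_scores.values()), 2))
print(f"{improved} of {len(pre_scores)} students scored higher after the program")
```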
Design Tips
Before designing your questionnaire, clearly articulate what you will use the data for: what problem or need is to be addressed using the information gathered by the questions. Remember to include or consider:
- Directions to respondents (purpose, explanation of how to complete the questionnaire, the issue of confidentiality).
- Question content: ask what you need to know, and consider whether respondents will be able (or want) to answer.
- Wording of the questions: not too technical; use appropriate language levels for your audience; ask one question at a time; avoid biased wording, "not" questions, and double negatives.
- Order of questions: watch the length; start with fact-based questions and go on to opinion-based questions.

2. Interviews

Interviews are particularly useful for getting the story behind a participant's experiences. The interviewer can pursue in-depth information around a topic. Interviews may be useful as a follow-up with certain respondents to questionnaires, e.g., to further investigate their responses. Usually open-ended questions are asked during interviews. (McNamara, 1999)

Interviews themselves can range in style from informal, casual conversations, to standardized, open-ended questions, to a closed, fixed-response style.

Design Tips
Regardless of the format used, there are several design tips that can be applied to all, to ensure that both the quality of the data you are gathering and the quality of the experience for the interviewer and the interviewee remain high.
- Choose a setting with little distraction.
- Explain the purpose of the interview.
- Address terms of confidentiality.
- Explain the format of the interview.
- Indicate how long the interview usually takes.
- Tell interviewees how to get in touch with you later if they want to.
- Don't count on your memory to recall their answers. (McNamara, 1999)

Remember to ask your questions one at a time, in order to give the interviewee sufficient time to think and respond. Keep the wording of the questions as neutral as possible, and remain as neutral as possible yourself. When closing, ask interviewees if there is anything they would like to ask you.

3. Focus Groups

Commonly, focus groups involve bringing together a number of persons from the population to be surveyed to discuss, with the help of a leader, topics that are relevant to the evaluation. Essentially, focus groups are interviews, but of a larger number of people at the same time, in the same group. Focus groups can allow you to explore many different aspects of your program. For example, student focus groups allow you to ask questions that look for change in attitudes and values, plus any action that may have ensued. Focus groups also enable a deeper exploration of issues by creating a venue for the sharing of experiences: in a successful focus group session, a synergy or catalyst effect occurs, where participants build upon each other's responses to paint a rich picture of your program.
Design Tips
- The usefulness of your focus group will depend on the skills of the moderator and on your method of participant selection. It is important to understand that focus groups are essentially an exercise in group dynamics.
- When preparing for your focus group session, clearly identify the objective of your meeting, and develop a set number of questions (six is a good target number).
- Keep it short but sweet: one to one and a half hours is long enough.
- Hold it in a stress-free environment and provide refreshments. Pay attention to room set-up: have chairs arranged well, provide paper and pens, etc.
- At the beginning, set out ground rules, remember to introduce all participants (or better yet, let them introduce themselves), and clarify the agenda. At the end, thank everyone for coming.
- After the group has answered each question, summarize what you think their answer was, and repeat it back to them. This ensures that you have a correct interpretation of their answer.
- Don't forget to keep minutes! Tape-recording focus group sessions may be the best way to capture the often rapid discussions, comments and ideas.
- Use the session to explore a topic in depth through group discussion, e.g., reactions to an experience or suggestion, challenges and successes of a program, understanding common complaints, or marketing needs.

4. Tests

Many feel that improvements in test scores are the best indicators of a project's success. Test scores are often considered "hard" data and therefore (presumably) objective data, more valid than other types of measurements such as opinion and attitude data. (EHR/NSF Evaluation Handbook)

Various types and their applications:
- Norm-referenced: measures how a given student performed compared to a previously tested population.
- Criterion-referenced: measures whether a student has mastered specific instructional objectives and thus acquired specific knowledge and skills.
- Performance assessment tests: designed to measure problem-solving behaviours rather than factual knowledge; students are asked to solve more complex problems and to explain how they go about arriving at answers and solving these problems. These may involve group as well as individual activities, and may appear more like a project than a traditional test. (EHR/NSF)

When considering using tests as part of your data-gathering strategy, there are a few fundamental questions that must first be addressed. Is there a match between what the test measures and what the project intends to teach (i.e., if a science curriculum is oriented toward teaching process skills, does the test measure these skills or only the more concrete scientific facts)? Has the program been in place long enough for there to be an impact on test scores? (EHR/NSF)

Design Tips
Evaluators may be tempted to develop their own test instruments rather than relying on ones that exist. While this may at times be the best choice, it is not an option to be undertaken lightly. Test development is more than writing down a series of questions, and there are strict standards formulated by the American Psychological Association that need to be met in developing instruments that will be credible in an evaluation. If at all possible, use of a reliable, validated test is best.

5. Observations

The basic goal behind conducting observations is for the researcher to gather accurate information about how a program actually operates, particularly about the processes involved. Observations involve watching others while systematically recording the frequency of their behaviours, usually according to pre-set definitions.
(SAMHSA CSAP NCAP, 2000)

Many researchers employ participant observation strategies, whereby the researcher joins in the process being observed, in order to provide more of an insider's perspective. (SAMHSA CSAP NCAP, 2000)

6. Document and Record Review

Often, researchers want to obtain an impression of how a program operates without actually interrupting the program itself. A review of secondary data such as applications, journals, memos, minutes, and other such records provides one means of doing so.

7. Case Studies

Case studies are used to organize a wide range of information about a case and then analyze the contents by seeking patterns and themes in the data, and by further analysis through cross-comparison with other cases. A case can describe any unit, such as an individual, group, program, or project, depending on what the program evaluators want to examine. (McNamara, 1999)

Design Tips
McNamara has identified five steps towards developing a case study:
1. All data about the case is gathered.
2. The data is organized into an approach that highlights the focus of the study.
3. A case study narrative is developed.
4. The narrative can be validated by review from program participants.
5. Case studies can be cross-compared to isolate any themes or patterns. (McNamara, 1999)

B. Table Seven: Pros and Cons of Evaluation Instruments

The following table summarizes the advantages and disadvantages of the evaluation instruments mentioned above.

1. Questionnaires and Surveys (types include: self-administered; interview administered by telephone)
Advantages:
- Inexpensive.
- Easy to analyze.
- Easy to ensure anonymity.
- Can be quickly administered to many people.
- Can provide a lot of data.
- Easy to model after existing samples.
Disadvantages:
- Wording of questions might bias responses.
- No control for misunderstood questions, missing data, or untruthful responses.
- Not suitable for examining complex issues.
- Can be impersonal.
- By telephone: respondents may lack privacy.

2. Interviews (types include: informal, conversational interview; standardized, open-ended interview; closed, fixed-response interview)
Advantages:
- Can allow the researcher to get a full range and depth of information.
- Develops a relationship with the client.
- Can be flexible with the client.
- Can allow you to clarify responses.
- The interviewer controls the situation and can probe irrelevant or evasive answers.
- With good rapport, may obtain useful open-ended comments.
- Usually yields the richest data, details, and new insights.
- Best if in-depth information is wanted.
Disadvantages:
- As a rule, not suitable for younger children, older people, and non-English-speaking persons.
- Not suitable for sensitive topics.
- Respondents may lack privacy.
- Can be expensive.
- May present logistical problems (time, place, privacy, access, safety).
- Often requires a lengthy data collection period unless the project employs a large interviewer staff.
- Can take much time.
- Can be hard to analyze and compare.
- The interviewer can bias clients' responses.

3. Focus Groups
Advantages:
- Useful for gathering ideas, different viewpoints, and new insights, and for improving question design.
- The researcher can quickly and reliably obtain common impressions and key information about programs from the group.
- Can be an efficient way to get a wide range and depth of information in a short time.
- Information obtained can be used to generate survey questions.
Disadvantages:
- Not suited for generalizations about the population being studied.
- It can often be difficult to analyze responses.
- A good facilitator is required to ensure safety and closure.
- It can be difficult to schedule people together.

4. Tests (types include: norm-referenced; criterion-referenced; performance assessment tests)
Advantages:
- Tests can provide the "hard" data that administrators and funding agencies often prefer.
- Generally they are relatively easy to administer.
- Good instruments may be available as models.
Disadvantages:
- Available instruments may be unsuitable.
- Developing and validating new, project-specific tests may be expensive and time-consuming.
- Objections may be raised because of test unfairness or bias.

5. Observations (types include: observations; participant observations)
Advantages:
- If done well, can be the best way to obtain data about the behaviour of individuals and groups.
- You can view the operations of a program as they are actually occurring.
- Observations can be adapted to events as they occur.
Disadvantages:
- Can be expensive and time-consuming to conduct.
- Needs well-qualified staff to conduct.
- Observation may affect the behaviour of program participants and deliverers.
- Observed behaviours can be difficult to interpret and complex to categorize.

6. Documentation and Record Review
Advantages:
- Can be objective.
- Can be quick (depending on the amount of data involved).
- Provides comprehensive and historical information.
- Doesn't interrupt the program or clients' routine in the program.
- The information already exists.
- Few biases about the information.
Disadvantages:
- Can also take much time, depending on the data involved.
- Data may be difficult to organize.
- It can be difficult to interpret and compare data.
- Data may be incomplete or restricted.
- You need to be quite clear about what you are looking for.
- Not a flexible means of getting data.

7. Case Studies
Advantages:
- Fully depicts the client's experience in program input, process and results.
- Can be a powerful means of portraying the program to outsiders.
Disadvantages:
- Usually quite time-consuming to collect, organize and describe.
- Represents depth of information rather than breadth.

The above table is a compilation of information taken from the following documents and sources: Carter McNamara's Basic Guide to Program Evaluation; EHR/NSF's User-Friendly Handbook for Project Evaluation; and SAMHSA CSAP NCAP's Getting to Outcomes. See the References section for more information.