
THE IMPACT OF A PROFESSIONAL DEVELOPMENT UNIT ON THE PROGRAM

EVALUATION SKILLS OF IN-SERVICE SCHOOL COUNSELORS

By

NICOLE MERLAN CARR

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL

OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT

OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2010


© 2010 Nicole Merlan Carr


Logan and Bryce,

Thank you for your inspiration

I love you


ACKNOWLEDGMENTS

First and foremost, I thank G-d for inspiring me to live my greatest life. Special appreciation goes to my advisor, Dr. Mary Ann Clark. She has been a mentor and a friend throughout this process. I appreciate her belief in me. Her wisdom and support were essential in the completion of this process. I thank my committee members, Dr. Sondra Smith, Dr. David Miller, and Dr. Harry Daniels. Their guidance and direction with my work helped forge the path for not only this project but also my future goals.

I thank the school counselors for the hard work they do every day. Participants in this study were both gracious and eager. I value the time we shared developing and implementing a process so dear to my heart. I am grateful to have had Dr. Behrokh Ahmadi as a mentor and dear friend. I am thankful to have worked with such impeccable members of the Research and Accountability Department of Pinellas County. I am fortunate to work in Pinellas County Schools, where compassionate colleagues have always surrounded me.

I am thankful for my friends and family. They helped sustain me. Always it was a friend who could swoop in and wrap a blanket of support around me when I began to doubt myself. I am thankful to have been the child of my mother. Her inspiration has been a powerful force in all I do. Without her unconditional love I could not be the woman I hope to become.

I thank my sons Logan and Bryce, who constantly provide me with opportunities to put life into perspective. The life that shines in them constantly inspires me to learn more. I am grateful to have been blessed to share the world with them.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION
    Accountability
        Early Educational Accountability Systems
        Accountability Today
        School Counselor Accountability
    Scope of the Problem
    Statement of the Problem
    Need for the Study
    Theoretical Rationale
        Program Evaluation Theory
        Social Cognitive Theory
    Purpose of the Study
    Research Questions
    Definition of Terms
    Overview of the Remainder of the Dissertation

2 REVIEW OF THE LITERATURE
    Theoretical Framework
        Social Cognitive Theory
        Self-Efficacy
        Determining Self-Efficacy
        Research on Self-Efficacy
    Professional Development
        Best Practices in Professional Development
        Methods of Evaluating Professional Development
        Professional Development in the Area of Program Evaluation
    Program Evaluation
        Historical Overview
        Program Evaluation Models
        Systematic Approach to Program Evaluation
    Current State of Program Evaluation Among School Counselors
        Research Methodology
        Instrumentation
        Program Evaluation in School Counseling
    Summary of the Literature

3 METHODOLOGY
    Overview of the Study
    Relevant Variables
    Population
    Sampling Procedures
    Research Design
        Research Questions
        Hypotheses
    Measurement Procedures/Instrumentation
        Personal Data Sheet
        Essential Competencies for Program Evaluators Self-Assessment
        School Counselor Self-Efficacy Scale
        Implementation Survey
    Professional Development Unit
    Data Analyses
    Summary

4 RESULTS
    Demographic Characteristics
    Descriptive Statistics
    Inferential Statistics
        Research Question One
        Research Question Two
        Research Question Three
    Summary

5 DISCUSSION
    Overview of the Study
        Research Question One
        Research Question Two
        Research Question Three
    Implications
        Practice
        Theory
    Limitations
    Recommendations
    Summary

APPENDIX

A EMAIL FROM GUIDANCE SUPERVISOR
B EMAIL FROM PRINCIPAL
C SCRIPT PRESENTED BY RESEARCHER AT GUIDANCE MEETING
D PERSONAL DATA SHEET
E SCHOOL COUNSELOR SELF-EFFICACY SCALE
F ESSENTIAL COMPETENCIES FOR PROGRAM EVALUATORS SELF-ASSESSMENT
G IMPLEMENTATION SURVEY
H INFORMED CONSENT
I OUTLINE OF PROFESSIONAL DEVELOPMENT UNIT

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Research design
4-1 Participants' demographic characteristics
4-2 Item statistics ECPE
4-3 Summary item statistics ECPE
4-4 Item-total statistics ECPE
4-5 SCSE item statistics
4-6 Summary item statistics SCSE
4-7 Item-total statistics SCSE
4-8 Paired samples statistics pretest posttest ECPE
4-9 Paired samples test (t test) ECPE
4-10 Paired samples statistics pretest posttest SCSE
4-11 Paired samples test (t test) SCSE
4-12 Tests of between-subjects effects (ANCOVA) pretest posttest difference ECPE
4-13 Tests of between-subjects effects (ANCOVA) pretest posttest difference SCSE
4-14 Implementation survey results items 8-13
4-15 Implementation survey comments

LIST OF FIGURES

3-1 Professional development unit

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

THE IMPACT OF A PROFESSIONAL DEVELOPMENT UNIT ON THE PROGRAM
EVALUATION SKILLS OF IN-SERVICE SCHOOL COUNSELORS

By

Nicole Merlan Carr

December 2010

Chair: Mary Ann Clark
Major: School Counseling and Guidance

The purpose of this study was to examine the impact of a professional development unit on program evaluation on in-service elementary school counselors': (a) knowledge of program evaluation, (b) self-efficacy level towards performing program evaluation, and (c) ability to develop and conduct program evaluation in the school setting. This study provided a four-session professional development unit on program evaluation to a group of elementary school counselors in Pinellas County, Florida. There were 29 participants who completed the study. Results indicated that after participating in the professional development unit, which was based on Social Cognitive Theory, elementary school counselors increased their knowledge of program evaluation, increased their perceived self-efficacy toward program evaluation skills, and applied the learning from the professional development in their school settings. Although further research is necessary, the findings of this study suggest that implementation of a professional development unit like the one tested in this study might be a useful step toward increasing the program evaluation skills of school counselors. Additionally, these results further validate the applications of Social Cognitive Theory to professional development.


CHAPTER 1

INTRODUCTION

In 2001, the Elementary and Secondary Education Act (ESEA) was reauthorized to include No Child Left Behind (NCLB). The reauthorization stemmed from the standards-based reform movement, which proposed (a) high academic standards for all students, (b) assessments to measure those expectations, and (c) accountability for those who work with students to meet these standards (Eakin, 1996). In 1988, Congress established a National Assessment Governing Board, which worked to create a standardized national assessment of what American students know. The National Assessment of Educational Progress (NAEP) assessment instrument was administered nationally to students in grades 4, 8, and 12. Before NCLB, the results of the NAEP indicated achievement gaps between white students and black students and between white students and Hispanic students (National Center for Educational Statistics, 2009). The increased documentation of the achievement gap provided a heightened awareness, which strongly influenced the stringent regulations outlined in NCLB.

Accountability

Early Educational Accountability Systems

Prior to NCLB, the state accountability systems used to monitor student achievement were neither standardized across states nor accurate reflections of the progress of all children. For example, some states reported student achievement data without disaggregating (Florida Department of Education, 2008). Oftentimes, there were no minimum participation rates set, allowing some students or groups of students not to be assessed and thereby not represented in the reported data (Florida Department of Education, 2008). NCLB requires each state to have an approved accountability plan in place, which is based primarily on annual academic assessments and has established aggregate groups and minimum participation rates (U.S. Department of Education, 2008). Annual measurable objectives are established by each state, with ensuing consequences on districts and schools for failing to meet standards.

NCLB provides states, districts, and schools with very specific requirements, such as detailed expectations of how funds are to be allocated, how specific programs such as Reading First and the Even Start Family Literacy program (U.S. Department of Education, 2010a) are to be implemented, and how targeted populations, such as disadvantaged, homeless, migrant, and neglected and delinquent youth, are to be supported (U.S. Department of Education, 2008). Failure to meet federal guidelines can result in loss of federal funding. This level of legislated accountability is unprecedented and has altered all aspects of the educational system in America.

Accountability Today

Accountability continues to play a significant role in American education. In March of 2008, the United States Department of Education (USDOE) offered states the opportunity to participate in a Differentiated Accountability Pilot Program (United States Department of Education, 2010b). USDOE requested that each state propose an accountability system that differentiated among schools and applied appropriate interventions and consequences to support student achievement. The accountability model proposals of seventeen states, including Florida, were approved. In 2010, Differentiated Accountability was written into Florida state statute (Florida Senate, 2010). It established criteria to differentiate schools with the greatest need of improvement from schools with less intense needs for intervention, based on student achievement data over time. Along with the categorizing criteria came specific strategies and supports to be implemented by schools and school districts.

In Florida, one required strategy is a mandatory, comprehensive instructional review of schools. Each district is required to develop an instructional monitoring process that includes classroom, school leadership team, and school-wide monitoring. Data are to be reviewed to determine the effectiveness of all programs and class offerings (Florida Bureau of School Improvement, 2010). Schools categorized as having the greatest need for intervention, based on student achievement data, are required to have an instructional review completed by the state department of education. These state instructional reviews include the review of all programs within a school (Florida Bureau of School Improvement, 2010). Another requirement of the Florida state accountability model is the establishment of a defined school-based leadership team, which is to include the principal and assistant principal(s), school counselor, social worker, psychologist, and other school-based personnel (Florida Bureau of School Improvement, 2010). This team reviews data and uses problem solving to determine what is and what is not working to accomplish the highest student achievement.

The ongoing increase in state and federal government monitoring has heightened the need for school personnel to expand their knowledge of program evaluation. Schools need to know if the programs they implement are doing what is required to accomplish the highest student achievement.

School Counselor Accountability

Specifically, the evaluation of school counselor programs has an important role in contributing to the accountability system in education (Fitch & Marshall, 2004; Loesch & Ritchie, 2008). State accountability systems are based heavily on student achievement in the areas of reading, math, writing, science, and sometimes social studies (U.S. Department of Education, 2010b). School counselors do not have a prescribed set of curriculum standards that are measured in a state assessment. However, school counselors have a role in student success (Stone & Dahir, 2006). The mission of the school counselor is to contribute to the mission of the school and promote students' academic, career, social, and personal development (American School Counseling Association, 2010). The importance of prevention and intervention programs is evident in the prescribed interventions found in the state accountability models. Many of the interventions address the social and emotional needs of students (U.S. Department of Education, 2010b). As a result, school counselors must be increasingly responsible for the evaluation of their school counseling programs.

The school counselor's school-based curriculum is often a core element of school reform (Education Trust, 2007; House & Hayes, 2002). In 2002, the Education Trust, an independent nonprofit organization, implemented the National School Counselor Training Initiative (NSCTI). NSCTI collaborated with private organizations, state departments, school counseling associations, higher education institutions, and school districts to promote school counselors as agents of change at their schools and in their districts. NSCTI has established the inclusion of the school counselor in the accountability system by defining the school counselor as a change agent who fosters student academic achievement. NSCTI works to promote the school counselor as a change agent by providing in-service school counselors with information to increase data-driven decision-making skills, and by promoting research on effective school counseling practices to be used in standards-based systems.

Comprehensive school counseling programs reflect an understanding that school counselors are working in a climate of standards-based education and accountability. The American School Counseling Association (ASCA) defines comprehensive school counseling programs as "driven by student data and based on standards in academic, career and personal/social development, promote and enhance the learning process for all students" (ASCA, 2009). The value placed on set standards and assessments is clearly a foundation of comprehensive school counseling programs.


The ASCA National Model (2005), a framework for school counseling programs, specifies standards of an effective school counseling program. It focuses on four areas: foundation, delivery, management, and accountability. Accountability includes being engaged in continuous program evaluation activities (ASCA, 2010). It is expected that school counseling programs be evaluated and linked to student achievement.

The ASCA National Model (2005) provides a Program Audit to assist in this evaluation process. It is a rubric that assesses how closely a school counseling program aligns to the ASCA National Model (2005). The ASCA performance appraisal for school counselors is based on ASCA's recently established counseling competencies and states that the audit should be conducted annually. These competencies reflect the four components of The ASCA National Model (2005) and include the statement: "School counselors should possess the knowledge, abilities, skills and attitudes necessary to monitor and evaluate the processes and results of a school counseling program aligning with the ASCA National Model" (ASCA, 2007). It is clear that ASCA recognizes the value of program evaluation knowledge to the school counselor.

The Council for Accreditation of Counseling and Related Educational Programs (CACREP) identifies the need for studies that provide an understanding of research methods, statistical analysis, needs assessment, and program evaluation as a program element in the revised standards for CACREP-accredited programs (CACREP, 2009). CACREP sets the standards for school counselor preparation programs and has recognized the importance of research and program evaluation. The Education Trust, ASCA, and CACREP all see the need for school counselors to engage in program evaluation, and they illustrate the impact standards-based performance has had on the school counseling profession.


Beyond external accountability, there is an ethical obligation for school counselors to be proficient in the area of program evaluation. If school counselors are to determine their effectiveness, then they must be able to evaluate what they are doing (ASCA, 2005). Ethical standards lay out the expectation that the counselor will "promote the welfare of clients" (ACA, 2005). The school counselor conducting a small group guidance unit aimed at decreasing truancy needs to know if the group has achieved the set objective, and school counselors need to have the ability to use that information to make future decisions about the program. School counselors are frequently in schools with very little clinical supervision (Borders, 2005; Borders & Brown, 2005). School principals, who regularly function as the school counselor's supervisor, often lack the counseling theory to judge counselor effectiveness (Lambie & Williamson, 2004). Without clinical supervision, the school counselor is left without a method to monitor and evaluate the quality of services (Bernard & Goodyear, 1998, 2004; Borders, 2005; Borders & Brown, 2005). This lack of supervision leaves school counselors ethically bound to the task of evaluating their own programs (Borders, 2005). School counselors must know if the guidance curriculum they are implementing is effectively meeting the needs of their population (Loesch & Ritchie, 2008; Myrick, 1990, 1997, 2003; Vacc & Loesch, 2000).

Thus, legislative requirements, professional counseling organizations, and ethical codes expect school counselors to engage in the practice of program evaluation. Yet, despite the identification of program evaluation as an essential means to provide accountability, it is not fully implemented in school counselor practice (Studer, Oberman, & Womack, 2006; Trevisan, 2002a; Walsh, Barrett, & DePaul, 2007).

Scope of the Problem

As of June 2010, all fifty states, the District of Columbia, and Puerto Rico have "implemented high quality standards and assessment systems" (U.S. Department of Education, 2010b). Schools are now held accountable for students' academic achievement and responsible for closing existing achievement gaps. The decisions made at the state, district, and school level are to be determined by data-based evaluations. In 2007, Florida identified 937 Schools In Need of Improvement (SINI) for failing to meet the set annual measurable objectives for student achievement (Florida Department of Education, 2009). Schools with SINI status have sanctions imposed on them, including increased monitoring and evaluating of school and district level programs (U.S. Department of Education, 2009). Continued failure to demonstrate student achievement has resulted in 623 schools being categorized into the 'Corrective Action' or 'Restructuring' levels of SINI status (Florida Department of Education, 2009). These levels of sanction result in a decrease in decision-making powers at the school and district level (U.S. Department of Education, 2009). Increased oversight for these schools includes requirements such as linking school personnel's performance appraisals directly to student achievement and targeting individual professional development plans to school reform efforts. In order to be in compliance with these mandates, school counselors must have the capacity to evaluate their programs and determine the impact on student achievement. These sanctions have a direct connection to the role of the school counselor and the level of program evaluation knowledge (Florida Department of Education, 2009). The broad scope of these mandates illustrates the great need for school counselors to have an understanding of program evaluation.

The ASCA is an ever-expanding professional organization for pre-service and in-service counselors as well as counselor educators and other professionals, and it currently has over 23,000 members (ASCA, 2008). CACREP has accredited 184 school counseling university training programs (CACREP, 2009). In 2006, there were 121,608 elementary and secondary school counselors, with a projected increase of 10.3% by 2016 (Bureau of Labor Statistics, 2008). With increasing numbers of in-service school counselors, as well as pre-service counselors in training, working in highly accountable school systems, it is important to further explore methods to increase school counselors' knowledge and use of program evaluation.

Statement of the Problem

While there is increasing emphasis on program evaluation, school counselors are not systematically implementing program evaluation into their way of work (Fairchild & Seeley, 1995; Isaacs, 2003; McGannon, Carey, & Dimmitt, 2005). School counselors are neither monitoring implementation processes nor evaluating anticipated program outcomes. Program evaluation includes both process evaluation, which addresses the monitoring of implementation, and outcomes evaluation, which assists in determining effectiveness (Patton, 2004). Both the process and the outcomes are important elements if the program evaluation is going to be utilized (Patton, 2004). However, the lack of practice of program evaluation will no longer be tolerated in the current educational climate (Education Trust, 2007; Loesch & Ritchie, 2008; McGannon, Carey, & Dimmitt, 2005; U.S. Department of Education, 2001).

In a social science profession, it is essential that a program evaluation be more than just a checklist of duties and that value be assigned to the tasks (Scriven, 2004). Because school counselors work with people, largely children, it is imperative to determine the effectiveness of the school counseling curriculum (Lipsey, 2008). The school counselor is expected to go beyond determining effectiveness and establish that what is being done meets the needs of the population (Loesch & Ritchie, 2008; Myrick, 1990, 2003; Vacc & Loesch, 2000). Engaging in ongoing program evaluation is the first step in making those determinations. Program evaluation allows school counselors to utilize the evaluation process to develop more effective practices. In order to determine how counselors are effectively serving students in the academic, career, and social-emotional domains as outlined in the standards promoted by ASCA, the program evaluation process must be valued and understood by school counselors and their supervisors (Clark & Amatea, 2004; Eisner, 2004).

It is often difficult to define the effectiveness of the school counselor. Being engaged in program evaluation activities allows school counselors to constantly reflect on the goals of their program. Without program evaluation, school counselors lack clear, measurable objectives for their program. School counselors who fail to demonstrate the impact of their program on student achievement begin to lose control of their program and are relegated to performing tasks and duties determined by others in the building (Adelman & Taylor, 2002).

Need for the Study

Evaluation of the guidance curriculum has always been an element in the comprehensive guidance models (Gysbers & Henderson, 2000; Myrick, 1993, 2003; Vacc & Loesch, 2000). The ASCA National Model (2005) further established the expectation that school counselors will provide program evaluation (McGannon, Carey, & Dimmitt, 2005). The ASCA has made tremendous strides to create and promote effective tools school counselors can use to determine if their program is aligned to The ASCA National Model (2005). If school counselors use the ASCA program audit, the performance appraisal, and the standards prescribed in the framework, they will be able to assess if their school counseling programs are aligned to The ASCA National Model (2005). However, it is possible that a lack of holistic understanding of program evaluation makes even the best instruments ineffective to the user (Guba & Lincoln, 2004). Without the ability to link student achievement to the school counseling program, school counselors are at risk of "losing their jobs" (McGannon, Carey, & Dimmitt, 2005, p. 4). Program evaluation skills are required to make those connections.

What causes school counselors not to practice program evaluation? Trevisan (2002a) identifies lack of training, mistrust of the evaluation process, and perceived difficulty in measuring outcomes as contributing factors to the lack of program evaluation implemented among school counselors. CACREP programs do require coursework intended to provide the school counselor with the appropriate training. However, not all counselor education programs are CACREP accredited. Even with some of the best pre-service training, there is no guarantee that the in-service school counselor will be effective in carrying out program evaluation (Bandura, 2007; Wittmer & Loesch, 1986).

Another difficulty may be a lack of motivation on the part of the school counselor to conduct evaluation. Motivational factors are categorized as those that push and those that pull (Trevisan, 2002b). For the school counselor, the external push factors are evident in legislation and professional organizations. However, the push may not be as strong at the school level, where counselors' duties are largely defined by the principal (Baggerly & Osborn, 2006). The internal pull factors exist in counselors as professional ethics, but a deficiency of skills may hinder the utilization of evaluation. In order to address the lack of motivation, training, trust, and the perceived difficulty surrounding school counselor program evaluation, a training that addresses these potential causes needs to be conducted and evaluated (Trevisan, 2002b).

Theoretical Rationale

Program Evaluation Theory

Program evaluation is largely viewed as the assessing of program outcomes (Patton, 1997; Rossi, Lipsey, & Freeman, 2004). However, evaluation has a broader scope than outcome assessment. Rossi et al. (2004) identify five areas of assessment in program evaluation: (a) need for the program, (b) program theory, (c) program process, (d) impact of the program, and (e) cost analysis.

Each of these dimensions builds upon the one before it. Identifying the need is the first step. Without a clearly defined need for the program, the evaluator cannot proceed in evaluating the appropriateness of the program design and theory. The process can only be effective when it is based on a theoretically sound practice. The impact cannot be determined if the implementation of the process has not taken place. Finally, without clear outcomes it is not possible to determine cost effectiveness. Each level of assessment is interwoven with the one before it when determining the value of a program. Understanding the relationships between these elements is the precursor to determining if a program is effective.

According to Rossi, Lipsey, and Freeman (2004), program evaluation should review these dimensions with a Systematic Approach, defined as the employment of "social research procedures for gathering, analyzing, and interpreting evidence about the performance of a program" (Rossi et al., 2004, p. 16). The evaluation should be utilized to inform future decisions related to the program (Patton, 1997; Rossi et al., 2004; Schwitzer, 1997). It is the assignment of value to the goals, processes, and impacts of a program or practice that gives the evaluation a purpose (Lipsey, 2008). Rossi, Lipsey, and Freeman's (2004) five dimensions of assessment with a Systematic Approach to program evaluation will provide a theoretical framework for this study.
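To make the ordering of the five dimensions concrete, the following is a minimal illustrative sketch in Python (my own, not from the dissertation; the dimension names follow Rossi et al.'s five areas, while the function and data are hypothetical) of how each dimension can only be meaningfully assessed once the one before it is satisfied:

```python
# Illustrative sketch: Rossi, Lipsey, and Freeman's (2004) five assessment
# dimensions, modeled as a sequence in which each dimension depends on the
# one before it. Names and data here are hypothetical.

EVALUATION_DIMENSIONS = [
    "need for the program",
    "program theory",
    "program process",
    "impact of the program",
    "cost analysis",
]

def assessable_dimensions(findings: dict) -> list:
    """Return the dimensions that can be meaningfully assessed, stopping
    at the first dimension whose assessment was not satisfied."""
    reached = []
    for dimension in EVALUATION_DIMENSIONS:
        reached.append(dimension)
        if not findings.get(dimension, False):
            break  # later dimensions depend on this one
    return reached

# Example: a program with a documented need and a sound theory, but whose
# process was never implemented, cannot yet have its impact or cost assessed.
print(assessable_dimensions({
    "need for the program": True,
    "program theory": True,
    "program process": False,
}))
# -> ['need for the program', 'program theory', 'program process']
```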

Social Cognitive Theory

Social cognitive theory proposes that learning is strongly influenced by observation (Bandura, 1986, 1994, 2007). Through modeling and comparing oneself to someone else, an individual can cultivate his or her thoughts and feelings about a skill (Bandura, 1986, 1994; Zimmerman & Schunk, 2004). Social cognitive theory asserts that an individual's thoughts and feelings about a skill influence the learning. In social cognitive theory, the internal belief that one could change is termed self-efficacy (Bandura, 1977, 1994, 2007). Self-efficacy is the perception an individual has of his or her ability to complete an action (Bandura, 1986, 2007). Social cognitive theory, specifically the concept of self-efficacy, will also provide a theoretical framework for this study.

Purpose of the Study

The purpose of this experimental study will be to examine the impact of a program evaluation training unit on in-service school counselors': (a) knowledge of program evaluation, (b) self-efficacy level towards performing program evaluation, and (c) ability to develop and conduct program evaluation in the school setting. The professional development unit will be delivered to a voluntary sample of elementary school counselors from a large school district in Florida.

This research will provide initial data and implications regarding strategies for increasing the application of program evaluation knowledge and skills of in-service counselors.

Research Questions

This study will address the following research questions:

1. Will school counselors have a greater understanding of program evaluation principles after taking part in a professional development unit? Will demographic factors influence performance on the Essential Competencies for Program Evaluators Self-Assessment?

2. Will the professional development unit increase school counselors' self-efficacy towards program evaluation implementation? Will demographic factors influence school counselors' self-efficacy towards program evaluation implementation as measured by the School Counselor Self-Efficacy Scale?

3. Will school counselors be able to develop a program evaluation in their own school setting as a result of having received professional development training?
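The list of tables in Chapter 4 indicates that the pretest-posttest comparisons behind the first two questions were analyzed with paired-samples t tests, with ANCOVA used for the demographic factors. Below is a minimal sketch of that kind of comparison (my own illustration, not the dissertation's analysis code; the scores and the use of scipy are assumptions):

```python
# Hypothetical sketch of the pretest/posttest comparison implied by research
# question one: a paired-samples t test on Essential Competencies for Program
# Evaluators (ECPE) scores. All scores below are invented for illustration.
from scipy import stats

pretest = [52, 48, 61, 55, 47, 59, 50, 63, 58, 54]   # ECPE totals before the unit
posttest = [60, 55, 66, 62, 53, 64, 58, 70, 63, 61]  # ECPE totals after the unit

t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p value would indicate a statistically significant gain in measured
# program evaluation knowledge after the professional development unit.
```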

Definition of Terms

The following terms are defined as they are used in this study:


Accountability: in the broadest sense, being responsible for one's actions. In the field of education, individuals are responsible to the public as well as to the individuals they are serving. School counselors are accountable to all stakeholders (Astramovich & Coker, 2007; Herr, 2001; House & Hayes, 2002; Loesch, 2001; Loesch & Ritchie, 2008).

In-service school counselor: a counselor who has completed the training to be a state certified school counselor and is working as a school counselor in the school setting.

Instrument: the tool used to measure a defined concept, skill, or variable (Sink & Spencer, 2007; Studer, Oberman, & Womack, 2006; Whiston & Aricak, 2005, 2007).

Professional development unit: in this study, a multiple-session professional development program. It is not a one-time meeting where information is shared; instead, information is progressive and builds upon prior sessions.

Program evaluation: the systematic approach to determining the value of a program, policy, or procedure (Rossi, Lipsey, & Freeman, 2004).

Self-efficacy: a component of Social Cognitive Theory; the perception an individual has of his or her ability to complete an action (Bandura, 1986, 2007).

Social Cognitive Theory: based on the concept that human actions are the result of the relationship between the individual's behavior, external environment, and internal factors (Bandura, 1986, 2007).

Overview of the Remainder of the Dissertation

This first chapter provided an introduction to the study. Chapter 2 reviews the literature in several important areas: school counselors as evaluators, program evaluation theory, professional development theory, and self-efficacy theory. The variables, population, sampling procedures, research design, measurement procedures, professional development unit, and data analysis are presented in Chapter 3. The results of the study are presented in Chapter 4. Chapter 5 discusses the study's significant findings, limitations, implications, and recommendations for further research.


CHAPTER 2

REVIEW OF THE LITERATURE

The purpose of this study will be to examine the impact a professional development unit on program evaluation has on in-service school counselors': (a) knowledge of program evaluation, (b) self-efficacy level towards performing program evaluation, and (c) ability to develop and conduct program evaluation in the school setting. This chapter reviews literature relevant to this study, including the theoretical framework of social cognitive theory; self-efficacy theory and instrumentation; effective characteristics of professional development as revealed in the evaluation of professional development; program evaluation theory; and the current state of program evaluation amongst school counselors.

Theoretical Framework

Social Cognitive Theory

Social Cognitive Theory evolved from Albert Bandura's early work with phobias, when he realized that behavior was influenced by more than external rewards and punishments (Bandura, 1977, 2007). Social Cognitive Theory is based on the concept that human actions are the result of a relationship between the individual's behavior, external environment, and internal factors (Bandura, 1986, 2007). The attribution of behavior to external factors is evident in Bandura's postulation that people do not live their lives in isolation and in fact work together to get what they want (Bandura, 2007). In Social Cognitive Theory, this concept is known as reciprocal determinism: the idea that behaviors impact the environment and the environment impacts the behaviors (Bandura, 1986, 2007). The internal factors are defined as thoughts, feelings, and biological make-up (Bandura, 2007). Social Cognitive Theory views the person as both the strategic thinker about how to manage the environment and later the evaluator of the adequacy of his or her knowledge, thinking skills, capabilities, and action strategies (Bandura, 2007).

According to Social Cognitive Theory, learning is strongly influenced by observation (Bandura, 1986, 1994, 2007). By modeling, a process of comparing oneself to someone else, an individual can develop his or her thoughts and feelings about a skill (Bandura, 1986, 1994; Zimmerman & Schunk, 2004). To learn from modeling, the individual must have the capacity to pay attention, retain what is being modeled, reproduce what is being modeled, and be motivated. Strong and effective modeling allows an individual to acquire skills and have the capacity to apply those skills in an appropriate way in different settings. Mindful application of the skills implemented in a routine is required to develop efficient acquisition of those skills (Bandura, 1999). The reinforcement of the skill provides the opportunity to self-regulate the learning (Wang & Lin, 2006). Making the application a routine can be beneficial when the skills can be applied in a variety of settings, but falls short and detracts from the individual's learning when the skills cannot be adapted (Bandura, 2007).

Since people do not always do everything they learn, Social Cognitive Theory makes a distinction between learning and performing (Bandura, 2007). Social Cognitive Theory postulates that there are three main motivational incentives that influence performance: direct motivation, vicarious learning, and self-produced skill (Bandura, 1986, 1994; Zimmerman & Schunk, 2004). Direct motivation is the influence the consequence has on the person's performance. Vicarious learning is the influence the observation of others has on the performance, and self-produced skill is the influence of the person's own identification with the performance (Bandura, 1986, 1994, 2007). Perceived barriers can also adversely affect the individual's performance. People are more likely to perform if they anticipate positive outcomes than if they anticipate negative outcomes (Bandura, 2007).

The application of Social Cognitive Theory has been examined with adult learners' acquisition and application of computer skills. The early work of Compeau and Higgins (1995) and the later work of Wang and Lin (2006) examined the impact of personal, behavioral, and environmental influences. Compeau and Higgins (1995) compared a behavior-modeling approach to a lecture-based approach and found modeling to be the more effective training model. Additionally, watching others perform the skills contributed to higher levels of self-efficacy and influenced performance (Compeau & Higgins, 1995).

Wang and Lin (2006) also applied the idea of self-regulated learning in the development of the application used to teach the skills. Adult learners were able to monitor their learning and adjust to acquire the skills. Findings indicated the participants with higher levels of motivation were able to respond more appropriately and effectively in the web-based learning environment (Wang & Lin, 2006).

Social Cognitive Theory-based interventions with teachers are also found in the literature. A Social Cognitive Theory-based intervention with physical education teachers produced increased efficacy in teaching the Exemplary Physical Education Curriculum (EPEC) and in overcoming barriers (Martin, McCaughtry, Kulinna, & Cothran, 2009). The study applied modeling and vicarious learning by pairing experienced teachers with new teachers during the intervention, which included professional development on teaching EPEC.

Zimmerman and Schunk's (2004) cognitive model was applied to an instructional model used in a Business English course with 24 freshman university students (VanSteendam, Rijlaarsdam, Sercu, & VandenBergh, 2010). The approach used observation, modeling, and self-regulated learning. Findings indicated that if emulation happened, then observation and practice were equally effective in impacting the acquisition of the strategy (VanSteendam et al., 2010).

Social Cognitive Theory is also prevalent in teacher preparation texts. Practical application and use of Social Cognitive Theory, including the use of modeling, self-regulated learning, and vicarious learning, are seen in these texts (Kumpulainen, 2001; Pajares, 2002). Social Cognitive Theory, including modeling, self-regulated learning, and vicarious learning, is the framework used in the development of the professional development unit for this study.

Self-Efficacy

Social Cognitive Theory discusses how modeling, motivation, and the experience of doing can each lead to an internal belief that one could change a behavior (Bandura, 1977, 1994, 2007). This internal belief, self-efficacy, is a central component of Social Cognitive Theory (Bandura, 1977, 1994, 2007). Self-efficacy influences a person's actions. Individuals who believe they will be effective will expect a positive outcome, and those who believe they will be ineffective will expect a negative outcome (Bandura, 1977, 1994, 2007). Personal efficacy contributes to the acquisition of knowledge and the development of skills, and it can be influenced by the choice of activities and the individual's motivational level (Bandura, 2007). If school counselors receive the training to implement a new program but have low self-efficacy towards their ability to put that training into practice, they are unlikely to implement the program. Without initial implementation, feedback cannot occur, and ongoing change in practice will not be sustained.

Determining Self-Efficacy

Self-efficacy measures have ranged from broad overviews to domain-specific assessments (Pajares, 1991). Unless the self-efficacy instrument is specific to the expected outcome, it is unlikely to predict future behaviors (Bandura, 1986). The skills being tested and the skills one hopes to assess should be similar skills (Pajares, 1991). There are a number of instruments developed to determine the self-efficacy of teachers (Brouwers & Tomic, 2004; Pajares, 1991; Riggs & Enochs, 1990). Bodenhorn's research has contributed to the methods for measuring the self-efficacy of school counselors (Bodenhorn, 2005, 2010). A review of Bodenhorn's work on instrument development related to counseling self-efficacy is found in Chapter 3. Determining the self-efficacy of the school counselor following a professional development will not only assess the participant's reaction, but also begin to ascertain participant learning and assess whether there is potential for the learning to be applied (Guskey, 2002).

Research on Self-Efficacy

The investigation into self-efficacy is a highly visible area of research in counseling and teaching (Paulsen & Betz, 2004). A substantial body of research can be found in the area of self-efficacy and career counseling (Bodenhorn, Wolfe, & Airen, 2010; Cantrell & Hughes, 2008; Guskey, 2002; Kosine, Steger, & Duncan, 2008; Larson & Daniels, 1998; Paulsen & Betz, 2004; Yuen & Ma, 2008). The relationship between self-efficacy and career decision-making has been examined in the areas of self-appraisal, collecting occupational information, setting goals, career planning, and problem solving (Paulsen & Betz, 2004). Numerous studies examining self-efficacy in career decision-making have identified the important relationship that exists between the expectation for success in a career and the decision to pursue that career (Betz & Voyten, 1997; Feldt & Woelfel, 2009; Kosine, Steger, & Duncan, 2008; Paulsen & Betz, 2004).

The relationship between self-efficacy and the acquisition and application of skills is examined in the literature. Work done by Yuen and Ma (2008) targeted a group of in-service teachers studying in a graduate program at a university in Hong Kong. The study sought to identify factors that contribute to teachers' acceptance of e-learning technology. The findings from 152 questionnaires indicated computer self-efficacy was one of the two significant determinants of perceived ease of use (Yuen & Ma, 2008).

Cantrell and Hughes (2008) examined the self-efficacy of teachers in their evaluation of an extended professional development on content literacy. The study examined the effects of a year-long professional development with sixth- and ninth-grade teachers. Results indicated an improvement in the teachers' self-efficacy toward literacy teaching and concluded that teachers who had a higher self-efficacy prior to the professional development were more likely to implement what they had learned (Cantrell & Hughes, 2008).

In Larson and Daniels' (1998) review of the counseling self-efficacy literature, it was concluded that higher counseling self-efficacy was related to perseverance and an increased capacity to integrate evaluative feedback. In a national study of 860 American School Counseling Association members, school counselor self-efficacy was shown to have a relationship to a greater awareness of achievement gap data (Bodenhorn, Wolfe, & Airen, 2010). Greason and Cashwell (2009) examined the predictive relationship between mindfulness and counseling self-efficacy in 179 counseling students and found that mindfulness was a significant predictor of counseling self-efficacy. The literature with a focus on self-efficacy supports the concept that self-efficacy affects the professional pathway and practice. It is evident that the level of self-efficacy influences career decisions and the acquisition and implementation of professional skills.

Professional Development

Professional development is a broad term used to describe learning designed to develop an individual's professional knowledge or skills. Examples of professional development include a formal course leading to a specific skill, embedded ongoing reflection for continuous improvement, and mandated training on specific skills or procedures (Guskey, 2002). Many professional development models are grounded in social cognitive theory (Guskey, 2002; Fullan,


2001; Shaw et al., 1991). A first step in these models is the concept that the leaders of professional development can change participants' attitudes and beliefs by providing participants with feedback from their implementation of the skills and knowledge learned in the professional development (Guskey, 2002).

Three key areas of professional development were considered in designing the

professional development unit used in this study: overall best practices in professional

development, methods of evaluating professional development, and professional development in

the specific area of program evaluation. Each of these three areas will be considered before

developing and evaluating the impact of a professional development unit on in-service school counselors' program evaluation skills.

Best Practices in Professional Development

Although this study involves a professional development unit with school counselors, the

theoretical basis of the study must draw on research on professional development done with

teachers. This is because most of the literature surrounding professional development in

education comes from the work done with teachers. Historically, the theoretical framework of

professional development has focused on teacher change theory (Bechtel & O'Sullivan, 2006).

Teacher change theory is often based on psychotherapeutic change models (Bechtel & O'Sullivan, 2006; Guskey, 2000; Guskey, 2002). In this view, professional development that brings about improved student academic achievement is considered effective (Guskey, 2002). Guskey

(2002) argues that a broader definition of student learning outcomes should include not only

academic achievement, but also other measures such as behavior and attitude. These other

measures of student learning outcomes could be captured with standardized assessments, attendance records, behavior evaluations, and motivation evaluations.

Guskey (2002) cites the study by Harootunian and Yarger (1980), which found that teachers defined success in terms of their students' achievement, not in terms of their own professional


development. School counselors are similar to teachers in that they also define success in terms of student accomplishment, and have found value in linking their impact on students to their sense of mattering (Rayle, 2006).

Professional development should be "well organized, carefully structured and purposefully directed" (Guskey, 2003a, p. 12). Being purposeful better aligns the program to the needs of the targeted audience (Lipsey, 2008). In designing a successful professional

development unit, it is helpful to review relevant research on theories about why professional

development units sometimes fail. Historically, Fullan's (2001) work found that earlier professional development models failed because they lacked teacher ownership. Without teacher buy-in, change would not occur. Fullan (2001) emphasizes the need for both support and pressure for change in teachers' behaviors and beliefs. Shaw, Davis, and McCarty (1991) later placed change in the context of being the result of a 'perturbation': the teacher has become perturbed enough to embrace a change and must be active in the development of a new vision for change.

This model suggests that the change occurs over time and is a repeated process for the teacher.

Both of these theories explain change as the result of a change in the teacher's attitudes or beliefs. Once the attitudes and beliefs of the teacher change, the behavioral change can occur.

The more recent work of Guskey (2002) asserts that most professional development work

fails because it does not properly address what motivates change and the process of change.

Guskey (2002) asserts that motivation to change arises from both the belief that the professional development will enhance teaching and the belief that it will be pragmatic. Like other models, this model asserts that sustained change results from a change in beliefs and attitudes (Fullan, 2001; Shaw et al., 1991). However, this model of teacher change reverses the usual

order of change. Whereas change in belief and attitudes is followed by implementation in other


models, in this model professional development is followed by change in teacher classroom

practice, then change in student outcomes, and then finally change in teacher beliefs and attitudes

(Guskey, 2002). This concept stems from the idea that changes in attitudes and beliefs are the

result of the experience with the implementation of the skills and knowledge obtained in the

professional development. For example, when teachers have implemented a change and can see

results, they respond with a change in their attitudes and beliefs.

In order to implement a successful professional development unit using this model,

continual feedback needs to be provided to the participants (Guskey, 2002). One way to provide

feedback so the teacher can monitor the change that is a result of the professional development

unit is to implement a monitoring system and support for this system as a part of the professional

development unit.

A recent noteworthy contribution to professional development research was the Institute of Education Sciences' (IES) report, Reviewing the Evidence on How Teacher Professional Development Affects Student Achievement (2007). This report identified 1,359 studies that

investigated the effect of professional development on student achievement. Of all the studies

investigated, only nine studies met the What Works Clearinghouse (2002) evidence standards.

From those nine studies, several important conclusions were drawn about what makes professional development programs effective (Guskey & Yoon, 2009).

The effective professional development models identified in the IES study have common

elements related to time and duration. "Studies that had greater than 14 hours of professional development showed a positive and significant effect on student achievement from professional development" (Yoon, Duncan, Lee, Scarloss, & Shapley, 2007, p. 12). Of the four studies with the greatest impact on student achievement, the average contact time was slightly over 53 hours


(range: 30 hours to 83 hours), over a period of time from four months to 12 months (Yoon et al., 2007).

With the exception of one 4-week intensive summer training, all of the professional

development models in the review were workshops that provided follow-up (Yoon et al., 2007).

In all of the professional development models, the training went directly to the teacher; that is,

none of the studies utilized the common practice of training one person to go back and then train

other groups. This is often referred to as a 'train the trainer' model (Bax, 2002; Guskey, 2000).

The professional development sessions were all conducted by the researcher or a trainer external

to the school site. A review of the content description of the nine studies indicated that

successful activities focused on increasing content knowledge and pedagogic practice (Guskey & Yoon, 2009). Successful activities also contained well-planned, purposeful learning (Guskey & Yoon, 2009). These findings from the IES report (Yoon et al., 2007), highlighting the importance of duration of training, follow-up, delivery method, and content, provide useful guidelines for what should be included in an effective professional development model.

Methods of Evaluating Professional Development

In addition to providing useful guidelines for professional development research based on

published research, the IES study demonstrates the importance of sound evaluation of

professional development (Yoon et al., 2007). Many of the 1,359 quasi-experimental and randomized controlled design studies in the review were excluded because they lacked baseline data. Because studies without baseline data were devalued by the IES study, the researcher was careful to include baseline data, obtained through a pretest, in this study.


Although scholars agree that professional development training programs benefit from

comprehensive evaluation, there is evidence that many professional development programs lack

appropriate evaluation (Guskey, 2000; Trevisan, 2002b; Trevisan, 2004). Many professional

development programs are not evaluated (Trevisan, 2002b; Trevisan, 2004). Some programs

utilize tally sheets of attendance or narrative descriptions of the programs as the only evaluation

(Dahir & Stone, 2003; Guskey, 2000). Other evaluations of professional development are simple

questionnaires asking the participants if they enjoyed the workshop or felt they had learned

something from it. These self-assessments lack the baseline data needed to provide any real evidence of effectiveness (Yoon et al., 2007).

Although many self-assessment methods used to evaluate professional development have lacked the depth needed to draw conclusions, self-assessment has been studied to determine its usefulness in evaluating professional development (D'Eon, Sadownik, Harrison, & Nation, 2008; Pratt, McGuigan, & Katzev, 2000). D'Eon et al. (2008) show some early evidence of the construct validity of self-assessment of professional development workshops based on effect size. The approach taken in the D'Eon et al. (2008) study, along with earlier works that looked at the correlation between retrospective self-assessments and objective ratings, provides potential resources for evaluating the effectiveness of professional development (Skeff et al., 1992; Pratt et al., 2000).

Guskey's (2000) text on evaluating professional development provides a broad theoretical framework of what he defines as the levels of evaluating professional development. He does not provide an evaluation model. Instead, he offers a theoretical framework grounded in the Joint Committee Evaluation Standards, a set of standards for evaluation. The Joint Committee is a group of professional associations with the common purpose of developing standards for program evaluation. Since being formed in 1975, the committee has published program evaluation standards (Yarbrough, Shulha, Hopson, & Caruthers, 2011).

Guskey (2000) offers a perspective to consider when evaluating professional development. There are five levels in Guskey's (2000) view: (1) Participant Reaction; (2) Participant Learning; (3) Organizational Support and Change; (4) Participant Use of Knowledge and Skills; and (5) Student Learning Outcomes.

Participant Reaction, the first level in this theoretical framework, is the one most often

investigated. While this first level is the most basic and the most easily performed, it remains a

fundamental step in the evaluation of any professional development program. Participant

reaction determines what perceptual information needs to be gathered and helps decide the best

method of data collection.

Participant Learning, the second level in this theoretical framework, is frequently referred

to in educational pedagogy as the assessment of the measurable learning objective (MLO). The

MLO of the professional development would relate to the expected learning outcomes.

Identifying the importance of the MLO as a level reinforces Guskey's (2000) assertion that the professional development must be systematic and purposeful. The participants need to know

what is expected and how it will be measured.

Organizational Support and Change, the third level in this framework, involves assessing the organizational support. This third level is not always addressed in evaluations but is identified as a factor in teacher change (Fullan, 2001; Shaw et al., 1991; Guskey, 2003b). The idea that support and pressure influence teacher change makes this level of investigation a key aspect of any professional development program. An understanding of the system the training


will be implemented within provides the evaluator with a broad perspective and allows the

evaluator to better assess the professional development program. Knowing how the professional

development participant is going to be received when he or she tries something different at

school can impact the type of ongoing support and follow-up required from the professional

development program.

Participants' Use of Knowledge and Skills, the fourth level in this framework, is a required level in an assessment of a professional development program. An evaluation of the participants' knowledge of the content delivered during the professional development unit is an important component of gauging the success of the program. Participants' use of the knowledge and skills they learned suggests that they have mastered the content and value it enough to use it in their teaching.

Student Learning Outcomes, the fifth and final level in this framework, requires sufficient baseline data and frequent measurement of the students' predetermined skills or behaviors that are expected to change. Guskey's (2000) framework is supported by program evaluation theory. The concept of each level being interrelated and required before the next level can be investigated is congruent with Rossi, Lipsey, and Freeman's (2004) Systematic Approach to program evaluation. This theoretical framework also functions within a context, considering the inputs and process along with products. The concept of considering inputs, process, and products is grounded in the work done by Stufflebeam, whose CIPP Model (2004) incorporates context, input, process, and product.

Professional Development in the Area of Program Evaluation

Ghere, King, Stevahn, and Minnema (2006) developed a taxonomy of program evaluator competencies. These competencies were crosswalked among The Program Evaluation Standards (1994), the Essential Skills Series in Evaluation (1999), and the Guiding Principles for Evaluators (1995). They include six categories:

"(a) professional practice: professional norms and values; (b) systematic inquiry: the technical aspects of evaluation; (c) situational analysis: understanding and attending to the contextual and political issues of an evaluation; (d) project management: the nuts and bolts of managing an evaluation; (e) reflective practice: an awareness of one's program evaluation expertise as well as needs for professional growth; and (f) interpersonal competence: the skills people need to work with diverse groups of stakeholders to conduct program evaluations." (Ghere et al., 2006)

Their study used these competencies as a starting point for the participating teachers to

reflect upon. They strove to create awareness and develop knowledge of these competencies.

Participants first reflected on these competencies in the context of their environment and then

self-assessed their level and subsequently developed a professional development plan based on

their reflections. The two-hour session has been delivered to multiple audiences at a variety of times. Anecdotal data were collected to determine satisfaction and perception of the professional development, but implementation and impact were not assessed. While there is a lack of evaluation, this study does provide the initial development of some expected competencies, which were assembled using a system of crosswalks among multiple established sets of essential skills.

Trevisan's (2004) review of over 18 articles related to the teaching of evaluation is aimed at providing guidance to faculty assigned to teach program evaluation coursework, and offers some insight into what should be included in a professional development unit on program

evaluation. In general, practical experience for students has been recommended consistently in

the literature (Trevisan, 2004). The examples of hands-on teaching strategies addressed in the


review include simulation, role-play, single course projects, and practicum experiences. In order to include any of these approaches, support and supervision are required (Trevisan, 2004).

Based on what is discussed above regarding best practices in professional development, the current program evaluation frameworks for professional development, and the previous knowledge garnered from studies on training individuals in program evaluation, there are some clear expectations and directions set for the development of an effective professional development unit on program evaluation.

Program Evaluation

Before developing an effective professional development unit focusing on program evaluation, there needs to be an understanding of the history of program evaluation in education. An understanding of this history can guide any decisions made about the best program evaluation model to be used by in-service school counselors.

Historical Overview

The early days of program evaluation in American education began in 1845, when students in Boston were first tested (Madaus & Stufflebeam, 2000). The concept of using test scores as a method for assessing students' anticipated outcomes has a rich history, starting in 1887 with Joseph Mayer Rice, who first began to formally evaluate students by comparing test scores between groups of students (Shadish, Cook, & Leviton, 1995).

There have been periods of growth in program evaluation related to the historical contexts in which they occurred. One peak came with Ralph W. Tyler, who began to move program evaluation beyond the teacher and local level and developed larger scale evaluations in the

1940s. The development of standardized testing in the 1950s along with the large amount of

federal funding that came with The National Defense Education Act (1958) brought about

additional growth in program evaluation. The Elementary and Secondary Education Act (1965)


was quick to follow and with it came Title I evaluation requirements. These increases in

mandated educational evaluation began to define program evaluation as a business and a

profession (Shadish et al., 1995).

The more recent reauthorization of the Elementary and Secondary Education Act, NCLB (2002), with its explicit evidence-based evaluation requirements, solidified the significant role program evaluation has in American education (Dollarhide & Lemberger, 2006). An awareness of the consistent presence of evaluation is important because of the mistaken impression held by some that evaluation was born from NCLB. While the prescriptive nature of NCLB has recently influenced the prevalence of educational evaluation, program evaluation in education is not new. No matter the political climate, programs will continue to be scrutinized through evaluation (Rossi, Lipsey, & Freeman, 2004).

The same National Defense Education Act (1958) that brought growth to the standardized

testing and evaluation industry gave birth to what in the 1980s became known as Developmental

School Guidance programs (Gysbers & Henderson, 2000; Myrick, 2003). The school counseling

profession has been and will continue to be influenced by some of the same educational policies

that have impacted program evaluation. Knowing the historical evolution and political context provides a greater understanding of the environment in which the school counselor will practice program evaluation.

Program Evaluation Models

One of the most widely referenced Program Evaluation Models is the CIPP Model.

Developed by Daniel Stufflebeam in 1966, the CIPP Model is now in its fifth version. The original model focused on including both the process and the product in the evaluation (Stufflebeam, 2004). The idea that the end product was not the only area

worthy of evaluation and that the process leading up to the outcomes was also worthy of


evaluation represented a new direction in program evaluation. Later versions incorporated the idea that the model operates within a context, and that both the context and the inputs that already exist should be considered in the whole approach to the evaluation. The most recent version of the CIPP Model has evolved to include more detail related to the product. There is now the inclusion of the effectiveness of the product; not only is the impact on the target group evaluated, but an investigation into the quality and significance is also included (Stufflebeam, 1999). The product is also more closely examined to determine sustainability and transportability. These constant revisions exemplify the tenets of program evaluation: the model itself is evaluated within the context of the work it does and the inputs taking place, as its process and product are evaluated.

Michael Quinn Patton's Utilization-Focused Evaluation, developed in 1978, is another model that addresses the importance of using the program evaluation (Patton, 1997). Patton defines program evaluation as a "systematic collection of information about a potential broad range of topics for a variety of possible judgments and uses" (Patton, 1997, p. 23). Utilization-Focused Evaluation moves from determining the value of a program in an evaluation to determining why the value is important and what will be done with the knowledge of this value (Patton, 1997).

Systematic Approach to Program Evaluation

Rossi, Lipsey, and Freeman (2004) provide a model that, like those of Patton (1997) and Stufflebeam (2004), is systematic. The Systematic Approach includes five levels that are assessed when doing a program evaluation: (a) Need for the Program, (b) Program Theory, (c) Program Process, (d) Impact of the Program, and (e) Cost Analysis. Each level builds upon the next. When evaluating programs, there must be some success at each of the levels in order to determine that the program was successful. The lessons learned from assessments at each level provide value to the users of the program evaluation.
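The sequential, gated nature of these levels can be sketched in a few lines of code. The following Python fragment is purely illustrative: the level names follow the text above, but the function and its stopping rule are assumptions of this sketch, not part of Rossi et al.'s (2004) model.

```python
# Illustrative sketch only: the Systematic Approach treated as a gate,
# where each level must show some success before the next is meaningful.
LEVELS = [
    "need for the program",
    "program theory",
    "program process",
    "impact of the program",
    "cost analysis",
]

def evaluate_program(results):
    """Walk the five levels in order and stop at the first failure,
    since later levels cannot then be attributed to the program."""
    for level in LEVELS:
        if not results.get(level, False):  # unassessed levels count as failures
            return f"Evaluation stops at '{level}'; address this level first."
    return "The program shows some success at all five levels."

# Example: a program whose underlying need was never established.
print(evaluate_program({"need for the program": False}))
```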


The first level of the Systematic Approach to program evaluation is Needs Assessment.

In order to evaluate the program, there must be an understanding of the problem and the

population it was aimed at addressing. What was the need and does this program address that

need? Program designers sometimes fail to align the program with the need. For example, a

reading intervention, which is designed to teach initial sound fluency, may be utilized with a

population of students that, upon investigation, did not need a reading program focused on

fluency. The Needs Assessment might have found that this group of students had a high level of

fluency and a low level of reading endurance. Their need was a program in reading endurance,

not sound fluency. If the needs do not align with the intended outcomes of a program, it is

difficult to evaluate the program. In this example, since the students all began the program

proficient in initial sound fluency, they all would have done well on the post-test. If there had

not been a pre-test assessment, one might wrongly conclude that the program resulted in the

improved fluency. The program would have incorrectly been deemed successful.
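A few invented numbers make the fluency example concrete. In this hypothetical sketch (all scores are fabricated for illustration), students enter the program already proficient, so a post-test-only view looks like success while gain scores reveal almost no effect:

```python
# Hypothetical pre/post scores for students already at ceiling on
# initial sound fluency; values are invented for illustration only.
pre  = [95, 97, 96, 98]   # percent correct before the program
post = [96, 97, 97, 98]   # percent correct after the program

post_only_pass_rate = sum(score >= 90 for score in post) / len(post)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"post-test-only 'pass rate': {post_only_pass_rate:.0%}")  # looks perfect
print(f"mean gain with a pretest: {mean_gain:.2f} points")       # nearly zero
```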

Following the Needs Assessment, the second level of the Systematic Approach to

program evaluation is Program Theory. Once there has been a determination of need and an

intervention is deemed necessary, there is an assessment of the Program Theory: Was the

program based on sound theoretical knowledge? An example of a problem at this second level

of program theory would be the implementation of a math program in all the schools even

though research on the program indicated that it did not improve student math performance.

Lipsey (2008) uses the example of the 'Scared Straight' program, aimed at pairing juvenile delinquents with incarcerated criminals in the hope that this relationship will decrease delinquent behavior amongst the juveniles. Research shows this approach does not work and in


fact increases delinquency rates among juveniles. Even though the 'Scared Straight' program fails the second level of program evaluation, Program Theory, it remains an ongoing program.

The third level of the Systematic Approach to program evaluation in Rossi et al. (2004) is

the Program Process. Was the program implemented in the way it was prescribed? How was the

implementation monitored? Programs are put into practice differently. When evaluating a

program, it must be clear what aspects of the program were critical to its success. When a program is executed differently for one group, then that different method of execution must be investigated. At this level, the question becomes: is it the program or the individual

implementing the program that makes it a success? If the program's success is attributed to the

individual, one must then ask, what is it about the individual that is making a difference in

program effectiveness?

Outcomes Assessment, the fourth level of the Systematic Approach to program

evaluation, involves looking at the outcomes of a program to determine if the program

successfully achieved its goals. Outcomes assessment is sometimes mistakenly the first step that people performing program evaluation take, before assessing the needs, theory, or process (Lipsey, 2008). Unfortunately, this level often cannot be properly assessed because, in order to attribute any outcomes to a program, the evaluation has to have successfully considered the previous three levels. Working in isolation at this level makes it impossible to draw conclusions about the effects of a program (Lipsey, 2008).

The final level of the Systematic Approach to program evaluation is a Cost Analysis, a measure of efficiency (Rossi et al., 2004). The examination of the costs and benefits of the program has the greatest impact on the future direction of a program (Rossi et al., 2004). There are many technical and advanced applications to use when conducting a cost analysis. However,


costs and benefits are not always monetary. They may come in the form of time or student performance. Broadly defining and understanding the value of costs and benefits assists in measuring the program's efficiency.

Each of these levels of the Systematic Approach to program evaluation is related to one

another. Knowing how they relate and how they are used to evaluate a program can influence

the planning of an effective program. Ideally, program evaluation models are considered at the planning stage of a program (Patton, 1997; Rossi et al., 2004; Stufflebeam, 2004). However, in practice, they are often used after a program has been implemented, when someone is responsible for determining the program outcomes or cost effectiveness.

Current State of Program Evaluation Among School Counselors

American education is currently in an environment of standards-based reform. With the reauthorization of the No Child Left Behind Act (NCLB) expected in the near future, it is unlikely there will be a reduction in the levels of accountability expected from those involved in the United States K-12 educational system (Manasevit, 2008). The accountability levels mandated by NCLB have already impacted school counselors' way of work (Dollarhide & Lemberger, 2006). The call for increased levels of accountability is not new to the school counseling profession (Brown & Trusty, 2005; Gysbers, 2004; Gysbers & Henderson, 2001; Myrick, 2003; Wheeler & Loesch, 1981). However, the sense of urgency for school counselors to practice program evaluation has been heightened by the external pressures of educational policy and national budget reductions (Dahir & Stone, 2009; Herr, 2001; House & Hayes, 2002; McGannon, Carey & Dimmitt, 2005).

The increased attention to school counselor accountability has generated an increase in accountability efforts. The Fairchild and Zins (1986) study indicated that 45% of school counselors took part in some form of accountability. Fairchild (1993) later repeated the study


and found that the percentage of school counselors taking part in accountability efforts had increased from 45% to 65%. The Council for Accreditation of Counseling and Related Educational Programs' (CACREP) revised standards expect students to be provided with an understanding of program evaluation while enrolled in a counselor preparation program (CACREP, 2009).

Despite the value placed on program evaluation by CACREP, as evident in the 2009 revised standards, Trevisan's (2000) survey of school counselor certification requirements found that only 19 states and Washington, D.C. require some form of program evaluation skills, as demonstrated by completed coursework. Within those 20 locations, only Washington, D.C. and Colorado mandate a level equal to that found in the CACREP standards (Trevisan, 2000).

ASCA has responded to the current standards-based climate in education by establishing clearly defined standards in the ASCA (2005) National Model, which provide a benchmark for school counselors to evaluate their programs (Dahir & Stone, 2009). The literature presents a variety of additional methods to improve school counseling programs, provide evidence of effective practice, and improve the school counselor's image (Dahir & Stone, 2009; Fairchild & Seeley, 1995; Rowell, 2005; Rowell, 2006; Whiston, 1996, 2002). The recent literature supporting school counselor accountability includes articles calling for and presenting methodology for increased research; articles presenting instruments school counselors could use to measure counselor effectiveness; and research that specifically promotes the use of program evaluation (Astramovich & Coker, 2007; Carey, Dimmitt, Hatch, Lapan, & Whiston, 2008; Dahir & Stone, 2009; Studer, Oberman & Womack, 2006).

Research Methodology

Dahir and Stone (2009) respond to the call for increased accountability by summarizing the results of more than 175 school counselors' action research projects. The old method of


counting duties is being replaced with an outcomes-based approach. School counselors are no longer maintaining timesheets of activities, but are now looking at the measurable outcomes that can be shared with others. Dahir and Stone use their own six-step action research model, M.E.A.S.U.R.E. (mission, elements, analyze, stakeholders-unite, results, and educate) (Dahir & Stone, 2003). While the model does not meet the standards of NCLB's (2002) concept of evidence-based research, Dahir and Stone (2009) contend that action research begins to link school counselors' work to student achievement.

In the Dahir and Stone (2009) study, counselors and other stakeholders selected one goal of the school improvement plan for the counseling program to focus upon. The study found that all but two of the research plans had a positive impact on the selected goal. This work reiterates and further supports some of the earlier works that promote action research as the means for increased empirical evidence in the field (Rowell, 2005; Rowell, 2006; Whiston, 1996, 2003).

The establishment of the National Panel for Evidence-Based School Counseling has begun to address the need for empirical, evidence-based studies in school counseling (Carey et al., 2008). This joint effort on the part of ASCA and the Association for Counselor Education and Supervision (ACES) has been charged with the task of "improving the practice of school counseling by helping develop the research base necessary for responsible and effective practice" (Carey et al., 2008, p. 196).

A major contribution of this group has been the development of a research coding

protocol. Strongly influenced by the What Works Clearinghouse Study Design and Implementation Assessment Device (Valentine & Cooper, 2003) and the School Psychology Procedures and Coding Manual (2003), the protocol allows the panel to consistently review the

literature with a uniform standard to determine the effectiveness of a given intervention. The


protocol has seven domains: Measurement, Comparison Groups, Statistical Analysis of Outcome

Variables, Implementation Fidelity, Replication, Ecological Validity, and Persistence of Effect

(National Panel for Evidence-Based School Counseling, 2005). Each domain is assigned a value

of strong, promising, or weak.
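As a rough illustration of how a study could be coded against such a protocol, the sketch below encodes the seven domains and the three-value rating scale in Python. The record structure, the Rating enum, and the default handling are assumptions made for this example; they are not taken from the panel's published materials.

```python
from enum import Enum

class Rating(Enum):
    STRONG = "strong"
    PROMISING = "promising"
    WEAK = "weak"

# The seven domains named by the National Panel's coding protocol.
DOMAINS = [
    "Measurement",
    "Comparison Groups",
    "Statistical Analysis of Outcome Variables",
    "Implementation Fidelity",
    "Replication",
    "Ecological Validity",
    "Persistence of Effect",
]

def code_study(ratings):
    """Return a full coding sheet, defaulting unrated domains to 'weak'."""
    return {domain: ratings.get(domain, Rating.WEAK).value for domain in DOMAINS}

# Example: a study with strong measurement but no replication evidence.
sheet = code_study({"Measurement": Rating.STRONG})
for domain, rating in sheet.items():
    print(f"{domain}: {rating}")
```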

Through the development of this research coding protocol, the National Panel for Evidence-Based School Counseling has begun identifying school counseling practices that benefit student improvement in areas like student achievement (Carey & Dimmitt, 2006). Their meta-analytic review of current practice allows for a consistent standard (Lipsey & Wilson, 2000). This standard not only contributes to the body of best practices by working to meet the standards of NCLB (2002) evidence-based practice, it also provides researchers with a guide for developing their research methodology.

Instrumentation

In addition to guidelines for the practice of research, the literature offers school counselors instrumentation to better measure their practice and more accurately align the outcomes with the intended effect. Studer, Oberman, and Womack (2006) emphasize the need for counselors to assess their programs and describe an instrument design process. Their checklist for developing an assessment instrument offers a simple, straightforward approach that a school counselor may use to demonstrate some measure of effect. The simple collection of results may not produce empirical data, but it can serve as a means for educating others about the role of the school counselor and their potential impact on student success (Studer, Oberman & Womack, 2006).

Whiston and Aricak (2008) and Sink and Spencer (2007) investigated the psychometrics of instruments used to measure school counseling programs. The School Counseling Program Evaluation Survey (SCoPES) is a 64-item measure with items aligned to each of the three


domains in the National Standards for School Counseling Programs (Campbell & Dahir, 1997). The initial psychometric analysis of SCoPES showed promising results, but

additional studies with comparison groups may offer greater evidence that SCoPES is an

effective instrument.

The My Class Inventory-Short Form (MCI-SF) is a self-report measure used to assess elementary school students' perceptions of the classroom climate. Sink and Spencer (2007) modified this instrument to collect teacher perceptions of the classroom. The survey has five

dimensions: satisfaction, peer relationships, competitiveness, difficulty, and school counselor

impact. The examination of reliability and factorial validity indicate that this quick and easy

instrument could provide school counselors with a useful tool to assess a classroom guidance

unit (Sink & Spencer, 2007).

The ASCA National Model (2005) provides a Program Audit for school counselors to use

when developing a new program or monitoring an established program. The ASCA Program

Audit serves as a check sheet where counselors self-report or obtain input from other

stakeholders, such as an advisory group. The action items listed align with the standards defined in the ASCA National Model (2005). This needs assessment and monitoring tool is a valuable resource (American School Counselor Association, 2010).

The development of these checklists and other instruments is one piece of the process needed for school counselors to be more accountable for their practice. These instruments provide valuable program feedback. They can improve the dialogue between the school counselor and parents, teachers, and principals about the impact of a school counselor.

Program Evaluation In School Counseling

An understanding of instrument development and research methods is necessary for school counselors to effectively practice program evaluation (Loesch & Ritchie, 2008).


Researchers in the field of school counseling, by expanding knowledge in the area of program evaluation, support the efforts of school counselors to be accountable to themselves and others (Astramovich & Coker, 2007; Herr, 2001; House & Hayes, 2002; Loesch, 2000). In order to be accountable, school counselors must "develop manageable evaluation plans" (Fairchild & Seeley, 1995, p. 10; Loesch & Ritchie, 2005). Counselors should utilize needs assessments; advisory committees and stakeholder evaluations of counseling; and external assessments to assist in the determination of effective practice (Fairchild & Seeley, 1995; Lapan, 2001; Vacc, Rhyne-Winkler, & Poidevant, 1993).

While often used interchangeably, there has been a differentiation between the terms accountability and program evaluation (Astramovich & Coker, 2007; Herr, 2001; House & Hayes, 2002; Loesch, 2001). Accountability is the external pressure for school counselors to be able to show the results of their programs (Gysbers, 2004; Herr, 2001; Isaacs, 2003). Program evaluation is the comprehensive approach used not only to demonstrate effectiveness, but also the process by which the school counselor can improve and develop the comprehensive guidance program (Astramovich & Coker, 2007; Loesch & Ritchie, 2008; Wheeler & Loesch, 1981). Through the effective practice of program evaluation, school counselors can be accountable for their programs.

The research aimed at the specific practice of program evaluation includes the development of program evaluation models for counselors (Astramovich & Coker, 2007; Lapan, 2001; Schwitzer, 1997). The Bridge Model, a program evaluation framework for counselors, provides counselors with a continuous cycle of program evaluation (Astramovich & Coker, 2007). The Bridge Model, like Lapan's (2001) framework, has been heavily influenced by Stufflebeam's longstanding CIPP Model (2004). The CIPP Model emphasizes Context,


Input, Process, and Product evaluation (Stufflebeam, 2004). The Bridge Model emulates this direction by making connections between the needs assessment and the feedback from stakeholders. The feedback then influences planning, implementation, monitoring, and outcomes assessment. The "bridge" comes when the outcomes are shared and the feedback from stakeholders drives the strategic planning and the needs assessment in a continuous loop (Astramovich & Coker, 2007).

As the profession of counseling continues to respond to the need for increased counselor

accountability, there has been some investigation into the possible reasons why school

counselors fail to practice accountability methods, specifically program evaluation. There are both internal and external factors that contribute to the school counselor's capacity to practice program evaluation (Trevisan, 2002a). One external factor is the organizational environment. For school counselors to effectively practice program evaluation, they must work in an environment that values evaluation, as evidenced by "systematic, integrated or institutionalized evaluation activities" (Trevisan, 2002a, p. 297). When the stakeholders, including school-based administrators, support and expect the practice of program evaluation, the school counselor has an external push to practice program evaluation.

The lack of internal pull to practice program evaluation can be the result of fear of the

unknown. Counselors often lack training to adequately prepare them to practice program

evaluation (Astramovich & Coker, 2007; Lusky & Hayes, 2001). While school counseling

preparation programs sometimes require a research course, program evaluation theory and

practice are not always emphasized (Astramovich & Coker, 2007). In addition to fear of attempting program evaluation because they lack the skills to do so, some counselors may fear the results of a program evaluation (Isaacs, 2003; Loesch & Ritchie, 2008). They may fear what


those results will indicate and how those results will be used (Trevisan, 2002). A poor understanding of how to perform program evaluation, combined with uncertainty as to what the results of such an evaluation will be and how those results will be used, can make the implementation of program evaluation a daunting task that many school counselors are reluctant to pursue (Trevisan, 2002).

In order to address the lack of knowledge of program evaluation, Astramovich and Coker (2005) conducted a study in which school counselors were given training to provide a foundation for evaluating their programs. Based on ASCA National Model (2003) concepts, the group received a three-hour workshop designed to help participants:

(a) understand the role of accountability in today's educational environment; (b) understand the emphasis on accountability and program evaluation in the ASCA National Model; (c) define program evaluation; (d) understand the evaluation process including the role of needs assessment, program planning, program implementation, and assessing outcomes; and (e) plan to implement their own school counseling program evaluation. (Astramovich & Coker, 2005, p. 52)

After training the first group of five counselors, the district director asked those five

counselors to provide the training to an additional 23 school counselors. When this group was surveyed, many of their responses reiterated what had previously been reported in much of the literature on program evaluation in the school counseling profession. Many (53.6%) of the

counselors had no graduate coursework in program evaluation; 78.5% of them had not received

professional development in program evaluation; 85.7% of them thought school counselors

ought to make time for program evaluation; and 82.1% of the group agreed that outcome data on

school counseling programs contributed to school accountability. After completion of the


training program, participants were willing to conduct evaluations and felt they understood why

program evaluation is important, but many participants were unsure how to conduct a program

evaluation. Follow-up with the participants identified time constraints and lack of

understanding as the barriers to conducting program evaluation. The Astramovich and Coker (2005) study took an important first step in meeting the professional development needs of school counselors. This study, along with the descriptions of accountability methods, instruments, and checklists found in the literature, emphasizes the need to develop and effectively evaluate professional development for school counselors in the area of program evaluation.

Summary of the Literature

This review of the literature has provided a closer look at the overarching theoretical framework for this study: social cognitive theory, including the self-efficacy component and how self-efficacy is determined. Best practices in professional development and the current methods used to evaluate professional development were additional areas of interest in the development of this study. Along with the historical perspective of program evaluation, several major evaluation models, with specific emphasis on the Systematic Approach to program evaluation, were reviewed above (Rossi et al., 2004). Lastly, the current state of program evaluation in the school counseling profession was reviewed. Chapter 3 describes the variables, population, sampling procedures, research design, measurement procedures, professional development unit, and data analysis of this study.


CHAPTER 3

METHODOLOGY

Overview of the Study

This study examined the impact of a professional development unit on program evaluation on in-service elementary school counselors': (a) knowledge of program evaluation, (b) self-efficacy level towards performing program evaluation, and (c) ability to develop and conduct program evaluation in the school setting. This chapter describes the variables, population,

sampling procedures, research design, measurement procedures, professional development unit,

and data analysis of this study.

Relevant Variables

The independent variable for this study was participation in a four-part professional development unit. The dependent variables were school counselors' self-efficacy, program evaluation competencies, and implementation of program evaluation. Demographic data were collected, including: (1) gender, (2) age, (3) race, (4) ethnicity, (5) years of experience as a school counselor, and (6) graduation from a Council for Accreditation of Counseling and Related Educational Programs (CACREP) accredited program. Baseline data were collected for school counselors' self-ratings of their competencies in the area of program evaluation and their self-efficacy. Following the four-part professional development unit, data were collected for school counselors' competencies in the area of program evaluation, their self-efficacy, and their application of program evaluation development and use.

Population

The population for this study included in-service, elementary school counselors employed

by the Pinellas County School District located on the west coast of Florida. In 2007, Florida, the

fourth largest state, had an estimated population of 18,680,367 (Office of Economic and


Demographic Research, 2009) with 67 public school districts. In the 2007-2008 school year, there were 2,653,377 students enrolled in state-funded schools (Florida Department of Education,

2009). The racial breakdown of students in Florida as reported by the Florida Department of Education is 45.9% white, 23.1% black, 24.7% Hispanic, 2.4% Asian, 3.6% multiracial, and 0.3% American Indian. Nearly half (45.9%) of Florida students receive free or reduced lunch.

English speakers of other languages make up 11.9%, 14.4% are classified as students with

disabilities, and 4.9% are defined as gifted (Florida Department of Education, 2009).

The Florida Department of Education provides state certification in the area of school counseling. Certification requires a minimum of (1) a master's or higher degree in guidance and counseling or counselor education, which must include at least three semester hours of supervised counseling in a school setting, or (2) a master's degree with 30 credit hours in guidance and counseling and the specific coursework described in Florida Administrative Rule 6A-4.0181. Certification does include a requirement of three semester hours in student appraisal, including administration and interpretation of standardized tests, but does not require coursework in program evaluation.

Sampling Procedures

In Pinellas County Schools, there are 77 elementary school counselors working in 80

elementary schools. Some counselors are assigned to multiple school sites, hence the difference between the number of counselors and the number of elementary schools. In 2008, Pinellas County was

home to 923,066 residents. The school district has over 13,000 employees and is the largest

single employer in Pinellas County. The Pinellas County School District is the 25th largest

school district in the nation and the seventh largest of the 67 districts in Florida. There are

104,717 students enrolled in kindergarten through twelfth grade; 44,955 are in the 80 elementary

schools. The racial breakdown of students enrolled is 62.8% white, 19.2% black, 9.3% Hispanic,


3.7% Asian, 4.7% multiracial, and 0.3% American Indian. In Pinellas County Schools, 40.8% of students receive free or reduced lunch. English speakers of other languages make

up 4.8%; 15.5% are classified as students with disabilities; and 5.4% are defined as gifted

(Florida Department of Education, 2009).

In the 2008-2009 school year, The Pinellas County School Board employed 230 school

counselors with a counselor-to-student ratio of 1:459 in kindergarten through grade twelve traditional schools. In the 2009-2010 school year, at the elementary level, the counselor-to-student ratio was

1:634. All Pinellas County School Counselors hold a Florida state certification in the area of

school counseling.

Prior to the first meeting of the 2009-2010 school year, all 77 of the elementary school counselors were told that the professional development unit on program evaluation would be a

part of their monthly guidance meetings, and they were invited to participate in the study via a

district email from the guidance supervisor (Appendix A). In addition, they received information

about the professional development unit on program evaluation from their principal (Appendix

B). The researcher attended the first district-wide guidance meeting in August and described the

study to the school counselors and invited them to participate. The script of the invitation is

shown in Appendix C.

Over a period of six months, the school counselors met three times for one- to two-hour sessions of the professional development unit. School counselors were given the opportunity to volunteer to participate in the study as a supplemental activity to their district-level meeting. Since participation in professional development was strongly encouraged, all of the elementary school counselors received the training while attending the district meeting. Forty-seven of the

elementary counselors volunteered to participate in the study by completing the instruments. Of


the 47 who volunteered, 29 completed all three sessions of the professional development unit.

The study group was made up of 29 counselors who received all three sessions of the

professional development unit; volunteered to participate in the study; and completed the Personal Data Sheet, the SCSES (Bodenhorn & Skaggs, 2005), the ECPE Self-Assessment (Stevahn et al., 2005), and the Implementation Survey. The researcher did not provide any

monetary or time compensation for participation in the study.

Research Design

The research design used in this study was a pretest-posttest design in which the participants' gain scores were assessed to determine the effectiveness of the intervention (Ravid, 2000). There was not a control group for this study. The district, which volunteered to allow the researcher to conduct the study, stipulated that all elementary school counselors were to have the opportunity to receive the professional development. The research design is shown in Table 3-1.
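One common way to analyze gain scores in a one-group pretest-posttest design is a paired-samples t-test on the pre and post measures. The sketch below uses invented scores and is offered only as an illustration of the idea, not as the exact analysis conducted in this study.

```python
from scipy import stats

# Hypothetical pre/post scale scores for five participants (invented data).
pretest  = [2.1, 3.0, 2.5, 3.4, 2.8]
posttest = [2.9, 3.6, 2.7, 3.9, 3.5]

gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)

# A paired-samples t-test asks whether the mean gain differs from zero.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"mean gain = {mean_gain:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```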

This One Group Pretest-Posttest Design involved administering the Personal Data Sheet

(Appendix D), ECPE Self-Assessment (Appendix E), and the School Counselor Self-Efficacy

Scale (SCSES) (Appendix F) to participants at the start of the professional development.

Participants then took part in the professional development unit conducted over a period of six months. The ECPE Self-Assessment (Ghere, King, Stevahn, & Minnema, 2006) and the SCSES

(Bodenhorn & Skaggs, 2005) were re-administered at the conclusion of the professional

development unit.

Two months after the completion of the professional development unit, participants were

sent an Implementation Survey (Appendix G) via the interdepartmental mail system. In this

Implementation Survey, participants were asked to self-report if they had conducted a program

evaluation in their school. Counselors were specifically asked if they had conducted a needs

assessment, selected a program to evaluate, assessed the purpose of the program, defined


anticipated outcomes, assessed the program implementation, assessed the program outcomes, or shared their findings with others. Each question was linked to a step in the program evaluation process identified in the Systematic Approach to program evaluation (Rossi et al., 2004). Additionally, participants were asked three open-ended questions: What did they like about the unit? What would they change? How did this program influence their way of work?

To ensure confidentiality, participants were assigned a number. The instruments used in

the study were labeled with the number assigned to the participant. The key connecting the

names to the numbers was kept secure by the researcher. An electronic version of the key was

kept on a password protected private computer. A paper version of the key was stored in a

locked cabinet. When the study was completed and the data were analyzed, the key was

destroyed. Names were not published in any report.
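A minimal sketch of this numbering procedure, with placeholder names and hypothetical three-digit IDs (both assumptions for illustration), might look as follows:

```python
import random

def assign_ids(names):
    """Give each participant a random, non-identifying number."""
    numbers = random.sample(range(100, 1000), len(names))  # unique IDs
    return dict(zip(names, numbers))

participants = ["Counselor A", "Counselor B", "Counselor C"]  # placeholders
key = assign_ids(participants)

# Instruments are labeled only with key[name]; the key itself is stored
# separately and securely, destroyed after analysis, and never published.
for name, number in key.items():
    print(f"{name} -> participant {number}")
```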

Research Questions

Questions addressed by the study are listed below.

1. Will school counselors have a greater understanding of program evaluation principles after

taking part in a professional development unit? Will demographic factors influence performance

on the Essential Competencies for Program Evaluators Self-Assessment?

2. Will the professional development unit increase school counselors' self-efficacy towards program evaluation implementation? Will demographic factors influence school counselors' self-efficacy towards program evaluation implementation as measured by the School Counselor Self-Efficacy Scale?

3. Will school counselors be able to develop a program evaluation in their own school setting as

a result of having received professional development training?


Hypotheses

As related to research questions one and two, several null hypotheses were investigated in this

study:

Ho1a: There will not be a difference between the school counselors' pretest and posttest mean scores on the ECPE Self-Assessment (Stevahn et al., 2005).

Ho1b: There will not be a significant relationship between years of experience and performance on the ECPE Self-Assessment (Stevahn et al., 2005).

Ho1c: There will not be a significant relationship between graduation from a CACREP accredited program and performance on the ECPE Self-Assessment (Stevahn et al., 2005).

Ho2a: There will not be a difference between the school counselors' pretest and posttest mean scores on the SCSES (Bodenhorn & Skaggs, 2005).

Ho2b: There will not be a significant relationship between years of experience and performance on the SCSES (Bodenhorn & Skaggs, 2005).

Ho2c: There will not be a significant relationship between graduation from a CACREP accredited program and performance on the SCSES (Bodenhorn & Skaggs, 2005).
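Analyses that could address hypotheses of this form include a paired t-test for the pretest-posttest comparisons (Ho1a, Ho2a), a correlation for years of experience (Ho1b, Ho2b), and a two-group comparison for CACREP graduation (Ho1c, Ho2c). The sketch below uses invented data and is illustrative only; it is not a record of the analyses actually performed in this study.

```python
from scipy import stats

# Invented data for six participants.
pre    = [3.1, 2.8, 3.5, 3.0, 2.6, 3.2]   # pretest means
post   = [3.6, 3.2, 3.7, 3.4, 3.1, 3.5]   # posttest means
years  = [4, 12, 7, 20, 2, 9]             # years of experience
cacrep = [True, False, True, True, False, False]

# Ho1a/Ho2a: no pretest-posttest difference (paired t-test).
t_pre_post, p_pre_post = stats.ttest_rel(post, pre)

# Ho1b/Ho2b: no relationship between experience and posttest performance.
r, p_r = stats.pearsonr(years, post)

# Ho1c/Ho2c: no difference between CACREP and non-CACREP graduates.
grads    = [s for s, c in zip(post, cacrep) if c]
nongrads = [s for s, c in zip(post, cacrep) if not c]
t_group, p_group = stats.ttest_ind(grads, nongrads)

print(f"paired t = {t_pre_post:.2f} (p = {p_pre_post:.3f})")
print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
print(f"group t = {t_group:.2f} (p = {p_group:.3f})")
```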

Measurement Procedures/Instrumentation

The Personal Data Sheet, the ECPE Self-Assessment (Stevahn et al., 2005), the SCSES (Bodenhorn & Skaggs, 2005), and the Implementation Survey were the instruments used in this study.

Personal Data Sheet

Questions in the personal data sheet asked the participants their gender, race, ethnicity, and

years of experience as a school counselor. The race and ethnicity codes were based on Florida

Department of Education reporting codes effective for the 2010-2011 academic year (Florida

Department of Education, 2008). The participants had the option to choose yes or no for

Hispanic or Latino ethnicity and yes or no for each of the Florida Department of Education race

categories. Participants could choose multiple race categories. Participants who selected

multiple race categories were recorded and analyzed as multiracial. Participants were also asked

if they had graduated from a Council for Accreditation of Counseling and Related Educational

Programs (CACREP) accredited program. Information collected from this data sheet was used

to determine if there were relationships between the demographic factors and the dependent

variables.

Essential Competencies for Program Evaluators Self-Assessment

Stevahn, King, Ghere, and Minnema initially developed Essential Competencies for

Program Evaluators (ECPE) in 2001. In 2005, they revised the original competencies and cross-

walked the competencies to the standards of the Joint Committee Program Evaluation Standards

(1994), American Evaluation Association Guiding Principles (1995), and Canadian Evaluation

Society Essential Skills Series (1999) (Stevahn, King, Ghere, & Minnema, 2005). The ECPE

items were grouped into six major categories within the instrument: (1) professional practice, (2)

systematic inquiry, (3) situational analysis, (4) project management, (5) reflective practice, and

(6) interpersonal competence. The ECPE were later utilized as a self-reporting instrument for school counselors to use when reflecting on their professional development needs (Ghere, et al., 2006). The ECPE Self-Assessment Instrument (Ghere, et al., 2006) directs respondents to select their level of competency with specific knowledge and skills. Respondents defined themselves as 'entry/novice', 'proficient/skilled', or 'mastery/expert' for each item. The results were used as a tool for the participants to self-reflect on their professional development needs.

Recently, the ECPE were used to measure health care workers' program evaluation competencies (Fournier, Banza, Tourigny, & Dieudonne, 2009). Essential evaluation competencies were assessed by means of a questionnaire (pretest, posttest, and one year after) adapted from the work of Stevahn et al. (2005) (Fournier et al., 2009). The participants in the study were asked to rate their ability to accomplish each competency on a four-point scale, from four ('easily') to one ('not at all') (Fournier et al., 2009). Mean scores of a retrospective pretest-posttest approach were compared to assess if the participants had learned and used the skills taught in the program evaluation course (Fournier et al., 2009).

The prior work done to develop the ECPE Self-Assessment consisted of identifying the

essential competencies (Stevahn, et al., 2005). The essential competencies were identified and

used in a self-assessment tool (Ghere, et al., 2006). The results of the self-assessment tool were

used to help identify needed areas of professional development. Like the work done by Fournier et al. (2009), the ECPE Self-Assessment tool used in this study was adapted from the work of Stevahn et al. (2005) and Ghere et al. (2006).

The self-assessment tool was formatted into an instrument asking participants to gauge

their confidence in their ability for each of the competency items. A five-point scale was used: 1 = not confident; 2 = slightly confident; 3 = moderately confident; 4 = generally confident; and 5 = highly confident. Participants were asked to circle the number that best represented their response for each item. These directions mimic those found in the SCSE (Bodenhorn & Skaggs, 2005), which was also used by the participants. The ECPE Self-Assessment instrument was reviewed by a panel of experts and piloted with a small group for readability and ease of use (Fraenkel & Wallen, 2008).

School Counselor Self-Efficacy Scale

Permission to use items from the School Counselor Self-Efficacy Scale (SCSE) was obtained by means of personal correspondence on March 29, 2009, from the author, Nancy Bodenhorn, who developed the 43-item scale with Garry Skaggs (Bodenhorn & Skaggs, 2005). The SCSE is based on the National Standards for School Counseling and the CACREP standards (2001). Bodenhorn and Skaggs (2005) conducted three studies when developing the SCSE.

The first study consisted of a panel of experts who evaluated candidate items and

determined which items should be included (Bodenhorn & Skaggs, 2005).

The second study was conducted to analyze reliability, group differences, and item

analysis. The sample group consisted of in-service school counselors (Bodenhorn & Skaggs,

2005). In this stage of the scale development, items were deleted based on insufficient response, lack of discrimination, or excessive variability. The coefficient alpha for the scale score was 0.95 (Bodenhorn & Skaggs, 2005). The overall mean score for the items was 4.21 on a five-point scale with a standard deviation of 0.67. The correlation matrix indicated 93% of the item responses correlated between 0.2 and 0.6 (Bodenhorn & Skaggs, 2005). Analysis of variance (ANOVA) with a 0.05 alpha level indicated no significant difference for groups working at different levels or in different settings. According to Bodenhorn and Skaggs (2005), there were differences based on gender, teaching experience, and number of years practicing. In the study, self-efficacy was stronger among the female than the male participants, F(1, 223) = 6.813, p < 0.05, R = 0.03. Those with teaching experience reported significantly stronger self-efficacy than those without teaching experience, F(1, 223) = 8.236, p < 0.01, R = 0.04. In addition, those who had practiced school counseling for 3 or more years reported significantly stronger self-efficacy than those who had practiced less than 3 years, F(1, 220) = 7.037, p < 0.01, R = 0.03 (Bodenhorn & Skaggs, 2005).

The third study (Bodenhorn & Skaggs, 2005) was conducted with master's level students. It was composed of correlation studies between the SCSE and (1) the Counseling Self-Estimate Inventory (COSE) (Larson, Suzuki, Gillespie, Potenza, Bechtel, & Toulouse, 1992), (2) the Social Desirability Scale (SDS) (Crowne & Marlowe, 1960), (3) the State-Trait Anxiety Inventory (STAI) (Spielberger, 1983), and (4) the Tennessee Self-Concept Scale, second edition (TSCS:2) (Fitts & Warren, 1996). Validity was determined based on the COSE scores being positively correlated with the SCSE (correlation = 0.41) and correlated with the SDS (correlation = 0.30) (Crowne & Marlowe, 1960). There was no correlation between the TSCS:2 and the SCSE. However, there was a significant difference in SCSE scores when comparing this group of students with the group of practitioners sampled in Study 2, F(1, 340) = 29.89, p < 0.0001, R = 0.08. The authors concluded that the completion of the program and at least 1 year of experience resulted in an increase in school counseling self-efficacy (Bodenhorn & Skaggs, 2005).

For this study, 18 items from SCSES were used to measure school counselor self-efficacy.

The items selected were specific school counselor activities related to program evaluation. The

researcher selected the 18 items. To provide content validity, experts in the areas of both school

counseling and program evaluation reviewed the items (Ravid, 2000).

Implementation Survey

The researcher created the 16-item Implementation Survey using the steps in survey design

and reviewed the instrument using principles prescribed by the Tailored Design Method

(Dillman, 2007). The first eight survey items were dichotomous items asking the participants if

they had or had not developed and implemented a program evaluation since the professional

development unit. The following five survey items (items 9–13) were five-point Likert-type scale items designed to determine the participants' perceived level of support, capacity to find

the time, and ability to continue the practice of program evaluation. The last three survey items

were open-ended questions asking the participants what they liked about the professional

development; what they would change; and how the professional development influenced their

way of work.

Distribution of this survey followed Dillman's (2007) Tailored Design Method, which includes five points of contact. Pre-notice was given at the final session of the

professional development unit. Delivery of the survey was initially done through the school

district interoffice mail. Three reminder notices, with the survey as an attachment, were sent by

means of email.

Professional Development Unit

The professional development unit was based on Rossi, Lipsey, and Freeman's Systematic Approach to Program Evaluation (Rossi et al., 2004). The first session of the three-part participant training focused on the topic of Needs Assessment and included an introduction reinforcing the need for school counselors to conduct a program evaluation. The second and third topics of the unit were Program Alignment and Assessing Program Design and Process; the latter included assessing program implementation and monitoring. The fourth and final topic, covered in the last of the three sessions, focused on discussions of Sharing Outcomes. A figure illustrating the four topics within the unit can be seen in Figure 3-1. A

complete outline for each of the topics of the professional development unit can be found in

Appendix I.

The delivery of the professional development was based on Social Cognitive Theory.

Social cognitive theory asserts that an individual's thoughts and feelings about a skill influence learning, and that learning is strongly influenced by observation (Bandura, 1986, 1994, 2007). An individual's thoughts and feelings about a skill can be developed through modeling and comparing oneself to someone else (Bandura, 1986, 1994; Zimmerman & Schunk, 2004).

Based on these concepts of Social Cognitive Theory, the four-part unit was structured to

include lecture, discussion, collaborative learning activities, and individual reflection. The initial

session included time to review the informed consent (Appendix VIII); and time to administer

the personal data sheet, ECPE Self-Assessment (Stevahn, et al., 2005), and SCSES (Bodenhorn

& Skaggs, 2005). The final session allocated time to complete the retest of the ECPE Self-Assessment (Stevahn, et al., 2005) and the SCSES (Bodenhorn & Skaggs, 2005).

The professional development unit provided participants with several experiences:

1. Defining and conducting a needs assessment,

2. Developing questions to determine the purpose of a program,

3. Assessing if the purpose of a program aligns to a need,

4. Defining anticipated outcomes of a program,

5. Discussing methods for monitoring implementation of a program,

6. Providing examples of expected outcomes of a program, and

7. Demonstrating methods of sharing results of the implementation of a program.

The researcher, a former school counselor and the current coordinator of accountability in the district, delivered the training. The elements on needs assessment, program alignment,

program implementation and program monitoring were previously presented to groups of

instructional staff and administrators but were not previously presented to groups of school

counselors. Feedback from these prior presentations to groups of instructional staff and

administrators was considered in the development of the unit for school counselors.

The professional development unit took place as a supplemental activity to the scheduled

meetings between elementary school counselors and their district level guidance supervisor.

These meetings were held monthly. At these monthly meetings, school counselors usually

received professional development along with relevant information related to their duties and

functions. The study was introduced at the August meeting and the professional development

took place in October, December and January. Those school counselors participating in this

study were asked to volunteer to complete an Implementation Survey in March, two months after

the final session of the professional development unit to determine if they had met the

expectation of conducting a program evaluation. A reminder about the Implementation Survey

was given to participants at the final session in January. The survey was delivered using the

school district interoffice mail with directions for completion and was returned to the researcher

using the same school district interoffice mail. Regardless of voluntary participation in the study

group, the guidance supervisors expected all school counselors to conduct an evaluation of a

program at their school as a result of the professional development unit.

Data Analyses

Data analysis was run using the Statistical Package for the Social Sciences (SPSS), Version 17.

Descriptive statistics including frequencies for demographic information on gender, age, race,

ethnicity, years of experience as a school counselor, and graduation from a CACREP accredited

program were performed. The mean age and years of experience were calculated and are

presented in the description of the results. Comparisons between the resultant sample and the

population of school counselors were done.

Analyses of the ECPE Self-Assessment (Stevahn, et al., 2005) and the SCSES (Bodenhorn & Skaggs, 2005) instruments were done to examine the strength of the instruments. Results from both the ECPE Self-Assessment (Stevahn, et al., 2005) and the SCSES (Bodenhorn & Skaggs, 2005) pretests were used to determine the reliability and construct validity of both instruments (Rand, 2006). In order to examine the internal consistency of the items on the ECPE Self-Assessment (Stevahn, et al., 2005) and the SCSES (Bodenhorn & Skaggs, 2005), a

Cronbach's alpha was applied. The Cronbach's alpha method was an appropriate test of reliability to use because of the scale-type responses (Shavelson & Towne, 2002). Item-total

correlation analysis was completed with both pretests to examine the performance of each

individual item (Ravid, 2000). An inter-item correlation matrix was also completed to determine

the relationship between items and to ensure an absence of consistently negative correlations

between items that may indicate that an item is not measuring the same construct as the other

items (Shavelson & Towne, 2002). An examination of both the ECPE Self-Assessment (Stevahn, et al., 2005) and the SCSES (Bodenhorn & Skaggs, 2005) pretest items for skewness and kurtosis was also done. Skewness and kurtosis characterize the data to examine symmetry and normal distribution (Shavelson & Towne, 2002).
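Although all analyses in this study were conducted in SPSS, the reliability checks described above can be illustrated with a minimal sketch in Python; the file name and column layout below are hypothetical, assuming one row per participant and one column per scale item.

    import pandas as pd
    from scipy import stats

    def cronbach_alpha(items):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    items = pd.read_csv("ecpe_pretest.csv")   # hypothetical file of item responses
    total = items.sum(axis=1)

    alpha = cronbach_alpha(items)
    # corrected item-total correlation: each item against the total of the remaining items
    item_total = {col: items[col].corr(total - items[col]) for col in items.columns}
    inter_item = items.corr()                 # inter-item correlation matrix
    skewness, kurtosis = stats.skew(total), stats.kurtosis(total)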

The first research question addressed by this study was, "Will school counselors have a greater understanding of program evaluation principles after taking part in a professional development unit?" Answering this question required use of a t-test to determine if there were any significant differences between the pretest and posttest scores on the ECPE Self-Assessment (Stevahn, et al., 2005). The dependent means t-test assumes a normal distribution of the data and equal variances (Shavelson, 1996). The test-retest design ensured that the samples were dependent. The pretest and posttest mean scores, maximum, minimum, and standard deviations of the ECPE Self-Assessment (Stevahn, et al., 2005) were calculated.
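As an illustration, the dependent means t-test described here could be computed outside of SPSS as follows; the file and column names are hypothetical, with one row per counselor and the matched pretest and posttest totals in two columns.

    import pandas as pd
    from scipy import stats

    scores = pd.read_csv("ecpe_scores.csv")   # hypothetical matched pretest/posttest totals
    t_stat, p_value = stats.ttest_rel(scores["pretest"], scores["posttest"])  # paired t-test
    summary = scores[["pretest", "posttest"]].agg(["mean", "std", "min", "max"])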

The second part of this first research question was, "Will years of experience or participation in a CACREP accredited program be related to performance on the ECPE Self-Assessment?" Answering this question required the use of an analysis of covariance (ANCOVA). The ANCOVA was used to determine if years of experience or participation in a CACREP accredited program was related to performance on the ECPE Self-Assessment. The

posttest scores were used as the dependent variable. Participation in a CACREP program was the

fixed factor and years of experience and pretest score were the covariates. The ANCOVA

assumes the groups are independent of each other (Rand, 2006). In this study participants

completed their own individual instruments and there was no threat to independence. ANCOVA also assumes the dependent variable is normally distributed within the population, also known as normality (Ravid, 2000). The normality test applied to the pretest and posttest of the ECPE Self-Assessment (Stevahn, et al., 2005) was the Kolmogorov-Smirnov (K-S) test. The Shapiro-Wilk test was also applied to assess normality because it is appropriate for a small sample (Glass & Hopkins, 1996).
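A minimal sketch of this ANCOVA, using the statsmodels formula interface rather than SPSS (the data frame and column names are hypothetical):

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from scipy import stats

    df = pd.read_csv("ecpe_scores.csv")       # hypothetical: posttest, pretest, years, cacrep
    # posttest as the dependent variable, CACREP status as the fixed factor,
    # years of experience and pretest score as covariates
    model = smf.ols("posttest ~ C(cacrep) + years + pretest", data=df).fit()
    ancova = sm.stats.anova_lm(model, typ=3)  # Type III sums of squares; factor coding affects results

    # normality checks on the scores
    ks = stats.kstest(df["posttest"], "norm",
                      args=(df["posttest"].mean(), df["posttest"].std()))
    sw = stats.shapiro(df["posttest"])        # appropriate for a small sample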

The second research question addressed by this study was, "Will the professional development unit increase school counselors' self-efficacy towards program evaluation implementation?" Answering this question required use of a t-test to determine if there were any significant differences between the pretest and posttest scores on the SCSES (Bodenhorn & Skaggs, 2005). The dependent means t-test assumes a normal distribution of the data and equal variances (Shavelson, 1996). The test-retest design ensured that the samples were dependent. The pretest and posttest mean scores, maximum, minimum, and standard deviations of the SCSES (Bodenhorn & Skaggs, 2005) were calculated.

The second part of this second research question was, "Will years of experience or participation in a CACREP accredited program be related to school counselors' self-efficacy towards program evaluation implementation?" Answering this question required the use of an analysis of covariance (ANCOVA). The ANCOVA was used to determine if years of experience or participation in a CACREP accredited program was related to performance on the SCSES (Bodenhorn & Skaggs, 2005). The posttest scores were used as the dependent variable.

Participation in a CACREP program was the fixed factor and years of experience and pretest

score were the covariates. The ANCOVA assumes the groups are independent of each other

(Rand, 2006).

There are several assumptions of the ANCOVA: independence, normality, and homogeneity of variances (Shavelson & Towne, 2002). Independence was achieved by having the participants complete their instruments independent of each other. Normality was assessed by applying the K-S test of normality (Glass & Hopkins, 1996) and, due to the small sample size, the Shapiro-Wilk test to the data.

The third research question addressed by this study was, "Will school counselors develop a program evaluation in their own school setting as a result of the Professional Development Unit?" This question was addressed using the results from the Implementation Survey. Descriptive statistics were calculated for the Implementation Survey. Furthermore, the frequency of responses was calculated. In addition, the maximum, minimum, and standard deviations for the five Likert-type items were calculated. The final three open-ended questions were sorted into like groups and summarized.
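These survey summaries could be reproduced with a short sketch such as the following, assuming the responses were entered with hypothetical column names item01 through item16.

    import pandas as pd

    survey = pd.read_csv("implementation_survey.csv")   # hypothetical file
    yes_no = survey.loc[:, "item01":"item08"].apply(pd.Series.value_counts)   # response frequencies
    likert = survey.loc[:, "item09":"item13"].agg(["mean", "std", "min", "max"])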

Summary

In this third chapter, methodology for this study was explained. Relevant variables,

population, sampling procedures, research design, measurement procedures and instrumentation,

the professional development unit, data analysis, and methodological limitations were presented.

In Chapter 4, results will be described, including demographic characteristics, descriptive

statistics, and inferential statistics.

Table 3-1 Research design

SCSES: Y1 X Y2
ECPE: Y1 X Y2
Implementation Survey: X Y

Note: Y indicates the measure. X indicates the Professional Development Unit.

Figure 3-1 Professional development unit

CHAPTER 4

RESULTS

The purpose of this study was to examine the impact of a professional development unit in program evaluation on in-service elementary school counselors': (a) knowledge of program evaluation, (b) self-efficacy level towards performing program evaluation, and (c) self-perceived ability to develop and conduct program evaluation in the school setting. This chapter presents the demographic data of the sample population, including gender, age, race, years of experience, and graduation from a Council for Accreditation of Counseling and Related Educational Programs (CACREP) accredited program.

This chapter also presents the pretest and posttest results of the Essential Competencies for Program Evaluators (ECPE) Self-Assessment (Stevahn, King, Ghere, & Minnema, 2005) (Appendix E). The ECPE (Stevahn, et al., 2005) consisted of 60 items and used a five-point Likert-type scale to measure perceived self-confidence level with evaluation competencies. Pretest and posttest results from the 18-item School Counselor Self-Efficacy Scale (SCSES) (Bodenhorn & Skaggs, 2005) (Appendix F) are presented. The SCSES (Bodenhorn & Skaggs, 2005) measured perceived self-confidence toward school counselor skills using a five-point Likert-type scale. The analysis of the difference between the pretest and posttest results, and an analysis of the relationship between years of experience and graduation from a CACREP accredited program and the pretest and posttest results, are also presented. Lastly, the themes and feedback gathered from the Implementation Survey are presented.

Demographic Characteristics

Seventy-seven elementary school counselors participated in the first of the three professional development trainings and 47 agreed to participate in the study. Of the 47 volunteer study participants, 29 completed all three professional development trainings and both the pretest and posttest. Forty-four of the 47 volunteers completed two of the three trainings. However, the third training date was rescheduled with less than two weeks' notice. As a result of the rescheduled meeting time, 29 counselors participated in all three trainings. The pretest data from the initial 47 participants were used in the descriptive analysis of the SCSES (Bodenhorn & Skaggs, 2005) and ECPE Self-Assessment (Stevahn, et al., 2005). The study group comprised the 29 counselors who received all three sessions of the professional development unit; volunteered to participate in the study; and completed the Personal Data Sheet, the SCSES (Bodenhorn & Skaggs, 2005), the ECPE Self-Assessment (Stevahn, et al., 2005), and the Implementation Survey. Demographic and inferential statistics were based on the study group of 29 counselors.

Demographic characteristics of the elementary school counselor study group participants,

collected during the first meeting with them via the Personal Data Sheet (Appendix D), are

displayed in Table 4-1.

Of the 29 study subjects, 28 (96.6%) were female and one was male (3.4%). The

participants had the option to choose yes or no for Hispanic or Latino ethnicity and yes or no for

each of the Florida Department of Education race categories. Participants could choose multiple

race categories. In the area of ethnicity, one participant identified as Hispanic (3.4%). In the category of race, 28 participants identified themselves as white (96.6%) and one identified as black (3.4%). Twelve of the 29 participants (41.4%) were graduates of a CACREP accredited program, 14 (48.3%) were not, and three (10.3%) were unsure. The mean age of the participants was 44.5 years, with a standard deviation of 10.5 years and a range of 27 to 61 years. The mean number of years of experience was 11.2, with a standard deviation of 9.9 and a range between one year and 35 years.

Descriptive Statistics

The Statistical Package for the Social Sciences (SPSS), Version 17, was used for the data summaries and analysis. An alpha level of .05 was chosen for all of the analyses. A total of 40 participants completed the ECPE Self-Assessment (Stevahn, et al., 2005) pretest and 43 completed the SCSES (Bodenhorn & Skaggs, 2005) pretest.

Individual item means were calculated for the ECPE Self-Assessment (Stevahn, et al., 2005) pretest. The item mean on the five-point Likert-type scale of the ECPE Self-Assessment (Stevahn, et al., 2005) pretest was 2.89 (N = 40). Complete detail of item statistics is shown in Table 4-2. On the five-point scale, the minimum item mean was 1.8 and the maximum item mean was 4.4, with a variance among item means of 0.35. The item variance was calculated to investigate the quality of the individual items. Item variances had a mean of 1.12, with a minimum of 0.7, a maximum of 2.04, and a variance of 0.05. An item summary, including the variances, is shown in Table 4-3.

In order to examine the internal reliability, Cronbach's alpha was applied and resulted in 0.98. Examination of individual items indicated none of the individual items strongly impacted the Cronbach's alpha when deleted. Results of individual item impact on the Cronbach's alpha test for internal reliability are presented in Table 4-4.

Further investigation into the stability of the items was done through the examination of items and their correlation with the total. The item-total correlations ranged from 0.449 to 0.815. This wide range indicated considerable variation in how strongly individual items correlated with the total.

An inter-item correlation matrix was also conducted to determine if any negative

correlations between items existed. No negative correlations were found.

Individual item means were calculated for the SCSES (Bodenhorn & Skaggs, 2005). The SCSES (Bodenhorn & Skaggs, 2005) had a mean of 3.3 (N = 43) on a five-point scale measuring self-confidence with counseling skills. Complete detail of item means is shown in Table 4-5. The minimum item mean was 2.7 and the maximum was 4.3, a range of 1.6. The variance among item means was 0.256. In order to examine item strength, item variances were examined. Item variances had a mean of 0.906, with a minimum of 0.613 and a maximum of 1.36, a range of 0.749, and a variance of 0.027. Complete item summary statistics can be seen in Table 4-6.

In order to examine the internal reliability, Cronbach's alpha was applied to the SCSE (Bodenhorn & Skaggs, 2005); the SCSE indicated an alpha of 0.943. An examination of items and their correlation with the total was done, and item-total correlations ranged from 0.479 to 0.836. No item had a strong impact on the alpha when deleted. Details of the individual item correlations are shown in Table 4-7. An inter-item correlation matrix was completed and all items were positively correlated. The test-retest research design implies normality (Ravid, 2000).

Skewness and kurtosis were examined. Skewness and kurtosis characterize the data to examine symmetry and normal distribution (Shavelson & Towne, 2002). The ECPE had a skewness of 0.137 and a kurtosis of -0.334; the SCSE had a skewness of -0.653 and a kurtosis of 1.65.

Inferential Statistics

In order to address research questions and the null hypotheses investigated in this study,

analysis of the data obtained from the instruments completed by the 29 participants was done.

The first instrument was the ECPE (Stevahn, et al., 2005), which consisted of 60 items and used a five-point Likert-type scale. The second instrument was the SCSES (Bodenhorn & Skaggs, 2005), which consisted of 18 items and also used a five-point Likert-type scale. The ECPE (Stevahn, et

al., 2005), and the SCSES (Bodenhorn & Skaggs, 2005) pretests were administered to

participants during the first professional development session. The posttests for both ECPE

(Stevahn, et al., 2005) and SCSES (Bodenhorn & Skaggs, 2005) were administered at the fourth

and final session of the professional development unit. The calculation of differences between

pretest and posttest was based on those results collected from the 29 participants. Results of the

pretest were linked to the posttest results and tied to the initial personal data collection sheet,

which collected the demographic information of participants.

The Implementation Survey was sent to participants via district interoffice mail two

months after the fourth and final session of the professional development unit. It included seven

dichotomous questions; one "select all that apply" question; five Likert-type scale questions to determine level of agreement; and three open-ended questions.

Research Question One

The first research question asked in this study was: "Will school counselors have a greater understanding of program evaluation principles after taking part in a professional development unit? Will years of experience or participation in a CACREP accredited program be related to performance on the ECPE Self-Assessment?" Three null hypotheses were formulated to address this question.

The first null hypothesis proposed was:

Ho1a: There will not be a difference between the school counselors' pretest and posttest mean scores on the ECPE Self-Assessment (Stevahn, et al., 2005).

A dependent samples t-test was conducted to address this hypothesis. The ECPE (Stevahn, et al., 2005) had a five-point scale. The pretest mean score for the ECPE (Stevahn, et al., 2005) was 165.93 and the posttest mean score was 206.59. Details, including standard deviations, are shown in Table 4-8. The t-test was applied using the pretest and posttest means, t = -4.60 (df = 28). The calculated confidence interval had a lower bound of -58.778 and an upper bound of -22.532. The large size of the range could be attributed to the small sample size (Shavelson, 1996). Results of the t-test are shown in Table 4-9. The first null hypothesis was rejected because there was a significant difference between the pretest and posttest means (p < .001).
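As an arithmetic check, the t statistic follows directly from the paired differences reported in Table 4-9: the standard error of the mean difference is 47.644 / √29 ≈ 8.847, so t = -40.655 / 8.847 ≈ -4.60 with df = 29 - 1 = 28. The same computation applied to the SCSES results in Table 4-11 yields t = -11.276 / 2.043 ≈ -5.52.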

The second and third null hypotheses were:

Ho1b: There will not be a significant relationship between years of experience and the

performance on the ECPE Self-Assessment (Stevahn, et al., 2005).

Ho1c: There will not be a significant relationship between graduation from a CACREP

accredited program and the performance on the ECPE Self-Assessment (Stevahn,

et al., 2005).

One analysis was done to address both the second and third hypotheses. An analysis of covariance was done using the posttest scores of the ECPE (Stevahn, et al., 2005) as the dependent variable, with graduation from a CACREP program as the fixed factor and years of experience and pretest scores of the ECPE (Stevahn, et al., 2005) as the covariates, to determine if there was a significant relationship. The effect of years of experience was not significant, F(1, 24) = 2.057, p = .164, nor was graduation from a CACREP program, F(2, 24) = .055, p = .947. Results from the ANCOVA are shown in Table 4-12. There was no statistical significance indicating that either years of experience or graduation from a CACREP program affected the posttest scores. Therefore, the null hypotheses were not rejected.
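Each F ratio in Table 4-12 is the effect mean square divided by the error mean square; for the CACREP factor, for example, F = 123.157 / 2248.706 ≈ .055 with (2, 24) degrees of freedom, matching the value reported above.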

Research Question Two

The second research question asked in this study was: "Will the professional development unit increase school counselors' self-efficacy towards program evaluation implementation? Will years of experience or participation in a CACREP accredited program be related to school counselors' self-efficacy towards program evaluation implementation?"

The fourth null hypothesis proposed in this study was:

Ho2a: There will not be a difference in the school counselors' pretest and posttest mean scores on the SCSES (Bodenhorn & Skaggs, 2005).

A dependent samples t-test was conducted to address this hypothesis. The pretest mean score for the SCSES (Bodenhorn & Skaggs, 2005), whose items used a five-point scale, was 57.83 and the posttest mean was 69.10. The t-test was applied to the pretest and posttest mean difference (t = -5.52; df = 28). The calculated confidence interval ranged from a lower bound of -15.46 to an upper bound of -7.09. Results from the t-test are shown in Table 4-11. There was a statistical difference between the pretest and posttest scores on the SCSES (p < .001). Therefore, the null hypothesis was rejected.

The fifth and sixth null hypotheses proposed in this study were:

Ho2b: There will not be a significant relationship between years of experience and the

performance on the SCSES (Bodenhorn & Skaggs, 2005).

Ho2c: There will not be a significant relationship between graduation from a CACREP

accredited program and the performance on the SCSES (Bodenhorn & Skaggs,

2005).

An analysis of covariance (ANCOVA) was done to address both hypotheses, using the posttest scores of the SCSES (Bodenhorn & Skaggs, 2005) as the dependent variable, with graduation from a CACREP program as the fixed factor and years of experience and pretest scores of the SCSES (Bodenhorn & Skaggs, 2005) as the covariates, to determine if there was a significant relationship. The effect of years of experience was not significant, F(1, 24) = .467, p = .501, nor was graduation from a CACREP program, F(2, 24) = .382, p = .687. Results from the ANCOVA are shown in Table 4-13. There was no statistical significance in the analyses; therefore, the null hypotheses were not rejected.

Research Question Three

The third research question asked in this study was: "Will school counselors develop a program evaluation in their own school setting as a result of the Professional Development Unit?"

The Implementation Survey was administered to determine if school counselors were able

to develop an evaluation. Twenty of the 29 study participants responded to the 16-item

Implementation Survey. When asked to respond to "I have conducted a needs assessment," 14 of 20 (70%) reported yes. Nineteen of 20 (95%) indicated they had selected a guidance program to

evaluate. Seventeen (85%) said they had assessed the theory and purpose of the program.

Sixteen (80%) assessed how the program purpose aligns with the needs and defined the

anticipated outcomes. Nineteen (95%) had assessed the implementation of the program they

chose to evaluate, and fifteen (75%) had assessed the outcomes of the program. When asked with

whom they had shared results of their program, respondents were directed to "check all that apply". Their options included the following choices: Other counselors, School Advisory

Council, Principal, Parents, School Leadership Team, Teachers, and Other. Twelve said they had

shared results with principals; eleven with other counselors; ten with teachers; four with parents;

four with school leadership teams; none had shared with school advisory councils and none

selected other.

Participants were also asked five Likert-type questions. They were directed to indicate their level of agreement: Strongly Agree, Agree, Neutral, Disagree, or Strongly Disagree. During data input, the researcher assigned a scale of one to five, with one meaning "strongly disagree" and five meaning "strongly agree". When asked if they felt supported by their principal in the evaluation of a program within their guidance program, the mean of the responses was 4.37. The mean score was 4.32 when participants were asked if they felt supported by other school counselors. The mean score was 4.21 when participants were asked if the other staff in the

building supported them. The lowest mean score was 3.74, for the item asking participants if they had time to complete this process. When asked if they would continue to practice program evaluation, the mean response was 4.42. Complete results, including maximum, minimum, and standard deviations for responses to these five questions, are shown in Table 4-14.

In addition to the seven dichotomous yes/no questions and the five Likert-type questions,

the survey asked participants what they liked about the professional development training, what

they would change, and how the professional development influenced their way of work. The

comments related to what participants liked and the comments related to how the professional

development experience influenced their way of work had similar themes.

The first theme was related to an improved structure by which counselors could reflect on their practice. Eight of the comments were related to this theme. Some examples included: "It provides me with a more effective way of fine-tuning my guidance program", "It allowed me to reflect on the plus and minuses of the program", "Allowed us to think about how we evaluate our programs", and "It will guide my program creation, implementation, and assessment".

The second theme centered on comments related to improved understanding of data. There were five comments that specifically referenced data. Comments included: "A new way to see my data" and "More conscientious and deliberate in making data work for me."

Comments related to how the professional development could be improved were largely

related to time. There were two comments about the need to begin the professional development

on program evaluation at the very start of the school year, as well as the need for more time to include things like direct-guided and hands-on practice in the training. A complete list of the

comments (N=8) can be found in Table 4-15.

Summary

In this chapter, results were described, including demographic characteristics, descriptive statistics, inferential statistics, and the results of the Implementation Survey. In

the fifth and final chapter an overview of the study conclusions, limitations, implications, and

recommendations will be presented.

Table 4-1 Participants' demographic characteristics

Variable Percent (N)

Gender

Male 3.4 (1)

Female 96.6 (28)

Race

Black 3.4 (1)

White 96.6 (28)

Ethnicity

Hispanic 3.4 (1)

Non-Hispanic 96.6 (28)

Graduate from a CACREP accredited program

Yes 41.4 (12)

No 48.3 (14)

Unsure 10.3 (3)

Age

Range: 27 to 61 years

Mean: 44.5 years

s.d.: 10.5

Years of Experience

Range: 1 to 35 years

Mean: 11.2 years

s.d.: 9.9

Table 4-2 Item statistics ECPE

Item Mean Std. Deviation N

1 4.40 .955 40

2 3.30 1.137 40

3 2.95 .986 40

4 3.45 1.011 40

5 2.43 1.059 40

6 3.30 1.043 40

7 2.65 1.051 40

8 2.63 1.055 40

9 2.85 .949 40

10 2.65 1.001 40

11 2.45 .959 40

12 3.75 .981 40

13 3.08 .917 40

14 3.28 1.062 40

15 3.30 .939 40

16 2.60 1.429 40

17 2.68 1.047 40

18 2.55 1.061 40

19 2.65 1.027 40

20 3.18 1.059 40

21 3.38 1.030 40

22 2.23 1.025 40

23 1.85 .834 40

24 2.85 1.075 40

25 2.25 .954 40

26 2.83 1.107 40

27 2.28 .960 40

28 2.45 .986 40

29 2.50 1.013 40

30 2.88 1.042 40

31 2.68 1.071 40

32 1.83 .958 40

33 3.03 1.000 40

34 2.55 1.154 40

35 2.43 1.035 40

36 2.55 1.011 40

37 2.55 1.085 40

38 2.58 .903 40

39 2.85 1.231 40

40 3.33 1.228 40

41 4.03 1.025 40

42 3.13 .992 40

43 4.28 .933 40

44 3.83 1.196 40

45 3.28 1.012 40

46 2.45 1.154 40

47 2.28 1.109 40

48 1.93 1.095 40

49 2.18 1.196 40

50 2.43 1.217 40

51 3.38 1.192 40

52 3.08 1.163 40

53 3.68 1.118 40

54 3.25 1.171 40

55 1.93 .997 40

56 1.95 .932 40

57 2.00 .987 40

58 2.53 1.062 40

59 4.40 .955 40

60 3.30 1.137 40

Table 4-3 Summary item statistics ECPE

Statistic Mean Minimum Maximum Range Maximum/Minimum Variance N
Item Means 2.819 1.825 4.400 2.575 2.411 .350 58
Item Variances 1.116 .695 2.041 1.346 2.937 .050 58

Table 4-4 Item-total statistics ECPE

Item Number | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Squared Multiple Correlation | Cronbach's Alpha if Item Deleted

1 159.08 1753.404 .539 . .981

2 160.18 1730.199 .697 . .981

3 160.53 1739.640 .691 . .981

4 160.03 1734.076 .740 . .980

5 161.05 1742.613 .607 . .981

6 160.18 1733.635 .722 . .981

7 160.83 1739.840 .644 . .981

8 160.85 1741.208 .626 . .981

9 160.63 1743.574 .668 . .981

10 160.83 1736.712 .716 . .981

11 161.03 1748.230 .602 . .981

12 159.73 1741.897 .667 . .981

13 160.40 1756.913 .516 . .981

14 160.20 1736.933 .671 . .981

15 160.18 1738.456 .742 . .980

16 160.88 1736.471 .495 . .981

17 160.80 1730.421 .757 . .980

18 160.93 1725.199 .807 . .980

19 160.83 1728.558 .795 . .980

20 160.30 1737.549 .665 . .981

21 160.10 1736.144 .702 . .981

22 161.25 1732.603 .747 . .980

23 161.63 1748.958 .685 . .981

24 160.63 1738.651 .643 . .981

25 161.23 1727.256 .873 . .980

26 160.65 1727.977 .742 . .980

27 161.20 1742.831 .669 . .981

28 161.03 1739.820 .689 . .981

29 160.98 1727.820 .815 . .980

30 160.60 1733.118 .729 . .980

31 160.80 1738.164 .651 . .981

32 161.65 1741.772 .685 . .981

33 160.45 1742.869 .642 . .981

34 160.93 1725.199 .740 . .980

35 161.05 1730.203 .768 . .980

36 160.93 1729.507 .795 . .980

37 160.93 1725.712 .783 . .980

38 160.90 1732.451 .854 . .980

39 160.63 1731.215 .632 . .981

40 160.15 1729.105 .655 . .981

41 159.45 1745.690 .592 . .981

42 160.35 1740.336 .678 . .981

43 159.20 1755.549 .524 . .981

44 159.65 1751.054 .449 . .981

45 160.20 1749.446 .555 . .981

46 161.03 1724.025 .752 . .980

47 161.20 1735.087 .661 . .981

48 161.55 1740.151 .614 . .981

49 161.30 1721.087 .755 . .980

50 161.05 1724.562 .706 . .981

51 160.10 1730.810 .658 . .981

52 160.40 1735.579 .624 . .981

53 159.80 1740.574 .596 . .981

54 160.23 1724.487 .736 . .980

55 161.55 1731.126 .787 . .980

56 161.53 1739.640 .732 . .981

57 161.48 1740.102 .684 . .981

58 160.95 1734.869 .695 . .981

59 159.08 1753.404 .539 . .981

60 160.18 1730.199 .697 . .981

Table 4-5 SCSES item statistics

Item Number Mean s.d. N

1 3.79 .871 42

2 4.14 .783 42

3 3.24 .932 42

4 3.69 .924 42

5 2.81 .917 42

6 4.29 .944 42

7 3.19 .890 42

8 3.95 .936 42

9 2.67 .928 42

10 2.69 .897 42

11 3.26 .964 42

12 2.83 1.167 42

13 3.07 .947 42

14 3.07 1.091 42

15 2.79 .951 42

16 3.07 .921 42

17 2.83 .986 42

18 3.21 1.025 42

Table 4-6 Summary item statistics SCSES

Statistic Mean Minimum Maximum Range Maximum/Minimum Variance N of Items
Item Means 3.255 2.667 4.286 1.619 1.607 .256 18
Item Variances .906 .613 1.362 .749 2.221 .027 18

Table 4-7 Item-total statistics SCSES

Item Number | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Squared Multiple Correlation | Cronbach's Alpha if Item Deleted

1 54.81 137.768 .507 .652 .943

2 54.45 137.181 .604 .750 .941

3 55.36 135.943 .555 .760 .942

4 54.90 133.503 .681 .743 .939

5 55.79 134.319 .646 .623 .940

6 54.31 134.658 .608 .739 .941

7 55.40 134.344 .666 .762 .940

8 54.64 137.503 .479 .719 .943

9 55.93 134.068 .649 .690 .940

10 55.90 131.357 .813 .842 .937

11 55.33 129.593 .836 .789 .936

12 55.76 128.820 .706 .735 .939

13 55.52 131.573 .755 .824 .938

14 55.52 131.914 .630 .764 .941

15 55.81 132.548 .705 .790 .939

16 55.52 131.768 .769 .879 .938

17 55.76 130.966 .751 .833 .938

18 55.38 130.583 .736 .758 .938

Table 4-8 Paired samples statistics pretest posttest ECPE

Instrument Mean N Std. Deviation Std. Error Mean
ECPE pretest 165.93 29 37.195 6.907
ECPE posttest 206.59 29 52.198 9.693

Table 4-9 Paired samples test (t-test) ECPE

Paired differences, pretest - posttest: Mean = -40.655; Std. Deviation = 47.644; Std. Error Mean = 8.847; 95% Confidence Interval of the Difference = [-58.778, -22.532]
t = -4.595, d.f. = 28, Sig. (2-tailed) = .000

Table 4-10 Paired samples statistics pretest posttest SCSE

Instrument Mean N Std. Deviation Std. Error Mean
SCSE pretest 57.83 29 9.067 1.684
SCSE posttest 69.10 29 12.982 2.411

Table 4-11 Paired samples test (t-test) SCSE

Paired differences, pretest - posttest: Mean = -11.276; Std. Deviation = 11.003; Std. Error Mean = 2.043; 95% Confidence Interval of the Difference = [-15.461, -7.091]
t = -5.519, d.f. = 28, Sig. (2-tailed) = .000

Table 4-12 Test between subject effects (ANCOVA) posttest difference ECPE

Source | Type III Sum of Squares | df | Mean Square | F | Sig.
Corrected Model | 22322.093a | 4 | 5580.523 | 2.482 | .071
Intercept | 13604.763 | 1 | 13604.763 | 6.050 | .021
experience | 4625.065 | 1 | 4625.065 | 2.057 | .164
ECPE | 19504.056 | 1 | 19504.056 | 8.673 | .007
CACREP | 246.314 | 2 | 123.157 | .055 | .947
Error | 53968.941 | 24 | 2248.706 | |
Total | 1313949.000 | 29 | | |
Corrected Total | 76291.034 | 28 | | |

a. R Squared = .293 (Adjusted R Squared = .175)

Table 4-13 Test between subject effects (ANCOVA) pretest posttest difference SCSE

Source | Type III Sum of Squares | df | Mean Square | F | Sig.
Corrected Model | 1542.981a | 4 | 385.745 | 2.915 | .043
Intercept | 399.356 | 1 | 399.356 | 3.018 | .095
experience | 61.818 | 1 | 61.818 | .467 | .501
SCSE | 1149.317 | 1 | 1149.317 | 8.686 | .007
CACREP | 101.033 | 2 | 50.516 | .382 | .687
Error | 3175.708 | 24 | 132.321 | |
Total | 143202.000 | 29 | | |
Corrected Total | 4718.690 | 28 | | |

a. R Squared = .327 (Adjusted R Squared = .215)

Table 4-14 Implementation survey results, items 9-13

Item Number Mean sd Maximum Minimum

9 4.37 1.012 5 1

10 4.21 .787 5 2

11 4.32 .749 5 3

12 3.74 .933 5 2

13 4.42 .507 5 4

Table 4-15 Implementation survey comments

What did you like about the professional development?

It resulted in a better understanding of the 'measure' on my part

It allowed me to reflect on the + and - of the program.

I was able to collaborate with others in my school to get feedback

The support and help with data collection

A new way to see my data

It gave me direction and a starting point

It provided a lot of specific information and allowed us to think about how we evaluate our

programs

It provides me with a more effective way of fine-tuning my guidance program

I am also working on an 'Inquiry' project with Lastinger and this helped my understanding of the

components

Made things clearer

Professional development in this area is always helpful

I had already received state DOE training on MEASURE and adapted it to my IPDP so had

already planned for the year what to do

What would you change about the professional development?

I was not always clear how to respond to the questions being asked

Nothing

We need more training earlier in the year and possibly tie it to the inquires that schools do

Allow more time for hands on and Q&A

Perhaps Nicole could follow up and work with small groups so we could implement our plan

with her direct guidance. Sometimes it's difficult for the theory to make sense without guided

practice.

It should be offered in the beginning of the year

I would appreciate the full workshop with opportunity for 'hands on'

Explanation of the new template we are using this year

More time to work on the concept

How will this professional development influence your way of work?

I will try to conduct a needs assessment next year

Be more reflective, use data more to drive guidance program

Always keep in mind the purpose and how my program purpose aligns with the needs, data, data,

data and more data

I will continue to practice program evaluation

I will try this approach with my program; unfortunately there is little time and support to

implement it effectively

It will guide my program creation, implementation, and assessment

It will make delivery of services more meaningful

More conscientious and deliberate in making data work for me

CHAPTER 5

DISCUSSION

The purpose of this study was to examine the impact of a professional development unit

on program evaluation on in-service elementary school counselors': (a) knowledge of program

evaluation, (b) self-efficacy level towards performing program evaluation, and (c) ability to

develop and conduct program evaluation in the school setting. This chapter is divided into four

sections. The first section provides an overview of the study. The second section delineates the

implications of the study for the practice of counseling. The third section discusses limitations of

the study. The fourth and final section recommends potential avenues of future research.

Overview of the Study

This study provided a four-session professional development unit on program evaluation

to a group of elementary school counselors in Pinellas County, Florida. The 29 participants took

part in all four professional development sessions and they completed: a personal data sheet; a

pretest and posttest of the Essential Competencies for Program Evaluators Self-Assessment

(ECPE); a pretest and posttest of the School Counselors Self Efficacy Scale (SCSES); and an

implementation survey. The participants were primarily white (96.6%) and female (96.6%), ranging in age from 27 to 61. All had at least one year's experience working as a school counselor.

Research Question One

The first research question investigated the impact the professional development unit had on school counselors' knowledge of program evaluation. Additionally, it examined the relationship between years of experience or participation in a CACREP accredited program and school counselors' understanding of program evaluation. A statistically significant difference was found between the school counselors' pretest and posttest mean scores on the ECPE Self-Assessment (Stevahn, et al., 2005), which measured self-confidence with their program evaluation skills. This finding indicates that participants in the professional development unit increased their self-confidence towards their knowledge of program evaluation.

On the other hand, there were no significant differences in respondents' gains between pretest and posttest mean scores on the ECPE Self-Assessment (Stevahn, et al., 2005) based on years of counseling experience. Nor was there a significant difference in respondents' gains between pretest and posttest mean scores on the ECPE Self-Assessment (Stevahn, et al., 2005) based on graduation from a CACREP accredited program. Therefore, neither years of experience nor graduation from a CACREP program was a differentiating factor in the increase of mean scores from pretest to posttest.

Research Question Two

The second research question investigated the impact the professional development unit

had on school counselors' self-efficacy towards program evaluation skills. Additionally, it examined the relationship between years of experience or participation in a CACREP accredited program and school counselors' self-efficacy towards program evaluation skills.

For this study, 18 items from the SCSES were used to measure school counselor self-efficacy. The items selected were specific school counselor activities related to program evaluation. Cronbach's alpha was applied to examine the internal reliability and resulted in 0.943, which provided preliminary indication that the 18-item instrument was reliable. A statistically significant difference was found between the school counselors' pretest and posttest mean scores on the SCSES (Bodenhorn & Skaggs, 2005). Therefore, participants in the professional development unit increased their perceived self-efficacy toward utilizing program evaluation skills.

There was no significant difference between participants' pretest-posttest mean score gains on the SCSES (Bodenhorn & Skaggs, 2005) based on their years of counseling experience. There was also no significant difference between participants' pretest-posttest mean score gains on the SCSES (Bodenhorn & Skaggs, 2005) based on their graduation from a CACREP accredited program. Therefore, neither years of experience nor graduation from a CACREP program was a differentiating factor.

Research Question Three

The third question investigated the impact the professional development unit had on

school counselors' development of a program evaluation in their own school setting. The

Professional Development Unit was based on The Systematic Approach of Rossi, Lipsey, and

Freeman (2004). This approach prescribes five levels to be assessed when conducting a program

evaluation: (a) Need for the Program, (b) Program Theory, (c) Program Process, (d) Impact of

the Program, and (e) Cost Analysis. In this study, the professional development unit taught

needs assessment, evaluating program alignment with need, program design and theory, program

implementation and monitoring, and the topic of sharing outcomes. Two months after the final

session of the professional development unit, an Implementation Survey was given to the

participants to complete. The Implementation Survey collected information on the participants‘

use of the program evaluation skills taught during the professional development unit.

Results from the Implementation Survey indicated that 70% of participating school

counselors conducted a needs assessment after having received the professional development

unit. Interestingly, 80% of participating school counselors assessed how the purpose of the

program they selected aligned with the needs assessment. Results indicated more respondents

identified they had looked at program alignment with need than had conducted a needs

assessment. The difference could have been attributed to the fact that some school counselors

aligned their program to the school-wide improvement plan, which had pre-identified the needs.

Therefore, they would not have conducted a separate needs assessment. Finally, 95% of the

respondents evaluated the implementation of a program in their school setting, and 75% of them

assessed the outcomes of a program within their school-counseling curriculum. These results

indicate that the majority of respondents applied the learning from the professional development

in their school setting.

The results of the Implementation Survey indicated that the participants felt supported by

the principal and other staff in their building to implement a program evaluation. On a 5-point scale, the mean of the responses was 4.37 when participants were asked if they felt supported by their principal in the evaluation of a program in their guidance program. When asked if they felt supported by other counselors, the mean response was 4.32, and the mean response was 4.21 when asked if they felt supported by other staff in the building. This finding indicated the perceived presence of organizational support and is consistent with previous literature identifying organizational support as a required element in the implementation of skills learned in professional development (Guskey, 2002).

Additionally, the Implementation Survey results identified both the need for more training time and the need for hands-on support as two areas that could be improved in the professional development unit. This finding supports previous literature on professional development, which identifies duration and experiential learning as factors in effective professional development models (Guskey & Yoon, 2009; Trevisan, 2004; Yoon, Duncan, Lee, Scarloss, & Shapley, 2007).

Implications

Practice

Program evaluation is a comprehensive approach used to demonstrate effectiveness for the purpose of accountability and as a means for school counselors to improve and develop a


comprehensive guidance program (Astramovich & Coker, 2007; Loesch & Ritchie, 2008; Wheeler & Loesch, 1981). Understanding the impact that this professional development unit on program evaluation had on in-service elementary school counselors has implications for school counseling program evaluation practice. There has been a long-standing need for school counselors to be more accountable (Brown & Trusty, 2005; Gysbers, 2004; Gysbers & Henderson, 2001; Myrick, 2003; Wheeler & Loesch, 1981).

The current climate of standards based reform has increased the sense of urgency for

school counselors to be accountable for their contribution to student achievement (Dahir &

Stone, 2009; Herr, 2001; House & Hayes, 2002; McGannon, Carey & Dimmitt, 2005).

This emphasis is evident in the National School Counselor Training Initiative (NSCTI), established by the Educational Trust to promote the inclusion of the school counselor in the accountability system. NSCTI works to promote the school counselor as a change agent who fosters student academic achievement (Educational Trust, 2007).

(2005), a framework for school counseling programs, specifies standards of an effective school-

counseling program and specifically includes being engaged in continuous program evaluation

activities as a means to address accountability (ASCA, 2005).

The sense of urgency to link the school counselor to student achievement has become even more pressing with new initiatives like the federal grant "Race to the Top" (U. S. Department of Education, 2010c). Race to the Top is a $4.35 billion federally funded competitive grant awarded to states that demonstrate the development of conditions promoting significant improvement in student outcomes in the areas of student achievement, graduation rates, and the closing of achievement gaps (U. S. Department of Education, 2010c). State applications for grants like Race to the Top provide prescriptive interventions, which focus on comprehensive


educational reform (U. S. Department of Education, 2010d). Educational reform plans impact the systems and processes in place at the school level. School counselors are an integral part of school reform efforts (ASCA, 2005; Dahir & Stone, 2009; Educational Trust, 2007; Herr, 2001; House & Hayes, 2002; McGannon, Carey & Dimmitt, 2005).

Individual state accountability models and school reform measures now require the development of a measure that links student achievement to school and, in some cases, district personnel (U. S. Department of Education, 2010b). These requirements often include the school counselor (Florida Department of Education, 2010). Counselors must have confidence in their program evaluation skills and employ the practice of program evaluation as a first step in understanding the impact their programs have on student achievement (Rossi et al., 2004).

Yet, despite the recognized growing importance of school counselor accountability, school counselors are not systematically incorporating program evaluation into their way of work (Fairchild & Seeley, 1995; Isaacs, 2003; McGannon, Carey & Dimmitt, 2005). Internal and external factors contribute to the school counselor's capacity to practice program evaluation (Trevisan, 2002a). Externally, the organization of the school, including support from the principal and other staff to conduct program evaluation, is a factor contributing to the implementation of program evaluation. Internally, counselors often lack training to adequately prepare them to practice program evaluation (Astramovich & Coker, 2007; Lusky & Hayes, 2001).

This study addressed both the lack of training and the perceived support for program evaluation. The results of this study were encouraging; the professional development unit resulted in a significant increase in school counselors' self-confidence in their knowledge of program evaluation skills and in school counselors' self-efficacy with initiating program evaluation practices. Results from the follow-up Implementation Survey showed that


more than 95% of the participants in the study selected a program in their school-counseling curriculum to evaluate. Results indicated high levels of implementation in each of the program evaluation components addressed in the professional development. The components included needs assessment, program alignment, program design and purpose, and sharing outcomes. This evidence of implementation, along with the increase in participants' understanding of program evaluation skills and improved self-efficacy, implies an increased likelihood that the participants will utilize the skills acquired from the professional development unit (Bandura, 1977, 1996, 2007).

These findings have strong implications for practicing school counselors. There is an identified need for the implementation of program evaluation amongst school counselors (Dahir & Stone, 2009; Educational Trust, 2007; Fairchild & Seeley, 1995; Herr, 2001; House & Hayes, 2002; Isaacs, 2003; McGannon, Carey & Dimmitt, 2005). This study found that a professional development unit in the area of program evaluation had an impact on the school counselors' implementation of program evaluation.

Theory

Moreover, the evaluation of the professional development unit supported the evidence that use of a professional development unit was an effective strategy for increasing self-confidence in program evaluation knowledge and the application of these skills among the participating school counselors. In the theoretical framework proposed by Guskey (2002), the evaluation of professional development has five levels: (1) Participant Reaction; (2) Participant Learning; (3) Organizational Support and Change; (4) Participant Use of Knowledge and Skills; and (5) Student Learning Outcomes. In this study, the first four of these levels were assessed as follows: (1) Participant Reaction: the Implementation Survey indicated that the participants found the professional development unit valuable; (2)


Participant Learning: the difference between the pretest and posttest ECSE scores demonstrated that the participants increased self-confidence in their knowledge of the skills being taught; (3) Organizational Support and Change: the Implementation Survey results provided evidence of organizational support; and (4) Participant Use of Knowledge and Skills: the Implementation Survey indicated that the majority of participants implemented what they had learned through the professional development training unit at their schools. The fifth level, Student Learning Outcomes, was not assessed in this study.

The delivery of the professional development was based on Social Cognitive Theory and included lecture, discussion, collaborative learning activities, and individual reflection. This study provided evidence that the professional development unit, as delivered, was an effective strategy for increasing school counselors' self-confidence in their knowledge and application of program evaluation skills. These results further validate the application of Social Cognitive Theory to professional development.

This study was timely, and the results were significant. The participants reported that they had implemented program evaluation. The study has the potential to inform the future development of professional development on program evaluation skills for school counselors.

Limitations

This study used a pretest-posttest pre-experimental design; it did not utilize a control group. As a pre-experimental design, this study lacked internal validity (Ary, Jacobs, & Razaviech, 1996). The observed changes cannot definitively be attributed to this professional development unit rather than to other possible causes. Other variables could have been responsible for the observed changes between the pretest results and the posttest results found in this study. Two such possible extraneous variables were history and maturation (Ary et al., 1996). History as a source of change would include any external event or factor outside of the study that could affect the subject's posttest score, such as outside training, reading, or other


learning (Ravid, 2000). Maturation as a source of change would include any physical or mental change that could affect the subject's posttest score (Ravid, 2000).

Another limitation in the pre-experimental design was reactivity, the concept that study participants may act differently in response to being observed (Ary et al., 1996). In this study, participants all received the professional development as part of the monthly school counselor meetings. The supervisor of school counseling for the school district was present as an observer at the meetings and strongly supported participation in the professional development. Reactivity may have been a factor in the observed pretest-posttest change.

In addition, familiarity, where test scores improve over time because test subjects do better on a test once they become more familiar with it, may have affected the results of this study. Familiarity with the items, rather than the effects of the professional development, may have contributed to the observed pretest-posttest change (Ary et al., 1996).

There are also limits to the generalizability of the study findings. First, the sample size

was small. The professional development was delivered to all 77 of the elementary school

counselors employed by Pinellas County Schools. Forty-six elementary school counselors

volunteered to participate in the study. Of those who volunteered, 29 were able to attend all

three sessions of the professional development unit and completed both the pretest and posttest

instruments. Although the study started with a large group, the end sample was small (N=29),

further limiting generalizability of the results.

Second, the sample of school counselors participating in the study was not randomly selected. Participants chose to take part in the study. The common characteristic of being willing to take part in the study may have made them a distinct subgroup of the population, and therefore they may not be representative of the population of all school counselors. Finally, the


person delivering the training program was the person carrying out this research project. As the

trainer and researcher, the person leading the training program was highly motivated to see the

project succeed. This factor could introduce bias and make the study results less likely to

generalize to other situations in which the person leading the training program would not be

similarly motivated.

Recommendations

Weaknesses of this study could be addressed in future studies. This study lacked a control group, was conducted with a small group of elementary school counselors in one district in Florida, and was performed with the researcher also delivering the training program. The primary recommendation for future research is to replicate this study with a control group, with a larger group of elementary, middle, and high school counselors in multiple locations, and with training program leaders who are not the researchers. Despite requiring a substantial time investment from the researcher and participants, the results of this study indicated a high level of participant satisfaction. Therefore, additional study in this area could be well received by school counselors.

Including a control group in future studies would strengthen the design (Ravid, 2000) by increasing the confidence that the positive changes could be attributed to the professional development unit. Expanding the scope of future studies to include

a larger group of school counselors at all school levels would allow for better generalizability of

the findings. Including different levels would also help determine whether there is a difference in acquisition of program evaluation skills, self-efficacy, or implementation of program evaluation between school counselors practicing at elementary, middle, and high school levels. Expanding

this research beyond one district would provide the opportunity to examine the effect of the

professional development training on program evaluation on counselors working in different


school districts. This could include a variety of settings, such as small and large districts and different regions of the United States. Such expansion would allow examination of differences between participants in different areas in the acquisition of program evaluation skills, self-efficacy, and implementation of program evaluation. Finally, separating the role of the researcher from the role of the training program leader would remove a potentially important source of bias from future studies. Additional studies in this area, with the above changes, would provide a more comprehensive understanding of the impact of this professional development on program evaluation on in-service school counselors.

Future studies could be improved by heeding recommendations gleaned from the Implementation Survey. Participants indicated the need for beginning this training at the start of a school year; a recommendation, therefore, is to begin the professional development at the very start of the school year. The results of the Implementation Survey also indicated a need for additional direct support with the development of an evaluation and for hands-on activities during the professional development. A recommendation could be to increase the duration of the professional development to last the entire nine months of the school year. Doing so would allow additional time to provide targeted support and incorporate teaching techniques that foster guided instruction, independent activities, and feedback.

This study was designed to determine the impact a professional development unit had on school counselors' knowledge and implementation of program evaluation. This study did not, however, investigate the impact that implementation of program evaluation had on the overall goals of the work done by the counselor. A recommendation would be to add a research question addressing whether the evaluations done by the school counselors at their schools resulted in improvements to the programs they evaluated. A reasonable next step in the research


would be to further investigate the impact an evaluation done by the school counselor had on the program evaluated.

Additionally, the theoretical framework of Guskey (2002) proposes evaluating the impact a professional development had on student-learning outcomes. There are ever-increasing mandates to link educational personnel in the schools to student-learning outcomes (U. S. Department of Education, 2010c). Examining how the professional development affected the school counselors' impact on student achievement through program evaluation would have strong implications for the development and implementation of professional development for school counselors.

An important next step, which could elicit additional recommendations for the direction of ongoing research, is sharing the results with the participants. It is important to share the outcomes of the research with the participants to validate their participation (Dillman, 2007). Discussions between the researcher and the participants may also bring out previously unaddressed concerns with the professional development content or delivery. Sharing results with participants is feasible and could be accomplished by the researcher utilizing the same monthly guidance meeting forum used to deliver the professional development unit.

Summary

This study showed that after participating in a professional development unit on program evaluation, elementary school counselors increased their knowledge of program evaluation, increased their perceived self-efficacy toward program evaluation skills, and applied the learning from the professional development in their school settings. The ASCA National Model (2005) standards of an effective school-counseling program reflect the current climate of reform, making these findings encouraging. The majority of school counselors who participated in the study did implement the principles they learned, felt increased self-efficacy in doing so, and


indicated a sense of school level support for using program evaluation. Although further

research is necessary, the findings of this study suggest that implementation of a professional

development unit like the one tested in this study might be a useful step toward increasing the

program evaluation skills of school counselors.


APPENDIX A

EMAIL FROM GUIDANCE SUPERVISOR

To: School Counselors

From: Guidance Supervisor

RE: Back to School Guidance Meeting

Date:

During our district guidance meetings we will be working to develop our program

evaluation skills with Nicole Carr, a doctoral candidate from the University of Florida. The

materials and skills gathered from this professional development are aimed at assisting you in

completing the annual measure you submit to me at the end of the year. It is expected that you

will conduct an evaluation of a component of the guidance program at your school as a result of

the professional development unit.

Nicole will be delivering the professional development unit as part of a research study. As

a participant you will benefit from learning program evaluation skills, which are important to

you as a professional school counselor. Your participation in the research study is voluntary. As

a voluntary participant, at the start of the professional development unit, you will be asked to

anonymously complete (1) a personal data sheet, (2) a questionnaire regarding program

evaluation, and (3) a school counselor self-efficacy scale. At the end of the professional development, you will again be asked to anonymously complete a questionnaire regarding program evaluation and a school counselor self-efficacy scale. Two months after completing

the professional development unit, you will be asked to complete a survey to determine if you

have applied what you have learned and actually evaluated your guidance program. You can

receive the professional development and opt not to participate in the study by not completing

the forms for the study.


APPENDIX B

EMAIL FROM PRINCIPAL

To: School Counselor

From: Principal

RE: Guidance Meeting

Date:

During your annual guidance meetings this year, you will have the opportunity to

participate in a professional development unit on Program Evaluation Skills. I hope you take

advantage of this opportunity and attend all three of the sessions offered. They will be available

during your scheduled countywide guidance meetings. As a participant you will benefit from

learning program evaluation skills. It is expected that you will conduct an evaluation of a

component of the guidance program at our school as a result of the professional development

unit.

Nicole Carr, a doctoral candidate from the University of Florida, will be delivering the

professional development unit as part of a research study. Your participation in the research

study is voluntary. Her study will request that you anonymously complete (1) a personal data sheet, (2) a questionnaire regarding program evaluation, and (3) a school counselor self-efficacy scale. At the end of the professional development, you will again be asked to anonymously complete a questionnaire regarding program evaluation and a school counselor self-efficacy scale. Two

months after completing the professional development unit, you will be asked to complete a

survey to determine if you have applied what you have learned and actually evaluated a

component of the school guidance program.


APPENDIX C

SCRIPT PRESENTED BY RESEARCHER AT GUIDANCE MEETING

Hello,

It's really nice to be here for your meeting today. I thank Karlia for allowing me to share

information with you today. Currently, I am a doctoral candidate in the Department of

Counselor Education, at the University of Florida. As part of my dissertation I am examining the

impact a professional development unit on program evaluation has on in-service school

counselors.

I am asking you to volunteer to participate in this study. The study includes participation in a

professional development unit. You will be asked to meet three times for one to two hours, over

the course of the semester. At the start of the professional development unit, you will be asked

to anonymously complete (1) a personal data sheet, (2) a questionnaire regarding program

evaluation, and (3) a school counselor self-efficacy scale. At the end of the professional development, you will again be asked to anonymously complete a questionnaire regarding program evaluation and a school counselor self-efficacy scale. Two months after completing

the professional development unit, you will be asked to complete a survey to determine if you

have applied what you have learned and actually evaluated your program. The follow-up survey

will take between 5-10 minutes to complete.

There are no anticipated risks. As a participant you will benefit from learning program

evaluation skills. You are free to withdraw your consent to participate and may discontinue your

participation in the study at any time without consequence. The researcher will not provide

monetary or time compensation for participation in this study.

Your identity will be kept confidential to the extent provided by law. Your information will be

assigned a code number. The list connecting your name to this number will be kept on a

password-protected computer, and paper copies will be locked in a file in the researcher's office.

When the study is completed and the data have been analyzed, the list will be destroyed. Your

name will not be used in any report.

If you have any questions about this research protocol, please contact me; my name and the name of my faculty advisor, along with contact information, are provided on the informed consent. Questions or concerns about your rights as a research participant may be directed to the IRB02 office, which can also be found on the informed consent.


APPENDIX D

PERSONAL DATA SHEET

The purpose of these questions is to gather demographic information about participants in this

research study. You will not be asked to provide your name. Your responses will only be

reported in general terms as they are related to the variables being analyzed.

Please provide your information like this: not like this:

1. Gender: Male / Female

2. Ethnicity (Yes / No): Hispanic or Latino

3. Race (select all that apply; Yes / No for each): American Indian or Alaskan; Asian; Black or African American; Native Hawaiian or Other Pacific Islander; White

4. Program (Yes / No for each): Did you complete a Masters Program in Counselor Education? Did you graduate from a CACREP accredited program?

5. Number of years of experience as a school counselor: _______

6. Age: _______

Thank you for participating!


APPENDIX E

SCHOOL COUNSELOR SELF-EFFICACY SCALE

Below is a list of activities representing many school counselor responsibilities. Indicate your confidence

in your current ability to perform each activity by circling the appropriate answer next to each item

according to the scale defined below. Please answer each item based on your current school, and based on

how you feel now, not on your anticipated (or previous) ability or school(s). Remember, this is not a test

and there are no right answers.

Use the following scale:

1 = not confident,

2 = slightly confident,

3 = moderately confident,

4 = generally confident,

5 = highly confident.

Please circle the number that best represents your response for each item.

1. Advocate for integration of student academic, career, and personal development into

the mission of my school.

1 2 3 4 5

2. Recognize situations that impact (both negatively and positively) student learning

and achievement.

1 2 3 4 5

3. Analyze data to identify patterns of achievement and behavior that contribute to

school success.

1 2 3 4 5

4. Advocate for myself as a professional school counselor and articulate the purposes

and goals of school counseling.

1 2 3 4 5

5. Develop measurable outcomes for a school-counseling program which would

demonstrate accountability.

1 2 3 4 5

6. Consult and collaborate with teachers, staff, administrators and parents to promote

student success.

1 2 3 4 5

7. Select and implement applicable strategies to assess school-wide issues. 1 2 3 4 5

8. Promote the use of counseling and guidance activities by the total school community

to enhance a positive school climate.

1 2 3 4 5

9. Develop school improvement plans based on interpreting school-wide assessment

results.

1 2 3 4 5

10. Identify aptitude, achievement, interest, values, and personality appraisal resources

appropriate for specified situations and populations.

1 2 3 4 5

11. Analyze data to identify needs of students in my school. 1 2 3 4 5

12. Differentiate needs from means. 1 2 3 4 5

13. Identify expected program outcomes of a program in my counseling curriculum. 1 2 3 4 5

14. Review research on existing programs. 1 2 3 4 5

15. Develop a process to monitor implementation of a program. 1 2 3 4 5

16. Assess the effectiveness of a program in my counseling curriculum. 1 2 3 4 5

17. Analyze the impact of a program. 1 2 3 4 5

18. Articulate the outcomes of a program. 1 2 3 4 5


APPENDIX F

ESSENTIAL COMPETENCIES FOR PROGRAM EVALUATORS SELF- ASSESSMENT

Below is a list of Program Evaluator Competencies. Indicate your confidence in your current ability to

perform each activity by circling the appropriate answer next to each item according to the scale defined

below. Please answer each item based on how you feel now, not on your anticipated (or previous) ability.

Remember, this is not a test and there are no right answers.

Use the following scale:

1 = not confident,

2 = slightly confident,

3 = moderately confident,

4 = generally confident,

5 = highly confident.

Please circle the number that best represents your ability for each item.

1

Act ethically and strive for integrity and honesty in conducting evaluations 1 2 3 4 5

2 Address conflict in an evaluation 1 2 3 4 5

3 Analyze data 1 2 3 4 5

4 Analyze situations 1 2 3 4 5

5 Analyze the political considerations of an evaluation 1 2 3 4 5

6 Apply professional evaluation standards 1 2 3 4 5

7 Assess reliability of data 1 2 3 4 5

8 Assess validity of data 1 2 3 4 5

9 Attend to the issues of organizational change 1 2 3 4 5

10 Attend to the issues surrounding the use of an evaluation 1 2 3 4 5

11 Budget time and resources for an evaluation 1 2 3 4 5

12

Build professional relationships that will enhance program

evaluation practice 1 2 3 4 5

13 Collect data 1 2 3 4 5

14

Communicate with stakeholders throughout an evaluation process 1 2 3 4 5

15 Conduct an evaluation in a non-disruptive manner 1 2 3 4 5

16 Conduct literature reviews 1 2 3 4 5

17 Conduct meta-evaluations 1 2 3 4 5

18

Consider the general and public welfare in evaluation

practice 1 2 3 4 5

19 Contribute to the knowledge base of evaluation 1 2 3 4 5

20

Convey personal evaluation approaches and skills to

potential stakeholders 1 2 3 4 5

21 Demonstrate cross-cultural competence in an evaluation 1 2 3 4 5

22 Describe a program 1 2 3 4 5

23 Determine program 'evaluability' 1 2 3 4 5

24 Develop evaluation designs 1 2 3 4 5


Use the following scale:

1 = not confident,

2 = slightly confident,

3 = moderately confident,

4 = generally confident,

5 = highly confident.

Please circle the number that best represents your response for each item.

25 Develop recommendations 1 2 3 4 5

26 Examine the organizational context of the evaluation 1 2 3 4 5

27

Facilitate constructive interpersonal interaction to assist in

evaluation 1 2 3 4 5

28 Frame evaluation questions 1 2 3 4 5

29 Identify data sources 1 2 3 4 5

30 Identify needed resources for an evaluation 1 2 3 4 5

31 Identify the interests of stakeholders 1 2 3 4 5

32 Interpret data 1 2 3 4 5

33 Justify the cost of an evaluation 1 2 3 4 5

34 Make judgments 1 2 3 4 5

35 Modify a study when needed 1 2 3 4 5

36 Negotiate with stakeholders before an evaluation begins 1 2 3 4 5

37 Note strengths and limitations of an evaluation 1 2 3 4 5

38 Present an evaluation in a timely manner 1 2 3 4 5

39

Provide rationales for decisions throughout the evaluation

process 1 2 3 4 5

40 Pursue professional development in program evaluation 1 2 3 4 5

41 Reflect on my competencies and areas for growth 1 2 3 4 5

42 Remain open to input from others 1 2 3 4 5

43 Report evaluation procedures and results 1 2 3 4 5

44

Respect clients, respondents, program participants and

other stakeholders 1 2 3 4 5

45 Respect the uniqueness of the evaluation site and client 1 2 3 4 5

46 Respond to requests for evaluations 1 2 3 4 5

47 Specify program theory 1 2 3 4 5

48 Supervise others conducting an evaluation 1 2 3 4 5

49 Train others in evaluation 1 2 3 4 5

50

Understand the knowledge base of evaluation (terms,

concepts, theories, assumptions) 1 2 3 4 5

51 Use appropriate technology for an evaluation 1 2 3 4 5

52 Use interpersonal skills in program evaluation 1 2 3 4 5

53 Use negotiation skills in program evaluation 1 2 3 4 5

54 Use verbal / listening skills in program evaluation 1 2 3 4 5

55 Use written communication skills in program evaluation 1 2 3 4 5

57 Knowledgeable about mixed methods 1 2 3 4 5

58 Knowledgeable about qualitative methods 1 2 3 4 5

59 Knowledgeable about quantitative methods 1 2 3 4 5

60 Aware of self as an evaluator 1 2 3 4 5


APPENDIX G

IMPLEMENTATION SURVEY

The purpose of these questions is to determine if the Professional Development Unit had an

impact on the school counselors' development and use of program evaluation in the school. It is

not an evaluation of the school counselor. You are asked not to provide your name or name of

your school.

Please provide your information like this: not like this:

Since completing the Professional Development Unit:

Yes No

1. I have conducted a needs assessment.

2. I have selected a guidance program to evaluate.

3. I have assessed the theory and purpose of the program.

4. I have assessed how the program purpose aligns with the needs.

5. I have defined the anticipated outcomes.

6. I have assessed the implementation of the program.

7. I have assessed the outcomes of the program.

8. I have shared the results with: (Please check all that apply)

Other Counselors School Advisory Council Principal

Parents School Leadership Team Teachers Other

Please indicate your level of agreement with the following statements:

Strongly Agree / Agree / Neutral / Disagree / Strongly Disagree

I felt supported by my principal in the

evaluation of a program in my guidance

program.

I felt supported by the other staff in the

building.

I felt supported by other school

counselors.

I had time to complete this process.

I will continue to practice program

evaluation as a way of work.

What did you like about the professional development on Program Evaluation?


What would you change about the professional development?

How will this professional development influence your way of work?


APPENDIX H

INFORMED CONSENT

Dear School Counselor:

I am a doctoral candidate in the Department of Counselor Education, at the University of Florida. As part

of my dissertation I am examining the impact a professional development unit on program evaluation has

on in-service school counselors.

I am asking you to volunteer to participate in this study. In this study you will be asked to volunteer to

participate in a four-part professional development unit. You will be asked to meet three times for one to

two hours, over the course of the semester. At the start of the professional development unit, you will

also be asked to anonymously complete a personal data sheet, a questionnaire regarding program

evaluation, and a school counselor self-efficacy scale. At the end of the professional development you will be asked to anonymously complete a questionnaire regarding program evaluation, and a school counselor self-efficacy scale. Two months after completing the professional development unit, you will

be asked to complete a survey to determine if you have applied what you have learned in the professional

development unit and conducted an evaluation of your program. The follow-up survey will take between

5-10 minutes to complete.

There are no anticipated risks. As a participant you will benefit from learning program evaluation skills.

You are free to withdraw your consent to participate and may discontinue your participation in the study

at any time without consequence. The researcher will not provide monetary or time compensation for

participation in this study. Your identity will be kept confidential to the extent provided by law. Your

information will be assigned a code number. The list connecting your name to this number will be kept on

a password-protected computer, and paper copies will be locked in a file in the researcher's office. When

the study is completed and the data have been analyzed, the list will be destroyed. Your name will

not be used in any report.

If you have any questions about this research protocol, please contact me, Nicole Carr, at (727) 643-4658, [email protected], or my faculty advisor, Mary Ann Clark, PhD, Department of Counselor Education, University of Florida, (352) 273-4331.

Questions or concerns about your rights as a research participant may be directed to the IRB02 office, University of Florida, Box 112250, Gainesville, FL 32611.

Please sign and return this copy of the letter in the enclosed envelope. A second copy is provided for your

records. By signing this letter, you give me permission to report your responses anonymously in the final

dissertation.

Participant: _________________________________________________Date:__________

Principal Investigator: _________________________________________Date: _________


APPENDIX I

OUTLINE OF PROFESSIONAL DEVELOPMENT UNIT

I. Unit I Needs Assessment

A. Basic Program Evaluation

1. Why do we evaluate?

2. How do we evaluate?

3. When do we evaluate?

B. The How-To of Program Evaluation

1. Cost Analysis

2. Effectiveness

3. Implementation

4. Theory/ Design

5. Needs Assessment

C. Today's Focus: Needs Assessment

D. Determine what it is we need before we can determine what program aligns with our

needs.

E. ACTIVITY (whole group): School Counselors

1. Who are we?

2. What do we do?

3. What do we want to be?

F. The gaps between what is and what the goal is.

G. ACTIVITY (individual)

Personal example

H. The Data

I. What data do we have available?

1. AYP

2. School Grades

3. Individual Student

4. Climate Surveys

5. Discipline

6. Attendance

7. Targeted Behavior

8. Other

J. ACTIVITY (small group)

1. Where is your program now?

2. What does it look like?

3. What are the gaps/ problems/ needs?

K. Next Steps

1. Set goals that align with the needs

2. Sound Theory

3. Design Plan

4. Implement


II. Unit II Program Alignment

A. How do we know what will meet our needs?

1. What are our needs?

2. How did we determine our needs?

Where are we and where do we want to be?

What are the gaps between what is and what should be?

The gaps are the needs.

B. Avoid starting with the means

1. At first we all think of the MEANS (resources, solutions, money, more teachers, computers, more programs, more testing)

2. Be sure you have a clear understanding of what it is you really need.

3. If you are unclear what the needs are you will not be able to align your

programming to fit the needs

C. Understanding what we have

1. What means are already in place to meet these needs?

a. Are these means aimed at meeting other needs?

b. Are they working?

c. How do we know?

D. ACTIVITY (small group)

1. Your school needs

2. Your school has what in place to meet these needs

E. What Works

1. Obtaining a clear definition of the objective of a program
2. What is it intended to do? (Refer to the What Works Clearinghouse)

F. Be Specific

1. What is the goal of your program?

a. Does this goal match my need?

b. Is the goal clear?

2. Who will this program impact?

3. How will I monitor the progress?

4. How will I know this program is working?

a. What can I measure to see this is working?

b. What will indicate this program is not working?

G. ACTIVITY (individual/ whole group share)

1. Identify a need in your school

2. Your school has what in place to meet these needs

3. Next steps

H. Program Design

1. What does the theory tell you?

2. What are the objectives of the program?

3. Is this something that can be done?

I. Remember:

Effectiveness

Implementation

Theory/ Design

Needs Assessment


III. Unit III Program Design and Process

A. Review

1. Assessment of Program Outcomes

2. Assessment of Program Process

3. Assessment of Program Design

4. Assessment of Need for Program

B. Sound Theory

C. Design

1. Is the design grounded in theory?

2. Is the design feasible?

3. Are the objectives clear and measurable?

D. Is the program a means to filling the need?

Does this program objective align with what is needed?

E. Implementation

1. Describe what took place. How was it implemented?

2. Was the program implemented the way it was intended?

3. What impact did the changes in implementation have on the program's objective?

F. Monitoring

G. ACTIVITY (small group)

1. What was the goal of your program?

a. Did this goal match my need?

b. Was the goal clear?

2. Who did this program impact?

3. How am I monitoring the progress?

4. How will I know this program is working?

a. What can I measure to see this is working?

b. What will indicate this program is not working?

H. Assessing the process

I. Remember:

Effectiveness

Implementation

Theory/ Design

Needs Assessment


IV. Unit IV Sharing Outcomes

A. What happened?

1. What were the goals?

2. What need/ problem was being addressed?

3. What took place? (describe it)

4. What were the results?

5. How were these results determined?

B. Evidence versus Proof

C. Who needs to know?

D. ACTIVITY (whole group)

1. Who needs this information?

2. How will I determine who needs the information?

3. How will I know what to share with them?

E. How do I tell them?

1. Not one size fits all- no template

2. You know your audience

F. ACTIVITY (small group)

1. What was the goal of your program?

a. Did this goal match my need?

b. Was the goal clear?

2. Who did this program impact?

3. How did I monitor the progress?

4. How did I determine if this program worked?

a. What did I measure to see if it worked?

b. What were my results?

5. How will I share my results?


LIST OF REFERENCES

Alderman, H. S., & Taylor, L. (2002). School counselors and school reform: New directions. Professional School Counseling, 5, 235–248.

American School Counseling Association. (2005). American School Counseling Association National Model: A framework for school counseling programs (2nd ed.). Alexandria, VA: Author.

American School Counseling Association. (2007). School counseling standards: school

counselor competencies. Retrieved October 19, 2007 from

http://www.schoolcounselor.org/files/competencies.pdf

American School Counseling Association. (2008). School Counselor Performance Standards.

Retrieved December 2008 from:

http://www.ascanationalmodel.org/content.asp?pl=33&sl=35&contentid=35

American School Counseling Association. (2010). ASCA National Model. Retrieved September

2010 from: http://www.ascanationalmodel.org/

Ary, D., Jacobs, L. C., & Razaviech, A. (1996). Introduction to Research in Education (5th ed.). Florida: Harcourt Brace & Company.

Astramovich, R.L., & Coker, J.K. (2007). Program evaluation: The accountability bridge

Model for counselors. Journal of Counseling and Development, 85, 162–172.

Astramovich, R.L., & Coker, J.K. (2005). Training school counselors in program evaluation.

Professional School Counseling, 9, 49–54.

Baggerly, J., & Osborn, D. (2006). School counselors‘ career satisfaction and commitment:

Correlates and predictors. Professional School Counseling, 9, 197–205.

Bandura, A. (1977). Self-Efficacy: Toward a unifying theory of behavior change. Psychological

Review, 84, 191–215.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory.

Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1994). Self-Efficacy. In V.S. Ramachaudran (Ed.), Encyclopedia of human

behavior (Vol 4.) New York: Academic Press (Reprinted from Encyclopedia of mental

health, H Friedman (Ed.) 1998 San Diego: Academic Press).

Bandura, A. (1999). Social cognitive theory: An agentic perspective. Asian Journal of Social Psychology, 2(1), 21–41.

Bandura, A. (2007). Self-Efficacy: The exercise of control (9th ed.). New York: MacMillan.


Bax, S. (2002). The social and cultural dimensions of training the trainer. Journal of Education

for Teaching, 28, 167–178.

Bechtel, P. & O‘Sullivan, M. (2006). Effective professional development- what we now know.

Journal of Teaching in Physical Education, 25, 363–378.

Bernard, J.M., & Goodyear, R.G. (1998). Fundamentals of Clinical Supervision (2nd ed.). Needham Heights, MA: Allyn & Bacon.

Bernard, J.M., & Goodyear, R.G. (2004). Fundamentals of Clinical Supervision (3rd ed.). California: Merrill.

Betz, N., & Voyten, K. K. (1997). Efficacy and outcome expectations influence career

exploration and decidedness. The Career Development Quarterly, 46, 179–189.

Bodenhorn, N., & Skaggs, G. (2005). Development of the school counselor self-efficacy scale.

Measurement and Evaluation in Counseling and Development, 38, 14–21.

Bodenhorn, N., Wolfe, E., Airen, O. (2010). Program choice and self-efficacy: Relationship to

the achievement gap and equity. Professional School Counselor, 13, 165–174.

Borders, L. D. (2005). Snapshot of clinical supervision in counseling and counselor education: A

five-year review. The Clinical Supervisor, 24 (1-2), 69–113.

Borders, L. D., & Brown, L. L. (2005). The New handbook of counseling supervision. Mahwah,

NJ: Lahaska/Lawrence Erlbaum.

Brouwers, A., & Tomic, W. (2001). The factorial validity of scores on the teacher interpersonal

self-efficacy scale. Educational and Psychological Measurement, 6(3), 433–445.

Brown , D., & Trusty, J. (2005). School counselors, comprehensive school counseling programs,

and academic achievement: Are school counselors promising more than they can

deliver? Professional School Counseling, 7, 91–99.

CACREP (2009) Directory of programs. Retrieved December 12, 2009, from

http:// www.cacrep.org.

Cantrell, S. C., & Hughes, H. K. (2008). Teacher efficacy and content literacy implementation: An exploration of the effects of extended professional development with coaching. Journal of Literacy Research, 40, 95–127.

Carey, J., & Dimmitt, C. (2006). Resources for school counselors and counselor educators: The

Center for School Counseling Outcome Research. Professional School Counseling, 9,

416–420.


Carey, J. C., Dimmitt, C., Hatch, T. A., Lapan, R.T., & Whiston, S. C. (2008). Report of the

National Panel for Evidence-Based School Counseling: Outcome Research Coding

Protocol and evaluation of Student Success Skills and Second Step. Professional School

Counseling, 11, 197–206.

Carey, J., Harrity, J., & Dimmitt, C. (2005). The development of a self-assessment instrument to

measure a school district's readiness to implement the ASCA national model.

Professional School Counseling, 8, 305–312.

Campbell, C. A., & Dahir, C.A. (1997). The national standards for school counseling programs.

Alexandria, VA: American School Counseling Association.

Clark, M. A., & Amatea, E. (2004). Teacher perceptions and expectations of school counselor contributions: Implications for program planning and training. Professional School Counseling, 8, 12–140.

Compeau, D. R., & Higgins, C. A. (1995). Application of social cognitive theory to training for computer skills. Information Systems Research, 6, 118–143.

Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of

psychopathology. Journal of Consulting Psychology, 24, 349–354.

D'Eon, M., Sadownik, L., Harrison, A., & Nation, J. (2008). Using self-assessment to detect workshop success: Do they work? American Journal of Evaluation, 29, 92–98.

Dahir, C., & Stone, C. (2003). Accountability: A M.E.A.S.U.R. E. of the impact school

counselors have on student achievement. Professional School Counseling, 6, 214–221.

Dahir, C., & Stone, C. (2009). School Counselor accountability: The path to social justice and

system change. Journal of Counseling and Development, 87, 12–20.

Dillman, D. (2007). Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method (2nd ed.). New York: Wiley.

Dollarhide, C., & Lemberger, M. (2006). 'No Child Left Behind': Implications for school counselors. Professional School Counseling, 9, 295–304.

Eakin, S. (1996) National education summit. Technos, 5, 16–25.

Educational Trust. (2007). Transforming school counseling. Retrieved January 13, 2009, from

http://www2.edtrust.org/EdTrust/Transforming+School+Counseling/main.

Elementary and Secondary Education Act, Pub. L. No.89–10, 79 Stat 27, 20 U.S.C. ch 70 (1965)

Eisner, E. (2001). What does it mean to say a school is doing well? Phi Delta Kappan, 82, 367–

372.


Fairchild, T. N., & Seeley, T.J. (1995). Accountability strategies for school counselors: A baker's dozen. School Counselor, 42, 377–392.

Feldt, R. C., & Woelfel, C. (2009). Five factor personality domains, Self-efficacy, career-

outcome expectations and career indecision. College Student Journal, 43, 2, 429–437.

Fitch, T. J., & Marshal, J. L. (2004). What counselors do in high-achieving schools: A study on

the role of the school counselor. Professional School Counselor, 7, 172–177.

Fitts, W. H., & Warren, W. L. (1996). Tennessee Self-Concept Scale: TSCS:2. Los Angeles:

Westem Psychological Services.

Florida Department of Education (2009). Florida School Indicator Reports. Retrieved March 30, 2009, from http://www.fldoe.org/eias/eiaspubs/xls/fsir/2007-08/Mem_Category_Dist_0708.xls

Florida Department of Education (2008) Race/ Ethnicity, Survey 6 and Other Changes.

Retrieved March 29, 2009, from:

http://www.fldoe.org/eias/databaseworkshop/word/racesvy6.doc

Florida Senate (2010) The 2010 Florida Statutes. Retrieved September 11, 2010 from:

http://www.leg.state.fl.us/statutes/

Florida Bureau of School Improvement (2010) The Florida Department of Education: Bureau of

School Improvement. Retrieved September 11, 2010 from: http://flbsi.org/

Fournier, P., Banza, B., Tourigny, C., & Dieudonne, O. (2009). Programme evaluation training

for health professionals in francophone Africa: Process, competence acquisition and use.

Human Resources for Health, 7, (3).

Fraenkel, J., & Wallen, N. (2008). How to design and evaluate research in education (7th ed.). New York: McGraw-Hill.

Fullan, M. (2001). Leading in a culture of change. San Francisco: Jossey-Bass.

Ghere, G., King, J. A., Stevahn, L., & Mimmema, J. (2006). A professional development

unit for reflecting on program evaluation competencies. American Journal of Evaluation,

27, 108–123.

Greason, P. B. & Cashwell, C. S. (2009) Mindfulness and counseling self-efficacy: The

mediating role of attention and empathy. Counselor Education and Supervision, 49, 2–

18.

Glass, G.V., & Hopkins, K. D. (1996). Statistical Methods in Education and Psychology (3rd ed.). Massachusetts: Allyn and Bacon.


Guba, E., & Lincoln, Y. (2004). Fourth Generation Evaluation (6th ed.). Thousand Oaks, California: Sage.

Guskey, T.R. (2000). Evaluating Professional Development. Thousand Oaks, CA: Corwin Press.

Guskey, T. R. (2002). Professional development and teacher change. Teachers and Teaching:

Theory and Practice, 8, 381–391.

Guskey, T. R. (2003, April). What makes professional development effective. Presented at

annual meeting of American Educational Research Association, Chicago.

Guskey, T. R., & Yoon, K.S. (2009). What works in professional development? Phi Delta

Kappan, 90, 495–500.

Gysbers, N. C. (2004). Comprehensive guidance and counseling programs: The evolution of

accountability. Professional School Counseling, 8, 1–4.

Gysbers, N. C., & Henderson, P. (2000). Developing and managing your school guidance program (3rd ed.). Alexandria, VA: American Counseling Association.

Gysbers, N. C., & Henderson, P. (2001). Comprehensive guidance and counseling programs: A

rich history and bright future. Professional School Counseling, 4, 246–256.

Gysbers N. C., Hughey, K., Starr, M., & Lapan, R (1992) Improving school guidance programs:

a framework for program, personnel, and results evaluation. Journal of Counseling and

Development, 70, 565–570.

Herr, E.L. (2001). The impact of national policies, economics, and school reform on

comprehensive guidance programs. Professional School Counseling, 4, 236–245.

Harootunian and Yargar (1981). Teachers' conceptions of their own success. Current issues. Washington, DC: ERIC Clearinghouse on Teacher Education.

House, R. M., & Hayes, R.L. (2002). School counseling: Becoming key players in school

reform. Professional School Counseling, 5, 249–256.

Isaacs, M. L. (2003). Data-driven decision making: The engine of accountability. Professional

School Counseling, 6, 288–295.

The Joint Committee on Standards for Educational Evaluation (2008). The program evaluation standards: How to assess evaluation of educational programs (2nd ed.). Michigan: Author.

Kosine, N, Steger, M, & Duncan, S. (2008). A strengths based approach to finding meaning and

purpose in careers. Professional School Counseling, 12, 133–136.


Kumpulainen, K. (2001). Classroom Interaction and Social Learning: From Theory to Practice. London: Routledge Falmer.

Krumboltz, J.D. (1998). Counselor actions needed for the new career perspective. British

Journal of Guidance & Counseling, 26, 559–564.

Lambie, G., & Williamson, L. (2004). The challenge to change from guidance counseling

to professional school counselor. Professional School Counseling, 12, 124–131.

Lapan, R. T. (2001). Results-based comprehensive guidance and counseling program: A

framework for planning and evaluation. Professional School Counseling, 4, 289–299.

Lapan, R. T., Gysbers, N. C., & Sun, Y. (1997). The impact of more fully implemented

guidance programs on the school experiences of high school students: A statewide

evaluation study. Journal of Counseling and Development, 75, 292–302.

Larson, L. M., Suzuki, L. A., Gillespie, K. N., Potenza, M. T., Bechtel, M. A., & Toulouse, A. L.

(1992). Development and validation of the Counseling Self-Estimate Inventory. Journal

of Counseling Psychology, 39, 105–120.

Larson, L. M. & Daniels, J. A. (1998) Review of the counseling self-efficacy literature. The

Counseling Psychologist, 26, 179–218.

Lee, C., & Workman, D. (1992). School counselors and research: Current status and future

direction. School Counselor, 40, 15–20.

Lipsey, M., & Wilson, D.B. (2000). Practical meta-analysis (3rd ed.). Thousand Oaks, CA: Sage.

Loesch, L. C., & Ritchie, M. (2008). The accountable school counselor (2nd ed.). Austin, TX: ProEd Inc.

Loesch, L. C. (2001). Counseling program evaluation: Inside and outside the box. In D. C.

Locke, J. E. Myers, & E. L. Herr (Eds.), The handbook of counseling (pp. 513–525).

Thousand Oaks, CA: Sage.

Lusky, M. B., & Hayes, R. L. (2001). Collaborative consultation and program evaluation.

Journal of Counseling & Development, 79, 26–38.

Madaus, G. E., & Stufflebeam, D. L. (2000). Program evaluation: A historical overview. In Stufflebeam, D.L., Madaus, G.F., & Kellaghan, T. (Eds.), Evaluation Models: Viewpoints on Educational and Human Services Evaluation (2nd ed., pp. 3–18). New York: Springer.


Martin, J., McCaughtry, N., Kulinna, P. H., & Cothran, D. (2009). The Impact of a Social

Cognitive Theory-Based Intervention on Physical Education Teacher Self-Efficacy.

Professional Development in Education, 35, 511–529.

McGannon, W., Carey, J., & Dimmitt, C. (2005, May). The current status of school counseling outcome research (Research Monograph No. 2). Amherst, MA: Center for School Counseling Outcome Research, School of Education, University of Massachusetts.

Myrick, R. D. (1990). Retrospective measurement: An accountability tool. Elementary School

Guidance and Counseling, 25, 21–29.

Myrick, R. D. (1997). Developmental guidance and counseling: A practical approach (3rd ed.). Minneapolis, MN: Educational Media.

Myrick, R. D. (2003). Accountability: Counselors count. Professional School Counseling, 6,

175–179.

National Center for Educational Statistics (2009). The Nation's report card. Retrieved March 29, 2009, from: http://nces.ed.gov/nationsreportcard/

Patton, M. Q. (2004). Roots of Utilization-Focused Evaluation. In Alkin, M. C. (Ed.), Evaluation

roots: tracing theorists’ views and influences. Thousand Oaks, California: Sage.

Patton, M.Q. (1997). Utilization-Focused Evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.

Patton, M.Q. (2008). Utilization-Focused Evaluation: The new century text (4th ed.). Thousand Oaks, CA: Sage.

Pajares, F. (1991). Current directions in self-efficacy research. In M. Maehr & P. R. Pintrich

(Eds.). Advances in motivation and achievement. Volume 10, (pp. 1–49). Greenwich, CT:

JAI Press.

Pajares (2002). Overview of social cognitive theory and of self-efficacy. Retrieved: September

10, 2010 from http://www.emory.edu/EDUCATION/mfp/eff.html

Paulsen, A. & Betz, N. (2004). Basic confidence predictors of career decision-making self-

efficacy. The Career Development Quarterly, 52, 354–362.

Pratt, C., McGuigan, W., & Katzev, A. 2000. Measuring program outcomes using retrospective

pre-test methodology. American Journal of Evaluation, 21, 341–350.

Ravid, R. (2000). Practical Statistics for Educators (2nd

ed.). New York: University Press of

America.

Rayle, A. D. (2006). Do school counselors matter? Mattering as a moderator between job stress and job satisfaction. Professional School Counseling, 9, 206–215.

Riggs, I., & Enochs, L. (1990). Toward the development of an elementary teacher's science teaching efficacy belief instrument. Science Education, 74, 625–638.

Rossi, P., Lipsey, M., & Freeman, H. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

Rowell, L. L. (2005). Collaborative action research and school counselors. Professional School Counseling, 9, 28–36.

Rowell, L. L. (2006). Action research and school counseling: Closing the gap between research and practice. Professional School Counseling, 9, 376–384.

Scarborough, J. L. (2005). The School Counselor Activity Rating Scale: An instrument for gathering process data. Professional School Counseling, 8, 274–283.

Schwitzer, A. M. (1997). Utilization-focused evaluation: Proposing a useful method of program evaluation for college counselors and student development programs. Measurement and Evaluation in Counseling and Development, 30, 50–61.

Scriven, M. (2004). Reflections. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences. Thousand Oaks, CA: Sage.

Shadish, W., Cook, T., & Leviton, L. (1995). Foundations of program evaluation (3rd ed.). Thousand Oaks, CA: Sage.

Shavelson, R., & Towne, L. (2002). Scientific research in education. Washington, DC: National Academies Press.

Shaw, K., Davis, N., & McCarty, B. (1991). A cognitive framework for teacher change. In R. G. Underhill (Ed.), Proceedings of the thirteenth annual meeting of the North American chapter of the International Group for the Psychology of Mathematics Education (pp. 161–167). Blacksburg, VA.

Sink, C. A., & Spencer, L. R. (2005). My Class Inventory-Short Form as an accountability tool for elementary school counselors to measure classroom climate. Professional School Counseling, 9, 37–48.

Sink, C. A., & Spencer, L. R. (2007). Teacher version of the My Class Inventory-Short Form: An accountability tool for elementary school counselors. Professional School Counseling, 11, 129–139.

Spielberger, C. (1983). State-Trait Anxiety Inventory (Form Y). Redwood City, CA: Mind Garden.

Stevahn, L., King, J., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26, 43–59.

Stone, C., & Dahir, C. (2006). School counselor accountability: A MEASURE of student success (2nd ed.). New York: Prentice Hall.

Studer, J. R., Oberman, A. H., & Womack, R. H. (2006). Producing evidence to show counseling effectiveness in the schools. Professional School Counseling, 9, 385–391.

Stufflebeam, D. (2004). The 21st century CIPP model: Origins, development, and use. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences. Thousand Oaks, CA: Sage.

Trevisan, M. S. (2000). The status of program evaluation expectations in state school counselor certification requirements. American Journal of Evaluation, 21, 81–94.

Trevisan, M. S. (2001). Implementing comprehensive guidance program evaluation support: Lessons learned. Professional School Counseling, 4, 225–228.

Trevisan, M. S. (2002a). Enhancing practical evaluation training through long-term evaluation projects. American Journal of Evaluation, 23, 81–92.

Trevisan, M. S. (2002b). Evaluation capacity in K-12 school counseling programs. American Journal of Evaluation, 23, 291–305.

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American Journal of Evaluation, 25, 255–272.

U.S. Department of Education. (2001). No Child Left Behind Act (Pub. L. No. 107–110). Retrieved March 10, 2009, from http://www.ed.gov/nclb/overview/intro/execsumm.html

U.S. Department of Education. (2008). Research and statistics. Retrieved March 29, 2009, from http://www.ed.gov

U.S. Department of Education. (2010a). Catalog of Federal Domestic Assistance. Retrieved September 18, 2010, from http://www2.ed.gov/programs/readingfirst/index

U.S. Department of Education. (2010b). Decision letters on each state's final assessment system under No Child Left Behind. Retrieved September 18, 2010, from http://www2.ed.gov/admins/lead/account/nclbfinalassess/index.html

U.S. Department of Education. (2010c). Race to the Top program executive summary. Retrieved October 1, 2010, from http://www2.ed.gov/programs/racetothetop/executive-summary.pdf

U.S. Department of Education. (2010d). Race to the Top Fund. Retrieved October 1, 2010, from http://www2.ed.gov/programs/racetothetop/index.html

Vacc, N. A., & Loesch, L. C. (2000). Professional orientation to counseling (3rd ed.). Philadelphia: Brunner-Routledge.

Vacc, N. A., Rhyne-Winkler, M. C., & Poidevant (1993). Evaluation and accountability of counseling services: Possible implications for a midsize school district. The School Counselor, 40, 260–266.

Valentine, J., & Cooper, H. (2003). Effect size substantive interpretation guidelines: Issues in the interpretation of effect sizes. Washington, DC: What Works Clearinghouse.

VanSteendam, E., Rijlaarsdam, G., Sercu, L., & VandenBergh, H. (2010). The effect of instruction type and dyadic or individual emulation on the quality of higher order peer feedback in EFL. Learning and Instruction, 20, 316–327.

Vogt, M. E., & Shearer, B. A. (2007). Reading specialists and literacy coaches in the real world (2nd ed.). Boston: Pearson.

Walsh, M. E., Barrett, J. G., & DePaul, J. (2007). Day-to-day activities of school counselors: Alignment with new directions in the field and the ASCA national model. Professional School Counseling, 10, 370–378.

Wang, S. L., & Lin, S. J. (2007). The application of social cognitive theory to web-based learning through NetPorts. British Journal of Educational Technology, 38, 600–612.

Wheeler, P. T., & Loesch, L. (1981). Program evaluation and counseling: Yesterday, today and tomorrow. The Personnel and Guidance Journal, 51, 573–578.

Whiston, S. C. (1996). Accountability through action research: Research methods for practitioners. Journal of Counseling & Development, 74, 616–623.

Whiston, S. C. (2002). Response to the past, present, and future of school counseling: Raising some issues. Professional School Counseling, 5, 148–155.

Whiston, S. C., & Aricak, O. T. (2008). Development and initial investigation of the School Counseling Program Evaluation Scale. Professional School Counseling, 11(4), 253–261.

Wittmer, J., & Loesch, L. C. (1990). Roses, ducks, and doctoral degrees in counselor education. Counselor Education and Supervision, 30(2), 156–162.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.

Yoon, K. S., Duncan, T., Lee, S. W.-Y., Scarloss, B., & Shapley, K. L. (2007). Reviewing the evidence on how teacher professional development affects student achievement (Issues & Answers Report, REL 2007–No. 033). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southwest.

Yuen, A., & Ma, W. (2008). Exploring teacher acceptance of e-learning technology. Asia-Pacific Journal of Teacher Education, 36(3), 229–243.

Zimmerman, B. J., & Schunk, D. (2004). Self-regulating intellectual processes and outcomes: A social cognitive perspective. In D. Y. Dai & R. J. Sternberg (Eds.), Motivation, emotion, and cognition: Integrative perspectives on intellectual functioning and development (pp. 323–349). Mahwah, NJ: Erlbaum.

Zunker, V. G. (2002). Career counseling: Applied concepts of life planning. Pacific Grove, CA: Brooks/Cole.

BIOGRAPHICAL SKETCH

Born in Rhode Island, Nicole Merlan Carr wanted to become a teacher. After receiving a Bachelor of Arts degree in English and Secondary Education from Rhode Island College, she began her teaching career as a high school English teacher in Zuni, New Mexico. She then moved to Florida and continued to teach English.

In 1998, she graduated from the University of Florida with concurrent degrees, a Master of Education and a Specialist in Education, with majors in school counseling and guidance and in mental health counseling. She worked as a high school and middle school counselor in Washington and Florida for several years. During that time, she became a National Board Certified Counselor.

In 2006, she returned to the University of Florida to pursue a doctorate in counselor education. While enrolled, she accepted a position as the full-time Title I Research Specialist for Pinellas County Schools and was later promoted to Senior Coordinator of Accountability.