
SYSTEMATIC DEVELOPMENT OF A CLINICAL PERFORMANCE INSTRUMENT FOR AUD AND SLP STUDENTS/GRADUATES

Elaine Mormer, PhD, University of Pittsburgh; Deborah Moncrieff, PhD, University of Pittsburgh; Deborah Dixon, MA, ASHA Director, School Services; Janet Deppe, MS, ASHA Director, State Advocacy

BACKGROUND

Personnel preparation programs must develop valid and reliable student assessment instruments (ACAE 2005 standards; CAA 2013 standards)

No standardized approach to assessing clinical skills and knowledge exists across AuD and SLP programs

Several states have developed performance measures for classroom teachers, but not for audiologists and speech-language pathologists (ASHA, 2012)

Development of student/clinician performance evaluation should follow a systematic approach, using appropriate resources

COMPONENTS IN THE DEVELOPMENT OF SKILLS TRACKING/EVALUATION

OBJECTIVE

We applied a systematic approach to design an instrument that assesses and tracks the knowledge and skills AuD and SLP students need to provide services to high-need children in underserved populations. The steps were as follows:

1. Identification of relevant skills and knowledge (i.e., items to be rated)

2. Application of a reliable and valid rating scale

3. Implementation of a data collection mechanism

METHOD

1. Identification of a valid skill and knowledge list

• Identified resource materials, e.g., publications, standards (see next panel)

• Solicited input from relevant constituents, e.g., school personnel, consumers

• Solicited peer review and feedback

• Piloted items with clinical instructors

2. Application of a valid and reliable rating scale (a representation sketch follows this list)

• Determined scale format, e.g., number of points vs. checklist

• Created anchor values

• Created a clearly defined descriptor for each scale point

• Considered priorities for psychometric characteristics

3. Implementation of a manageable data collection mechanism

• Format: paper-based vs. online?

• Availability of support staff for distribution and collection?

• Use of existing online mechanisms, e.g., Typhon AHST, E*Value
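To make step 2 concrete, below is a minimal sketch of one way a point scale with anchor values and clearly defined descriptors might be represented in software. The five-point scale, labels, and descriptor wording are hypothetical placeholders, not the instrument's actual scale.

```python
# A minimal sketch of a point scale with anchor values and behaviorally
# defined descriptors. All values and wording are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ScalePoint:
    value: int       # numeric anchor value
    label: str       # short anchor label
    descriptor: str  # behaviorally defined descriptor shown to raters

RATING_SCALE = [
    ScalePoint(1, "Emerging",   "Requires direct instruction and modeling to perform the skill."),
    ScalePoint(2, "Developing", "Performs the skill with frequent cueing from the instructor."),
    ScalePoint(3, "Competent",  "Performs the skill with occasional guidance."),
    ScalePoint(4, "Proficient", "Performs the skill independently in routine cases."),
    ScalePoint(5, "Advanced",   "Performs the skill independently and adapts to novel cases."),
]

def describe(value):
    """Return the anchored descriptor a rater sees for a given scale point."""
    for point in RATING_SCALE:
        if point.value == value:
            return f"{point.label}: {point.descriptor}"
    raise ValueError(f"{value} is not a defined scale point")

print(describe(3))  # Competent: Performs the skill with occasional guidance.
```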

SAMPLE RESOURCES: EXAMPLES OF SOURCES FOR DEVELOPMENT OF THE SKILLS/KNOWLEDGE LIST

RESOURCES, CONT.

• Produced by the American Speech-Language-Hearing Association's (ASHA) Value-Added Project Team in response to member requests and to rapidly developing state-level policies regarding accountability measures for school-based speech-language pathologists (SLPs)

• The objective was to identify a value-added model designed for SLPs, or one that specifically accounted for the unique contributions of SLPs

• The team reviewed literature, attended seminars, and conducted a peer review to obtain input from related professional organizations, members, pertinent stakeholders, and researchers on value-added models and assessments

SAMPLE SKILLS/KNOWLEDGE ITEMS

• Address cultural/linguistic variations in screening/assessment activities

• Calculate classroom reverberation times (a worked sketch follows this list)

• Participate effectively on a multidisciplinary team

• Minimize barriers to curriculum access

• Assist educational team members in making referrals

• Train and supervise support personnel

• Write appropriate IEP goals, considering academic, behavioral, and developmental issues

• Counsel regarding transition planning
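As one example of the technical knowledge above, classroom reverberation time can be estimated with Sabine's formula, RT60 = 0.161 V / A, where V is room volume in cubic meters and A is total absorption in metric sabins. The sketch below uses illustrative room dimensions and absorption coefficients, not measurements from a real classroom.

```python
# A worked sketch of a classroom reverberation-time estimate using
# Sabine's formula: RT60 = 0.161 * V / A. Dimensions and absorption
# coefficients below are illustrative.

def rt60_sabine(volume_m3, surfaces):
    """Estimate RT60 in seconds from (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 9 m x 7 m x 3 m classroom
volume = 9 * 7 * 3
surfaces = [
    (9 * 7, 0.30),            # acoustic ceiling tile
    (9 * 7, 0.05),            # vinyl tile floor
    (2 * (9 + 7) * 3, 0.03),  # painted drywall walls
]

rt60 = rt60_sabine(volume, surfaces)
print(f"RT60 = {rt60:.2f} s")  # ~1.2 s; ANSI/ASA S12.60 recommends <= 0.6 s
# for core learning spaces, so this room would need added absorption.
```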

RATING SCALE

The data collection mechanism is online, via the Typhon Group© EASI survey system
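Once ratings are exported from the survey system, they can be aggregated for program review. The sketch below assumes a hypothetical CSV export with student, item, and rating columns; it does not reflect Typhon's actual export schema.

```python
# A minimal sketch of aggregating exported ratings. The CSV layout is a
# hypothetical export format for illustration only.
import csv
from collections import defaultdict
from statistics import mean

def mean_rating_per_item(path):
    """Average the numeric rating given for each skill/knowledge item."""
    ratings = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings[row["item"]].append(int(row["rating"]))
    return {item: mean(values) for item, values in ratings.items()}

# Example usage (hypothetical file name):
# for item, avg in mean_rating_per_item("fall2013_ratings.csv").items():
#     print(f"{avg:.2f}  {item}")
```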

FUTURE PLANS

1. Currently piloting with selected clinical instructors

2. Implementation of instrument in Fall term 2013

3. Evaluation of the instrument, pending responses from the first round of implementation

4. Ongoing editing of items and scale based on constituent feedback

SELECTED REFERENCES

Kogan, J., Conforti, L., Bernabeo, E., Iobst, W., & Holmboe, E. (2011). Opening the black box of clinical skills assessment via observation: A conceptual model. Medical Education, 45, 1048-1060. doi:10.1111/j.1365-2923.2011.04025.x

American Speech-Language-Hearing Association. (2005). Quality indicators for professional service programs in audiology and speech-language pathology. Available from www.asha.org/docs/html/ST2005-00186.html

American Speech-Language-Hearing Association. (2012). Performance assessment of contributions and effectiveness of speech-language pathologists (PACE). Available from http://www.asha.org/uploadedFiles/SLPs-Performance-Assessment-Contributions-Effectiveness.pdf

American Psychological Association. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Crossley, J., Humphris, G., & Jolly, B. (2002). Assessing health professionals. Medical Education, 36, 800-804.

PERFORMANCE ASSESSMENT OF CONTRIBUTIONS AND EFFECTIVENESS OF SPEECH-LANGUAGE PATHOLOGISTS (PACE)

• Value-added assessment

• Research findings

• Rationale for the development of the PACE

• Goals of an assessment system

• Components of PACE

• Tools and resources

WHAT IS VALUE-ADDED ASSESSMENT?

Value-added assessment is a process to accurately and fairly assess a professional's impact on student performance and the overall success of the school community.

It is a comprehensive statistical method of analyzing test data that measures teaching and learning.
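To illustrate the statistical idea in deliberately simplified form: a basic value-added estimate regresses post-test scores on pre-test scores and treats a professional's mean residual as that professional's contribution. All scores and IDs in the sketch below are illustrative; operational VAA models also adjust for demographics, missing data, and multiple years.

```python
# A simplified sketch of the value-added idea: fit an expected-growth
# line, then average each professional's residuals (students scoring
# above or below prediction). Illustrative data only.
import numpy as np

pre  = np.array([410, 430, 450, 470, 500, 520, 540, 560])
post = np.array([445, 450, 500, 490, 545, 540, 585, 600])
slp  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

slope, intercept = np.polyfit(pre, post, 1)   # expected-growth line
residuals = post - (slope * pre + intercept)  # actual minus predicted

for who in ("A", "B"):
    print(f"SLP {who}: mean residual {residuals[slp == who].mean():+.1f}")
```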

WHY IS VAA IMPORTANT?

Federal grant programs and waivers require states to include VAA/teacher accountability measures in applications

VAA is viewed as an important accountability measure

Teacher accountability systems are being developed in many states

RESEARCH IMPLICATIONS

Research has primarily focused on implications of use of VAA with classroom teachers.

Notable concerns surfaced, such as difficulty linking student outcomes to one teacher and uncertainty about the accuracy of imputation models for missing student data.

RESEARCH IMPLICATIONS

Evaluating the value that an SLP brings to the school, or connecting that value to specific student performance, is a challenge when compared to a classroom teacher.

ASHA’s Value-Added Working Team was not able to identify any VAA models that specifically incorporated SLPs.

RATIONALE FOR THE DEVELOPMENT OF PACE

Since there was no system specifically developed for SLPs or other support personnel, ASHA wanted to ensure that the assessment model for SLPs:

• Accurately reflects the speech-language pathologist's (SLP's) unique role in contributing to a child's overall performance

• Demonstrates that the SLP is contributing to the success of the school community

COMPONENTS OF AN EVALUATION SYSTEM

ASHA also wanted to make sure that the evaluation system for SLPs:

• Is comprehensive

• Uses multiple measures

• Demonstrates valid and reliable findings

• Provides data for professional development objectives

• Is linked to the roles and responsibilities of the specific job

COMPONENTS OF THE PACE

• PACE Matrix

• Portfolio

• Observation chart

• Teacher, student, and parent checklists

• Self-reflection tool

• Observations by an individual with knowledge of the roles and responsibilities of the SLP

The Matrix consists of a set of nine objectives by which an SLP should be evaluated. These objectives are derived from the typical roles and responsibilities of a school-based SLP. The portfolio is developed to show evidence of mastery of each objective.

TOOLS AND RESOURCES

The PACE documents can be located at: http://www.asha.org/Advocacy/state/Performance-Assessment-of-Contributions-and-Effectiveness/

Additional tools/resources include:

• A guide for developing the portfolio for the Matrix

• An evaluator's guide

• Observation "Look Fors" and a scoring system for the Matrix

• A step-by-step guide for using the PACE system

DISCUSSION QUESTIONS: SYSTEMATIC APPROACH TO INSTRUMENT DEVELOPMENT

• When considering an assessment instrument, what quality markers are necessary to include?

• What are the challenges of evaluating related services scholars and graduates?

• How can the data collected be used to improve program quality?

• Can we ensure “calibrated” responses across raters? (An agreement-check sketch follows.)
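One common way to check calibration across raters is a chance-corrected agreement statistic such as Cohen's kappa, computed on students scored independently by two clinical instructors. The ratings in the sketch below are illustrative.

```python
# Checking whether ratings are "calibrated" across raters with Cohen's
# kappa: near 1.0 suggests well-calibrated raters; near 0 suggests
# agreement no better than chance. Illustrative ratings only.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical/ordinal ratings."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

r1 = [3, 4, 2, 5, 3, 4, 3, 2]  # instructor A's ratings of 8 students
r2 = [3, 4, 3, 5, 3, 4, 2, 2]  # instructor B's ratings of the same students
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # ~0.65 here
```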