
Office of Academic Quality

ASSESSMENT HANDBOOK A handbook for Assessment Coordinators, Program Directors,

and Department Chairs

February 2015

Office of Academic Quality

Gallaudet University


TABLE OF CONTENTS

Introduction

Assessment Policies

Other Assessment Processes

Checklist for New Department Chairs/Program Directors and Assessment Coordinators

Assessment Council

WEAVE

OAQ’s LAU-WEAVE Evaluation Rubric

GU Syllabus Template and Rubrics

Developing and Writing a Mission Statement

Developing and Writing Effective Program Student Learning Outcomes (SLOs)

Assessment Tools & Methods

Developing Scoring Criteria (Rubrics)

Setting Performance Targets

Findings – Summarize Data

Findings – Analyze Data

Findings – Results

Introduction

The Gallaudet University community recognizes the need to adopt an assessment process that

advances evidence-based decisions about teaching, learning, bilingualism, diversity, campus life,

academic support and student development. Fostered by a concern for student academic growth and

personal development, the assessment process at Gallaudet aspires to employ data-driven decision-

making to continuously increase student achievement and institutional effectiveness.

To support and strengthen the assessment process, this Assessment Handbook contains information on the assessment of student learning at Gallaudet University and is intended for department chairs, program directors, and program/unit assessment coordinators. Included in this handbook are expectations, guides, examples, explanations, and resources for developing, planning, and reporting on assessment.

When you see hyperlinks, you are encouraged to visit OAQ’s Assessment website for additional

information.

As the University’s Coordinator of Planning and Assessment, I aim to provide assistance and support to programs in developing, implementing, and reporting on their assessment processes. Please do not hesitate to contact me if you have any questions related to the assessment of student learning.

Regards,

Norma Morán

Coordinator of Planning and Assessment



Policy: Reporting on Assessment of Student Learning

Documentation of assessment practices allows the Office of Academic Quality (OAQ) to assess

strengths and needs in Gallaudet’s assessment development program. Reports also provide focus

points for discussions among faculty and staff regarding qualities that characterize good assessment

practices (e.g., at Assessment Council Meetings).

ASSESSMENT POLICIES FOR NON-ACCREDITED PROGRAMS

All programs in Academic Affairs submit an annual Learning Assessment Update (LAU) in the fall.

All units in Student Affairs submit their LAUs in August. LAUs are submitted via the online

assessment management system, WEAVE. The LAU should describe improvements in the use of assessment, and improvements made in the program/unit through assessment, since the previous LAU.

Annual LAUs should document ongoing assessment processes that include the components of

Student Learning Outcomes, measures, performance targets, scoring criteria, data summary, data

analysis, and findings. For the evaluation of the LAUs, the University’s Assessment Coordinator and

members of the Assessment Council use OAQ’s LAU-WEAVE Evaluation Rubric (see the rubric section below).

ASSESSMENT POLICIES FOR ACCREDITED PROGRAMS

Identified programs with accreditation requirements are to submit a Learning Assessment Update

(LAU) mid-way through their accreditation cycle. For the assessment calendar, contact the

university’s assessment coordinator or see the Assessment of Student Learning Outcomes’ website.

All correspondence (e.g., accreditation reports, rejoinders, follow-up reports, decision letters) with

the accrediting organization must be on file with the OAQ.

Annual LAUs should document ongoing assessment processes that include the components of

Student Learning Outcomes, measures, performance targets, scoring criteria, data summary, data

analysis, and findings. For the evaluation of the LAUs, the University’s Assessment Coordinator and

members of the Assessment Council use OAQ’s LAU-WEAVE Evaluation Rubric (see the rubric section below).


Other Assessment Processes at Gallaudet University1

1) Unit Effectiveness Planning (annual in fall)

2) Senior Literacy Assessment (annual in spring)

3) Senior Assessment (in progress)

Assessment Process and Purpose

Unit Effectiveness Planning (annual in fall): Each undergraduate and graduate program will annually set targets and develop action plans to achieve those targets. In the following fall, programs will assess the achievement of their goals, report outcomes, and revise targets and action plans as needed for the following year.

Senior Literacy Assessment (annual in spring): To assess Gallaudet’s graduating seniors on their English and ASL competence, and to collect data that can be used for institutional and program improvement. This will be absorbed into departmental Senior Assessments for Spring 2015.

Senior Assessment (in progress): By 2014, all UG programs have a draft written senior assessment plan that describes the following:
• the institutional and program-specific outcomes that students are required to integrate in performing this assessment activity
• discipline standards for the outcomes
• the ways in which students are prepared, in explicit and cumulative ways, for senior assessments in prior semesters
• the ways in which the senior assessment(s) are a learning experience for students
• the breadth of faculty collaboration in the assessment activity

1 As of February 1, 2015.


Checklist for New Department Chairs/Program Directors and/or Assessment Coordinators

________ Meet with the previous assessment coordinator to:

Obtain a copy of your program’s assessment plan if any

Locate the program’s curriculum map if any

Learn the current status of your program’s Learning Assessment Update

(LAU) and Senior Assessment if applicable

________ Know your program’s current SLOs

________ Contact OAQ to be added to the Assessment Council email distribution list

________ Review OAQ’s Assessment website

________ Contact OAQ to obtain a WEAVE account

________ Review WEAVE for previous program LAUs

________ Review your program’s data in the Blackboard organization, “Program Outcomes

Assessment”

________ Schedule a meeting with the University’s Assessment Coordinator

________ Ensure continuous data collection within your program/unit


ASSESSMENT COUNCIL

The Office of Academic Quality established an interdisciplinary Assessment Council, which is composed

of designated Assessment Coordinators from each academic department and service program. This

position is considered a vital function within departments/programs.

Peer Review

The Assessment Council convenes one to two times each semester to conduct a peer review of the

learning assessment reports submitted by academic departments and student service programs. The

coordinators work in teams to analyze the assessment reports, provide feedback by completing the

OAQ's learning assessment report rubric, and participate in an inquiry of the reports and their

assessment work.

The coordinators also meet to discuss assessment-specific issues, share ideas and strategies, and assist one another in the development, coordination, and successful application of departmental assessments of

student learning. Best practices are usually identified and made available to the coordinators for their

own assessment work.

Role of Assessment Coordinators

Assessment Coordinators provide leadership and support in the assessment of student learning and the

use of learning assessment data for program improvement in their unit.

To do so, the Assessment Coordinator:

1. Develops and updates (in conjunction with Department Chair/Unit head, and faculty/ staff)

the unit's plan for using program-level student learning assessment to improve the program by:

a. Engaging unit colleagues in shared conversations about student learning and

assessment, and the use of assessment data for program improvement.

b. Leading the development and periodic review of unit outcomes.

c. Leading the development and periodic review of unit curriculum and

assessment activities matrices.

d. Leading the development and implementation of direct and indirect assessment

methods appropriate for the unit.

e. Leading the collection of data about the program and student learning.

f. Working with other faculty and staff to close the loop between what is found in

the data and improving student learning.

2. Provides annual updates of progress and/or achievements (Learning Assessment Update) to

the Senate curriculum councils, Dean, Provost and Office of Academic Quality.

3. Participates as a member of the Assessment Council (AC). Activities include:

a. Functioning as a peer professional development group member with other

Assessment Coordinators;


b. Reviewing and providing feedback on annual Learning Assessment Updates.

4. Functions as unit specialist in learning assessment through professional development and

use of resources.

5. Keeps the unit informed about Gallaudet University's institutional requirements regarding

learning assessment.

Assessment Council Membership

For the membership roster and more information, visit http://www.gallaudet.edu/office_of_academic_quality/assessment_of_student_learning_outcomes/assessment_council.html


WEAVE

Since Fall 2012, assessment reporting has been expected to take place in WEAVE. WEAVE is an

online assessment management system that will be used to manage our planning and

assessment processes (i.e. Assessment of Student Learning).

We no longer accept the paper format for the annual Learning Assessment Updates and the

full three-year Learning Assessment Reports.

The Assessment Council will review programs' Learning Assessment Updates (LAUs) with

the OAQ evaluation rubric.

How to Access WEAVEonline

o Click on the logo on the WEAVE page (http://www.gallaudet.edu/office_of_academic_quality/assessment_of_student_learning_outcomes/weave.html) to access the system. If a prompt for "Abbreviation" appears after you click the logo, type in "Gallaudet"; a welcome window for "Gallaudet University" will then appear on the screen. You can also type in the URL directly (http://app.weaveonline.com/gallaudet/login.aspx).

OAQ's user manuals for WEAVE and curriculum mapping are available:

WEAVE Guide

Curriculum Mapping Guide for WEAVE


OAQ’s LAU-WEAVE Evaluation Rubric

During an annual peer review in January, the Assessment Council uses this rubric to evaluate programs’ Learning Assessment Updates (LAUs). The rubric aims to provide feedback on three essential elements of learning assessment: program student learning outcomes (SLOs), measures/targets, and results.

After the peer review, the University’s Assessment Coordinator revisits the reviews and then provides the programs with a summary of feedback and suggested next steps, if any.

Rating scale for each criterion: Acceptable (2), Needs Some Modification (1), Not Acceptable (0)

Measures/Target
1. Each SLO has a measure. (Rated separately for each SLO, e.g., SLO 1 through SLO 6.)
2. Each measure is clearly described.
3. Most of the measures are direct measures.
4. Three or more methods of assessment are evident.
5. Each measure has identified and appropriate scoring criteria (e.g., applied literacy rubric).
6. Each measure has a target that indicates the desired level of performance that represents success at achieving the SLO.
7. Measures allow student performance to be gauged over time.

Findings/Results
1. Results are entered for each measure.
2. Results are interpreted in relation to the SLOs.
3. Sufficient and specific summaries of the results are included.


Overall
1. The program is using assessment to enhance student learning and teaching (as discussed in the results and the analysis responses).
2. The overall assessment plan appears to align with the program SLOs (as discussed in the results and the analysis responses).

Evaluation Summary

Strengths (Ex.: Clear evidence of target achievement)
Next Steps/Action Plans (Ex.: Rubric needs to be applied to project presentations)
1.
2.
3.
4.

Comments

What are the positive things that you have observed in this report? What improvements are needed?

Reviewed by: ______________________________________ Date: __________________________



GU Syllabus Template and Rubrics

As of December 2010, Gallaudet University has a Senate-approved GU Syllabus Template to which all new courses must adhere and which both CUE and CGE require for all current courses.

Gallaudet University Undergraduate Syllabus Template

The latest version of the syllabus template

Sample of an Undergraduate Syllabus

A sample that adheres to the template

SLO chart for the GU syllabus Template

Institutional Rubrics

1. GU Writing Rubric

2. GU Critical Thinking Rubric

3. GU ASL Presentation Rubric


Developing and Writing a Mission Statement

Definition

Mission: A broad statement reflecting the direction and the values of the program/unit.

Characteristics

A strong mission statement has the following characteristics:
• Clear and concise
• Identifies purpose of program/unit
• Identifies who is served and how
• Appropriately aligned (supports the mission of the institution): course > program/unit > institution

Example(s): Gallaudet University, federally chartered in 1864, is a bilingual, diverse, multicultural institution of

higher education that ensures the intellectual and professional advancement [purpose] of deaf and

hard of hearing individuals [who] through American Sign Language and English [how]. Gallaudet

prepares its graduates [who] for career opportunities in a highly competitive, technological, and

rapidly changing world [purpose].

The mission of Student Financial Services is to ensure the timely and accurate processing and

recording of all student [who] financial transactions [purpose]. Student Financial Services

accomplishes its mission through clear and effective communication and coordination with students,

parents, external agencies and Gallaudet offices [how].


Developing and Writing Effective Program Student Learning Outcomes (SLOs)2

Definition

Student Learning Outcomes (SLOs): the knowledge, skills, and dispositions students should take with them after completing a program or as the result of using services.

Characteristics

A strong SLO has the following characteristics:
• Specific: clear and focused
• Measurable: observable/verifiable
• Aligned with your unit/program’s mission as well as the Institutional Outcomes
• Rigorous: program-level goals should reflect higher cognitive levels

In writing outcome statements, first think of what you expect the student to be able to do after completing your program or using your services. A common approach to writing outcomes is to complete one of these sentences:

At the end of this program, students will be able to (fill in the blank).

After using our services, students will be able to (fill in the blank).

The key to “filling in the blank” begins with 1) selecting an appropriate verb that identifies the observable skill, disposition, or knowledge the student will have and 2) ensuring it is appropriately rigorous.

2 References:

Anderson, H. M., Moore, D. L., Anaya, G., and Bird, E. (2005). Student Learning Outcomes Assessment: A Component of Program Assessment. American Journal of Pharmaceutical Education. Retrieved July 15, 2010 from http://www.ajpe.org/view.asp?art=aj690239&pdf

Assessment Handbook. (March 2008). Gallaudet University, Washington, DC.

Assessment: How to Develop Program Outcomes. (March 2008). University of Hawai’i at Manoa. Retrieved July 15, 2010 from http://www.uhm.hawaii.edu/assessment/howto/outcomes.htm

Bloom’s Taxonomy. (2010, April 20). In Wikipedia, the free encyclopedia. Retrieved July 15, 2010 from http://en.wikipedia.org/wiki/Bloom%27s_Taxonomy

Hatfield, S. (2009). Assessing Your Program-Level Assessment Plan. The IDEA Center (#45). Retrieved April 6, 2009 from http://theideacenter.org/sites/default/files/IDEA_Paper_45.pdf

How to Write Program Objectives/Outcomes. (September 2006). University of Connecticut. Retrieved July 15, 2010 from http://www.assessment.uconn.edu/docs/HowToWriteObjectivesOutcomes.pdf

Learning Outcomes. (2009). Rensselaer Polytechnic Institute. Retrieved July 15, 2010 from http://provost.rpi.edu/node/18

Stempien, J. and Bair, A. (December 2006). Introduction to Developing Student Learning Goals. Department of Geology and the Science Education Initiative, University of Colorado at Boulder. Retrieved July 15, 2010 from http://www.colorado.edu/sei/documents/Learning_Goals-Geology.pdf


Example:

At the end of this program, students will be able to...
• analyze and evaluate research published in professional journals, employing a range of rhetorical techniques.
• develop the Information Architecture of an interactive product within given parameters in a way that addresses audience needs and facilitates user access.

After using our services, students will be able to...
• systematically and critically evaluate the sources for validity and appropriateness.
• create concise and complete outlines of important points when taking notes.

Choosing an Appropriate Verb

Avoid selecting words that are unclear or open to interpretation. The outcome statements should also

have the following characteristics:

• Specific: clear and focused
• Measurable: observable/verifiable
• Aligned with your unit/program’s mission as well as the Institutional Outcomes
• Rigorous: program-level outcomes should reflect higher cognitive levels

A good rule of thumb is NOT to select verbs for skills, dispositions, or knowledge that are not directly measurable, such as “understand,” “learn,” “appreciate,” “like,” “believe,” or “know.” Instead, focus on what students will be able to do, produce, or demonstrate.

The following are examples of vague and effective outcomes statements.

Example(s):

Vague: At the end of this program, a student will be able to do

research.

More effective: At the end of this program a student will be able to

establish and test an original hypothesis.

Vague: As a result of using our service, a student will be able to

do an interview.

More effective: At the end of this program a student will be able to

evaluate the effectiveness of their interview skills.

The terms in the Bloom’s Taxonomy – Learning in Action chart (below) can be used to create SLOs

that tap into the different ability levels. When using the chart, remember that the lower cognitive

skills are prerequisites to the higher ones. So before a student can analyze (4) a situation, they must

have knowledge (1), comprehend (2) that knowledge, and be able to apply (3) that knowledge in

situations.


A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of

Educational Objectives

NOTE for Academic Programs: program-level SLOs should be appropriate for

students completing the program. The majority of your program-level outcomes should be reflective

of the higher-level cognitive skills.


Cognitive Learning (from highest to lowest cognitive level)

Evaluate: To present and defend opinions by making judgments about information, validity of ideas, or the quality of work based on established criteria, logic, or application in a situation

Synthesize: To combine different ideas together to create something original; to integrate ideas into a solution

Analyze: To break information into its component parts by identifying motives or causes; may focus on analysis of relationships between parts, or recognition of organizational principles

Apply: To apply knowledge to new situations; to solve problems

Comprehend: To understand, interpret, or explain learned information without necessarily relating it to anything else

Know: To recall or remember facts without necessarily understanding them

Revising Outcomes

It may require multiple revisions over several assessment cycles before you develop the “final” version that best articulates your vision for your students. Do not worry: this is a natural part of the assessment process, and your learning outcomes should improve with each revision.

Examples of Student Learning Outcomes (SLOs):

At the end of this program, students will be able to...
• analyze and evaluate the theories and applications underlying multiple data collection techniques used in psychology
• analyze business information and infer the resolution of key issues
• analyze and discuss the role of ASL in the fields of linguistics, education, politics, and media

After using our services, students will be able to...
• locate information and evaluate it critically for its validity and appropriateness
• apply tutoring strategies that will help students develop independent learning skills
• write concise and complete outlines of important points when taking notes


Assessment Tools & Methods

Selecting Appropriate Assessment Tools & Methods

Definition

Assessment Tool: the instrument (form, test, rubric, etc.) that is used to collect data for each outcome; the actual product that is handed out to students for the purpose of assessing whether they have achieved a particular learning outcome(s).

Assessment Method: a description of how and when the assessment tool will be used to assess the outcome.

Characteristics

Appropriate assessment tools have the following characteristics:
• Measure achievement of your outcomes
• Show whether targets – the desired level of performance (level of satisfaction, productivity, efficiency, student performance) for each outcome – were achieved
• Cost-effective in time and money
• Useful: produce results that provide information that can be used in making decisions to improve student learning
• Reasonably accurate and truthful (NOT perfect): yield dependable, consistent responses over time
• Evidence of being ongoing, not once and done

Tip(s):

Each SLO should be assessed using at least one direct method (see chart below for more information).

Try to use more than one measure to demonstrate that students have achieved the expected learning

outcomes as that will give you a more balanced picture of your unit/program. Also, multiple sources

of evidence that support the same conclusions add validity to any decisions you make about

improvements to your program/unit.

NOTE: There are times when one assessment tool could measure more than one outcome (e.g., a

survey [indirect] with questions related to several outcomes or a capstone experience [direct]

assessing critical thinking and presentation skills).

Direct Measures vs. Indirect Measures

Direct Assessment: Observation of student* performance or examination of products in which they demonstrate mastery of specific subjects/skills, demonstrate a certain quality in work (e.g., creativity), or demonstrate they hold a particular value.

Indirect Assessment: Inferring student* abilities, knowledge, and values based on an analysis of reported perceptions about student mastery of outcomes. The perceptions may be self-reports by students, or they may be made by others, such as alumni, fieldwork supervisors, employers, or faculty.

*= can mean either individual students or representative samples of students


Example(s): Chart of Strategies for Assessment of Student Learning

NOTE: The chart below is not all-inclusive but is meant to provide you with some available measurement tools. You are not limited to using only these assessment tools.

Examples of Direct & Indirect Measures

DIRECT MEASURES (Academic)
• Course/homework assignments evaluated using a rubric
• Reflective papers
• Observations of field work, internship, performance, service learning, or clinical experience, with notes recorded systematically
• Summaries/analyses of electronic discussion threads
• Evaluation of capstone experiences, senior theses, exhibitions, portfolios, performances, research projects, presentations, dissertations, or oral defenses
• Scores and pass rates on appropriate licensure/certification exams (e.g., Praxis, NLN) or other published tests
• Employer and internship supervisor ratings of student skills
• Score gains between entry and exit on published or local tests or writing samples

INDIRECT MEASURES (Academic)
• Course/assignment grades
• Number of student hours spent at intellectual or cultural activities related to the course
• Focus groups/exit interviews with students, faculty/staff
• Registration or course enrollment information
• Placement rates of graduates into appropriate career positions and starting salaries
• Alumni, employer, and student surveys (including satisfaction surveys)
• Quality/reputation of graduate and four-year programs into which alumni are accepted
• Length of time to degree

DIRECT MEASURES (Services)
• Benchmarking
• Discussions
• Document analysis
• Evaluations
• Government standards
• Professional standards

INDIRECT MEASURES (Services)
• Activity volume
• Benchmarking
• Efficiency
• Focus groups
• Satisfaction
• Service quality


Developing Scoring Criteria (Rubrics)3

A rubric is a scoring guide used to assess performance against a set of criteria. At a minimum, it is a

list of the components you are looking for when you evaluate an assignment. At its most advanced, it

is a tool that divides an assignment into its component parts, and provides explicit expectations of

acceptable and unacceptable levels of performance for each component.

Types of Scoring Criteria (Rubrics)

Checklists

Basic Rating Scales

Holistic Rating Scales

Steps for Creating An Analytic Rating Scale (Rubric) from Scratch

Steps for Adapting An Existing Analytic Rating Scale (Rubric)

Uses of Rating Scales (Rubrics)

Other Sample Rating Scales (Rubrics)

Resources

Types of Rubrics

1 - Checklists, the least complex form of scoring system, are simple lists indicating the presence,

NOT the quality, of the elements. Therefore, checklists are NOT frequently used in higher education

for program-level assessment. But faculty may find them useful for scoring and giving feedback on

minor student assignments or practice/drafts of assignments.

3 Adapted from the sources below:

Allen, Mary. (January 2006). Assessment Workshop Material. California State University, Bakersfield. Retrieved DATE from http://www.csub.edu/TLC/options/resources/handouts/AllenWorkshopHandoutJan06.pdf

Creating and Using Rubrics. (March 2008). University of Hawai’i at Manoa. Retrieved April 5, 2010 from http://www.uhm.hawaii.edu/assessment/howto/rubrics.htm

Creating an Original Rubric. Teaching Methods and Management, TeacherVision. Retrieved April 7, 2010 from http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4523.html?detoured=1

Danielson, Cherry and Naser, Curtis. (November 7, 2009). Developing Effective Rubrics: A New Tool in Your Assessment Toolbox. Workshop at the annual NEAIR conference.

How to Design Rubrics. Assessment for Learning, Curriculum Corporation. Retrieved April 7, 2010 from http://www.assessmentforlearning.edu.au/professional_learning/success_criteria_and_rubrics/success_design_rubrics.html

Mertler, Craig A. (2001). Designing Scoring Rubrics for Your Classroom. Practical Assessment, Research & Evaluation. Retrieved April 7, 2010 from http://pareonline.net/getvn.asp?v=7&n=25

Mueller, Jon. (2001). Rubrics. Authentic Assessment Toolbox. Retrieved April 12, 2010 from http://jonathan.mueller.faculty.noctrl.edu/toolbox/rubrics.htm

Rubric (academic). (2010, March 3). In Wikipedia, the free encyclopedia. Retrieved April 2010 from http://en.wikipedia.org/wiki/Rubric_(academic)

Tierney, Robin and Simon, Marielle. (2004). What’s Still Wrong With Rubrics: Focusing on the Consistency of Performance Criteria Across Scale Levels. Practical Assessment, Research & Evaluation, 9(2). Retrieved April 13, 2010 from http://PAREonline.net/getvn.asp?v=9&n=2


Example 1: Critical Thinking Checklist

The student…

__ Accurately interprets evidence, statements, graphics, questions, etc.

__ Identifies the salient arguments (reasons and claims)

__ Analyzes and evaluates major alternative points of view

__ Draws warranted, judicious, non-fallacious conclusions

__ Justifies key results and procedures, explains assumptions and reasons

__ Fair-mindedly follows where evidence and reasons lead

Example 2: Presentation Checklist

The student…

__ engaged audience

__ used an academic or consultative ASL register

__ used adequate ASL syntactic and semantic features

__ cited references adequately in ASL

__ stayed within allotted time

__ managed PowerPoint presentation technology smoothly

2 - Basic Rating Scales are checklists of criteria that evaluate the quality of elements and include a

scoring system. The main drawback with rating scales is that the meaning of the numeric ratings can

be vague. Without descriptors for the ratings, the raters must make a judgment based on their

perception of the meanings of the terms. For the same presentation, one rater might think a student

rated “good” and another rater might feel the same student was "marginal."

Example: Basic Rating Scale for Critical Thinking

Rating scale: Excellent (5), Good (4), Fair (3), Marginal (2), Inadequate (1)

Criteria rated on the scale above:
• Accurately interprets evidence, statements, questions, etc.
• Identifies the salient arguments (reasons and claims)
• Analyzes and evaluates major alternative points of view
• Draws warranted, judicious, non-fallacious conclusions
• Justifies key results and procedures, explains assumptions and reasons
• Fair-mindedly follows where evidence and reasons lead


3 - Holistic Rating Scales use a short narrative of characteristics to award a single score based on an

overall impression of a student's performance on a task. A drawback to using holistic rating scales is

that they do not provide specific areas of strengths and weaknesses and therefore are less useful to

help you focus your improvement efforts.

Use a holistic rating scale when the projects to be assessed will vary greatly (e.g., independent study

projects submitted in a capstone course) or when the number of assignments to be assessed is

significant (e.g., reviewing all the essays from applicants to determine who will need developmental

courses).

Example: Holistic Rating Scale for Critical Thinking Scoring

Peter A. Facione, Noreen C. Facione, and Measured Reasons LLC. (2009), The Holistic Critical

Thinking Scoring Rubric: A Tool for Developing and Evaluating Critical Thinking. Retrieved April

12, 2010 from Holistic Critical Thinking Scoring Rubric

4 - Analytic Rating Scales are rubrics that include explicit performance expectations for each

possible rating, for each criterion. Analytic rating scales are especially appropriate for complex

learning tasks with multiple criteria.

Evaluate carefully whether this is the most appropriate tool for your assessment needs. Analytic rating scales can provide more detailed feedback on student performance and more consistent scoring among raters, but the disadvantage is that they can be time-consuming to develop and apply.

Results can be aggregated to provide detailed information on strengths and weaknesses of a program.


Example: Critical Thinking Portion of the Gallaudet University Rubric for Assessing Written English

Rating scale: Pre-College Skills (1), Emerging Skills (2), Developing Skills (3), Mastering Skills (4), Exemplary Skills (5)

IDEAS and CRITICAL THINKING

Central point:
1. Assignment lacks a central point. 2. Displays central point, although not clearly developed. 3. Displays adequately-developed central point. 4. Displays clear, well-developed central point. 5. Central point is uniquely displayed and developed.

Development of ideas:
1. Displays no real development of ideas. 2. Develops ideas superficially or inconsistently. 3. Develops ideas with some consistency and depth. 4. Displays insight and thorough development of ideas. 5. Ideas are uniquely developed.

Support for ideas:
1. Lacks convincing support for ideas. 2. Provides weak support for main ideas. 3. Develops adequate support for main ideas. 4. Develops consistently strong support for main ideas. 5. Support for main ideas is uniquely accomplished.

Critical manipulation of ideas:
1. Includes no analysis, synthesis, interpretation, and/or other critical manipulation of ideas. 2. Includes little analysis, synthesis, interpretation, and/or other critical manipulation of ideas. 3. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas in most parts of the assignment. 4. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas throughout. 5. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas throughout, leading to an overall sense that the piece could withstand critical analysis by experts in the discipline.

Integration of ideas:
1. Demonstrates no real integration of ideas (the author’s or the ideas of others) to make meaning. 2. Begins to integrate ideas (the author’s or the ideas of others) to make meaning. 3. Displays some skill at integrating ideas (the author’s or the ideas of others) to make meaning. 4. Is adept at integrating ideas (the author’s or the ideas of others) to make meaning. 5. Integration of ideas (the author’s or the ideas of others) is accomplished in novel ways.


Steps for Creating an Analytic Rating Scale (Rubric) from Scratch

There are different ways to approach building an analytic rating scale: logical or organic. For both the

logical and the organic model, steps 1-3 are the same.

Steps 1 – 3: Logical AND Organic Method

Determine the Best Tool

1. Identify what is being assessed (e.g., ability to apply theory); the focus here is on program-level learning assessment.

Determine first whether an analytic rating scale is the most appropriate way of scoring

the performance and/or product.

An analytic rating scale is probably a good choice

a. if there are multiple aspects of the product or process to be considered

b. if a basic rating scale or holistic rating scale cannot provide the breadth of

assessment you need.

Building the Shell

The Rows

2. Identify what is being assessed (e.g., ability to apply theory), then:
• Specify the skills, knowledge, and/or behaviors that you will be looking for.
• Limit the characteristics to those that are most important to the assessment.

The Columns

3. Develop a rating scale with levels of mastery that are meaningful.

Tip: Adding numbers to the ratings can make scoring easier. However, if you plan to also use the rating scale for course-level assessment and grading, a meaning must be attached to each score. For example, what is the minimum score that would be considered acceptable for a “C”?


Possible rating-scale descriptors include:
* Exemplary, Proficient, Marginal, Unacceptable
* Advanced, High, Intermediate, Novice
* Beginning, Developing, Accomplished, Exemplary
* Outstanding, Good, Satisfactory, Unsatisfactory

Step 4: Writing the Performance Descriptors in the Cells

The descriptors are the critical piece of an analytic rating scale. To produce useful, valid scores, the attributes in your descriptors must be consistent across the ratings and easy to read. See the examples of inconsistent performance characteristics and suggested corrections below.

4. Use either the logical or the organic method to write the descriptions for each criterion at each level of mastery.

Logical Method: For each criterion, at each rating level, brainstorm a list of the performance characteristics*. Each should be mutually exclusive.

Organic Method: Have experts sort sample assignments into piles labeled by ratings (e.g., Outstanding, Good, Satisfactory, Unsatisfactory). Based on the documents in the piles, determine the performance characteristics* that distinguish the assignments.


Tips: Keep the list of characteristics manageable by including only critical evaluative components; extremely long, overly detailed lists make a rating scale hard to use.

In addition to keeping descriptions brief, the language should be consistent. Below are more ideas for keeping descriptors consistent:

1. Refer to specific aspects of the performance for each level.
   Level 3: analyzes the effect of | Level 2: describes the effects of | Level 1: lists the effects of

2. Keep the aspects of a performance the same across the levels, but add adjectives or adverbial phrases to show the qualitative difference.
   Level 3: provides a complex explanation | Level 2: provides a detailed explanation | Level 1: provides a limited explanation
   Level 3: shows a comprehensive knowledge | Level 2: shows a sound knowledge | Level 1: shows a basic knowledge

3. Refer to the degree of assistance needed by the student to complete the task.
   Level 3: uses correctly and independently | Level 2: uses with occasional peer or teacher assistance | Level 1: uses only with teacher guidance

4. Use numeric references to show quantitative differences among levels. A word of warning: numeric references on their own can be misleading. They are best teamed with a qualitative reference (e.g., three appropriate and relevant examples) to avoid rewarding quantity at the expense of quality.
   Level 3: provides three appropriate examples | Level 2: provides two appropriate examples | Level 1: provides an appropriate example
   Level 3: uses several relevant strategies | Level 2: uses some relevant strategies | Level 1: uses few or no relevant strategies

Steps 5-6: Logical AND Organic Methods

5. Test the rating scale before making it official. Hold a norming* session: ask colleagues who were not involved in the rating scale’s development to apply it to some products or behaviors, and revise as needed to eliminate ambiguities, confusion, and/or inconsistencies. You might also let students self-assess using the rating scale. (One simple consistency check is sketched after these steps.)

*See the University of Hawaii’s “Part 6. Scoring Rubric Group Orientation and Calibration” for directions for this process.

6. Review and revise.
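One simple consistency check during a norming session is to compare how often two raters assign the same rating to the same set of sample assignments. The sketch below (in Python) is illustrative only: the rater scores and the 80% working threshold are hypothetical, not an OAQ requirement.

    # Illustrative only: hypothetical ratings from two raters who applied the same
    # 5-point analytic rating scale to ten sample assignments during a norming session.
    rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
    rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

    # Exact agreement: the proportion of assignments given the same rating by both raters.
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    agreement = matches / len(rater_a)
    print(f"Exact agreement: {agreement:.0%}")  # 80% for this made-up data

    # Hypothetical working threshold: low agreement suggests the performance
    # descriptors are ambiguous and the rubric (or the norming) needs another pass.
    if agreement < 0.8:
        print("Revise the descriptors and re-norm before making the rubric official.")

Simple percent agreement is only a rough indicator; the point of the check is to surface ambiguous descriptors before the rubric is adopted.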


Steps for Adapting an Existing Analytic Rating Scale (Rubric)

1. Evaluate the rating scale. Ask yourself:

Does the rating scale relate to all or most of the outcome(s) I need to assess?

Does it address anything extraneous?

2. Adjust the rating scale to suit your specific needs.

Add missing criteria

Delete extraneous criteria

Adapt the rating scale

Edit the performance descriptors

3. Test the rating scale.

4. Review and revise again, if necessary.

Uses of Rating Scales (Rubrics)

Use rating scales for program-level assessment to see trends in strengths and weaknesses of groups

of students.

Examples:

To evaluate a holistic project (e.g., theses, exhibitions, research projects) in a capstone course that pulls together all that students have learned in the program.

Supervisors might use a rating scale developed by the program to evaluate the field

experience of students and provide the feedback to both the student and the program.

Aggregate the scores of a rating scale used to evaluate a course-level assignment (a minimal aggregation sketch follows this list). For example, the Biology department decides to develop a rating scale to evaluate students’ reports from 300- and 400-level sections. The professors use the scores to help determine the students’ grades and provide students with feedback for improvement. The scores are also given to the department’s Assessment Coordinator, who summarizes them to determine how well the program is meeting its student learning outcome, "Make appropriate inferences and deductions from biological information."
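The aggregation step in the Biology example can be as simple as averaging per-criterion rubric scores across all scored reports so that trends show up at the program level. The sketch below is a minimal illustration; the criterion names and scores are hypothetical, not taken from any GU rubric.

    from statistics import mean

    # Hypothetical rubric scores (1-5) per criterion for lab reports collected
    # from 300- and 400-level sections.
    reports = [
        {"inferences": 4, "evidence": 3, "organization": 5},
        {"inferences": 2, "evidence": 3, "organization": 4},
        {"inferences": 3, "evidence": 2, "organization": 4},
        {"inferences": 5, "evidence": 4, "organization": 5},
    ]

    # Aggregate by criterion rather than by student: program-level assessment looks
    # for group trends (strengths and weaknesses), not individual grades.
    for criterion in reports[0]:
        scores = [report[criterion] for report in reports]
        at_or_above_3 = sum(score >= 3 for score in scores)
        print(f"{criterion}: mean {mean(scores):.2f}, "
              f"{at_or_above_3}/{len(scores)} reports scored 3 or higher")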

For more information on using course-level assessment to provide feedback to students and to

determine grades, see University of Hawaii’s “Part 7. Suggestions for Using Rubrics in Courses”

and the section on Converting Rubric Scores to Grades in Craig A. Mertler’s “Designing Scoring

Rubrics for Your Classroom”.

Sample Rating Scales (Rubrics)

Rubric Bank (University of Hawai’i at Manoa)

Sample Rubrics by type (Winona State University)


Setting Performance Targets (Criteria for Success)4

Setting a target is not about guessing what you can achieve. It involves knowing where you are now, knowing what you are trying to achieve, and determining the challenging but realistic amount of improvement needed to get there.

Set Rigorous But Achievable Targets


Definition

Targets: the desired level of performance you want to see, as measured by indicators, that represents success at achieving your outcome.

Stretch Target: a challenging but realistic target that you should be able to reach with some effort.

Characteristics

• Specific: what you plan to achieve is clear
• Measurable: there is a way to determine whether or not you have achieved it
• Achievable
• Rigorous
• Timeframe is specified

Step 1: Define where you are now

Method 1 — Use Historical Data

It can be helpful to use data that your unit has already gathered to establish a baseline, or starting

point, for your target.

4 Adapted from the sources below:

Harvard ManageMentor. (n.d.). Gathering Performance Data. Retrieved October 21, 2009 from http://ww3.harvardbusiness.org/corporate/demos/hmm10/performance_measurement/set_targets.html

Phillips, L., Gray, R., Malinovsky, A., and Rosowsky, M. (April 2009). The Assessment Report: Documenting Findings and Using Results to Drive Improvement. Texas A&M University. Retrieved October 12, 2009 from http://assessment.tamu.edu/wkshp_pres/AssessReport_UsingResults.pdf

PMMI Project. (August 2005). Target Setting — A Practical Guide. Retrieved October 21, 2009 from http://www.idea.gov.uk/idk/core/page.do?pageId=845670

PMMI Project. (August 2005). Target Setting Checklist. Retrieved October 21, 2009 from http://www.idea.gov.uk/idk/core/page.do?pageId=845670


Example: Below are examples of targets that could have been set based on historical data (a minimal target-check sketch follows).

• All students are expected to achieve a Proficient level on at least four of the five categories of the Child Study Evaluation Rubric.
• 80% of graduating students will score 20 (out of 25) or higher on the Organization criterion of the English rubric.
• 90% of students will achieve a score of at least 3.5 (out of 5) in all seven of the subscale criterion areas on the Lab Report Rubric by their junior year.
• Students entering their senior year will achieve a mean score at or above that of peer institutions for 80% of the discipline’s content test subscales.
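A target like the second example above can be checked directly against the scored results once they are collected. In the sketch below, the 20-point cutoff and the 80% criterion come from that example; the individual scores are hypothetical.

    # Hypothetical Organization-criterion scores (out of 25) for graduating students.
    scores = [22, 18, 25, 21, 19, 24, 23, 20, 17, 22]

    cutoff = 20         # minimum score named in the target
    target_rate = 0.80  # proportion of students the target expects at or above the cutoff

    rate = sum(score >= cutoff for score in scores) / len(scores)
    status = "met" if rate >= target_rate else "not met"
    print(f"{rate:.0%} of students scored {cutoff} or higher; target {status}.")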

NOTE: It's important to carefully evaluate the historical data you're considering using as your target baseline. Look at the data for a particular period and see whether there has been an abrupt change in performance. If there has been, investigate the reasons for the change. If there were unusual circumstances during that period (such as a recession), the figure may not be a good reference point, and you may want to consider using data from a different period to inform your target.

Method 2 — Use External Sources

When you do not have historical data, you might consider using information from outside data

sources to benchmark, or compare your performance data with those of other comparable universities

/ departments / programs (an accrediting agency’s standards, IPEDS, etc.). Then set targets that seem

reasonable in light of the benchmarking information you've gathered.

Example:

Target: The ABC Association wants member institutions to have at least an 80% pass rate for graduates taking licensing examinations.

Step 2: Define what you want to achieve and by when

Remember, you want to have a delicate balance between challenging and realistic. A stretch target is

intended to "raise the bar" enough to inspire your people. But it also must be set at a level at which

your direct reports have the skills, knowledge, and company resources required to meet the target.

“Stretch” targets usually require significant effort to achieve. Ask yourself how much of a stretch will

motivate without causing people to become overwhelmed or demoralized.


Example:

Scenario: 80% of graduating seniors currently can interpret and analyze a text using different theoretical approaches.

Possible targets for next year:
• An increase to 82% might be a Minimal Target
• An increase to 85% might be a Moderate Target
• An increase to 88% might be a Stretch Target
• An increase to 100% might be an Unrealistic Target

Scenario: 85% of students can apply information literacy concepts to library searches after using HelpDesk services.

Possible targets for next year:
• An increase to 87% might be a Minimal Target
• An increase to 90% might be a Moderate Target
• An increase to 92% might be a Stretch Target
• An increase to 100% might be an Unrealistic Target


Step 3: Things to consider

Timeline: Be clear about how much time you will need to achieve your target. Will you need to set intermediate targets?

Example:
Scenario: Enrollment in your program has increased an average of 2% over the past three years.
Possible Target: Increase enrollment 3% annually so that we have 70 students by 2015.

Resources: Do you have everything you will need (equipment, personnel, processes, workspace, etc.) to achieve the target?

How can it be achieved?: Can it be achieved by working harder, adding resources, improving a process, or investing in technology?


Findings – Summarize Data

Information is the result of data processing; said another way, data only become information that you can use to make decisions after they have been processed.

It is hard to understand data in bulk, so it is best if the data are summarized in the results. The benefit of summarization is that it not only reduces the amount of data you need to digest but also increases your ability to interpret the data.

Tips to Summarize Data

Organize the Data

If there is a small amount of data, it can be prepared by hand. Otherwise, the results should be entered into a computer for easier summarizing and analyzing.

If the assessment tool uses descriptive instead of numeric categories, it will be necessary to change the ratings or responses into numbers (coding) before entering them into the computer. This will make them easier to summarize and analyze.

Example:

Exemplary = 4

Can express why psychology is a science = C1

Notes on coding. Keep careful notes explaining the meaning of each code to minimize confusion.

They will be invaluable if anyone decides to repeat the assessment later.
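A minimal sketch of the coding step, assuming a hypothetical batch of descriptive rubric ratings; the mapping doubles as the code book described in the note above.

    # Code book: keep these notes with the data so the coding can be repeated later.
    # Ratings are coded from 4 (Exemplary) down to 1 (Unacceptable).
    RATING_CODES = {
        "Exemplary": 4,
        "Good": 3,
        "Minimally Acceptable": 2,
        "Unacceptable": 1,
    }

    # Hypothetical descriptive ratings as they might appear on paper rubrics.
    raw_ratings = ["Exemplary", "Good", "Good", "Unacceptable", "Minimally Acceptable"]

    # Convert the descriptive categories into numbers before entering them for analysis.
    coded = [RATING_CODES[rating] for rating in raw_ratings]
    print(coded)  # [4, 3, 3, 1, 2]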

Summarize the Data - Clean the Data

Depending on how the data were collected, some cleaning may be needed to make sure the data are appropriate and accurate before they are summarized and analyzed. For example, assessment results from a paper-based survey or rubric may include some unclear or inaccurate responses that you will need to make decisions about (e.g., correcting them or eliminating them from the sample).

Some types of responses that may need to be addressed before summarizing data:

1. Inapplicable responses

(e.g., male students answered questions in a section for female students only)

2. Inappropriate multiple responses

(e.g., two answers checked for one non-multiple choice question)

3. Responses outside given category:

(e.g., student wrote in answer because they didn't like choices provided)

4. "Other" responses that really aren't

(i.e., student checked “Other — Please Specify” but their comment matched one of the

answers provided)


Make a List (& check it twice)

1. List the raw data.
2. Remove identifying information, such as names, to ensure confidentiality.
3. Compare the list to the source information. This will help in finding and correcting any errors. Once the list is accurate, proceed to the next step.
4. Tally the results or responses to get a quick picture.

4 = Exemplary   3 = Good   2 = Minimally Acceptable   1 = Unacceptable

Example: Tally of raw data from the list above
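A tally like the one described in step 4 can be produced with a quick count of the coded ratings; the data below are hypothetical and continue the coding example above.

    from collections import Counter

    # Hypothetical coded ratings for one SLO (4 = Exemplary ... 1 = Unacceptable).
    coded = [4, 3, 3, 1, 2, 4, 3, 2, 3, 4, 2, 3]

    # Tally the responses to get a quick picture of the distribution.
    tally = Counter(coded)
    for rating in sorted(tally, reverse=True):
        print(f"Rating {rating}: {tally[rating]} students")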


Chart Your Results in a way that is meaningful: it is often helpful to use tables, line graphs, or bar charts to get a clear look at the big picture. The best choice depends on the kind of questions the assessment needs to answer (see the two examples below, which show the same data summarized in two ways).

Tips

1. AVOID complex statistics

2. Use round numbers

3. Create simple charts, graphs, lists (They are easier to read and understand.)

4. Sort results from highest to lowest [optional]

5. Percentages may be more meaningful than averages

6. Show trend data if assessing over time

Example 1: Table using the data from the tally above, with percentages added and a column showing the total percentage of students who were successful in the program (adding Exemplary + Good + Minimally Acceptable).
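A sketch of the summary described in Example 1, using the hypothetical tally above: each rating is converted to a percentage, and a combined "successful" figure adds the Exemplary, Good, and Minimally Acceptable counts.

    # Hypothetical tally counts per rating (4 = Exemplary ... 1 = Unacceptable).
    tally = {4: 3, 3: 5, 2: 3, 1: 1}
    labels = {4: "Exemplary", 3: "Good", 2: "Minimally Acceptable", 1: "Unacceptable"}
    total = sum(tally.values())

    # Percentages are often more meaningful than averages for this kind of summary.
    for rating in sorted(tally, reverse=True):
        print(f"{labels[rating]:<21} {tally[rating]:>2}  {tally[rating] / total:.0%}")

    # "Successful" combines Exemplary + Good + Minimally Acceptable, as in the caption.
    successful = sum(count for rating, count in tally.items() if rating >= 2)
    print(f"Successful overall: {successful / total:.0%}")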


Example 2: Line chart using the data from the tally above, with the target the program hopes to achieve.

Find the Story in the Data [Analyze Data]

Data summaries make it easier for you to see meaning, but by themselves they do not reveal the whole story. You also need to include an explicit narrative interpretation of what you saw in the data… and what you plan to do about it.

What do the data summaries reveal about students' learning? (identify meaningful information)

What are you going to do about what you have learned?

When, where, and how are you going to do it?

Additional Resource: More examples of summarized data are in the attached document, including a

thematic analysis of qualitative data.


Findings – Analyze Data5

This step involves your interpretation (analysis) of the information from your data summaries.

Definition

Analysis: interpretation of the program/unit’s findings: what the strengths are and in which areas improvement is needed.

Characteristics*

Analysis is:
• Short and simple = easy to read and understand
  o Make use of simple charts, graphs, lists
• Tells a story – NOT perfect
• Appropriate to the stakeholder
  o Different versions available; only give them what they need. For example, the Board might prefer lots of charts with brief explanations, while the Assessment Council might want charts and in-depth explanations.

*Characteristics adapted from: Suskie, L. (December, 2008). Understanding and Using Assessment

Results. Paper presented at 2008 Middle States Commission on Higher Education Annual

Conference.

Example:

Data Summary: Students improved in all areas on the post-test, although the overall target was not met: 98% (15% increase) in C1 – target exceeded; still below target in the other categories: C2 = 33% (310% increase); C3 = 56% (523% increase); C4 = 65% (67% increase). [Target Not Met]

Analysis: The initial analyses indicate quite positive student learning outcomes on each of the

selected pre-test post-test items. In some test items, improvement between pre-test performance and

the final exam was quite dramatic. Here are a few selected results:

Item #1 represents students' highest achievement by far on the pre-test, as the majority (85%) of

students was able to answer this item accurately. Just the same, final exam scores still increased

(15%), showing modest yet positive student learning outcomes on this item.

Item #2: Despite having grown up with the Internet and computers, the great majority of students

could not accurately answer this pre-test item that relies on basic knowledge of how computer

databases - from web search engines to library research databases and online catalogs - are searched.

There was a dramatic increase of more than 300% in the post-test scores in this category.

Item #3: This item obviously poses great challenges to students entering the course, as the great majority of them are not able to identify a standard citation on the first day of class. Knowing how to

interpret citations leads directly to knowing how to access the described material. From these pre-test

results, we can posit that most students would not only not know what they were looking at when

retrieving such citations from a database, print bibliography, or the free web, but also would not know

the next appropriate steps to take to attempt independently to find the item as these vary somewhat by

citation type.
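For reference, the percent increases quoted in the summary are relative gains over the pre-test rate. A minimal check using the Item #1 figures from this example (85% correct on the pre-test, 98% on the final exam):

    # Item #1 in the example: 85% answered correctly on the pre-test, 98% on the final exam.
    pre, post = 85.0, 98.0

    # Relative (percent) increase over the pre-test rate.
    increase = (post - pre) / pre * 100
    print(f"Relative increase: {increase:.0f}%")  # about 15%, matching the summary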

5 Example adapted from: Iowa State University. Library 160: Measurement of Outcomes and Results. Retrieved June 5, 2009 from http://www.lib.iastate.edu/cfora/generic.cfm?cat=gen_libinfo&navid=11078&parent=3038&disp=classic


Findings – Results6

Using Assessment Results: This step involves making recommendations using your analysis of the

data to make program changes that will improve student learning.

Definition

Recommendations: actions taken/to be taken to improve student learning that are clearly supported by the data — what will be done, who will do it, how it will be assessed, and by when.

*Once the data are analyzed the unit should be able to see whether it has achieved its intended

outcome.

Where the criterion is met or surpassed, the unit may rightly conclude that no change is needed and

report, “No action required.” If, when the same outcome is assessed the next year, the results are

repeated and the staff can ensure the criterion was met, the unit should consider assessing a different

outcome in the following cycle.

In the case where the results indicate the criterion level was not met, the unit needs to evaluate its

results further to determine what needs to be done to improve the likelihood of achieving the

outcome.

**AREAS to look at when assessment results are disappointing…

1. SLOs

Are goals inappropriate or overly ambitious?

Do they need to be clarified?

Do you have too many?

2. Curriculum

Does curriculum adequately address each SLO?

3. Pedagogy

Are you teaching in the way students learn best?

4. Assessment

Are they poorly written and misinterpreted?

Do they match your SLOs?

Are they too difficult for most responsible students?

5. Student

Is poor performance really the student’s fault?

6 * Assessing the Effectiveness of Non-Instructional Support Offices

**Adapted from: Suskie, L. (December, 2008). Understanding and Using Assessment Results. Paper presented at 2008 Middle States Commission on Higher Education Annual Conference.

***Example adapted from: Iowa State University. Library 160: Measurement of Outcomes and Results. Retrieved June 5, 2009

from http://www.lib.iastate.edu/cfora/generic.cfm?cat=gen_libinfo&navid=11078&parent=3038&disp=classic


Recommendations

Develop recommendations to improve student learning outcomes based on your data analysis, which identified the strengths and weaknesses of your program/unit. You should not only create a plan to improve on your weaknesses but also build on your strengths to make them better. (Remember to build into the plan the periodic re-assessing of your strengths to make sure you’re not slipping.)

Example:

***Results of the pre-test have documented conclusively that students entering the class are far from

"knowing it all" - in fact, the scores are typically below 50% accurate. These pre-test data document

the great need for the Library 101 course, despite claims of some students, and form the foundation

for subsequent student learning throughout a student’s academic career. Even though the final exam

shows a dramatic increase in student learning, several items still require improvement: Item 2 , an

achievement rate of just 32.8% is not adequate. Course administrators will investigate why more

students are not learning or retaining this specific item; Item 3, there is a great positive jump in

student learning outcomes seen in the final exam percentage correct, but again a success rate of only

56.1% is not adequate. This item will be addressed by course administrators, in the effort to increase

the overall percentage of student learning and retention on this item.