
Assessment Handbook for Academic Programs

Assessing Student Learning in Degree Programs

Prepared by the Office of Analysis, Assessment, & Accreditation

Updated January 2021


What is assessment?

OVERVIEW OF ASSESSMENT

Assessment tells us what our students are learning and how well they are learning that material. Assessment is an ongoing process in which faculty determine what knowledge and skills students should be learning. Part of the assessment process is to make deliberate, measurable statements about this student learning. These statements are commonly referred to as student learning outcomes.

The assessment process also involves developing and implementing a deliberate plan to determine how students’ learning relates to these learning outcomes. A well-developed assessment plan includes a variety of assessment methods for each student learning outcome, careful collection and interpretation of the assessment data gleaned from these methods, and the use of this information to improve student learning.

Why engage in assessment?

Assessment is all about improving student learning and creating a better educational environment; it is not just about keeping accreditation bodies (i.e., NWCCU) happy. Yes, accreditation agencies require schools to engage in assessment activities, but they do so for the very reason that schools themselves should want to be involved: assessment improves student learning. Indeed, assessment benefits everyone. It is a best practice in higher education AND improves our students’ learning. Utah State University’s engagement in the assessment of student learning outcomes will make us a stronger and better institution.

Who is responsible for assessment?

Assessment is not the sole responsibility of any one faculty member, administrator, or committee; it is shared by the administration, faculty, and staff at Utah State University (USU). Program/department-level assessment, in particular, is the responsibility of all faculty, administrators, and staff.

When do we “do” assessment?

Assessment is an ongoing process, which means that degree programs should be engaged in assessment throughout the academic year. This doesn’t mean that faculty need to meet weekly or crunch assessment data daily. When we say assessment is an ongoing process, we mean that in any given academic year, degree programs should be reviewing/revising student learning outcomes statements as needed, collecting and/or analyzing assessment data to make inferences about student learning in relation to each learning outcome, and using that information to make adjustments to the program to increase student learning.

How do we evaluate the effectiveness of assessment?

The Office of Analysis, Assessment, and Accreditation (AAA) is responsible for reviewing current program assessment efforts and providing feedback to departments on an annual basis. The rubrics used for this process are available at the end of this handbook.


SUMMARY ASSESSMENT CHECKLIST FOR DEPARTMENTS

I. Assign faculty to oversee assessment efforts for individual programs or the department as a whole

a. This is often done at the direction of the department head.

b. Staff cannot be the primary individuals responsible for program assessment.

II. Review existing assessment activities and efforts

a. If your department has already begun assessment, review the selected assessable outcomes, current assessment plans, implementation strategies, outcomes data (results), and changes made as a result of prior assessment.

b. If your department has not yet developed and deployed an assessment plan, begin with the selection of assessable student learning outcomes.

III. Using this handbook and the assessment rubrics, determine what aspects of the current program assessment process need to be prioritized.

a. Our office (AAA) is always happy to meet with faculty to review current efforts and provide feedback on best practices.

IV. Ensure all assessment activities, including faculty feedback on existing efforts, are properly documented

a. This includes annual updates for the following: data collected on assessable student learning outcomes; any revisions to the assessment plan; documentation showing how collected data have been used by faculty to inform decisions regarding the program; and documentation showing the results of prior changes made in response to collected data, including their effectiveness.

V. Schedule a time with the AAA office to review the prior year’s assessment efforts

a. Our office reviews program assessment annually in the first quarter. Following this review, we publish the results in an online dashboard, supplemented with a report.

b. You should anticipate scheduling a meeting with our office sometime after April 30th to discuss the prior year’s efforts.


OUTCOMES AND ASSESSMENT TERMINOLOGY

This handbook uses specialized terminology related to student learning outcomes and assessment. A brief glossary of terms is provided below for reference.

Assessment – the systematic process of investigating student learning by gathering, analyzing, and using information about student learning outcomes.

Assessment Method – any technique or activity used to investigate what students are learning or how well they are learning it.

Assessment Plan – the proposed methods and timeline for assessment-related activities in a given course (e.g., when are you going to check what/how well the students are learning, and how are you going to do that?).

Benchmarking – comparing performance to a comparison sample or group (e.g., peers). A benchmark can also be thought of as the minimally acceptable level of performance for an educational outcome.

Course-Level Assessment – this type of assessment focuses on what students are learning in a certain course within a degree program. Course-level assessment can focus on either a single section of a course or all sections of the same course. Course-level assessment data can be used as one source of information for degree program-level assessment.

Curriculum Map – a matrix representation of learning outcomes that shows where they are taught within the program/department.
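The matrix form of a curriculum map can be sketched in code. The brief Python example below is purely illustrative (not from the handbook); the course numbers and outcome names are hypothetical, and the "I/R/M" (introduced/reinforced/mastered) markers are a common mapping convention assumed here:

```python
# Illustrative sketch (not from the handbook): a curriculum map as a
# matrix of program outcomes x courses.  "I" = introduced,
# "R" = reinforced, "M" = mastered -- a common convention, assumed here.
# Course numbers and outcome names are hypothetical.
curriculum_map = {
    "SLO 1: summarize major theories": {"CRS 1010": "I", "CRS 3200": "R", "CRS 4900": "M"},
    "SLO 2: conduct sound research":   {"CRS 2400": "I", "CRS 4900": "M"},
}

def courses_covering(slo: str) -> list[str]:
    """List the courses in which a given program outcome is taught."""
    return sorted(curriculum_map[slo])

print(courses_covering("SLO 2: conduct sound research"))
```

A map like this makes coverage gaps visible at a glance: an outcome with no "M" entry, for instance, is never brought to mastery anywhere in the program.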

Degree Program/department Student Learning Outcomes (often abbreviated as SLOs) – what the program/department faculty intend students to know, be able to do, or think upon completion of a degree program.

Degree Program – any major course of study that results in a degree from a program or department.

Degree Program-Level Assessment – the evaluation of degree program-level student learning outcomes. The results of this assessment are used to make informed changes to the program/department to improve student learning and success.

Direct Measures – processes used to directly evaluate student work. They provide tangible, self-explanatory, and compelling evidence of student learning. Examples include: exam questions, portfolios, performances, projects, reflective essays, computer programs, and observations. These are often referred to as Signature Assignments.

Embedded Assessment – in this type of assessment, faculty carefully construct an assignment (with a corresponding scoring rubric) that specifically measures a certain learning outcome.

Formative Assessment – assessment that occurs during a learning experience. This type of assessment allows faculty to make adjustments to the learning experience to improve student learning. Examples include a midterm exam during a course, focus groups at the midpoint of a degree program, etc.


Grading – the process of evaluating student performance. Grading can serve as a basis for assessment when it follows a rubric that defines different levels of student achievement.

Indirect Measures – processes that provide evidence that students are probably attaining learning goals. These require inference between the student’s action and the direct evaluation of that action. Examples include: course grades, student ratings, satisfaction surveys, placement rates, retention and graduation rates, and honors and awards earned by students and alumni.

Rubric – a scoring and instruction tool used to assess student performance using a task-specific range or set of criteria. To measure student performance against this pre-determined set of criteria, a rubric contains the essential criteria for the task and levels of performance (i.e., from poor to excellent) for each criterion.

Signature Assignment – an assignment or exam that best displays the knowledge or skills essential to the objectives of a course. Other coursework should build toward the completion of the course ‘signature’ assignment. Think of a signature assignment as a milestone in the student’s progress toward fulfilling the program/department outcomes. Ideally, signature assignments are the types of works that students and faculty would most like to present to others as evidence of accomplishment.

Summative Assessment – assessment that occurs at the end of a learning experience (e.g., a comprehensive exam at the end of a degree program, etc.).

Uses for Improvement – usually seen as the third stage of the assessment cycle. During the “uses for improvement” stage, faculty use what comparing assessment data to student learning outcomes has revealed in order to make changes that improve student learning in the degree program.


THE ASSESSMENT CYCLE

The assessment cycle is best conceptualized as an ongoing process that investigates student learning in a degree program. Since assessment is part of making continuing improvements to the quality of learning in a degree program, this assessment cycle should be an ongoing part of program/department functioning. Here is a brief summary of the different phases of the assessment cycle:

PLANNING PHASE (Assessable Outcomes/Planning) – This is often seen as the beginning phase of assessment. During this phase, learning outcome statements are developed or revised. The planning phase also involves making decisions about the specific assessment-related activities that need to be completed. Establishing timelines and assigning specific personnel to these activities are also common aspects of the planning phase.

During the planning phase for degree program-level assessment, it is important to distinguish between course-level assessment activities and the assessment of the degree program as a whole. Course-level assessment focuses narrowly on the knowledge and skills taught in single courses; degree program-level assessment is much broader. It should encompass the knowledge and skills learned across the entire program rather than piecing together examples from different courses. Likewise, it is important to develop unique, broad learning outcomes that represent the entire degree program rather than adopting a few learning outcome statements from individual courses.

ASSESSMENT PHASE (Implementation/Results) – The assessment phase involves selecting the appropriate assessment method(s) for each student learning outcome, implementing those assessments, and analyzing the assessment data to learn more about student performance in relation to the student learning outcomes.

ACTION PHASE (Results/Feedback) – This phase is what assessment is all about. During this phase, faculty reflect upon the information gathered during the planning and assessment phases and determine what changes are needed to increase student learning in the degree program. The action phase also involves the implementation of those changes. Finally, during the action phase faculty may also identify problems with the assessment methods. As such, the action phase also involves making adjustments to assessment methodology.


PLANNING PHASE: STUDENT LEARNING OUTCOMES

Student Learning Outcome (definition)

A student learning outcome is a formal statement of what students are expected to learn in a degree program. Student learning outcomes refer to specific knowledge, practical skills, areas of professional development, attitudes, higher-order thinking skills, etc. that faculty members expect students to develop, learn, or master during a degree program.

Simply stated, student learning outcome statements describe:

What faculty members want students to know at the end of the degree program, AND

What faculty members want students to be able to do at the end of the degree program.

Learning outcomes have three major characteristics:

1. They specify learning that is observable

2. They specify learning that is measurable

3. They specify learning that is completed by the students/learners (rather than the faculty members or the degree program)

Student learning outcome statements should possess all three of these characteristics so that they can be assessed effectively.


WRITING EFFECTIVE LEARNING OUTCOME STATEMENTS

Selection of Action Words for Learning Outcome Statements

When stating student learning outcomes, it is important to use verbs that describe exactly what the learner(s) will be able to know or do upon completion of the degree program.

Examples of good action verbs to include in student learning outcome statements:

Compile, identify, create, plan, revise, analyze, design, select, utilize, apply, demonstrate, prepare, use, compute, discuss, explain, predict, assess, compare, rate, critique, outline, or evaluate

Some verbs are unclear in the context of a student learning outcome statement (e.g., know, be aware of, appreciate, learn, understand, comprehend, become familiar with). These words are often vague, have multiple interpretations, or are simply not measurable. It is best to avoid them when creating student learning outcome statements unless the language is needed to meet specific requirements (e.g., those of a specialized accreditor). If these words must be used, ensure that they are accompanied by a rubric that clearly explains how they will be measured.

For example, please look at the following learning outcomes statements:

Upon completion of the degree students should understand basic human development theory.

Graduates of the degree program should appreciate music from other cultures.

Both of these learning outcomes are stated in a manner that will make them difficult to assess. Consider the following:

How do you observe someone “understanding” a theory or “appreciating” other cultures?

How easy will it be to measure “understanding” or “appreciation”?

These student learning outcomes are more effectively stated the following way:

Upon completion of the degree students should be able to summarize the major theories of human development.

Graduates of the degree program should be able to compare and contrast the characteristics of music from other cultures.
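This kind of screening can be roughly automated. The following Python sketch is purely illustrative and not part of the handbook's process; it uses a naive substring match over the vague terms listed earlier to flag outcome statements that may need revision:

```python
# Illustrative sketch: flag outcome statements that rely on vague,
# hard-to-measure verbs.  The verb list is drawn from the terms named
# in the handbook text above; the matching is a naive substring check
# (e.g., "know" would also match inside "knowledge"), so treat hits
# as prompts for review, not verdicts.
VAGUE_VERBS = {"know", "understand", "appreciate", "learn",
               "comprehend", "be aware of", "become familiar with"}

def flag_vague_verbs(statement: str) -> list[str]:
    """Return the vague verbs/phrases found in an outcome statement."""
    lowered = statement.lower()
    return sorted(v for v in VAGUE_VERBS if v in lowered)

outcomes = [
    "Upon completion of the degree students should understand "
    "basic human development theory.",
    "Graduates should be able to summarize the major theories "
    "of human development.",
]
for o in outcomes:
    hits = flag_vague_verbs(o)
    if hits:
        print("REVISE:", o, "->", hits)
```

Running this flags only the first statement, mirroring the revision shown above: the vague "understand" is replaced by the measurable "summarize".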


DEVELOPING STUDENT LEARNING OUTCOMES

The first step in an assessment cycle is to identify the program-specific learning outcomes that each student in the program should achieve by the time they complete the degree requirements of that program. A well-formulated set of Student Learning Outcomes (SLOs) describes what the faculty hope to accomplish in offering their particular degree to prospective students, or what specific skills, competencies, and knowledge the faculty believe graduates of the program will have attained by degree completion. The learning outcomes must be concise descriptions of the impact the program will have on its students.

Ask yourself the following questions when developing learning outcomes:

What do we want students in our program/degree to know?
What do we want students to be able to do?
When do we want them to be able to do it?
Are the outcomes observable and measurable, and can they be performed by students?

The Student Learning Outcomes need to link to the University’s mission and core themes.

UTAH STATE UNIVERSITY CORE THEMES
Core Themes are very broad in scope (student achieves outcome as he/she completes degree)

↕

STUDENT LEARNING OUTCOMES
Learning outcome is broad in scope (student achieves outcome as he/she completes program)

↕

COURSE LEARNING OUTCOMES
Learning outcome is narrow in scope (student achieves outcome as he/she completes course)

When creating Student Learning Outcomes, please remember that the outcomes should clearly state what students will do or produce to determine and/or demonstrate their learning. Use the following learning outcomes formula:

Graduates of this program will be able to + [behavior] + [resulting evidence]
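To make the formula concrete, here is a tiny Python sketch that assembles a statement from its two variable parts. It is purely illustrative: the connective phrase "as demonstrated by" and the example behavior/evidence are assumptions, not handbook wording.

```python
# Illustrative sketch of the learning outcomes formula:
#   "Graduates of this program will be able to" + behavior + resulting evidence
# The connective "as demonstrated by" is an assumed phrasing, not handbook wording.
def build_outcome(behavior: str, evidence: str) -> str:
    """Assemble an outcome statement from an action-verb phrase and its evidence."""
    return (f"Graduates of this program will be able to {behavior}, "
            f"as demonstrated by {evidence}.")

print(build_outcome(
    "compare and contrast the characteristics of music from other cultures",
    "a written analysis of two selected works",
))
```

The point of the template is that both slots must be filled with something observable: an action verb phrase, and the student work that will serve as evidence.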


DEVELOPING MEASURABLE STUDENT LEARNING OUTCOMES STATEMENTS

Correct word usage plays an important role in the development of learning outcomes. All learning outcomes must be specific and measurable. Learning outcomes that state, “should understand . . .”, “will be able to appreciate . . .”, and “will know how to . . .” are not directly measurable and lead to different interpretations of what the student’s behavior will be. We need to know specific outcomes that will demonstrate how students will “understand”, “appreciate” or “know”. Specific verbs such as “explain”, “appraise”, or “apply” are better, more measurable choices. The final part of the outcome is the resulting evidence, which refers to the work that students produce – such as papers, exams, presentations, performances, portfolios, and lab results – to demonstrate their learning.

Bloom (1956) proposed the following taxonomy of thinking skills. All levels of Bloom’s taxonomy of thinking skills can be incorporated into student learning outcome statements.

Definitions of the different levels of thinking skills in Bloom’s taxonomy

1. Knowledge – recalling relevant terminology, specific facts, or different procedures related to information and/or course topics. At this level, a student can remember something, but may not really understand it.

2. Comprehension – the ability to grasp the meaning of information (facts, definitions, concepts, etc.) that has been presented.

3. Application – being able to use previously learned information in different situations or in problem solving.

4. Analysis – the ability to break information down into its component parts. Analysis also refers to the process of examining information in order to make conclusions regarding cause and effect, interpreting motives, making inferences, or finding evidence to support statements/arguments.

5. Synthesis – the ability to uniquely apply prior knowledge and/or skills to produce new and original thoughts, ideas, processes, etc. At this level, students are involved in creating their own thoughts and ideas rather than simply analyzing what has been presented.

6. Evaluation – being able to judge the value of information and/or sources of information, and to make and defend those judgments based on internal evidence or external criteria.

(Adapted from information from Ball State University accessed at http://web.bsu.edu/IRAA/AA/WB/chapter2.htm)

NOTE: Since degree program-level student learning outcomes represent the knowledge and skills that we hope graduates possess, it is likely that at least some outcomes will reflect what are called “higher-order thinking skills” rather than more basic learning. The Application, Analysis, Synthesis, and Evaluation levels of Bloom’s taxonomy are usually considered to reflect higher-order thinking skills.


The University of Central Florida provides the following lists of key words to use within each level of Bloom’s Taxonomy for each type of learning outcome:

Key Words for Each Level of Bloom’s Taxonomy

Knowledge – Arrange, define, describe, duplicate, enumerate, identify, indicate, know, label, list, match, memorize, name, read, recall, recognize, record, relate, repeat, reproduce, select, state, view, underline

Comprehension – Classify, cite, convert, defend, describe, discuss, distinguish, estimate, explain, express, generalize, give examples, identify, indicate, infer, locate, paraphrase, predict, recognize, report, restate, review, rewrite, select, suggest, summarize, tell, trace, translate, understand

Application – Act, administer, apply, articulate, assess, change, chart, choose, collect, compute, construct, contribute, control, demonstrate, determine, develop, discover, dramatize, employ, establish, extend, give examples, illustrate, implement, include, inform, instruct, interpret, investigate, manipulate, operate, organize, participate, practice, predict, prepare, preserve, produce, project, provide, relate, report, schedule, shop, show, sketch, solve, teach, transfer, translate, use, utilize, write

Analysis – Analyze, appraise, break down, calculate, categorize, compare, contrast, correlate, criticize, debate, determine, diagram, differentiate, discriminate, distinguish, examine, experiment, focus, identify, illustrate, infer, inspect, inventory, limit, outline, point out, prioritize, question, recognize, relate, select, separate, subdivide, solve, test

Synthesis – Adapt, anticipate, arrange, assemble, categorize, collaborate, collect, combine, communicate, compile, compose, construct, create, design, devise, develop, explain, express, facilitate, formulate, generate, incorporate, individualize, initiate, integrate, intervene, manage, model, modify, negotiate, organize, perform, plan, prepare, produce, propose, rearrange, reconstruct, reinforce, relate, reorganize, revise, set up, structure, substitute, validate, write

Evaluation – Appraise, argue, assess, attach, choose, compare, conclude, contrast, criticize, critique, decide, defend, enumerate, estimate, evaluate, grade, interpret, judge, justify, measure, predict, rate, reframe, revise, score, select, support, value


KEEP IT SIMPLE

It is usually best to keep degree program outcome statements as simple as possible. Overly specific and complex learning outcome statements can be very difficult to assess because degree programs need to gather assessment data for each type of knowledge or skill that is named in a program/department-level student learning outcome.

SAMPLE STUDENT LEARNING OUTCOME STATEMENTS

The following is a list of some of the common areas for degree program/department-level student learning outcomes. These examples are meant to serve as ideas of what well-stated and measurable student learning outcomes might look like.

Students completing a (undergraduate or graduate) degree in ____________________ should be able to:

Demonstrate knowledge of the fundamental concepts of the discipline

Utilize skills related to the discipline

Communicate effectively in the methods related to the discipline

Conduct sound research using discipline-appropriate methodologies

Generate solutions to problems that may arise in the discipline

Other areas as appropriate


The University of Central Florida (2008) notes that learning outcomes should be SMART:

Specific
Define learning outcomes that are specific to your program. Include in clear and definite terms the expected abilities, knowledge, values, and attitudes a student who graduates from your program is expected to have.

Focus on intended outcomes that are critical to your program. When the data from the assessment process are known, these outcomes should create an opportunity to make improvements in the program being offered to your students.

Measurable
The intended outcome should be one for which it is feasible to collect accurate and reliable data. Consider your available resources (e.g., staff, technology, assessment support, institutional-level surveys, etc.) in determining whether the collection of data for each student learning outcome is a reasonable expectation.

Include more than one measurement method that can be used to demonstrate that the students in a particular program have achieved the expected outcomes of that program.

Aggressive but Attainable
“Don’t let the perfect divert you from what is possible.” When defining the learning outcomes and setting targets, use targets that will move you in the direction of your vision, but do not try to “become perfect” all at once.

The following is a collection of questions that might help you to formulate and define aggressive but attainable outcomes for your program.

o How have the students’ experiences in the program contributed to their abilities, knowledge, values, and attitudes? Ask:

Cognitive skills: What does the student know?
Performance skills: What does the student do?
Affective skills: What does the student care about?

o What are the knowledge, abilities, values, and attitudes expected of graduates of the program?

o What would the perfect program look like in terms of outcomes?

o What would a good program look like in terms of outcomes?

Results-Oriented and Time-Bound
When defining the outcomes, it is important to describe where you would like to be within a specified time period (e.g., 10% improvement in exam scores within 1 year, 90% satisfaction rating for next year, 10% improvement in student communication performance within 2 years). Also, determine what standards are expected from students in your program. For some learning outcomes, you may want 100% of graduates to achieve them. This expectation may be unrealistic for other outcomes. You may want to determine what proportion of your students achieve a specific level (e.g., 80% of graduates pass the written portion of the standardized test on the first attempt). If you have previously measured an outcome, it is helpful to use this as the baseline for setting a target for next year.
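A proportion target like the 80% example above reduces to simple arithmetic. The Python sketch below uses hypothetical cohort numbers to check an observed pass rate against a target:

```python
# Illustrative sketch: did the cohort meet a results-oriented target?
# The cohort numbers are hypothetical, chosen to mirror the handbook's
# "80% of graduates pass on the first attempt" example.
def target_met(passed: int, total: int, target_rate: float) -> bool:
    """True when the observed proportion meets or exceeds the target."""
    return total > 0 and passed / total >= target_rate

# Suppose 45 of 52 graduates passed the written test on the first attempt.
rate = 45 / 52  # observed pass rate, roughly 86.5%
print(f"observed {rate:.1%}, target 80.0%, met: {target_met(45, 52, 0.80)}")
```

The prior year's observed rate then becomes the natural baseline for next year's target, as the paragraph above suggests.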


Examples of Good and Not-So-Good Learning Outcomes

The University of Central Florida provides the following examples of poor, better, and best outcome statements:

Example 1:

Poor: Students completing the undergraduate program in Hypothetical Engineering will have knowledge of engineering principles.

This is a weak statement because it does not specify which engineering principles a graduate of the program should know. Also, it does not define what is meant by “have knowledge.” Are students supposed to simply define the principles, be able to apply them, etc.?

Better: Graduates will be competent in the principles of engineering design, formulating requirements and constraints, following an open-ended decision process involving tradeoffs, and completing a design addressing a hypothetical engineering need.

This statement is better because it lists the specific areas in hypothetical engineering in which a student must be competent. However, it is still vague, as the level of competency is not stated: are students expected to understand these concepts, to apply them, or both?

Best: Graduates will be able to apply and demonstrate the principles of engineering design, formulating requirements and constraints, following an open-ended decision process involving tradeoffs, and completing a design addressing a hypothetical engineering need.

This is a much better learning outcome statement for two reasons: first, the specific requirements are listed; second, the level of competency is stated. A student must be able to apply and to demonstrate the listed engineering principles.

Example 2:

Poor: Ph.D. students of Hypothetical Engineering will be successful in their research.

This statement is very vague and provides no indication of what “successful” means. It does not specify what type or quality of research is expected from the student.

Better: Ph.D. students of Hypothetical Engineering will be successful in conducting high-quality research.

Although the quality of research expected from doctoral students is identified, there is no indication of the specific research capabilities a student should possess. Even though it provides more detail than the previous statement, it is still lacking.

Best: Ph.D. graduates of Hypothetical Engineering will be able to conduct high-quality doctoral research as evidenced by the results of experiments and projects, dissertation, publications, and technical presentations.

What is expected of a doctoral student in this program is clearly defined and stated, making this an effective learning outcome statement. The quality of research expected, as well as the specific research requirements, are articulated in the outcome statement.

Example 3:

Poor: Students should know the historically important systems of psychology.

This is poor because it says neither what systems nor what information about each system students should know. Are they supposed to know everything about them or just names? Should students be able to recognize the names, recite the central ideas, or criticize the assumptions?

Better: Students should understand the psychoanalytic, Gestalt, behaviorist, humanistic, and cognitive approaches to psychology.

This is better because it says which theories students should know, but it still does not detail exactly what they should know about each theory, or how deeply they should understand it.

Best: Students should be able to recognize and articulate the foundational assumptions, central ideas, and dominant criticisms of the psychoanalytic, Gestalt, behaviorist, humanistic, and cognitive approaches to psychology.

This is the clearest and most specific statement of the three examples. It provides even beginning students an understandable and very specific target to aim for. It provides faculty with a reasonable standard against which they can compare actual student performance.

Example 4:

Poor: Students should be able to independently design and carry out research.

The problem with this is that the statement does not specify the type or quality of research to be done.

Better: Students should be able to independently design and carry out experimental and correlational research.

This specifies the type of research, but not the quality students must achieve. If a student independently does any research that is experimental or correlational, it would be viewed as acceptable.

Best: Students should be able to independently design and carry out experimental and correlational research that yields valid results.

Here, the standard for students to aim for is clear and specific enough to help faculty agree about what students are expected to do. Therefore, they should be able to agree reasonably well about whether students have or have not achieved the objective. Even introductory students can understand the outcome statement, even if they don't know exactly what experimental and correlational research methods are.

Northern Arizona University (2006) provides the following examples of learning goals from its academic programs, each prefaced with "Students will be able to…":

Articulate the role of communication in a diverse and democratic society.
Develop detailed lesson plans for teaching at the secondary or junior college level.
Demonstrate an introductory knowledge of works of art, history, music, philosophy, literature, and religion as expressions of the Humanities.
Present physical and human geography content knowledge, descriptions, analyses, and syntheses through the use of oral presentations.
Develop the skills necessary to collect, analyze, interpret, and present data.
Carry out important laboratory procedures in chemistry.
Think critically and globally, being able to analyze problems and develop solutions with little direction from outside sources.
Evaluate the quality of reported Justice research.
Apply the scientific method to conduct and interpret research inquiries using a combination of qualitative and quantitative research methods.
Apply the discussion to policy and real-world applications.
Demonstrate knowledge of the mental structures and processes that underlie individual human experience and behavior.
Organize and orally deliver content based on audience and purpose.
Communicate effectively with employees and guests in hospitality industry settings.

Walvoord (2010, p. 14) provides examples of poor learning goals:

Goals must be in the "students will be able to...” format. Here are some goal statements that are not acceptable for this purpose (though they may be perfectly fine statements for other purposes):

The curriculum emphasizes X, Y, Z.
The institution values X, Y, Z.
The institution prepares its students for X, Y, Z.
Students are exposed to X, Y, Z.
Students participate in X, Y, Z.


BENCHMARKS

Benchmarks state the level of performance that is expected of students. Each benchmark can be thought of as the minimally acceptable level of performance for an educational outcome. Degree programs should develop a benchmark for each student learning outcome for their program/department.

There are two general types of benchmarks:

The first type of benchmark compares students to other groups or populations. This type of benchmark is typically used when a field has an established assessment instrument, often one developed regionally or nationally and used at other institutions or agencies (e.g., the bar exam for attorneys), or when professional licensure is required for the field.

Graduating seniors from the Nursing program will score at or above the

The second type compares student performance on a given student learning outcome to a specific performance level. In this type of benchmark, degree programs typically select a percentage of their students who should exhibit competent performance for student learning outcomes.

70% of graduating seniors will be able to articulate their personal philosophy of _________.
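A benchmark of this second type can be checked mechanically once assessment data are in hand. The sketch below is purely illustrative (the scores, the 5-point rubric scale, and the cutoff are invented, not taken from this handbook); it simply shows the arithmetic behind a "70% of students must be competent" benchmark:

```python
# Illustrative sketch: checking assessment results against a percentage
# benchmark. All numbers here are hypothetical.

def meets_benchmark(scores, competent_cutoff, threshold_pct):
    """Return True if the share of students scoring at or above the
    competency cutoff meets the program's benchmark threshold."""
    if not scores:
        return False
    competent = sum(1 for s in scores if s >= competent_cutoff)
    return 100 * competent / len(scores) >= threshold_pct

# Hypothetical benchmark: 70% of graduating seniors must score 3 or
# higher on a 5-point rubric for this learning outcome.
senior_scores = [4, 3, 2, 5, 3, 4, 2, 3, 4, 3]
print(meets_benchmark(senior_scores, competent_cutoff=3, threshold_pct=70))  # → True (8 of 10 = 80%)
```

Faculty would still need to decide the cutoff and threshold; the code only reports whether the collected scores clear them.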

Selecting the numerical “threshold” of acceptable performance:

When determining the "threshold" for each degree program/department-level student learning outcome, faculty should discuss what number reflects the best threshold of performance for that learning outcome. Although this is not an absolute rule, benchmarks are frequently set at a level that corresponds to average performance, which is acceptable performance to graduate for most degree programs. Of course, this number may differ based on the type of degree program (e.g., highly specialized or graduate programs).

Faculty do not always need to select a number reflective of average performance for their benchmarks. Sometimes, faculty choose to use existing data as a baseline benchmark against which to compare future performance. They might also use data from a similar degree program as a benchmark threshold; in this case, the similar program is often chosen because it is exemplary, and its data are used as a target to strive for rather than as a baseline. Both are viable options for establishing benchmark thresholds.

Whichever process degree program faculty use to set benchmark thresholds, it is important to choose a number that is meaningful in the context of the degree program and its learning outcomes.


TIPS FOR DEVELOPING DEGREE-LEVEL STUDENT LEARNING OUTCOME STATEMENTS

Limit the total number of student learning outcomes to 5 to 8 statements for the entire degree program

Make sure that each learning outcome statement is measurable

Focus on overarching or general knowledge and/or skills gained from the entire degree program rather than focusing on what happens in any one individual course

Create statements that are student-centered rather than faculty-centered or program-centered (e.g., "upon completion of this program students should be able to list the names of the 50 states" versus "one objective of this program is to teach the names of the 50 states")

Incorporate or reflect the institutional and program/department missions and purposes as appropriate

Incorporate various ways for students to show success (outlining, describing, modeling, depicting, etc.) rather than using a single statement such as “at the end of the degree program, students will know _____” as the stem for each expected outcome statement

Make certain the student learning outcomes are in alignment with the USU Core Themes: Learning, Discovery, Engagement


CURRICULUM MAPS

Curriculum maps or curriculum matrices are very effective tools for relating learning outcomes to classes, co-curricular programs, and other educational opportunities. Three curriculum maps are shown below from the University of Hawaii (n.d.). The first is a simple example for an undergraduate program. The second is a more complex example for an undergraduate program with multiple tracks. The third is for a doctoral program.

Excerpt From a Hypothetical Biology Program Curriculum Matrix

Key: I = Introduced; R = Reinforced and opportunity to practice; M = Mastery at the senior exit level; A = Assessment evidence collected

| Courses | Apply the scientific method | Develop laboratory techniques | Diagram and explain major cellular processes | Awareness of careers and job opportunities in biological sciences |
| BIOL 101 | I | I | I | |
| BIOL 202 | R | R | | I |
| BIOL 303 | R | M, A | R | |
| BIOL 404 | M, A | | M, A | R |
| Other: Exit Interview | | | | A |
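A curriculum map is, at bottom, structured data, so gaps can be found programmatically. The sketch below is a hedged illustration (the course codes and outcome names are hypothetical, loosely echoing the biology matrix above): it flags any outcome that is never Introduced, Reinforced, Mastered, or Assessed anywhere in the curriculum.

```python
# Hypothetical curriculum map: course -> {outcome: markings}, where
# markings use the key I/R/M/A from the matrix above ("MA" = M and A).
curriculum_map = {
    "BIOL 101": {"scientific method": "I", "lab techniques": "I", "cellular processes": "I"},
    "BIOL 202": {"scientific method": "R", "lab techniques": "R", "careers": "I"},
    "BIOL 303": {"scientific method": "R", "lab techniques": "MA", "cellular processes": "R"},
    "BIOL 404": {"scientific method": "MA", "cellular processes": "MA", "careers": "R"},
    "Exit interview": {"careers": "A"},
}

def coverage_gaps(cmap, outcomes, required=("I", "R", "M", "A")):
    """For each outcome, report which of the required markings never
    appear anywhere in the curriculum map."""
    gaps = {}
    for outcome in outcomes:
        seen = set()
        for markings in cmap.values():
            seen.update(markings.get(outcome, ""))  # iterate marking letters
        missing = [level for level in required if level not in seen]
        if missing:
            gaps[outcome] = missing
    return gaps

outcomes = ["scientific method", "lab techniques", "cellular processes", "careers"]
print(coverage_gaps(curriculum_map, outcomes))  # → {'careers': ['M']}
```

Whether every outcome actually needs every marking (careers awareness may never require "Mastery," for instance) is a faculty judgment; the check only surfaces the gaps for discussion.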

Example From an Undergraduate Program With Multiple Tracks

Key: SLO = Student Learning Outcome; I = Introduced; R = Reinforced/Practiced; A = Assessed

Courses are mapped against SLO 1 through SLO 5. Core courses are shared across tracks, and each track culminates in CRS 495, where all five SLOs are assessed.

Core: CRS 255 (3 credits): I, I, I, I, I (all five SLOs)
Core: Three theory courses (9 credits): I, I
Core: Writing (3 credits): I, I, I
Core: Design (3 credits): I, I

Track 1:
CRS 310, 312, 350: R, R


CRS 325: R, R
CRS 355: R, R
CRS 405: R, R
CRS 410: R
CRS 450: R, R
CRS 455: R, R
CRS 495: A, A, A, A, A

Track 2:
CRS 215, 315: R, R, R
CRS 316: R, R
CRS 318: R, R, R
CRS 320, 415: R, R
CRS 420: R, R, R
CRS 495: A, A, A, A, A

Track 3:
CRS 352: R, R
CRS 360: R, R
CRS 382: R
CRS 385: R, R
CRS 460: R, R
CRS 480: R, R, R
CRS 485: R, R
CRS 495: A, A, A, A, A

Example From a Ph.D. Program

Key: SLO = Student Learning Outcome

Ph.D. requirements are mapped against SLO 1 through SLO 4, with each requirement marked (X) against the SLOs it addresses:

Course Requirements: X
Qualifying Exam: X, X
Comprehensive Exam: X, X, X
Dissertation: X, X, X
Final Exam: X, X, X
Seminar Requirements: X, X


ASSESSMENT PHASE

BRIEF OVERVIEW OF PROGRAM/DEPARTMENT-LEVEL ASSESSMENT

Assessment involves the systematic collection, review, and use of evidence or information related to student learning. Assessment helps faculty and academic chairs understand how well students are mastering the most important knowledge and skills in the degree program.

In other words, assessment is the process of investigating:

1) what students are learning, and

2) how well they are learning it in relation to the stated student learning outcomes for the degree program.

TIPS FOR DEVELOPING ASSESSMENT PLANS

Each student learning outcome should have at least one assessment strategy (although more than one is often preferable since more instruments increase the reliability of your findings)

Incorporate a variety of assessment methods into your assessment plan

Identify the target population (e.g., all seniors, graduating seniors, alumni, faculty, etc.) for each assessment activity

Be sure to establish timelines for gathering and analyzing program assessment data on a regular basis (at least once per academic year)

Remember that if you decide to collect data from graduating seniors, it is best to collect data as close to graduation as possible (fall and spring)

It is also helpful to assign specific personnel for these tasks


SELECTION OF ASSESSMENT METHODS

It is important that at least one appropriate assessment method is selected for each student learning outcome. Generally speaking, there are two types of assessment methods. Direct assessment methods are measures of student learning that require students to display their actual knowledge and skills (rather than report what they think their knowledge and skills are). Because direct assessment taps into students' actual learning (rather than perceptions of learning), it is often seen as the preferred type of assessment. As such, faculty should incorporate some direct assessment methods into their assessment plans. In contrast, indirect assessment methods ask students to reflect on their learning rather than to demonstrate it directly. Indirect assessment methods can still provide very useful information regarding student learning.

Both direct and indirect assessment methods can provide useful insight into students’ experiences and learning. Direct and indirect assessments each have unique advantages and disadvantages in terms of the type of data and information they can provide. As such, many faculty choose to incorporate both types of assessment into an assessment plan.

DIRECT AND INDIRECT MEASURES OF STUDENT LEARNING

It is important to distinguish between direct and indirect methods of collecting assessment information since units must use at least one direct measure. Suskie (2009) explains that direct methods provide demonstrations of what students know and can do that can be evaluated objectively.

Examples of Direct Measures of Student Learning:

Course-embedded assessment
Ratings of student skills by their field experience supervisors
Scores and pass rates on appropriate licensure or certification exams
Capstone experiences, such as research projects, presentations, theses, dissertations, oral defenses, exhibitions, and performances, scored using a rubric
Other written work, performances, and presentations, scored using a rubric
Portfolios of student work
Scores on locally designed multiple-choice or essay tests such as final examinations in key courses, qualifying examinations, and comprehensive examinations
Score gains (referred to as value added) between entry and exit on published or local tests or writing samples
Observations of student behavior (such as presentations and group discussions), undertaken systematically and with notes recorded systematically
Summaries and assessment of electronic class discussion threads
Think-alouds, which ask students to think aloud as they work on a problem or assignment
Classroom response systems (clickers) that allow students in their classroom seats to instantly answer questions posed by the instructor and provide an immediate picture of student understanding
Feedback from computer-simulated tasks, such as information on patterns of action, decision, and branches
Student reflections on their values, attitudes, and beliefs, if developing those are intended outcomes of the program

Indirect measures, on the other hand, are often used to collect information from students on what they believe they learned and how and why they learned it (Suskie, 2009).

Examples of Indirect Measures of Student Learning:

Course grades and grade distributions (because they aggregate measurement of many items)
Assignment grades, if not accompanied by a rubric, scoring sheet, or scoring criteria
Retention and graduation rates
Admission rates into graduate programs and graduation rates from those programs
Scores on tests required for further study (such as the GRE) that evaluate skills learned over a lifetime
Quality and reputation of graduate programs into which alumni are accepted
Placement rates of graduates into appropriate career positions and starting salaries
Alumni perceptions of their career responsibilities and satisfaction
Student feedback on their knowledge and skills, and reflections on what they have learned over the course of their program
Questions on end-of-course student ratings of instruction (e.g., IDEA), or other instruments, that ask students to self-assess their progress on learning
Student, alumni, and employer satisfaction with learning collected through surveys, exit interviews, or focus groups
Student participation rates in faculty research, publications, and conference presentations
Honors, awards, and scholarships earned by students and alumni

ANALYZING ASSESSMENT DATA


It is recommended that degree programs incorporate the analysis of all assessment data as a regular part of program/department functioning. The data gathered for each student learning outcome should be analyzed and evaluated on either a semester or an annual basis.

Analysis of assessment data should help programs/departments identify the following:

What students are learning in relation to each student learning outcome

How well students are learning the material that relates to those outcomes

How well the selected assessment method(s) measure each student learning outcome

Areas for more focused assessment

Ways that learning outcomes may need to be revised

Areas that may need to be investigated in the next phase of assessment – the Improving Phase


USING RUBRICS FOR DIRECT ASSESSMENT OF STUDENT WORK

What is a rubric?

A rubric is a scoring tool that lays out the specific expectations for an assignment. Rubrics divide an assignment into its component parts and provide a detailed description of what constitutes acceptable or unacceptable levels of performance for each of those parts.

What are the parts of a rubric?

Rubrics are composed of four basic parts:

A task description (the assignment)
A scale of some sort (levels of achievement, possibly in the form of grades); scales typically range from 3 to 5 levels
The dimensions of the assignment (a breakdown of the skills/knowledge involved in the assignment)
Descriptions of what constitutes each level of performance (specific feedback)

Rubrics:

Can be used to classify virtually any product or behavior, such as essays, research reports, portfolios, works of art, recitals, oral presentations, performances, and group activities

Can be used to provide formative feedback to students, to grade students, and to assess programs

Can be used for program assessment in a number of ways:
o Faculty can use rubrics in classes and aggregate the data across sections
o Faculty can independently assess student products and then aggregate results
o Faculty can participate in group readings in which they review student products together and discuss what they have found
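As a rough illustration of the first approach, the sketch below pools rubric scores from several course sections and reports a mean per rubric dimension. All section names, dimensions, and scores are invented for the example:

```python
# Hedged sketch: aggregating rubric scores across sections for
# program-level reporting. Data is hypothetical.
from collections import defaultdict
from statistics import mean

# Each record: (section, rubric dimension, score on a 0-5 scale)
scores = [
    ("Section 1", "Organization", 4), ("Section 1", "Introduction", 3),
    ("Section 2", "Organization", 5), ("Section 2", "Introduction", 2),
    ("Section 3", "Organization", 3), ("Section 3", "Introduction", 4),
]

# Group all scores by rubric dimension, regardless of section
by_dimension = defaultdict(list)
for _section, dimension, score in scores:
    by_dimension[dimension].append(score)

for dimension, vals in sorted(by_dimension.items()):
    print(f"{dimension}: mean {mean(vals):.1f} across {len(vals)} ratings")
# → Introduction: mean 3.0 across 3 ratings
# → Organization: mean 4.0 across 3 ratings
```

The same grouping idea extends to aggregating by outcome, by course, or by year, which is typically what a program-level report needs.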


Why use Rubrics?

Rubrics provide timely feedback – grading can be done more quickly
Since students often make similar mistakes on assignments, incorporating predictable notes into the "descriptions of dimensions" portion of a rubric can simplify grading into circling or checking off all comments that apply to each specific student.

Rubrics prepare students to use detailed feedback
In a rubric, the top-level descriptions of the dimensions represent the highest level of achievement possible, while the remaining levels, circled or checked off, are typed versions of the notes/comments an instructor regularly writes on student work explaining how and where the student fell short of that highest level. Thus, in using a rubric the student obtains details on how and where the assignment did or did not achieve its goal, and even suggestions (in the form of the higher-level descriptions) as to how the student might have done better.

Rubrics encourage critical thinking
Because of the rubric format, students may notice for themselves the patterns of recurring problems or ongoing improvement in their work.

Rubrics facilitate communication with others
Faculty, counselors/tutors, colleagues, etc. can benefit from the information contained in a rubric; it provides information to help all involved in a student's learning process.

Rubrics help faculty refine their teaching skills
Rubrics that track a student's improvement or persistent weaknesses over time can give faculty a clearer view of their teaching blind spots, omissions, and strengths.

Rubrics help level the playing field
Rubrics can act as a translation device for first-generation students and non-native speakers of English, helping them understand what teachers are asking for.

How can Rubrics be used to assess program/department learning goals?

Embedded course assignments – program/department assessments which are embedded into course assignments can be scored using a rubric

Capstone experiences – theses, oral defenses, exhibitions, presentations, etc. – can be scored using a rubric to provide evidence of the overall effectiveness of a program/department

Field experiences – internships, practicum, etc. – supervisor’s ratings of the student’s performance can be evidence of the overall success of a program

Employer feedback – feedback from the employers of alumni can provide information on how well a program/department is achieving its learning goals

Student self-assessments – indirect measures of student learning

Portfolios – rubrics can be a useful way to evaluate portfolios


Sample of a Rubric for a Slide Presentation on Findings From Research Sources (Suskie, 2009)

| | (5) Well done | (4-3) Satisfactory | (2-1) Needs Improvement | (0) Incomplete |
| Organization | Clearly, concisely written. Logical, intuitive progression of ideas and supporting information. Clear and direct cues to all information. | Logical progression of ideas and supporting information. Most cues to information are clear and direct. | Vague in conveying viewpoint and purpose. Some logical progression of ideas and supporting information but cues are confusing or flawed. | Lacks a clear point of view and logical sequence of information. Cues to information are not evident. |
| Introduction | Presents overall topic. Draws in audience with compelling questions or by relating audience's interests or goals. | Clear, coherent, and related to topic. | Some structure but does not create a sense of what follows. May be overly detailed or incomplete. Somewhat appealing. | Does not orient audience to what will follow. |

The University of Virginia offers the following guidelines on developing rubrics:

Clearly define the assignment, including the topic, the process that students will work through, and the product they are expected to produce.
Brainstorm a list of what you expect to see in the student work that demonstrates the particular learning outcome(s) you are assessing.
Keep the list manageable (3-8 items) and focus on the most important abilities, knowledge, or attitudes expected.
Edit the list so that each component is specific and concrete (for instance, what do you mean by coherence?); use action verbs when possible and descriptive, meaningful adjectives (e.g., not "adequate" or "appropriate" but "correctly" or "carefully").
Establish clear and detailed standards for performance for each component. Avoid relying on comparative language when distinguishing among performance levels; for instance, do not define the highest level as "thorough" and the medium level as "less thorough." Find descriptors that are unique to each level.
Develop a scoring scale.
Test the rubric with more than one rater by scoring a small sample of student work. Are your expectations too high or too low? Are some items difficult to rate and in need of revision?


And the University of Virginia offers the following advice on using rubrics:

Evaluators should meet together for a training/norming session.
A sample of student work should be examined and scored.
More than one faculty member should score the student work. Check to see whether raters are applying the standards consistently.
If two faculty members disagree significantly (e.g., more than 1 point on a 4-point scale), a third person should score the work.
If frequent disagreements arise about a particular item, the item may need to be refined or removed.
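The third-rater rule lends itself to a simple automated check. The sketch below (invented scores; a 4-point scale and a one-point tolerance are assumed, as in the example above) flags the work samples whose two ratings differ by more than one point:

```python
# Illustrative sketch of the third-rater rule: flag work whose two
# rubric scores disagree by more than max_gap. Scores are hypothetical.

def needs_third_rater(rater_a, rater_b, max_gap=1):
    """Return indices of work samples whose paired scores differ by
    more than max_gap, so they should be sent to a third rater."""
    return [i for i, (a, b) in enumerate(zip(rater_a, rater_b))
            if abs(a - b) > max_gap]

# Scores from two raters on the same five work samples (4-point scale)
rater_a = [4, 3, 2, 4, 1]
rater_b = [3, 3, 4, 1, 2]
print(needs_third_rater(rater_a, rater_b))  # → [2, 3]
```

If the flagged list is long, that is itself a signal, per the advice above, that a rubric item may need refinement or a new norming session.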


ACTION PHASE

IMPROVING PHASE

A significant amount of time can be spent in developing student learning outcomes and gathering data, and occasionally people stop there. It is important to “close the loop” and make sure that assessment data for each student learning outcome is reviewed and used to make improvements to degree programs that will increase the quality of student experiences and learning. In fact, many assessment experts consider this phase to be the most important part of assessment.

Using Assessment Results Effectively and Appropriately

You will want to use your assessment results effectively and appropriately. In order to do so, it is suggested that you:

Share assessment data analysis results with program/department faculty and staff.

Discuss these assessment results as they relate to each student learning outcome.

Review assessment results to determine programmatic strengths and areas for improvement.

Decide if different assessment methods are needed in order to obtain more targeted information.

Determine how assessment results can be used to make improvements to the program/department (e.g., changing the curriculum, providing professional development for teaching personnel in certain areas).

Develop an action plan to implement these improvements.

Identify specific strategies regarding the implementation of the action plan.

Review what needs to be done as the assessment cycle heads back to the Planning Phase (e.g., Do student learning outcomes need to be revised? Are different assessment methods necessary?).

When Assessment Results are Good

For those times when your assessment results are good, Suskie highly recommends that you:

Celebrate!

Reward!

Share!

Keep going!


When Assessment Results are Bad

Suskie suggests that you look at everything carefully.

Do you have the right learning outcomes?

Do you have too many learning outcomes?

Take a hard look at your courses/programs:

o Content and requirements

o Sequencing and prerequisites

o Admissions criteria

o Placement criteria

o Advising

o Tutoring

o Teaching methods

o Co-curricular activities

Do you need to improve your assessment methods?

Sometimes it really is the students’ fault.

Keep going.

Moving From Assessment Results to Action

You have completed your assessment. To ensure that you appropriately use your findings, Maki (2004) and Walvoord (2010) suggest that you:

Determine what is most important in the results. In addition to discussion among themselves, faculty members can consult program accreditation bodies, alumni, employers, faculty members at other institutions, librarians, writing specialists, and student affairs staff members.

Focus on the areas that show the greatest weaknesses.

Determine what is feasible now and what might be addressed in the future. Consider what changes can be made within the department and what changes involve others. Investigate resources and available assistance.

Keep good notes, both for your own follow-up and for reports that you might have to submit.


Rubric for Nationally Accredited Programs (levels: Initial, Emerging, Developed, Highly Developed)

Assessable Outcomes
Initial: There is no accessible report or information about outcomes; outcomes are non-specific and do not state student learning outcomes.
Emerging: The most recent accreditation report from the national accreditation body, or its relevant sections, is available online; most outcomes indicate how students demonstrate learning.
Developed: The report (or its relevant sections) is available online; each outcome describes student demonstration of learning.
Highly Developed: The Developed criteria are met; outcomes describe demonstration of student learning and are used for improvement.

Assessment Planning
Initial: The report is not available online; there is no formal assessment plan.
Emerging: The report is available online; planning is intermittent.
Developed: The report is available online; there is a clear, regular plan.
Highly Developed: The Developed criteria are met; there is a clear multi-year plan with several years of implementation.

Assessment Implementation
Initial: The report is not available online; it is not clear that assessment data is collected.
Emerging: The report is available online; evidence is collected, and faculty have discussed relevant criteria for reviewing it.
Developed: The report is available online; evidence is collected and faculty use relevant criteria.
Highly Developed: The Developed criteria are met; evidence is collected, criteria are determined, faculty discuss multiple sets of data, and the data is used.

Results are Used
Initial: The report is not available online; results for outcomes are collected but not discussed.
Emerging: The report is available online; results are collected and discussed but not used.
Developed: The report is available online; results are collected, discussed, and used.
Highly Developed: The Developed criteria are met; results are collected, discussed, and used, with evidence confirming that changes lead to improved learning.

Annual Feedback on Assessment Efforts
Initial: The report is not available online; no person or committee provides feedback to departments on the quality of their assessment plans.
Emerging: The report is available online; a person or committee provides occasional feedback.
Developed: The report is available online; a person or committee provides annual feedback, and departments use the feedback.
Highly Developed: The Developed criteria are met; there is annual feedback, departmental use of that feedback, and institutional support.


Non-Nationally Accredited (rubric levels: Initial, Emerging, Developed, Highly Developed)

Assessable Outcomes
- Initial: Non-specific outcomes that do not state student learning outcomes.
- Emerging: Most outcomes indicate how students demonstrate learning (e.g., "Students display knowledge of X by…").
- Developed: Each outcome describes how students demonstrate learning (e.g., "Students display knowledge of X by…").
- Highly Developed: Outcomes describe how students demonstrate learning, and the outcomes are used for improvement (e.g., "This outcome improves course design, faculty training, etc. by…").

Assessment Planning
- Initial: No formal assessment plan.
- Emerging: Relies on intermittent planning (e.g., the plan is updated every few years, with little evidence of connection to prior work).
- Developed: Clear, regular plan. The assessment plan specifically measures each learning objective using direct measures; indirect measures may be present.
- Highly Developed: Clear multi-year plan with several years of implementation.

Assessment Implementation
- Initial: Not clear that assessment data are collected.
- Emerging: Evidence is collected, and faculty have discussed relevant criteria for reviewing it. Evidence includes student-level data and data for each outcome.
- Developed: Evidence is collected, and faculty use relevant criteria (e.g., "Faculty used this data to…").
- Highly Developed: Evidence is collected, criteria are determined, and faculty discuss multiple sets of data. The data are used.

Results are Used
- Initial: Results for outcomes are collected but not discussed.
- Emerging: Results are collected and discussed but not used (e.g., "Faculty discussed at the annual department meeting, but no changes to the program were implemented").
- Developed: Results are collected, discussed, and used (e.g., "Faculty discussions led to the following actions for the academic year…").
- Highly Developed: Results are collected, discussed, and used, with evidence to confirm that changes lead to improved learning (e.g., "Here are the results of the actions we took, and how we determined their effectiveness…").

Annual Feedback on Assessment Efforts
- Initial: No person or committee provides feedback to departments on the quality of their assessment plan.
- Emerging: Occasional feedback from a person or committee (e.g., "We talk about assessment every 3-7 years").
- Developed: Annual feedback from a person or committee, and departments use the feedback (e.g., AAA provides annual feedback on assessment efforts, and departments document their efforts in response to that feedback).
- Highly Developed: Annual feedback, departmental use, and institutional support (e.g., USU provides support for assessment efforts via staffing, training, or other resources).