V. Implementing Assessment

When considering how to assess student learning (How do we know that our students have learned what we expect them to?), you may wish to consider:

• What does this program or department do and why does it exist?
• What are learning outcomes and how are good ones written?
• Where are the particular learning objectives addressed in the class/course or activity?
• What approaches are useful to assess learning outcomes?
• What tools are needed to make this happen?
• Who will look at the learning outcomes and the assessment results?
• How can assessment results be used to improve teaching and learning?
• How will the data be used for program improvement?

This chapter will attempt to answer some of these questions and should provide a basic course in how to design and implement an assessment project at any level.

A. Selecting an Arena

Before jumping into an assessment instrument, it is important to be explicit about the group or target for the assessment. Assessment can be done in many arenas, but each one has unique, though related, learning objectives and participants. The learning facilitators should collaborate to establish the learning outcomes, then design and implement the ways in which to achieve and measure that learning. Some possible arenas for assessment include:

• An individual instructor’s class or section
• A course with multiple instructors
• A sequence of related courses
• A clearly defined program
• Institution-wide
• A campus department or service office

See Section IV for more details on specific arenas at PVCC.

B. Learning Outcomes

Learning outcomes are the essential and enduring knowledge, abilities, and attitudes that constitute the integrated learning needed by a student after completing a course or program. Learning outcomes are best viewed as part of an approach to thinking about teaching and learning. In particular, curriculum goals should be based on an analysis of what is essential for students to know or be able to do after the course or program.

At their core, learning outcomes are simply statements of what a student is expected to know and be able to do as a result of a learning activity. They focus on the outputs, not on the inputs or processes. Course objectives or course content usually are not outcomes, as they express educational aims for a class and commonly include statements about what the instructor intends to do or list topics to which the students will be “exposed.” These statements about what the instructor intends to cover (as opposed to what the students are supposed to learn) fail to clarify for students what they are to gain from such exposure.

Course competencies may not be the same as learning outcomes. If course competencies are written as minuscule, disconnected tasks or skills that the student must demonstrate for a grade, they also fail to clarify for students what they are to gain from the skill development. A meaningful learning outcome requires synthesis of understanding and skill development and informs students of what they will be able to do with what they learn in the course.

Objective: “The student will be introduced to the essentials of good writing.”
Outcome: “The student will be able to write effectively for different audiences and different purposes.”

The key word that distinguishes a learning outcome is DO. Learning outcome statements, therefore, should use active verbs. One way to think about depth of learning is to use Bloom’s Taxonomy in writing learning outcome statements. Here are some specific suggestions for getting started with learning outcomes:

• Think of what you expect students to know and be able to do after being exposed to your teaching, service, advising or activity. What will the students be aware of or have knowledge of because of the service you provide? Why are you (faculty or staff) providing the service to the student and how will it help them?

• Start by using these simple fill-in-the-blank sentences:

The student will be able to ________________________________

OR

Students will be able to demonstrate a knowledge or understanding of ________________________________

• Keep in mind that the learning outcome(s) should be focused on student learning and be measurable and meaningful.
• Use active words.

Action Verbs

Verbs to Use: A sample of verbs identified in Bloom’s Taxonomy is provided below to generate additional ideas for writing outcomes.

Knowledge: define, list, name, recall, record, relate, underline, label, quote, locate, match, cite, reproduce, identify, state
Comprehension: describe, discuss, explain, express, depict, locate, recognize, report, restate, review, translate, defend
Application: apply, demonstrate, dramatize, employ, illustrate, interpret, operate, practice, schedule, sketch, use, solve
Analysis: analyze, appraise, calculate, categorize, criticize, debate, diagram, differentiate, distinguish, examine, experiment, inspect, question, relate, test
Synthesis: arrange, assemble, collect, compose, construct, create, design, formulate, manage, organize, plan, prepare, propose, combine, integrate
Evaluation: appraise, assess, choose, compare, estimate, evaluate, judge, measure, rate, revise, score, select, value, justify, classify

Verbs to Avoid: appreciate, be aware of, become acquainted with, comprehend, know, learn, realize, and understand.

Source: <http://www.catl.uwa.edu.au/current_initiatives/obe/outcomes>

Specific Examples

Academic:

• Students will be able to recognize and evaluate assumptions based on information presented in a short passage.

• Students will be able to determine the appropriate method of inquiry when presented with a problem.

• Students will be able to use online and electronic resources to communicate, collaborate, and retrieve information.

• Students will be able to advocate and apply positive social and ethical behaviors when using technology and identify the consequences of misuse.

• Students will be able to use pronunciation and articulation appropriate to the topic, audience, occasion and purpose.

• Students will be able to comprehend, apply, synthesize, evaluate, form opinions, and make appropriate decisions based on written text.

• Students will be able to distinguish between plausible and implausible inferences, predictions, and interpretations based upon a problem presented.

Career Services:

• Students will be able to conduct informational career interviews with literal and critical comprehension.

• Students will be able to research print and online resources for career exploration.

• Students will be able to communicate his/her job interests and qualifications through a resume and cover letter.

Service-Learning:

• Students will be able to exhibit an understanding of self and their involvement in the community.

• Students will be able to utilize problem-solving and critical-thinking skills.

• Students will be able to utilize effective oral and written communication skills.

Student Life Center:

• Students will be able to identify and apply diverse leadership skills.

• Students will be able to develop and utilize problem solving and critical thinking skills.

• Students will be able to recognize and apply time and stress management strategies for balancing education, service, work and leisure.

• Students will be able to identify and utilize co-curricular and extra-curricular programs in support of academic goal achievement.

Remember: It is not what we (faculty, staff) DO, but what the students will know and be able to DO.

C. Choosing a Method

Introduction

The following passage (edited), written by Trudy Banta and Catherine Palomba, gives a general overview of some of the important issues to consider when selecting an assessment methodology or instrument.

To select among assessment instruments, faculty must discuss and establish their selection criteria and become familiar with various assessment methods. The most important selection criterion is whether the method will provide useful information that indicates whether students are learning and developing in ways faculty have agreed are important. Assessment methods must be linked to goals and objectives for learning and to the instructional activities that support these goals. For example, future teachers should be observed interacting with students, not simply examined with a multiple-choice test.

Assessment methods (also called techniques or instruments) include both direct and indirect approaches. Direct measures of learning require students to display their knowledge and skills as they respond to the instrument itself. Objective tests, essays, presentations, and class assignments all meet this criterion. Indirect methods such as surveys and interviews ask students to reflect on their learning rather than demonstrate it. A further distinction that may be made is between quantitative methods that rely on numerical scores or ratings and qualitative methods that rely on descriptions rather than numbers. The goal of qualitative methods is to provide a narration or description of what is occurring, with emphasis on illuminating the meaning of behavior. Because of the rich information they provide, current trends in assessment include increased use of performance measures and qualitative approaches.

Educators increasingly believe that assessment itself should contribute to learning. Over time, educational research has identified conditions that are beneficial to student learning. The premise of assessment is that all educators, not just educational researchers, care about whether their students learn. Based on that premise, faculty and staff who select and design assessment strategies need to consider what is known about learning. Because learning is enhanced by doing, it makes sense to design assessment strategies that actively engage students. Such methods should also allow students the chance to receive feedback and respond to it. All assessment practitioners need not be educational researchers, but they should ask focused questions about each assessment strategy. Will it, by itself, enhance student learning? Will it provide students with opportunities for self-evaluation?

In addition to the methods chosen, faculty must decide when information will be collected. From students at entry, midpoint, or exit? From alumni one, two, or five years after graduation? If students are the source, faculty must decide how the information will affect student progress. Will it be required or graded? The site of data collection must also be determined. One possibility is to create (or take advantage of) data-collection opportunities outside the classroom. The current trend is to collect assessment information within the classroom, not simply for convenience but because of the opportunity this provides to use already in-place assignments and coursework for assessment purposes. The specific approach that is used needs to reflect the overall purposes of the assessment program.

(Palomba, C., & Banta, T. (1999). “The Essentials of Successful Assessment,” in Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. Jossey-Bass.)

Summary of Methods

In the following pages, we look at ten categories of assessment methodologies, although there are many variants within each category. In addition, some specific instruments can be categorized in many different ways. The categories chosen were adapted from Jeffrey Seybert's presentation to MCCCD in 2001 at Chandler-Gilbert Community College. For each category, we try to give a generic definition followed by a list of related costs, advantages, disadvantages, and implementation issues.

a. Student Portfolios

Definition: A student portfolio, compiled by a student and/or instructor, is a purposeful selection of samples of student work, in a single discipline or multiple disciplines, accumulated throughout an assessment period. Rubrics, which are developed to reflect the goals of the institution, are used to assess the work in the portfolio. “Purposeful” is emphasized because, without a clearly identifiable purpose, a portfolio will be a mere accumulation of products to be stored in a file cabinet.

Examples: A writing portfolio could include writing samples demonstrating growth in critical thinking, interdisciplinary thinking, an unsatisfying piece, and a favorite piece selected by the student. It might also include a student’s reflection describing his or her experience as a writer.

Costs:

• Time for scoring and grading
• Clerical support
• Storage
• Time to review results and make improvement decisions
• Training

Advantages:

• Provides documented evidence as to how effectively the college is meeting the educational needs of students.
• Can be linked to programmatic learning objectives.
• Offers students unique opportunities for self-assessment and reflection on their educational experiences and growth at the college.
• Provides a longitudinal view of learning and development.
• May be used in cross-disciplinary assessment.
• May be used by the student to show to potential employers.
• Institution and faculty have control over the design, context, format, and analysis, which can provide authentic, direct measures of institution-specific student learning outcomes.
• Samples in a portfolio may reflect, more genuinely than test results, student ability as it relates to common work setting situations.
• The process of creating a portfolio assessment program, along with the evaluation and scoring, offers ample opportunity for faculty exchange, professional growth, and discussion of curricular goals and objectives.
• Minimal time commitment for students, since a separate assessment instrument isn’t necessary.
• No test anxiety on the part of the student.
• If the student is responsible for selection of samples, then student participation in the assessment process is increased.
• Results can be meaningful at many levels (the individual student, the program, or the institution).
• Allows assessment of students’ maximum performance over the more artificial or restrictive measures of a test or in-class performance.
• Can be more accommodating to learning style differences.
• Portrays the process by which students produce work, not just the final product.
• Flexible in that the content can be chosen to reflect the needs of the student, the course, the program, or the institution.
• Helps students reflect on the bigger picture, that is, how all the classes taken and extra-curricular activities contribute to a well-rounded education.
• Contributes to students’ lifelong learning, as well as teachers’ if they, in turn, develop teacher portfolios.

Disadvantages:

• Commitment of both staff and financial resources.
  o Even with institutional support, portfolio assessment requires a great deal of time and effort on the part of the evaluators.
  o Collecting, scoring, and establishing valid scoring rubrics is challenging.
• Without careful planning, results are often disappointing.
• Faculty may consider portfolios intrusive.
• Resistance to allowing students to select content.
• Longitudinal nature can prolong program improvement.
• May not provide for externality.
• Faculty may be concerned there is a hidden agenda of validating their grading if the samples which are included were also submitted for course grades.
• There are potential security concerns about how to be certain the submitted student samples are their own work.
• Storage space considerations.
• Potential confidentiality breaches if not managed well.
• There exists very little hard evidence demonstrating the impact of portfolios on student learning; most is anecdotal.

Implementation Suggestions:

• The design, implementation, and analysis of the data must be carefully thought through before adopting portfolios for assessment. Questions which must be asked:
  o What are the focus and scope of the assessment?
  o Which learning objectives will be measured?
  o What is the role of faculty and students?
  o Which format, electronic or paper, should be used?
  o How will the portfolios be assessed?
  o Who is going to be responsible for analysis of data?
  o What mechanisms are in place to evaluate the data?
  o How will the results be linked to the curriculum and impact change?
• Protocols/rubrics should be universal for the department, course, etc., being assessed to enable comparative data.
• Set priorities. It may not be feasible to assess every outcome using a portfolio. Decide at the outset which learning outcomes are to be assessed and why.
• The process should reinforce and be aligned with the educational process.
• Decide when and by whom items in the portfolio will be selected.
• Develop a feedback mechanism.
• Involve students in meaningful ways; be clear what’s in it for them.
• Develop a plan to evaluate the portfolios.
• Decide who owns the portfolios and who will have access to them.
• Be aware that portfolios are a means to an end, not an end in themselves.
• Use portfolios as part of a course requirement. This works especially well if a capstone-type course is available.
• May be more manageable if a random, representative sample of student portfolios is assessed rather than all students in a cohort. This may save time, but may have its own set of problems.
• Have more than one rater for each portfolio. Pilot to establish inter-rater reliability (a minimal sketch of one reliability check follows this list).
• Train raters.
• Be aware that portfolios in which samples are selected by students represent the student’s best work.
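
As a minimal illustration of the inter-rater reliability pilot suggested above, the following Python sketch computes Cohen's kappa for two raters scoring the same portfolios. The rubric scale and scores are invented for illustration; any statistics package would serve equally well.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b), "raters must score the same portfolios"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, estimated from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[score] * freq_b[score] for score in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pilot: two raters score ten portfolios on a 1-4 rubric.
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 0.71 for these scores
```

Kappa values around 0.6 to 0.8 are commonly read as substantial agreement, though the raters should settle on an acceptable threshold before the pilot begins.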

b. Electronic Portfolios

Definition: As with student portfolios, electronic portfolios represent a selection of samples of student work (a resume of sorts), in a single discipline or multiple disciplines, throughout the assessment period. The electronic portfolio gives students the opportunity to display diverse learning styles through a variety of presentation media and allows faculty to scan collections of work quickly for evidence of specific kinds of learning.

*Note: All advantages, disadvantages, and practical suggestions listed for Student Portfolios apply to Electronic Portfolios, with the following additions.

Examples: Electronic portfolios can include writing samples, art work, audio files, and video clips.

Cost:

• Turnkey software products such as Grady Profile v.2.3.2 (Macintosh only) by Aurbach & Associates, $1,500 for an unlimited-user license. Other software packages are: The Super School Portfolio Assessment Kit II (Mac & PC) at $99.95 per classroom, with special institutional license pricing available; and Portfolio Building by Visions Technology in Education at $499.95 for an unlimited license. It is also possible to cobble together the appropriate software tools, such as word processing and presentation software, to create a portfolio product, along with an HTML editor for organizing material on the internet.
• Time to design, collect, and assess.
• Electronic storage costs.

Advantages:

• A variety of multi-media formats may be included in a student’s electronic portfolio, in addition to more traditional papers, problem sets, etc.
• Can be easily updated and transported.
• Takes less physical storage space.
• Brings an institution’s vision and standards to life for the student.
• Students take ownership of their digital portfolio.
• Communicating with digital portfolios is easier than with paper portfolios.
• Can develop teachers' as well as students' multimedia development skills.
• Allows for asynchronous use for both student and faculty.
• Minimizes administrative processes that can be overwhelming in a paper-based system.
• Provides for student-controlled access (other than faculty raters and advisers).
• Offers use of search strategies for easy access to artifacts.
• Allows faculty scoring results to be automatically logged and aggregated for analysis.
• Provides students feedback online.

Disadvantages:

• Takes server space.
• Requires technical support and maintenance.
• Will require either purchase of portfolio software or local development of software.
• Security and confidentiality issues.
• Expertise to develop the electronic platform may be lacking; dedicated support is required.
• Though there are commercial products available, these are generally limited in their ability to meet the specific needs of the institution and may not produce evidence of learning outcomes.

Implementation Suggestions:

• It’s important to get an early start generating portfolio artifacts from the beginning of a semester, an academic year, or the college career of a student, depending on the purpose of the portfolio.
• Work must be done ahead of time on the purpose and design of the portfolio. For instance, do you want an artifact to demonstrate each learning outcome? Once again, if the portfolio is to be used by a single instructor, then the instructor would spend the time developing it. However, if the portfolio will be used with an identified cohort across disciplines, then a team will need to spend some time in the design and development process.
• Creating a table of contents/index of all the electronic artifacts from the beginning aids in finding each artifact (see the sketch after this list).
• Be sure to back up portfolios and store the copies elsewhere.
• Having a well-designed portfolio is critical to both the efficiency and the effectiveness of measuring student outcomes for the purpose of improving the educational process.
• It is important to have a mechanism to link the assessment findings to the curricular process.
• While designing the portfolio, the decision-making process should include the stakeholders who are affected by the process.
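
As one way to act on the table-of-contents suggestion above, a short script can generate the artifact index automatically. This is a hypothetical sketch: the portfolio/<outcome>/<artifact> folder layout is an assumption, and a learning management system or commercial portfolio product would typically provide equivalent cataloging.

```python
import csv
from datetime import datetime
from pathlib import Path

def build_index(root: Path, out_file: str = "artifact_index.csv") -> None:
    """Walk a portfolio folder tree and write a CSV index of every artifact."""
    with open(out_file, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["outcome", "artifact", "last_modified"])
        for path in sorted(root.rglob("*")):
            if path.is_file():
                modified = datetime.fromtimestamp(path.stat().st_mtime)
                # The parent folder name stands in for the learning outcome.
                writer.writerow([path.parent.name, path.name,
                                 modified.strftime("%Y-%m-%d")])

# Hypothetical layout: portfolio/<outcome>/<artifact files>
build_index(Path("portfolio"))
```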

Recommendation: Developing portfolios of any kind is ambitious. This method would be useful for individual faculty who want to track the progress of students in a one-semester class. It would be possible to move toward course, program, and/or institutional portfolios further down the road in our assessment timeline. We would have to put a great deal of thought into the design and development. In the future we may be able to use some of the artifacts from other assessment methods as the foundation for a portfolio approach.

Bibliography/Resources:

Alverno’s Diagnostic Digital Portfolio. <http://www.alverno.edu/academics/ddp.html>
Barrett, H. How to Create Your Own Electronic Portfolio. 2000. <http://electronicportfolios.com/portfolios/howto/index.html>
“Organizational Issues Related to Portfolio Assessment Implementation in the Classroom.” <http://eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=EJ638492&ERICExtSearch_SearchType_0=no&accno=EJ638492>
The Portfolio Clearinghouse. <http://ctl.du.edu/portfolioclearinghouse/>
A Collection of Papers on Self-Study and Institutional Improvement: Proceedings of the Annual Meeting of the North Central Association. Chicago: The Higher Learning Commission. <http://hlcommission.org/index.php?option=com_content&task=view&id=140&Itemid=282>

c. Institutional Portfolios

Definition: Institutional portfolios are a compilation of several measures of an institution’s evidence that the mission and learning outcomes identified by the institution are being realized. Such a portfolio serves to communicate learning outcomes to internal and external constituencies, and it also serves as a learning tool. Institutional portfolios demonstrate accountability to stakeholders and may be used as a vehicle for institution-wide reflection, learning, and improvement. Another definition: an institutional portfolio is a focused selection of authentic work, data, and analysis that demonstrates institutional accountability and serves as a vehicle for institution-wide reflection, learning, and improvement.

Examples: Categories of evidence include:

• Direct measures of learning such as: test results, evaluations of authentic performance, and student portfolios.

• Other measures of attainment of value such as: retention; success in further study; graduates’ satisfaction with preparation for work and citizenship; graduates’ behavior as workers and community members; employer satisfaction with graduates.

• Good practices: these are examples of practices that research suggests contribute to student learning opportunities such as internships, undergraduate research, service learning, learning communities and collaborative learning.

• Enabling environment such as: faculty development opportunities; administrative practices that support the learning mission; availability of current technology; cross-functional instructional teams; and a physical plant conducive to student learning inside and outside the class.

Cost:

• Time commitment of staff.
• Training of staff.
• Needed staff skill sets, such as web development if the institutional portfolio is electronic.
• Upgrades may be necessary to the technology infrastructure to accommodate the components of an electronic institutional portfolio. One institutional portfolio doesn’t take a lot of processing power, but if students are also creating electronic portfolios as a component of the institutional portfolio, then the current infrastructure may be stressed.
• There could be some software costs, as more staff may need HTML authoring software.

*Note: Many of the advantages and disadvantages listed for both Student and Electronic Portfolios pertain to Institutional Portfolios.

Advantages:

• Gives the institution the opportunity to communicate its mission and outcomes.
• Enables the faculty and staff of the institution to understand their teaching, research, and service activities in relation to the institutional mission.
• Assists students and their families when making college choices.
• Adds to the data already compiled on institutional effectiveness.
• It is invisible to students, obviating the motivation and other significant problems associated with standardized tests.
• It can be minimally intrusive for faculty.
• It requires no special “sessions,” no sacrifice of class time, and no external incentives for students to perform well.
• If the institutional portfolio is in an online format, reviewers could click until they see evidence to be satisfied about the particular area they are evaluating instead of progressing in a linear fashion through the materials. Also, this online “mothership portfolio” would be conveniently available for the institution’s employees to review.

Disadvantages:

• Security and privacy issues.
• Requires a considerable amount of time from faculty, staff, and students.

Implementation Suggestions:

• Articulating purpose and strategic approach early in the process is essential to a successful development effort.
• Involving a broad base of campus constituents increases the likelihood that the project will be supported and useful. At the same time, it tends to create burdensome expectations. The best approach may be to incorporate a broad vision while taking modest, incremental steps in scope and involvement.
• Institutions should develop their own models for organizing and presenting their portfolios.
• Portfolios should contain summaries of what is being claimed and in what way.
• Data and exhibits may be qualitative or quantitative.
• The totality of the exhibits, not any one piece of documentation, is the best indicator of institutional effectiveness.

• The contents of the portfolio are subject to verification, so backup documentation and activities may be a part of the external review.
• Do a functional needs assessment to determine what skill sets would be needed, who will develop the portfolio, who will analyze the content, and what committee structures and organizational responsibilities are needed.
• It is necessary to draw clear lines between electronic portfolio projects and other related campus technology initiatives, such as the campus Web site or the migration to a new operational information system. Ultimately these projects may converge; that determination should be made after the portfolio project has had time to take shape in its own right.
• The institutional portfolio project should not be approached as a marginal task added to the responsibilities of existing faculty and staff.

Recommendation: As far as institutional portfolios are concerned, PVCC is in good shape in three of the four categories of evidence. We have been collecting data in the other measures of attainment. We are engaged in numerous good practices. And we can easily describe our college as one enabling student learning. Once we make some decisions on direct measures of learning, we should be able to compile an impressive institutional portfolio.

Bibliography/Resources:

Borden, Victor, and Timothy Thomas. “A Baker’s Dozen Lessons Learned About What it Takes to Develop and Sustain Electronic Portfolios for Program and Institutional Assessment.” The Urban Universities Portfolio Project. June 26, 2001. <http://www.imir.iupui.edu/portfolio/lessons.htm>
California State University, Sacramento. <http://www.csus.edu/portfolio/>
Cambridge, Barbara, Margaret Miller, and William Plater. Public Communication Through Institutional Portfolios: Quality Assurance at Urban Public Comprehensive Universities: A Proposal to the Pew Charitable Trusts. Indiana University Purdue University. November 17, 1997. <http://www.imir.iupui.edu/portfolio/documents/final.pdf>
Indiana University Purdue University Indianapolis (IUPUI). <http://www.iport.iupui.edu/>
Portland State University. <http://www.portfolio.pdx.edu/> Provides source code for portfolio software.
Seybert, Jeffrey A. “The Institutional Portfolio: A Performance-Based Model for Assessment of General Education.” Johnson County Community College.

d. Standardized Tests

Definition: 1. A test with specific tasks and procedures so that comparable measurements may be made by testers working in different geographical areas. 2. A test for which norms on a reference group, ordinarily drawn from many schools or communities, are provided.

Examples: Standardized tests being used for assessment include:

• Major Field Achievement Test (MFAT), in cognate areas or for General Education. Major field examinations are administered in a variety of disciplines. They often are given to students upon or near completion of their major field of study. These tests assess the ability of students to analyze and solve problems, understand relationships, and interpret material.
• ETS’ Academic Profile.
• ACT’s Collegiate Assessment of Academic Proficiency (CAAP).
• ACT’s College Outcome Measures Program (COMP), which measures knowledge and skills acquired by students in general education courses.
• See <http://www.mcli.dist.maricopa.edu/ae0/al_tools.html> and <http://ericae.net/testcol.htm> for listings of broad-based standardized tests and standardized tests within disciplines.

Some specialized standardized tests in:

Critical Thinking
  o Test of Critical Thinking Ability
  o Watson-Glaser Critical Thinking Appraisal
  o Ennis, Robert H. “An Annotated List of Critical Thinking Tests.” June 2002. <http://faculty.ed.uiuc.edu/rhennis/TestListM9Y01.html>

Critical Thinking/Problem Solving & Writing
  o NPEC Sourcebook of Assessment Information. <http://nces.ed.gov/npec/evaltests/>

Science/Math
  o “Field-Tested Learning Assessment Guide.” <http://www.flaguide.org/tools/tools.htm>
  o Math Forum @ Drexel, Library, Assessment/Testing. <http://mathforum.org/library/ed_topics/assessment>

There are also exams for licensure or certification in specific programs.

Costs: Costs vary depending on the selected standardized test. In some ways, using a pre-tested, validated, and normed test costs less than the time put into developing, administering, and assessing locally developed tests.

Advantages:

• Easy to administer to groups.
• Require minimal training for test administrators.
• Very little professional time is needed beyond faculty efforts to analyze examination results and develop appropriate curricular changes that address the findings.
• Tests are traditionally given to students in large numbers and do not require faculty involvement when exams are taken by students.
• Have documentation of reliability (consistency of results over time) and predictive validity (ability to forecast students' performance on a criterion, such as first-year GPA in graduate school).
• In most cases, nationally developed tests are devised by experts in the discipline.
• Institutional comparison of students is possible.
• Can assist departments in determining programmatic strengths and weaknesses when compared to other programs and national data.
• In most cases, standardized testing is useful in demonstrating external validity.
• Funding sources accept them as part of the documentation of program accountability.
• Provide the ability to baseline and benchmark.

Disadvantages:

• Potential of teaching to the test and thus narrowing the curriculum.
• Can promote an emphasis on lower-order thinking if the test questions aren’t scrutinized for higher-order thinking.
• May not capture incremental changes in learning over short periods of instructional time.
• Lack of a guarantee that the instrument will cover a program's learning objectives.
• Students may not be motivated to do well on exams, and incentives may be required.
• It may be difficult to schedule time for students to take exams.
• Choosing the best exam may be time-consuming.
• Standardized tests can be expensive to administer on a yearly basis.
• By their very nature, they tend to be generic and not well focused on specific skills or competencies.
• Language, literacy, and culture are not treated distinctly; that is, they do not tell us whether a learner has trouble with an item because he or she is unfamiliar with the cultural notion underlying the task.
• They do not reflect what has been taught and do not capture all the learning that has taken place, especially in the affective domain.
• They focus on pencil-and-paper tasks, and therefore do not provide opportunities for literacy learners to show what they can do in "real life."

Implementation Suggestions:

• The results of testing will have meaning to the learners and instructors only if the test content is related to the goals and content of the instruction and instructional time is sufficient.
• Clearly identify the purpose of the assessment (why the learners are being assessed) and what learning is to be assessed.
• Explore the variety of choices by first reading the commercial literature.
• Rank-order choices and review actual exams.
• Consider sample size, logistics, and incentives.
• Strive for administrator, faculty, and student buy-in.
• If an institution is using multiple measures, there can be a place in the assessment plan for using standardized tests.
• Ensure that adequate resources are available to carry out the assessments (e.g., enough materials, a comfortable environment, adequately trained administrators and scorers).
• Be aware of the limitations of the assessments selected.
• Remember that assessment is not an end in itself, but a means to an end. Share assessment results with learners and instructors, as well as with administrative staff and funders, and use the results as a basis for decisions.
• If the standardized instruments do not measure the cognitive areas articulated in mission and purposes statements (e.g., ethically responsible decision-making), then it’s best to develop relevant instruments locally.
• Evaluators suggest that passing rates on licensure or certification exams do not per se provide direct evidence of the level of achievement in the specific areas of student competencies or skills tested. Unless licensure or certification exam scores are supplemented by information about how well students did in each of the subject matter areas covered in the exam, results are not likely to be useful to academic units that intend to use the scores as an indicator of student learning.

Recommendation: Direct measures of student learning yield useful information about the value added to a student's learning by the general education program, the major, or a professional program, especially when the results from multiple measures are triangulated and are compared with (1) baseline data and/or (2) data from other measures taken over time.

Bibliography/Resources:

“Assessment Instruments and Methods Available to Assess Student Learning in the Major.” UW-Madison Assessment Manual. April 2000. <http://www.provost.wisc.edu/assessment/manual/>
“Direct Measures of Student Learning.” NAU Assessment. Office of the Vice Provost for Undergraduate Studies. January 28, 2002. <http://www4.nau.edu/assessment/>
“Tests (Standardized and Locally Developed).” Institutional Effectiveness. Assessment. University of Kentucky. July 19, 2002. <http://www.uky.edu/Assessment/mtest.shtml>
A Collection of Papers on Self-Study and Institutional Improvement: Proceedings of the Annual Meeting of the North Central Association. Chicago: The Higher Learning Commission. <http://hlcommission.org/index.php?option=com_content&task=view&id=140&Itemid=282>

e. Pre-Post Tests

Definition: Pre-post tests are administered at an agreed-upon “entry point” and “exit point.” These tests can be standardized or locally developed, and they can test for broad general education learning or for learning within a specific discipline or course. They might also be performance-based.

Examples: Any standardized or locally developed test which is given at an agreed-upon entry point or exit point.

Costs: If using a standardized test, costs will include purchasing, administering, and scoring the tests (see costs associated with standardized tests). If using locally developed tests, the costs will include development, validation, and scoring (see costs associated with locally developed items).

Advantages:

• Useful method for measuring the “value added” by a program of study. By contrast, the “after-only” design of documenting learning is a weak approach because positive change cannot necessarily be attributed to the effectiveness of a program.
• Pre-tests serve several purposes: knowledge of the current status of a group may provide guidance for future activities as well as the basis of comparison for post-test results; administering a test of entry behavior can determine whether assumed prerequisites have been achieved.

Disadvantages:

• Hard to discern if the positive change charted in a pre-post test is due to learning in the class or simply natural maturation.
• Due to students dropping out, the post-test results may be higher because those who remain are more successful or persistent.
• Problems with statistics: a group that scored very low on the pre-test can only go up, while a group that scored very high will show little improvement in the post-test scores (floor and ceiling effects).
• If using the same test for both the pre- and post-test, some argue that students will absorb knowledge just from taking the test and will attend more readily to the content.
• Concentrates on value added rather than outcomes assessment.
• Tendency to teach to the post-test.

Implementation Suggestions:

• Use alternative approaches to simple pre-post tests: (1) non-equivalent control group design; (2) time-series design; (3) causal modeling.
• Divide the student sample randomly into two subgroups. Give one subgroup test form A at the pre-test and give the other subgroup test form B at the pre-test; then switch forms for the post-test. This provides two independent measures of gain: (1) the post-test mean for the second subgroup (on test form A) minus the pre-test mean for the first subgroup (on test form A), and (2) the post-test mean for the first subgroup minus the pre-test mean for the second subgroup, both on test form B. An unbiased estimate of the average gain is the average (or weighted average, if the sample sizes are different) of these two differences. (A minimal sketch of this computation follows this list.)
• Pre-tests are useful in identifying the course of action necessary to achieve the desired outcomes, but it isn’t necessary to then match the post-test to the pre-test.
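
Here is a minimal sketch of the crossover gain computation described above, assuming each subgroup's scores are plain lists of numbers; the scores themselves are invented for illustration.

```python
def unbiased_gain(pre_g1_a, post_g2_a, pre_g2_b, post_g1_b):
    """Average gain from the form-A/form-B crossover design described above.

    Subgroup 1 takes form A at pre-test and form B at post-test;
    subgroup 2 takes form B at pre-test and form A at post-test.
    """
    mean = lambda scores: sum(scores) / len(scores)
    # Gain measured on form A: subgroup 2's post-test minus subgroup 1's pre-test.
    gain_a = mean(post_g2_a) - mean(pre_g1_a)
    # Gain measured on form B: subgroup 1's post-test minus subgroup 2's pre-test.
    gain_b = mean(post_g1_b) - mean(pre_g2_b)
    # Weight each estimate by the number of scores behind it.
    n_a = len(post_g2_a) + len(pre_g1_a)
    n_b = len(post_g1_b) + len(pre_g2_b)
    return (gain_a * n_a + gain_b * n_b) / (n_a + n_b)

# Invented scores for two subgroups of three students each.
print(unbiased_gain(pre_g1_a=[55, 60, 58], post_g2_a=[70, 72, 68],
                    pre_g2_b=[52, 57, 61], post_g1_b=[69, 74, 71]))  # ~13.5
```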

Recommendation: The main question to be asked is whether we want to assess value added or assess learning outcomes.

Bibliography/Resources:

Krumwiede, Robert. “Pre and Post Tests.” Online posting. 26 Aug. 1996. Assess: Assessment in Higher Education. <[email protected]>
Micceri, Ted. “Pre and Post Tests.” Online posting. 20 Aug. 1996. Assess: Assessment in Higher Education. <[email protected]>
“Pre/Post Assessment.” Institutional Effectiveness. Assessment. University of Kentucky. July 19, 2002. <http://www.uky.edu/Assessment/mpre.shtml>
Whitney, Douglas. “Pre and Post Tests.” Online posting. 20 Aug. 1996. Assess: Assessment in Higher Education. <[email protected]>

f. Locally Developed Items

Definition: A locally developed item is any instrument designed and implemented by the faculty or assessing institution at a specific time or as a single component of a specific course.

Examples: Locally developed items can include exams, simulations, performance appraisals, oral exams, papers, or projects.

Costs:

• Significant time investment for development, leadership, and coordination
• Additional time for scoring and grading
• Clerical support
• Storage
• Time to review results and make improvement decisions
• Training

Advantages:

• Content and style can be customized to fit well with goals and outcomes.
• Students are familiar with these types of assessments.
• Student performance is assessed in a uniform environment.
• Since the assignments/performances/tests are the same, students can be compared easily.
• Provides longitudinal data for the institution (student performance can be compared from one semester to another).
• Performance criteria can be established relative to the curriculum.
• The development process can lead to clarification of outcomes as well as the process and content of student learning.
• Relatively rapid feedback.
• Faculty control over interpretation and use of results.
• Results should suggest program improvements.
• Depending on the choice of instrument, may provide depth and breadth of student development.
• Flexibility.
• Results can be meaningful on many levels.
• Performances, simulations, etc. can measure application, generalization, and higher-order thinking skills.

Disadvantages:

• Cost to develop and maintain (time and effort).
• Cannot be used for benchmarking or longitudinal data for each student (snapshot).
• Demands expertise in measurement to assure reliability and validity.
• May not provide external validity.
• Data security (FERPA).
• Sample of behavior or performance may not be typical.

Implementation Suggestions:

• Work with other departments/programs/institutions to reduce (share) cost and provide an element of externality.
• Utilize on-campus measurement experts during development for item validation.
• Contract faculty “consultants” for development and grading.
• Incorporate outside experts, community leaders, etc. into development and grading.
• Embed items into course requirements to maximize relevance and minimize disruption. This also promotes student involvement/interest.
• Use triangulation (a multi-method approach) to validate results.
• Develop specific, measurable criteria, especially for performances.
• Pilot-test the instrument for training and inter-rater reliability.
• Use multiple measures to cross-validate.
• Establish an open, non-threatening evaluation atmosphere.

Recommendation: Locally developed instruments seem to fit well with our chosen learning outcomes, especially due to the course mapping. Items should be embedded into the curriculum without much intrusion. Communication outcomes could be measured via speeches, papers, and performances. Information Literacy fits well with simulations and projects. Problem Solving can be measured via simulation or exam. Technology fits well with projects and simulations. These measures (and Classroom Based Assessment instruments) yield the most relevant information and can be easily implemented with only a small intrusion. Thus, they should be strongly considered as a piece of our assessment plan.

Bibliography/Resources:

Banta, T.W. “Assessment 101.” Notes from presentation at the annual meeting of the Higher Learning Commission, March 2001.
Banta, Trudy W., and Palomba, Catherine A. Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. Jossey-Bass, 1999.
Chandler-Gilbert Community College. Program and classroom level rubrics and tools.
Maki, Peggy. Using Multiple Assessment Methods to Explore Student Learning and Development Inside and Outside of the Classroom.
Mesa Community College. The Mesa Community College Program to Assess Student Learning.
Nichols, James O. Assessment Case Studies: Common Issues in Implementation with Various Campus Approaches to Resolution. Agathon Press, 1995.
South Mountain Community College. Critical Thinking Assessment, Spring 2001.
Wiggins, Grant. “The Case for Authentic Assessment.” ERIC Digest, December 1990.
A Collection of Papers on Self-Study and Institutional Improvement: Proceedings of the Annual Meeting of the North Central Association. Chicago: The Higher Learning Commission.

g. Capstone Experiences/Courses

Definition: A capstone is an entire course, portion of a course, or field experience (internship, work placement, etc.) expected at or near the end of a student’s academic career. These experiences usually require students to demonstrate all or a portion of the skills they have acquired as a part of their matriculation in and through a given program or curriculum. They may be part of a formal course, program, or graduation requirement.

Costs:

• Significant time investment for development and implementation
• Instructor compensation
• Additional time for scoring and grading
• Data collection and storage
• Time to review results and make improvement decisions
• Any other costs associated with specific instruments embedded in the course

Advantages:

• Multiple measures can be administered to an attentive and motivated group of students.
• Flexible.
• Benefits to the student: a summative experience, preparation for the future, a learning community.
• Consistency: all students get the same assessments at the same time.
• All the other benefits associated with the chosen instruments.

Disadvantages:

• Cost to develop and maintain (time and effort).
• If required, it can be a disincentive to graduation.
• If optional, the student population may not be representative.
• Cannot be used for benchmarking or longitudinal data for each student beyond a single semester.
• Mostly a summative assessment for the student, not formative.
• All other disadvantages associated with the chosen instruments.

Implementation Suggestions:

• Make earlier course work inquiry-based to prepare and stimulate the students for the course.

• The nature of the capstone experience will vary, but it should be of equal value regardless of future discipline.
• Encourage and require collaboration.
• The experience should enable the student to bring the acquisition of knowledge and skills to a symbolic conclusion.
• Faculty and students work together in shared or mutually reinforcing projects.
• Select multiple appropriate, reliable, and valid instruments for assessment.
• Develop specific, measurable criteria, especially for performances.
• Survey students for course/experience improvement.

Recommendation: Capstone courses/experiences seem most appropriate for four-year institutions with a fairly well-defined or constant student population. At PVCC, a capstone course may be useful for some specific programs, but it does not seem feasible for our general education population due to the swirl and graduation disincentive.

Bibliography/Resources:

Mesa Technical College. 2000-2001 Student Learning Assessment Model. Handouts from 2001 AAHE Assessment Forum.
Seybert, Jeffrey. “Assessing Student Learning.” Assessment Update, Volume 6, Number 4, 1994.
A Collection of Papers on Self-Study and Institutional Improvement: Proceedings of the Annual Meeting of the North Central Association. Chicago: The Higher Learning Commission.

h. Classroom Based Assessment

Definition: Classroom assessment is a simple method faculty can use to collect feedback, early and often, on how well students are learning what they are being taught. The purpose of classroom assessment is to provide faculty and students with the information and insights needed to improve teaching effectiveness and learning quality. College instructors use feedback gleaned through classroom assessment to inform adjustments in their teaching. Faculty also share feedback with students, using it to help them improve their learning strategies and study habits in order to become more independent, successful learners. (Angelo 1991)

The Seven Basic Assumptions of Classroom Assessment (Angelo and Cross 1993):

1. The quality of learning is directly, although not exclusively, related to the quality of teaching. Therefore, one of the most promising ways to improve learning is to improve teaching.

2. To improve their effectiveness, teachers need first to make their goals and objectives explicit and then to get specific, comprehensible feedback on the extent to which they are achieving those goals and objectives.

3. To improve their learning, students need to receive appropriate and focused feedback early and often; they also need to learn how to assess their own learning.

4. The type of assessment most likely to improve teaching and learning is that conducted by faculty to answer questions they themselves have formulated in response to issues or problems in their own teaching.

5. Systematic inquiry and intellectual challenge are powerful sources of motivation, growth, and renewal for college teachers, and Classroom Assessment can provide such challenge.

6. Classroom Assessment does not require specialized training; it can be carried out by dedicated teachers from all disciplines.

7. By collaborating with colleagues and actively involving students in Classroom Assessment efforts, faculty (and students) enhance learning and personal satisfaction.

Primary Trait Analysis is a method of explicitly stating the criteria for evaluation of a performance. It is assignment specific; for each performance, the assessor builds a unique set of criteria. PTA identifies the factors or “traits” that will count for the scoring (e.g. thesis, materials and methods, use of color, eye contact with client), and then builds a scale for scoring the student’s performance within each trait. (Denton 2002)
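
To make the mechanics concrete, a primary trait rubric can be modeled as a set of traits, each carrying its own scoring scale, with a performance scored trait by trait. This is only an illustrative sketch; the traits, descriptors, and point values below are hypothetical, not a prescribed rubric.

```python
# A minimal primary trait analysis (PTA) sketch: each trait has its own
# scale of scored descriptors, and a performance is scored trait by trait.
speech_rubric = {
    "thesis": {3: "clear, arguable thesis", 2: "thesis present but vague",
               1: "no identifiable thesis"},
    "eye contact": {3: "sustained with the whole audience", 2: "intermittent",
                    1: "reads from notes throughout"},
}

def score_performance(rubric, scores):
    """Total a student's performance, requiring a score for every trait."""
    assert set(scores) == set(rubric), "score every trait in the rubric"
    return sum(scores[trait] for trait in rubric)

# Hypothetical scoring of one student speech: 3 on thesis, 2 on eye contact.
print(score_performance(speech_rubric, {"thesis": 3, "eye contact": 2}))  # 5
```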


Costs:
• Training
• Primary trait (rubric) design time
• Time for scoring
• Data compilation and storage
• Time to review results and make improvement decisions (at the program and institution level)
• Any other costs associated with specific instruments embedded in the course

Advantages:
• Can use existing assignments and course requirements.
• Meaningful to the individual faculty and students.
• Provides immediate feedback.
• Formative assessment helps the students and faculty improve.
• Individual instructors can save time in the grading process by having clear standards and criteria.
• Consistency in the grading process.
• Students participate in their own learning when they know the standards.
• The department's and institution's role in assessment is simply to support and help respond to problems.
• Fits well with a learning-centered philosophy.
• Development of rubrics (PTAs) helps clarify course, department, and institutional objectives.

Disadvantages:
• Up-front time: design of PTAs, rubrics, criteria, and standards.
• Faculty must be willing to share their grading process, assignments, and syllabi for external validity.
• Training required.
• If not done well, we have nothing but grades, not true assessment information.
• Poor performance may be excused or explained instead of improved.
• Student data are from semester to semester; there are no longitudinal data about student improvement over several years.
• Reliability and validity require collaboration between faculty.
• Classroom assessment by itself is not institutional assessment without additional structure.


Implementation Suggestions:
• To implement PTA, go slowly. The process must be gradual because trust is important; start at the class level and work up to the department level.
• Hold conceptual and practice workshops at the start of the year.
• Consistently remind faculty of the rationale and significance.
• Distribute general instructions with specific examples.
• Create a standard form for each instructor to report student performance in targeted outcomes (see the sketch after this list).
• Align course objectives with general education outcomes.
• Instructors can make syllabi assignment-focused instead of content-focused: "In order to complete this assignment, here are the things you will need to learn."
• In one department meeting a year, share and discuss rubrics and the grading process.
• Share best practices.
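
As a small illustration of what the standard reporting form mentioned above might enable, here is a sketch in Python of rolling up section-level reports by outcome at the department level. The column names and sample rows are hypothetical, invented only for this sketch.

    # Each row is one section's report of one targeted outcome, as it
    # might appear on a standard reporting form.  Column names and
    # numbers are hypothetical.

    import csv
    from collections import defaultdict
    from io import StringIO

    SAMPLE_REPORTS = """\
    course,section,outcome,students_assessed,students_meeting_standard
    ENG101,001,critical thinking,24,19
    ENG101,002,critical thinking,22,15
    ENG101,001,written communication,24,21
    """

    # Roll the section-level reports up by outcome.
    totals = defaultdict(lambda: [0, 0])          # outcome -> [assessed, meeting]
    for row in csv.DictReader(StringIO(SAMPLE_REPORTS)):
        totals[row["outcome"]][0] += int(row["students_assessed"])
        totals[row["outcome"]][1] += int(row["students_meeting_standard"])

    for outcome, (assessed, meeting) in sorted(totals.items()):
        print("%s: %d of %d met the standard (%.0f%%)"
              % (outcome, meeting, assessed, 100.0 * meeting / assessed))
    # critical thinking: 34 of 46 met the standard (74%)
    # written communication: 21 of 24 met the standard (88%)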

Recommendation: From a faculty member's perspective, classroom-based assessment makes sense: it yields immediate, meaningful data for personal improvement. We already think we are doing assessment when we grade, so why not use the grading process for assessment? Classroom-based assessment is also a close fit with the objective of being a learning-centered college, and it should be considered a strong candidate for the basis of our general education plan.

Bibliography/Resources:
Angelo, T.A. & Cross, K.P., Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.), Jossey-Bass, 1993
Anderson, V., Bardes, B., Denton, J., & Walvoord, B., Challenges in Classroom Based Assessment: Reliability, Validity, and Closing the Loop, IUPUI Assessment Institute, November 2000
Denton, Janice, Performance-Based Assessment: Papers, Projects, and Portfolios, handouts from the February 2002 MCLI Dialogue Day
Stroede, R. & Weaner, J., How 42 Faculty Assess 52 General Education Outcomes: A Course Embedded Model, Defiance College, presentation to the HLC, March 2002
Walvoord, B.E. & Anderson, V.J., Effective Grading: A Tool for Learning and Assessment, Jossey-Bass, 1998


A Collection of Papers on Self-Study and Institutional Improvement: Proceedings of the Annual Meeting of the North Central Association, The Higher Learning Commission, Chicago


i. Written Surveys
Definition: A written survey collects written responses from a group to a series of prompts asking the individuals to share their perceptions about the study target, e.g., their own or others' skills, attitudes, or behavior, or the qualities or attributes of a program or course.
Costs:

• Survey design
• Data compilation and processing
• Storage
• Time to review results and make improvement decisions (at the program and institution level)
• Clerical support
• Supplies (paper, scantrons, etc.)

Advantages:
• Typically yield the perspective that students, alumni, the public, etc., have of the institution. Results may lead to changes especially beneficial to relationships with these groups.
• Can cover a broad range of attributes within a brief period of time.
• Results tend to be more easily understood by non-experts (the public, external agencies, those not involved in the assessment).
• Can cover areas of development that might be difficult to assess directly.
• Can provide access to individuals who would otherwise be difficult to include in assessment efforts (alumni, parents, employers, etc.).
• Third-party surveys can provide unique stakeholder input: how is a course/program/institution serving their purposes?
• Third-party surveys offer different perspectives.
• Third-party surveys increase both internal validity (through triangulation) and external validity.

Disadvantages:

• Results are highly dependent on the wording of the items, the relevancy of the survey or questionnaire, and the organization of the instrument.
• Good surveys can be difficult to construct.
• Biased sample: feedback comes only from those who choose to respond.
• Mailed surveys get very low response rates.
• Careful organization is required to process data entry and analysis for large samples.
• Commercially prepared surveys are not always relevant to an institution.
• Forced-choice responses do not give respondents an opportunity to express their true opinions.
• Results reflect perceptions, which may not agree with facts or outcomes.
• Logistical details for third-party surveys can be difficult and/or costly.
• Confidentiality may be an issue if information is requested about specific individuals.

Implementation Suggestions:

• Use carefully constructed instruments and have them reviewed by survey experts.
• Include open-ended, respondent-worded items along with forced-choice items.
• If truly random sampling is not possible, use the maximum sample size possible.
• Follow up with non-respondents to increase the sample size further (see the tabulation sketch after this list).
• Add locally developed items to commercially prepared surveys to increase relevance.
• Include externally referenced items in any locally developed surveys to increase validity.
• Pilot instruments and request formative feedback from respondents on the clarity, sensitivity, and format of the instrument.
• Cross-validate results through other sources of data.
• Give very careful, explicit directions.
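
As a small illustration of the data-compilation step, here is a sketch in Python that computes a response rate and per-item means for forced-choice items. The items, the 1-5 agreement scale, and all counts are hypothetical examples for this sketch only.

    # Sketch of tabulating forced-choice survey results.  The items,
    # the 1-5 scale, and all numbers below are hypothetical.

    SURVEYS_SENT = 400
    SURVEYS_RETURNED = 132

    # item -> ratings from returned surveys
    # (1 = strongly disagree ... 5 = strongly agree); only a handful of
    # responses are shown here to keep the sketch short.
    RESPONSES = {
        "The program prepared me for further study": [5, 4, 4, 3, 5, 2],
        "I would recommend this program to others": [4, 4, 5, 5, 3, 4],
    }

    # A low response rate is a warning sign for the sample bias noted above.
    print("response rate: %.0f%%" % (100.0 * SURVEYS_RETURNED / SURVEYS_SENT))

    for item, ratings in RESPONSES.items():
        mean = sum(ratings) / len(ratings)
        print("%s: mean %.2f (n=%d)" % (item, mean, len(ratings)))

Open-ended, respondent-worded items still have to be read and coded by hand; a tabulation like this covers only the forced-choice portion of the instrument.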

Recommendation: Surveys are a relatively inexpensive way to collect data that may otherwise be inaccessible, and they may be the best way to measure attitudinal outcomes. These instruments will also yield some unexpected results: respondents may give feedback that leads to improvement even though it was not the original intention of the instrument. Thus, surveys should be considered a practical way to get at some of our general education outcomes (e.g., leadership, life-long learning).

Bibliography/Resources:
Sudman, Seymour & Bradburn, Norman, Asking Questions: A Practical Guide to Questionnaire Design, Jossey-Bass, 1982
Suskie, Linda, Questionnaire Survey Research: What Works?, Association for Institutional Research, Resources for Institutional Research, Number 6


A Collection of Papers on Self-Study and Institutional Improvement: Proceedings of the Annual Meeting of the North Central Association, The Higher Learning Commission, Chicago


j. Other Indirect Methods
Definition: An indirect method is any instrument in which a student does not directly demonstrate his or her individual learning and abilities. A focus group is a group discussion conducted by a moderator with typically 7-12 individuals who share certain characteristics; careful and systematic analysis of the discussions provides information that can be used for improvement. Exit interviews ask individuals to share their perceptions of their own attitudes and/or behaviors. Archival records are biographical, academic, or other data available from the college or other agencies.
Examples: Indirect methods (other than written surveys) include focus groups, archival records, and exit interviews.
Costs:

• Moderator compensation and training
• Data compilation and processing
• Storage
• Time to review results and make improvement decisions
• Clerical support
• Supplies (paper, scantrons, etc.)
• Time and space for interviews and/or focus groups
• Programming/technical support for archive searches and reports

Advantages:
Focus Groups
• Useful for gathering ideas, details, and new insights, and for improving question design.
• Helpful in the design of surveys.
• Can be used to get more in-depth information on issues identified by a survey.
• Can clarify issues not completely understood from another instrument.
• Unlike a survey, a moderator can ask follow-up questions when necessary.
• Can be used in conjunction with a quantitative study to confirm or broaden one's understanding of an issue.


Exit Interviews
• Provide immediate feedback to the interviewer.
• Frequently yield benefits beyond data collection through interaction with students.
• Can include a greater variety of items than is usually possible on questionnaires or surveys.
• If done by a third party, many of the same externality and validity advantages of third-party surveys apply.

Archival Records
• Tend to be accessible.
• Built upon data collection efforts that have already occurred.
• Can be cost-efficient.
• Nonintrusive measurement.
• Useful for longitudinal studies.
• A good way to establish baseline data.

Disadvantages:
Focus Groups
• Not suitable for generalizations; the sample is biased.
• Moderators require training.
• Differences in the responses between groups can be troublesome.
• Groups can be difficult to assemble.
• The moderator has less control than in individual interviews.
• The data are complex and thus difficult to analyze.

Exit Interviews
• It may be difficult to contact students.
• May be intimidating to interviewees, thus biasing results.
• As with surveys, results tend to be highly dependent on the wording of the items.
• Time-consuming for large populations.
• If done by a third party, logistics can be difficult to arrange.
• Confidentiality issues.

Archival Records
• It may be hard to determine which data are relevant and available.
• Datasets may need to be combined and transferred from multiple sources.
• Confidentiality issues.
• It is difficult to identify the cause of a problem.
• The availability of data may discourage the development of other, more appropriate measures.
• May encourage attempts to "find ways to use the data" rather than assessment related to specific goals and objectives.

Implementation Suggestions:

• Offer incentives to focus group participants.
• Anticipate low turnout and therefore over-recruit.
• Train moderators to use open-ended questions, pauses, and probes.
• Train moderators to find opportunities to move into new topic areas.
• Plan exit interviews carefully, paying attention to logistics and question design (as with surveys).
• Train interviewers and moderators to put students at ease.
• Interview purposeful samples of students when it is not possible to include all of them.
• Consider telephone interviews as well as face-to-face interviews, and encourage dialogue.
• Have a time limit for focus groups and interviews.
• Give very careful, explicit directions.
• Obtain informed consent when necessary.
• Be wary of FERPA regulations when using archival records (see the sketch after this list).
• Only use archival records that are relevant to specific goals and objectives of learning and development.
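
The sketch below, in Python, illustrates one way the FERPA and confidentiality concerns above might be handled when combining archival datasets: merge on the student ID, then replace the ID with a one-way hash before analysis. The record layouts and values are hypothetical, and a real project would need its own review of what counts as adequate de-identification.

    import hashlib

    # Hypothetical archival extracts from two campus sources,
    # keyed by student ID.
    registrar = {
        "S001": {"cohort": 2001, "credits_earned": 42},
        "S002": {"cohort": 2001, "credits_earned": 15},
    }
    placement = {
        "S001": {"placement_level": "college"},
        "S002": {"placement_level": "developmental"},
    }

    def anonymize(student_id):
        """Replace the real ID with a stable one-way hash so merged records
        can be tracked longitudinally without carrying the identity along.
        (Hashing alone is not full de-identification; small cells can still
        identify students, so treat this as a first step only.)"""
        return hashlib.sha256(student_id.encode()).hexdigest()[:10]

    # Merge only students present in both sources, then drop the real ID.
    merged = [
        dict(id=anonymize(sid), **registrar[sid], **placement[sid])
        for sid in sorted(registrar.keys() & placement.keys())
    ]

    for record in merged:
        print(record)
    # {'id': '...', 'cohort': 2001, 'credits_earned': 42, 'placement_level': 'college'}
    # {'id': '...', 'cohort': 2001, 'credits_earned': 15, 'placement_level': 'developmental'}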

Recommendation: Focus groups should be used to increase the validity and clarity of surveys. Exit interviews may be a good idea, but exactly when does a student "exit"? Archival records should be used to examine general trends; these records are already in place, and the difficulty will be to harness the relevant information.

Bibliography/Resources:
Dobson, Ann, Conducting Effective Interviews: How to Find Out What You Need to Know and Achieve the Right Results, Trans-Atlantic Publishers, 1996
Morgan, D., Focus Groups as Qualitative Research, University Paper Series on Quantitative Applications in the Social Sciences, Sage
Krueger, R., Developing Questions for Focus Groups, University Paper Series on Quantitative Applications in the Social Sciences, Sage
Stewart, D. & Shamdasani, P., Focus Groups: Theory and Practice, University Paper Series on Quantitative Applications in the Social Sciences, Sage


A Collection of Papers on Self-Study and Institutional Improvement: Proceedings of the Annual Meeting of the North Central Association, The Higher Learning Commission, Chicago