Creating An Architecture of Assessment: using benchmarks to measure library instruction progress and success

Candice Benjes-Small and Eric Ackermann, Radford University



Page 1

Creating An Architecture of Assessment: using benchmarks to measure library instruction progress and success

Candice Benjes-Small, Eric Ackermann

Radford University

Page 2

“So, Candice, how many library sessions have we taught this year?”

Page 3

[Chart: number of BI sessions per year (2001, 2003, 2005, 2006), y-axis 0 to 400]

Page 4

Look at all these instruction librarians!

Page 5

But…

• Curricular changes
• Librarian burnout
• Students reported BI overload

Page 6

On the other hand

• University administration wants to see progress

Page 7

Looking for alternatives

• Number of sessions plateaued
• Scoured the literature
• Attended conferences
• Networked with colleagues

Page 8

Our environment

• Public university
• 9,000+ students
• Courses not sequenced
• Instruction built on one-shots

Page 9

Macro look at program

• Focus on us, not students
• Search for improvements over time
• Student evaluations as basis

Page 10

A little bit about our evaluation form

Page 11

Goals

• Provide data to satisfy three constituents:
  – Instruction librarians: immediate feedback
  – Instruction team leader: annual evaluations
  – Library Admin: justify instruction program

Page 12

Background

• Began in 2005
• Iterative process

Page 13

Development

• 4-point Likert scale
• Originally had a comment box at end
• Major concern: linking comments to scale responses

Page 14

Solution: Linked score and comment responses

• Q1. I learned something useful from this workshop.

• Q2. I think this librarian was a good teacher.
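As an illustration of this linked design (a minimal sketch in Python, not the authors' actual form-processing code; all names are hypothetical), each Likert score carries its own comment field, so a comment can never be orphaned from the question it explains:

from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkedResponse:
    question: str            # e.g. "Q2. I think this librarian was a good teacher."
    score: int               # 4-point Likert scale, 1 (disagree) to 4 (agree)
    comment: Optional[str]   # free-text comment tied to this specific question

# One student's evaluation: each comment stays attached to its score.
evaluation = [
    LinkedResponse("Q1", 4, "The database demo was really useful."),
    LinkedResponse("Q2", 2, "Went too fast through the search examples."),
]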

Page 15

Inspiration for benchmarks

• University of Virginia's library system used metrics to determine success
• Targets outlined
• We would do one department rather than the entire library

To learn more about UVA’s efforts, visit http://www.lib.virginia.edu/bsc/

Page 16

Benchmark baby steps

• Look at just one small part of the instruction program
• Begin with a single benchmark
• Identify one area to assess
• Decided to focus on one particular class

Page 17

Introduction to Psychology

• Taught each fall and spring, beginning in 2006
• 14 sections of 60+ students
• Shared script and PPT
• Everyone teaches over 2 days

To see our shared PPT, visit http://lib.radford.edu/instruction/intropsych.ppt

Page 18

Developing benchmarks

• Selected a comment-based metric for the Instruction Team
• Chose a class of comments: “What did you dislike about the teaching?” (Question #2)

Page 19

Current benchmarks

• Partial success: between 5% and 10% of total comments for Question 2 are negative

• Total success: fewer than 5% of total comments for Question 2 are negative (see the sketch below)
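A minimal sketch of this benchmark test in Python; the function name, cutoffs, and example counts are illustrative, not from the presentation:

def benchmark_status(negative: int, total: int) -> str:
    """Classify Question 2 comments against the success benchmarks."""
    pct = 100 * negative / total
    if pct < 5:
        return "total success"
    if pct < 10:
        return "partial success"
    return "benchmark not met"

# Example: 4 negative comments out of 60 total is ~6.7%, i.e. partial success.
print(benchmark_status(4, 60))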

Page 20

How did we do?

Page 21

Results

Page 22

Success?

• Reached our desired benchmark for partial success, but never quite went below 5%
• Tweaking the script again
• Continuous improvement

Page 23

Scaling for your program

• Adjust the benchmark levels
• Only look at score responses (quantitative) instead of comments (qualitative)
• Adjust the number of benchmarks used

Page 24

Sharing with administrators

• Team annual reports
• Stress the evidence-based nature
• Use percentages, not a 4-point scale (see the sketch below)
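For example, raw 4-point scores can be rolled up into a single percentage for administrator-facing reports. A hedged sketch, assuming "positive" means a score of 3 or 4 (the slides do not define the cutoff):

def percent_positive(scores):
    """Share of responses scoring 3 or 4 on the 4-point scale."""
    positive = sum(1 for s in scores if s >= 3)
    return 100 * positive / len(scores)

# Hypothetical data: 5 of 6 responses are positive, so this prints "83% ...".
print(f"{percent_positive([4, 3, 3, 2, 4, 4]):.0f}% rated the session positively")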

Page 25

Disadvantages

• Time intensive
• Follow-through required
• Evaluation forms not easy to change

Page 26

More disadvantages

• Labor intensive to analyze comments
• Results may reveal your failures

Page 27

Advantages

• Flexibility to measure what you want to know
• Provides a structured goal
• Evidence-based results are more convincing

Page 28

More advantages

• Continuous evaluation results over time
• Data-driven decisions about the instruction program
• Doable

Page 29

Contact

Candice Benjes-Small, [email protected]

Eric Ackermann, [email protected]