
In at the Deep End: Quality Assuring Summative Assessment

Glenis Lambert

Learning and Teaching Enhancement Unit

Online Summative Assessment: A Game of Snakes and Ladders?

• Helping students from different cultures and of all abilities to demonstrate knowledge and skills on an equitable level

• The creation of assessment data for the purposes of analysis and curriculum design/development

• Reinforcement of a “smart teaching” agenda
• Increased administrative efficiency
• Decreased marking load

Online Assessment: A Game of Snakes and Ladders?

• The immediacy and clarity of feedback
• The linking of feedback to other online resources to extend learning opportunities
• The “Martini Factor”, i.e. “any time, any place, anywhere” potentialities for this type of assessment
• The possibility for easy question analysis to ensure fit-for-purpose assessments (see the sketch below)
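
To make “question analysis” concrete: for MCQ papers it usually means computing a facility value (the proportion of candidates answering an item correctly) and a discrimination index (how well an item separates stronger from weaker candidates). The sketch below is purely illustrative rather than anything used in the pilot; it assumes a hypothetical CSV export, item_scores.csv, with one row per student and one 0/1 column per question.

import csv
from statistics import mean

def item_analysis(path):
    # Hypothetical export: one row per student, one column per item,
    # each cell 1 (correct) or 0 (incorrect).
    with open(path, newline="") as f:
        rows = [{k: int(v) for k, v in row.items()} for row in csv.DictReader(f)]
    items = list(rows[0].keys())
    totals = [sum(r.values()) for r in rows]
    # Rank candidates by total score and take the top and bottom thirds.
    order = sorted(range(len(rows)), key=lambda i: totals[i], reverse=True)
    third = max(1, len(rows) // 3)
    upper, lower = order[:third], order[-third:]
    stats = {}
    for item in items:
        facility = mean(r[item] for r in rows)                    # proportion correct
        discrimination = (mean(rows[i][item] for i in upper)
                          - mean(rows[i][item] for i in lower))   # upper group minus lower group
        stats[item] = (round(facility, 2), round(discrimination, 2))
    return stats

if __name__ == "__main__":
    for item, (facility, discrimination) in item_analysis("item_scores.csv").items():
        print(f"{item}: facility={facility}, discrimination={discrimination}")

Items with facility close to 0 or 1, or with low or negative discrimination, are the ones to review before a question bank is reused.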

Online Assessment: A Game of Snakes and Ladders?

• The time gained by automated marking procedures is sometimes less than the time taken to construct effective assessments

• Not all educators and students are comfortable using technology.

• Writing questions which test higher-level concepts and skills requires skill on the part of the educator.

• Doubts about the security of online systems and the possibility of cheating and plagiarism may lead to a reluctance to engage in computer-aided assessment at various levels

• Short-answer questions are sometimes seen as trivial
• Technology goes wrong

Why We Had to Jump in at the Deep End

Why?
• Existing practice in using CAA for summative assessments was not taking advantage of new technology and was struggling to cope with increased student numbers

Why Questionmark/Perception?
• Questionmark/Perception seemed to offer a higher level of security that would work with our existing systems set-up and resources

The Initial Strategy Was to
1. consult all stakeholders and make sure everyone was kept informed
2. make a slow start: get it right before the system was allowed to grow
3. build procedures for summative assessment and get these right before transferring relevant QA procedures to formative assessments
4. consult the exams office at all stages
5. communicate with operations and computing services
6. provide clear instructions and policy for staff
7. start in such a way as to allow further development
8. promote confidence in the system and the sustainability of the resource

2004/2005 Pilot
• Over the academic year, 8 end-of-course exams in 18 sessions were successfully delivered, totalling 660 individual examinations.
• 5 of these were organised and delivered by the departments, 3 by the examinations office.
• 120 students responded to an “Exit Questionnaire” on their experience after the summer examinations.
• Agreement was reached with the examinations office…

(To date, a further 8 examinations have been held and 5 end-of-year examinations are planned for 2006.)

Local Context

• Control of the testing environment

• Good relationships with other support departments

• Exams office involvement = enhanced confidence

Challenges
• Confidence in CAA, especially the security of systems
• Size of computer labs: small labs = increased invigilation
• Student computer images were being upgraded
• Inconsistency of resources across the numerous computer labs
• Old assessments were organised and delivered centrally, making it difficult to move to a more devolved pattern of responsibilities
• Devolving Central Practice to Academics:
– Old exams were created, presented and invigilated by computing services staff
– Staff were required to create, test, deliver and invigilate the new examinations

Some Solutions

Exams office agreements (in addition to the procedures written with reference to BS 7988:2002):
• One session was held where possible; if two or more sittings were held, there was to be no late entry to the second sitting
• A tick sheet for MCQ questions was devised to allow for data checking and “second marking” until confidence in the system was established (see the sketch after this list)
• The Examinations Office was only actively involved in end-of-year exams; end-of-course assessments were run by the departments concerned
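
The data checking behind the tick sheet was done by hand, but the principle is simply a reconciliation of two mark lists. A minimal sketch, assuming two hypothetical CSV files, system_marks.csv exported from the assessment system and ticksheet_marks.csv transcribed from the tick sheet, each with "candidate" and "mark" columns:

import csv

def load_marks(path):
    # Hypothetical CSV layout: columns "candidate" and "mark".
    with open(path, newline="") as f:
        return {row["candidate"]: int(row["mark"]) for row in csv.DictReader(f)}

def reconcile(system_csv, ticksheet_csv):
    # List every candidate whose automated mark and tick-sheet mark disagree,
    # including candidates present in only one of the two files.
    system = load_marks(system_csv)
    manual = load_marks(ticksheet_csv)
    return [(c, system.get(c), manual.get(c))
            for c in sorted(set(system) | set(manual))
            if system.get(c) != manual.get(c)]

if __name__ == "__main__":
    for candidate, automated, ticksheet in reconcile("system_marks.csv", "ticksheet_marks.csv"):
        print(f"{candidate}: system={automated}, tick sheet={ticksheet}")

Any discrepancy is then investigated before marks are released; once the two sources agree routinely, the “second marking” step can be retired.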

Documentation
• Draft policy produced, combining University policies, good practice gleaned from other institutions, QAA precepts and BS 7988:2002 (progressing through committees)
• Instructions to those in charge of assessments
• Instructions for invigilators
• Schedule/checklist for individual assessments for academic and computing staff
• Pro-forma instructions for students

What Happened
• Early results were encouraging: the first 11 sessions run by departments were technically problem-free
• Exams under the control of the Examinations Office were less straightforward:
– Although invigilators had received a briefing, starters had not, and invigilators left starters to it, resulting in conflicting information.
– Use of the examination number as the student identifier caused problems for students who had used Questionmark before (anonymous marking was not relevant to the computer).
– The number of people involved from the Examinations Office made the exams very costly.

Lessons Learned
• Confidence in support services and clear instructions help develop staff practice
• Confidence in policy and procedures allows the examinations office to be “hands-off”
• All stakeholders need to be consulted if policies that assure the quality of online assessments are to be implemented
• Buy-in to these policies is essential
• Check, get someone else to check, and then check again!
• Always have clear fall-back positions
• Preserve control of the testing environment, even if this means investment in infrastructure

Beware!

Doing this will raise questions about the suitability of online exams at HE level

Human error will occur

Internal References
• Canterbury Christ Church University Assessment Handbook. http://www.canterbury.ac.uk/support/learning-teaching-enhancement-unit/assessment/assessment-handbook/
• Staff Development in CAA. http://www.canterbury.ac.uk/support/learning-teaching-enhancement-unit/e-learning/staff-development/caa/index.asp
• QuestionMark/Perception for Summative Assessment, Case Study. http://www.canterbury.ac.uk/support/learning-teaching-enhancement-unit/publications/case-studies.asp

References
• British Standard BS 7988:2002, Code of practice for the use of information technology (IT) in the delivery of assessments
• QAA Code of Practice, Precepts B7 and B8, Assessment of Students (2004). http://www.qaa.ac.uk/academicinfrastructure/codeOfPractice/section2/default.asp#assessment Last accessed 25 March 2005
• McKenna, C. and Bull, J. (2001) Blueprint for Computer-assisted Assessment. CAA Centre
• JISC (2002) Implementing online assessment in an emerging MLE: a generic guidance document with practical examples. http://www.jisc.ac.uk/index.cfm?name=project_integrating_caa Last accessed March 2005
• Race, P. and Brown, S. (1998) The Lecturer’s Toolkit: a practical guide to teaching, learning and assessment. London: Kogan Page
• University of Dundee (February 2005) Computer Aided Assessment Policy and Procedures