
Performance-based assessment in undergraduate medical education


Jeremy Morton, Allan Cumming, Helen Cameron
The University of Edinburgh Medical School, Edinburgh, UK

Postgraduate medical education in the UK is currently undergoing major reform – Modernising Medical Careers (MMC). At the heart of these changes is the adoption of an assessment strategy that aims to make an accurate measure of a doctor's performance in his or her workplace.

There has been a dramatic move recently towards competency-based assessment in undergraduate medical education, stimulated initially by the innovative work on objective structured clinical examinations (OSCEs) by Harden et al.,1 and the publication of Tomorrow's Doctors in 1993. Competencies have been assessed through clinically relevant tasks, though these were often simulated and on occasion distant from the workplace. The limitations of this practice have led us to explore the concept of performance-based assessment (PBA).



We have reviewed the literature and others' experiences in this area, and surveyed the opinions of our own clinical teachers and final-year students to improve our understanding of the particular challenges of adopting PBA in undergraduate medical education.

THE CASE FOR PERFORMANCE-BASED ASSESSMENT (PBA) IN UNDERGRADUATE MEDICAL EDUCATION

Strong arguments have been made for reducing the importance of high-stakes undergraduate examinations and instead concentrating on assessing students' day-to-day performance. Traditional high-stakes assessment drives students to adopt a superficial approach to learning and usually fails to provide feedback that could help to identify and correct poor performance. It is also known that success in Final Examinations does not correlate well with the amount of clinical experience as an undergraduate,2 or with effectiveness as a doctor after graduation.3

As the shortcomings of traditional end-of-course examinations have been recognised, attention has turned towards developing assessment strategies that are educationally sound and identify accurately those medical students who are competent to progress. A framework for structuring the assessment of clinical skills was proposed by the psychologist George Miller in 1990.4 In this context, clinical skills encompass history taking, physical examination, clinical decision-making, and giving explanations and advice. Miller's pyramidal model identifies four levels at which clinical assessment should take place. At the apex of his pyramid is testing designed to evaluate how a doctor or medical student performs in clinical practice, labelled 'Does' by Miller.
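For orientation, a minimal sketch of Miller's pyramid is given below. The four level names are those of Miller's 1990 framework;4 the one-line glosses and the Python rendering are our own illustration and are not taken from the article.

    # Illustrative rendering of Miller's pyramid (Miller, 1990), ordered from
    # base to apex; the glosses are paraphrases, not quotations from the article.
    # PBA is aimed at the apex level, 'Does'.
    MILLER_PYRAMID = [
        ("Knows",     "factual knowledge"),
        ("Knows how", "applied knowledge and clinical reasoning"),
        ("Shows how", "demonstrated skill in a controlled setting, e.g. an OSCE"),
        ("Does",      "actual performance in day-to-day clinical practice"),
    ]

    for level, gloss in MILLER_PYRAMID:
        print(f"{level:9s} - {gloss}")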

Miller acknowledges the difficulties of gaining an accurate and reliable measurement of this behaviour. Indeed, it has been described as 'the international challenge of the century for all involved in clinical competence testing'. However, there is a consensus that developing a tool to evaluate accurately the clinical competence of a student and to direct his or her learning appropriately is a worthwhile endeavour.

PBA of clinical skills has been proposed as one means of achieving this goal. Direct observation of a student interacting with a real patient in the course of a normal working day is at the heart of this form of assessment. Following an observed encounter, the student receives immediate feedback on his or her performance. These assessments should be made on multiple occasions by multiple assessors to optimise their measurement characteristics.

Direct observation improves the validity and reliability of clinical competence assessments and is an imperative in such evaluations. Feedback has been shown to be highly sought-after by students, and to be a strong motivator for behavioural change and improvement in performance.

ATTITUDES TO PBA – A SURVEY

In 2004/5, we surveyed the experiences and opinions of the final-year students and their consultant clinical teachers at The University of Edinburgh Medical School on the issue of clinical competence assessment in undergraduate medical education. Of those surveyed, 100 clinical teachers and 158 students returned questionnaires – response rates of 71 per cent and 63 per cent, respectively (for further details of the questionnaires, see Table 1). The survey results demonstrated almost universal agreement among students and teachers that direct observation linked to immediate feedback should be an important feature of clinical competence assessment (see Table 2).

When asked how clinical competence should be assessed summatively, 70 per cent of students favoured a PBA strategy. In contrast, only 30 per cent of consultant clinical teachers were in agreement – the majority of clinical teachers (60 per cent) viewed PBA as unachievable, and supported formalised testing as a more realistic option.

The predicament facing clinical teachers is summarised by a consultant as follows:

I think you have to be aware of the pressure on mainstream speciality consultants to provide teaching. I have seven students for a block of seven weeks four times per year and I am their tutor. This is on top of a full clinical workload. I struggle to find time to do what I do (and train our own juniors) so although what I say is desirable I don't think I can achieve it.

Our survey investigated the extent to which direct observation and global judgement (with no direct observation) are used in determining students' clinical competence. Over 75 per cent of consultants indicated that global judgement (with no direct observation) was the principal method used to rate a student's competence, and only 20 per cent routinely used direct observation. These observations resonate with the comments made by van der Vleuten et al., in which they state that in-course assessment generally relies on 'global evaluations from quite unstandardised testing situations and is often based on limited samples of (real-life) students' clinical behaviour'.5

In the survey, 47 per cent of students reported that, during the preceding year, they had been directly observed performing a clinical skill on four or fewer occasions (see Table 3). This reflects published evidence in the USA suggesting that students are rarely observed during patient contact, and it is even uncommon for them to have structured and observed clinical assessments in simulated environments.6

Our survey also revealed that most students report that opportunities to provide feedback on clinical performance are overlooked, and when students do receive feedback it appears to them to be limited in detail.

Table 1. Details of the questionnaires given to consultant teachers and students

Question selection: hypothesis driven, using knowledge of local issues affecting in-course assessment (both groups)
Final design: derived using a modified Delphi technique (both groups)
Sample population: all consultants registered as final-year clinical teachers; all final-year students
Mode of distribution: postal (consultant teachers); online (students)
Total no. of questions: 16 (consultant teachers); 12 (students)
No. of close-ended questions [no. with Likert scale]: 15 [8] (consultant teachers); 11 [7] (students)
No. of questions with provision for free-text annotation: 7 (consultant teachers); 2 (students)
Analysis of free-text responses: grounded approach – emergent themes identified, then comments sorted and categorised accordingly (both groups)

Table 2. Attitudes towards the roles of direct observation and feedback in undergraduate clinical competence assessments

Item 1: Direct observation of students performing their clinical skills on a real patient should be an important feature of in-course assessment.
Consultant teachers (n = 99): SA 52 [52.5%]; A 46 [46.5%]; D 1 [1%]; SD 0
Students (n = 149): SA 78 [52%]; A 62 [42%]; N 7 [5%]; D 2 [1%]; SD 0

Item 2: Students receiving feedback on the strengths and weaknesses of their clinical skills should be an important feature of in-course assessment.
Consultant teachers (n = 99): SA 52 [52.5%]; A 46 [46.5%]; D 1 [1%]; SD 0
Students (n = 149): SA 94 [63%]; A 47 [32%]; N 6 [4%]; D 2 [1%]; SD 0

(SA = strongly agree; A = agree; N = neither; D = disagree; SD = strongly disagree)
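As a quick arithmetic cross-check (our own illustration, not part of the original article), the bracketed percentages in Table 2 can be recovered from the raw counts:

    # Recompute the Table 2 percentages from the raw counts; rounding is ours.
    consultants = {"SA": 52, "A": 46, "D": 1, "SD": 0}          # n = 99 (both items)
    students_q1 = {"SA": 78, "A": 62, "N": 7, "D": 2, "SD": 0}  # n = 149, item 1

    for label, counts in (("Consultant teachers", consultants), ("Students, item 1", students_q1)):
        n = sum(counts.values())
        shares = {k: round(100 * v / n, 1) for k, v in counts.items()}
        print(label, "n =", n, shares)
    # Consultant teachers n = 99 {'SA': 52.5, 'A': 46.5, 'D': 1.0, 'SD': 0.0}
    # Students, item 1 n = 149 {'SA': 52.3, 'A': 41.6, 'N': 4.7, 'D': 1.3, 'SD': 0.0}
    # Both agree with the rounded figures shown in the table.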



BARRIERS TO PERFORMANCE-BASED TESTING

In our survey, the following were cited as current barriers that deter clinical teachers from employing PBA in undergraduate medical education:

• Lack of time

• Large class sizes

• Inadequate resources

• Lack of educational training

• Conflicting priorities for clinical teachers – for example, service demands and research

The literature7 indicates that further obstacles can include:

• Lack of recognition for teaching

• Sense of dissatisfaction with a promotion system regarded as preferentially rewarding excellence in research

• Alienation from the main body of the university

The survey also alluded to another significant barrier. We found that over 30 per cent of our consultant teachers were reluctant to award a student a fail mark. This derived from a general anxiety about the reliability of their own assessments, and several teachers admitted to giving borderline students 'the benefit of the doubt'. Adopting leniency in the face of uncertainty has been well described in the literature on undergraduate medical assessment8 and contributes to the lobby in favour of maintaining the place of high-stakes examinations in undergraduate medical education.

While teachers reported reluctance to award a fail mark, students complained that their assessors seemed unwilling to reward excellence and showed a tendency to grade towards the middle. The perception by students that clinical competence assessments are based on unreliable, isolated or unobserved behaviours, and their reports of significant variability in assessment practice, may help to explain this paradox.

Table 3. Direct observation of the clinical performance of final-year students (n = 137)

How many times in the preceding year have you been directly observed by a senior clinical teacher performing a clinical skill*?
0 occasions: 4 [3%]
1–4 occasions: 60 [44%]
5–8 occasions: 54 [39%]
>8 occasions: 19 [14%]

*Clinical skills defined as before to include history taking, physical examination, clinical decision-making, and explanation and advice.
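The 47 per cent figure quoted earlier in the text follows directly from the first two rows of Table 3; the short cross-check below is our own illustration:

    # Proportion of students directly observed on four or fewer occasions (Table 3).
    counts = {"0": 4, "1-4": 60, "5-8": 54, ">8": 19}
    n = sum(counts.values())                      # 137
    four_or_fewer = counts["0"] + counts["1-4"]   # 64
    print(round(100 * four_or_fewer / n))         # 47 - matches the 47 per cent quoted in the text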


The main focus of students' dissatisfaction with current practice is encapsulated in the following statement from one student responding to our questionnaire:

The way that the in-course assessment is carried out is very variable from one placement to another. It has ranged from the excellent, where a consultant regularly observes examination skills etc. on ward rounds and in teaching sessions and gives constructive feedback, to the cursory, where a tutor simply gives a mark based on their general impression of your abilities gleaned from your passive presence on their ward rounds and answers to the occasional question.

'Good assessment requires a thoughtful compromise between what is achievable and what is ideal'.5 We know that PBA has better content and predictive validity and should take precedence in the curriculum, but is it achievable, given these barriers? The answer appears to be 'yes'. Groups in the Netherlands, the USA and the UK have all described the successful implementation of PBA in undergraduate medical education.

TESTING WITH MINI-CEX

Of the many suggested approaches, mini-CEX testing has attracted attention recently and shows promise as a means of achieving valid and reliable measurements of students' clinical skills. John Norcini provides an excellent summary of the methodology, evolution and current applications of mini-CEX in the June 2005 edition of The Clinical Teacher. It should be noted that the mini-CEX has proved to be an effective assessment instrument for senior medical students, with eight ratings having a reproducibility coefficient of 0.77.9

Experience in postgraduate medical education has suggested that as few as four mini-CEX assessments will reliably identify those who are well above or below the expected level of competency.10 In contrast, borderline candidates may require between seven and eleven assessments to achieve the same reliability.8 This begs the question: should assessment strategies be tailored to reflect this phenomenon?
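These reliability figures are consistent with the standard psychometric relationship between the number of ratings and composite reliability. The sketch below uses the Spearman–Brown prophecy formula, which the article does not name; the single-rating reliability is back-calculated by us from the cited figure of 0.77 for eight ratings and is an estimate only.

    # Spearman-Brown prophecy formula (a standard psychometric result, not
    # cited in the article): reliability of the mean of k ratings, each with
    # single-rating reliability r, is  R_k = k*r / (1 + (k - 1)*r).

    def composite_reliability(r_single: float, k: int) -> float:
        """Reliability of the average of k ratings, each of reliability r_single."""
        return k * r_single / (1 + (k - 1) * r_single)

    # Back-calculated from the cited reproducibility of 0.77 with 8 ratings;
    # this value (~0.30) is our estimate, not a figure from the article.
    r_single = 0.77 / (8 - 7 * 0.77)

    for k in (1, 4, 8, 11):
        print(k, round(composite_reliability(r_single, k), 2))
    # 1 -> 0.3, 4 -> 0.63, 8 -> 0.77, 11 -> 0.82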

Researchers at the University of Southampton have evaluated the use of mini-CEX with final-year medical students. Their evidence confirms the value of this assessment tool and its use as part of a programme of PBA in undergraduate medical education. Each student had 15 mini-CEX assessments during their final year, a total of 2,340 mini-CEX assessments being carried out per annum. They found that mini-CEX was liked by students, easy for examiners to undertake, and less disruptive to patients and families than previous assessment exercises. However, there were concerns over the reductionist nature of mini-CEX compared to the previous long cases, and an impression that students with good communication skills but poor clinical competence could still pass. This latter issue has been addressed by changing the weightings within the assessment (Faith Hill and Kathy Kendall, personal communication, January 2006).
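The implied scale of this programme can be made explicit with a line of arithmetic; the cohort size below is inferred by us from the quoted figures and is not stated in the article:

    # 15 mini-CEX assessments per final-year student and 2,340 assessments per
    # annum imply a cohort of about 156 students (our inference).
    print(2_340 // 15)   # 156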

LESSONS LEARNED

What are the key lessons from our survey and the literature concerning programmes of performance-based assessment?

• An assessment tool should be selected that is simple to use, can be integrated into the fabric of the working day, and takes a relatively short time to perform

• A marking schedule should be devised that defines clearly the level of performance expected from a student

• In general, the onus should be placed on the student to seek out opportunities to have their clinical skills assessed

• The burden of assessment should possibly be weighted to focus on borderline candidates (see the illustrative sketch after this list)

• Junior medical staff can be involved as assessors, with appropriate staff development, in order to lessen the burden on consultants

• Links between the assessment bodies and the clinicians responsible for carrying out assessments should be optimised to clarify the principles that underpin PBA and encourage standardised practice

• Staff development and quality control should be undertaken to engender a positive attitude towards clinical assessment, reduce the inconsistency with which grades are awarded, and address some of the psychological barriers to rewarding and failing students when appropriate

• Consideration should be given to separating the roles of teacher and assessor to counter the 'leniency effect' that can confound a supervising teacher's marking

• Group review of the outcomes of multiple observed clinical encounters makes it more likely that a 'fail' mark will be awarded when appropriate

• Time should be written into the job plans of clinical teachers to facilitate these assessments

• Excellence in teaching must be acknowledged and rewarded
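As an illustration of the borderline-weighting point flagged in the list above, the sketch below allocates additional observed encounters to students whose early ratings fall near the pass mark. The function name, rating scale, thresholds and assessment counts are all hypothetical; the article suggests the principle but prescribes no algorithm.

    # Hypothetical sketch: weight the assessment burden towards borderline
    # candidates. The scale, band and counts are illustrative; the literature
    # cited above suggests ~4 encounters for clear cases and 7-11 for borderline ones.
    from statistics import mean

    BASELINE_ASSESSMENTS = 4        # minimum observed encounters for every student
    BORDERLINE_EXTRA = 5            # further encounters when early ratings are borderline
    BORDERLINE_BAND = (2.5, 3.5)    # hypothetical band around a pass mark of 3 on a 1-5 scale

    def planned_assessments(early_ratings: list[float]) -> int:
        """Total number of observed encounters to schedule for one student."""
        low, high = BORDERLINE_BAND
        if low <= mean(early_ratings) <= high:   # borderline: gather more evidence
            return BASELINE_ASSESSMENTS + BORDERLINE_EXTRA
        return BASELINE_ASSESSMENTS              # clearly above or below the expected level

    print(planned_assessments([4.0, 4.5, 4.0]))  # clear pass -> 4
    print(planned_assessments([3.0, 2.5, 3.5]))  # borderline -> 9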


CONCLUSIONS

The outcomes of our survey and literature review indicate that students and consultant clinical teachers acknowledge the merits of performance-based assessment and related feedback. However, many teachers doubt the feasibility of supporting these practices systematically in an undergraduate setting.

The alternative to PBA is a return to cumbersome, pressurised and expensive high-stakes examinations, but this step need not be taken if some simple measures are instigated. Attention needs to be given to the development of an appropriate toolkit, investment in staff training, time and resources, and the optimisation of the context in which PBA is delivered. This includes a commitment to build strong links between medical schools, their assessment bodies and clinical teachers.

We are optimistic that, by addressing each of these in turn, we can develop reliable performance-based assessments that are both practical for clinicians and offer quality feedback to students.

SUPPLEMENTARY MATERIAL

The following supplementary material is available for this article:

Appendix S1. Additional references.

This material is available as part of the online article from: http://www.blackwell-synergy.com/doi/abs/10.1111/j.1743-498X.2006.00138.x (This link will take you to the article abstract.)

Please note: Blackwell Publishing is not responsible for the content or functionality of any supplementary materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

REFERENCES

1. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. BMJ 1975;1:447–451.
2. McManus IC, Richards P, Winder BC, Sproston KA. Clinical experience, performance in final examinations, and learning style in medical students: prospective study. BMJ 1998;316(7128):345–350.
3. Probert CS, Cahill DJ, McCann GL, Ben-Shlomo Y. Traditional finals and OSCEs in predicting consultant and self-reported clinical skills of PRHOs: a pilot study. Med Educ 2003;37(7):597–602.
4. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65(9 Suppl):S63–67.
5. van der Vleuten CPM, Scherpbier AJJA, Dolmans DHJM, Schuwirth LWT, Verwijnen GM, Wolfhagen HAP. Clerkship assessment assessed. Med Teach 2000;22(6):592–600.
6. Kassebaum DG, Eaglen RH. Shortcomings in the evaluation of students' clinical skills and behaviors in medical school. Acad Med 1999;74(7):842–849.
7. Seabrook MA. Medical teachers' concerns about the clinical teaching context. Med Educ 2003;37(3):213–222.
8. Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance ratings. Teach Learn Med 2003;15(4):270–292.
9. Kogan JR, Bellini LM, Shea JA. Feasibility, reliability, and validity of the mini-clinical evaluation exercise (mCEX) in a medicine core clerkship. Acad Med 2003;78(10 Suppl):S33–35.
10. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med 1995;123(10):795–799.
