Jeremy M. Brown, Kathryn Lowe, Jill Fillingham, Philip N. Murphy, M. Bamforth, N.J. Shaw.
Publication date 2014.
An investigation into the use of Multi Source Feedback (MSF) as a work based assessment tool.
Medical Teacher, 36, 997-1004.
Background
This paper reports on a mixed methods study that investigated the use of Multi Source
Feedback (MSF) in one Deanery. Anecdotal evidence from discussions with senior
Consultants suggested that there remains some scepticism amongst Educational
Supervisors regarding the reliability of MSF as a work based assessment tool. The
choice of assessors was the focus of their concerns.
To investigate whether these concerns had any foundation this study aimed to
compare Specialist Trainees' hand selected MSF assessor scores with those made by
their own Clinical Supervisors. The study also aimed to explore the perceptions of
Specialist Trainees and their assessors on MSF as a work based assessment tool.
The intention of this study was to explore the complexities of choice of assessor
whilst acknowledging that these variations are ultimately addressed through Norcini’s
(2003) three principles that guard against any threats to the reliability of peer
assessments: the number of relevant performances observed, the number of peers
involved, and the number of aspects of competence being evaluated (p. 541).
Work-based assessment tools were introduced in postgraduate medical education
to add validity and reliability to the gauging of doctors' performance in
training, and ultimately to identify doctors who may be in difficulty (Archer et al, 2005).
This has resulted in a significant shift away from the reliance on Clinical and
Educational Supervisors’ individual judgements of professional performance. MSF is
designed to gather as much feedback as possible to form the basis of constructive
discussion between the Educational Supervisor and the trainee. As Wright et al
(2012) re-affirm, MSF should not be taken in isolation. Moonen-van Loon et al (2013)
argue that when making ‘high stake’ judgements a series of work based assessments
including mini-Clinical Evaluation Exercise, direct observation of procedural skills,
and MSF have to be taken into consideration before a reliable decision on a trainee’s
progress can be made.
MSF does though offer a more formalised, systematic, team-focused assessment
approach (Archer et al, 2008; Violato et al, 2003). Doctors in training are asked to
nominate a number of assessors (at least 12 in this Deanery’s programme) at any one
time, who then complete the MSF questionnaire, confidentially rating the trainee's
performance and fitness to practise in areas such as routine clinical care, team
working and communication with patients (Royal College of Psychiatrists, 2012).
The trainee also completes a questionnaire as a self-assessment before meeting with
their Educational Supervisor to consider the results which are processed centrally and
presented as mean scores with verbatim free text comments.
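The workflow described above — several confidential assessor returns reduced centrally to a mean score per domain, with free text comments carried through verbatim — can be sketched in a few lines of Python. The data layout, domain names, and scores below are invented for illustration and are not the Deanery's actual processing system:

```python
from statistics import mean

# Hypothetical assessor returns for one trainee. Each return maps mini-PAT
# domains to a 1-6 rating and may carry a free text comment. The field names
# and the two domains shown are illustrative assumptions.
returns = [
    {"ratings": {"good clinical care": 5, "working with colleagues": 4},
     "comment": "Communicates clearly with the team."},
    {"ratings": {"good clinical care": 4, "working with colleagues": 4},
     "comment": ""},
    {"ratings": {"good clinical care": 6, "working with colleagues": 5},
     "comment": "Could delegate more."},
]

# Central processing: a mean score per domain, plus the verbatim
# free text comments passed through unchanged for discussion.
domains = returns[0]["ratings"].keys()
report = {d: mean(r["ratings"][d] for r in returns) for d in domains}
comments = [r["comment"] for r in returns if r["comment"]]
```

In practice the report would then be reviewed with the Educational Supervisor rather than read in isolation, as the paper stresses.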
Recent systematic reviews have concluded that MSF is a highly reliable and valid
means of assessing both physicians' and surgeons' practice (Donnan et
al, 2014; Al Khalifa et al, 2013). Some questions remain, though, about
the MSF process. Who gives feedback, and how much guidance and training is
needed for assessors is explored in the qualitative findings later in this paper. Archer
et al (2010) highlighted the risks associated with the unregulated selection of
assessors. Cohen et al (2009) reported that there was concern amongst some
dermatology trainees about possible victimization by MSF assessors. Bullock et al
(2009) also reported discrepancies in assessment ratings between some staff groups
and peers (administrators or managers being less likely to raise concerns compared to
senior nurses and consultants).
Methods
This mixed methods study consisted of quantitative and qualitative phases of data
collection that ran in parallel with each other. Grouped responses of hand selected
assessors were measured against those made by Clinical Supervisors. An exploration
of the personal accounts of MSF assessors and those being assessed was also
undertaken. This allowed the research team to develop understanding of individual
situations and perspectives without trying to generalize findings to the wider
audience.
Ethical Considerations
This study received University, Strategic Health Authority and NHS Research Ethics
and local NHS Trust Research & Development approval.
Confidentiality was assured to all participants. Any identifiable information was
removed from interview transcripts. Only members of the research team had access
to information. The exception to the maintenance of confidentiality would be solely
where unsafe practice was highlighted during interviews. This did not occur.
Quantitative phase
Different multisource feedback models are in use but, for the purposes of
this study, the mini-PAT (see figure 1) was chosen (with permission) as it is used
extensively in postgraduate medical education (Archer et al, 2008).
Specialist trainees (STs) were emailed and asked to respond if they would be
interested in taking part in the study. Those who responded were contacted directly by
a member of the study team and, after giving written consent to take part in the
study, were asked to hand out one mini-PAT questionnaire to a clinical colleague of
their choice and the other to their Clinical Supervisor, each with an information sheet
explaining the study. Each assessor returned their completed mini-PAT questionnaire
in a stamped addressed envelope to the research team. Each questionnaire was coded
for each ST so the research team could collate the completed questionnaires in pairs.
The questionnaires were also colour coded to differentiate Clinical Supervisors (green
form) from the hand chosen assessors (yellow form). All data was stored
anonymously in SPSS 18.0™ with only code numbers to identify individual
participants. Statistical analysis was carried out using the Wilcoxon rank sum test to
determine any differences in responses between Clinical Supervisors and hand chosen
assessors with respect to the total assessment scores for each of five domains on the
mini-PAT (1. good clinical care, 2. maintaining good medical practice, 3. teaching,
training appraising and assessing, 4. relationship with patients and 5. working with
colleagues) as well as the overall impression of the trainee. The analysis focusses on
differences in assessment identified for the sample, rather than for individuals.
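The comparison described above can be illustrated with a small, self-contained sketch. The study used SPSS; the function below is a plain-Python Wilcoxon rank-sum (Mann-Whitney) calculation using the normal approximation without a tie correction, so its z statistic will not exactly match SPSS output, and the example scores are invented rather than taken from the study:

```python
import math

def rank_sum(clinical_supervisor_scores, hand_chosen_scores):
    """Wilcoxon rank-sum comparison of two groups of domain scores.

    Returns the rank sum W for the first group and a z statistic from
    the normal approximation (tie correction omitted for brevity).
    """
    pooled = [(v, 0) for v in clinical_supervisor_scores] + \
             [(v, 1) for v in hand_chosen_scores]
    pooled.sort(key=lambda t: t[0])

    # Assign mid-ranks so tied scores share the average of their positions.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        mid = (i + 1 + j) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = mid
        i = j

    n1 = len(clinical_supervisor_scores)
    n2 = len(hand_chosen_scores)
    w = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return w, (w - mu) / sigma

# Hypothetical 'good clinical care' domain totals for two assessor groups.
w, z = rank_sum([22, 20, 23, 21], [24, 25, 21, 26])
```

A lower rank sum for the Clinical Supervisor group, as in the study's Table 1 medians, would produce a negative z here; real analyses should also apply a tie correction and an exact test at small samples.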
Initially, potential recruits to the study were identified by trainee listings
supplied by clinical tutors in two large hospitals in the North West of England. A
significant number of trainees expressed interest, but despite questionnaires being sent
to them, the number of returned pairs of mini-PAT questionnaires was low. In order to
recruit more participants, therefore, opportunistic recruitment was carried out by one
of the authors (NJS) by attending teaching sessions and asking clinical colleagues in
several hospitals in the region to ask trainees to take part.
Qualitative phase
The study population for the qualitative phase was postgraduate doctors in specialist
training, hospital Consultants, and nursing staff across one Deanery. Potential recruits
were emailed and provided with an information sheet that explained that the project
sought opinions and attitudes concerning MSF. The email asked for replies from
those interested in taking part in a semi-structured interview that would be digitally
recorded. It was undertaken at a time and place convenient to the interviewee.
Written consent was taken immediately before the interview started. Semi-structured
interviews, which lasted between 20 and 40 minutes, were held with 7 nurses, 7
Consultants and 6 postgraduate doctors. All were experienced in the use of MSF as
an assessment tool, either as an assessor or as being assessed.
Analysis of the transcribed semi-structured interviews was undertaken using a
thematic framework (Ritchie et al, 2003). All interviews were transcribed verbatim
and subjected to further in-depth analysis independently by two researchers (JF and
JB) to enhance the credibility of the findings. This phase of the analysis involved both
researchers independently identifying key themes before JF coding all interview data
into the themes identified. Theme descriptors were defined and re-defined until all
data was fully represented (Miles and Huberman, 1994).
Quantitative findings
Forty Specialist trainees took part in the study over a period of eighteen
months. The median scores for the responses of the Clinical Supervisors and hand
chosen assessors with respect to the total assessment scores for each of the five
domains as well as the overall impression of the trainee on the mini-PAT are shown in
Table 1. Not all assessors could respond to all questions regarding the trainees
(indicating that they had not had the opportunity to observe some behaviours in the
workplace) therefore only results for trainees where questions were completed by
both assessors were included in the analysis. The profiles of the 40 hand chosen
assessors were: 29 Specialist Registrars (SpRs)/STs; 6 nurses; 4 Consultants; 1 Staff
and Associate Specialist /Specialty doctor.
Hand chosen assessors’ ratings for good clinical care, maintaining good
medical practice, teaching, training appraising and assessing, relationship with
patients, and the overall impression of the trainee were significantly higher than those
for Clinical Supervisors. Ratings for working with colleagues were the same for both
groups of assessors.
Twenty-eight (70%) of the 40 Clinical Supervisors and 28 (70%) of the 40 hand chosen
assessors made free text comments under ‘anything especially good’. Eight (20%)
Clinical Supervisors and 5 (12.5%) hand chosen assessors made free text comments
under ‘please describe any behaviour that has raised concerns or should be a
particular focus for development’. All 5 of these hand chosen assessors were
SpRs/STs. No further analysis was undertaken on the free text comments.
Qualitative findings
Five key themes were identified during thematic framework analysis of the semi-
structured interviews. Three related specifically to the issue of selection of assessors:
the validity of selecting assessors; anonymity of assessors; and the value of multi
professional assessors. Two related to MSF issues more generally: usefulness of
feedback; and grading.
Theme 1: the validity of selecting assessors
Self-selection of assessors was a recurrent theme during interviews with STs, nurses
and Consultants. Trainees acknowledged they often chose assessors who they had a
positive relationship with: You choose people who you know, people you like or like
you. (T 4); I think you are always going to pick someone who likes you and you get on
well with (T 2). There was an awareness that this could introduce bias into the
assessment process: It can be biased can’t it? Someone selecting people who are nice
or you know, who they feel are going to give them a nice report (T 2). There was
recognition though that constructive criticism was important and it would be
damaging if trainees tried to guard against this by avoiding potentially negative
assessments:
Being honest, you do select people that you get on with. If I’d had a problem with somebody I wouldn’t give them a form and whether that makes them valid…well it doesn’t make them valid does it because that person’s opinion might be quite important as part of the process. (T5)
Trainees recognised that for this process to be useful they should seek out what they
perceived as quite demanding assessors:
I actually quite appreciate constructive feedback and I would tend to ask quite a wide number of Consultants and I will ask people who I know are likely to be relatively strict. (T 3)
Consultants agreed that trainees select people they like and have positive relationships
with: It’s only human to select those who’ll give favourable feedback. (C 5);
Consultant colleagues as well, they cherry pick. (C 4); Trainees being able to self-
select has an impact on the process. (C 1).
Consultants felt that the trainees actively avoided those potential assessors they may
have had conflict with: Self-selection, you choose those who you’ve had no conflict
with. (C 2); The Speciality Registrars are a bit savvy, they’ve got the opportunity to
select out people they don’t get on with. (C 3) It was also felt that they returned to
assessors who had given them favourable feedback on previous occasions: They tend
to use assessors who have previously given them good feedback (C 1); Obviously they
tend to go with people that have given good feedback previously. (C 6)
Some Consultants felt that it was their role to approve assessors: As Educational
Supervisors I think they should be asking me; are these appropriate people? (C 4)
Other Consultants felt it would be unfair to be involved in the selection process:
Obviously you’ve got to have some method of choosing assessors but it would be
unfair to inflict assessors on trainees. (C 5)
Nurses shared the same concerns regarding the validity of the assessor selection
process: They cherry-pick of course someone who will say all nice pleasant things
about them don’t they? (N 1); It does give them the opportunity for them to select
people who might give them a favourable view I think. (N 2); If you’ve had a bad
experience with someone and say, oh, no I’m not going to ask them. (N 5); I think it
must add bias because of the relationship they would select and choose individuals
they think will give them a more favourable response. (N 3); They pick particular
members of nursing staff who they’ve good relationships with so to some extent it
could be manipulated. (N 6)
Theme 2: anonymity of assessors
There were conflicting views around the anonymity of the process. Some trainees felt
that anonymity gave transparency to the process: People can say what they really feel
(T 5); The anonymous part of it gives people the chance to be honest, ‘cause often
people will put their name to things and say that everything’s all right. (T 1) Some
trainees felt that the process was not truly anonymous which had the potential to have
a negative impact on feedback: Well you know roughly where it’s come from. You
know whether it’s a Consultant or one of the Nurses or one of the SHO’s that’s
written it. (T 3);
I’ve actually at the moment got an MSF which is for one of my seniors who erm, I’m not, I find him…he doesn’t really work in a team. I find it difficult when I have to put that, because I may be the only person who he’s asked in the unit, so if I say he doesn’t really help out with the juniors, he’ll know exactly who said that. (T 4)
There were also conflicting views regarding anonymity amongst Consultants. Some
felt the electronic version provided appropriate anonymity: The anonymity and speed
with the electronic version are more suitable. (C 1) Others felt that even the
electronic version lacked anonymity: Because of the way it’s set up it’s not
anonymous really. You can tell the comments people make, who it is. (C 3); I think
there’s a danger of MSFs not having appropriate anonymity, and I think you have to
have some sort of MSF that really, truly keeps people anonymous. (C 4)
Theme 3: the value of multi professional assessors
Another key theme for the Trainees was the multi-source aspect of the process and
how it could increase validity: I think that if you get enough people from enough
backgrounds, if you are a good Doctor, if you pick up a wide selection of people, I
think that increases the validity (T 2); I think it’s invaluable as it gives a view of a
person from not just a medical view, not just academic achievement. It’s particularly
useful if you are getting mixed views. (T 1); You get an idea of how the whole team
perceives you as a team member. (T 3)
Trainees aired a view that the selection of other professionals should be mandatory:
I think it would be good if it was mandatory, you had to get feedback from the Nurses
as well as the Doctors, so you know it wasn’t just your colleagues you were getting it
from and make sure that it’s all from different levels. (T 6)
The multi professional aspect of MSF was also valued by Consultants: I would say
that actually the most powerful feedback is that of a mixture of people (C 1). It
enables assessment to move away from the previous one-dimensional approach giving
a more rounded view of the trainee:
I think that training in the past had been one dimensional. In this way (the MSF) you
get a more rounded view of your trainees doing well in their practice and not just a
jaundiced opinion of the Consultant only. (C 5)
Clinical Supervisors can monitor achievement and progression in areas they would
not necessarily have access to: The MSF tool enables feedback from places we as
Clinical Supervisors don’t necessarily see them in. (C 1); It assesses the trainee on
the ground, on the ward. (C 5)
As with other groups the theme of the multi-source aspect of MSF was central to
discussions with nurses. All nurses interviewed felt that this was a positive aspect of
the approach, enabling assessment by those outside the doctor’s peer group: It
gives the opportunity to give a 360 degree opinion of a doctor. It’s not just peer
assessment from medical colleagues. (N 4); Well it gives a wider view point. Nurses
will have different viewpoints. I think it’s a very good idea. (N 5)
Engagement in the process makes them feel valued: It enables another view and I
think it maybe makes everyone feel they’ve got a part to play in medical education. I
can see that’s a positive aspect. People can feel that their view is valued. (N 2).
A more integrated approach to care makes nurses’ involvement in assessment seem a
natural progression: I think it’s a good idea ‘cause for a long time professionals
weren’t gauged by colleagues of “lower standings” (in inverted commas), so I think
it’s a good principle. (N 3)
I think because we don’t work in isolation as professionals, you need to get that
feedback, because …just because you think you’re good in your profession, does
someone else think you’re a good part of that team? (N 7)
Nurses like to be involved in the process: I always feel quite delighted from a
personal point of view that people have actually asked me to fill in this form for them.
(N 1); I feel quite privileged that she asked me (N 5); Particularly with nursing staff
making up the majority of the workforce, it’s important that Nursing staff do have
some opportunity to give feedback to Doctors. (N 6)
Theme 4: usefulness of feedback
Trainees feel that there is sometimes a reluctance to give negative feedback: It’s more
like a pat on the back. I think people find it difficult to give negative feedback, so it
tends to be positive more often than not. (T 3); People tend to just put generally vague
and generally positive feedback. I don’t think it’s effective from that point of view. (T
5)
Feedback is valued by the trainees with particular reference made to the free-text
component: The free-text part of the pro-forma is very valuable because that gives
you the opportunity to clarify why they’ve ticked. (T 2) Trainees valued the fact that it
was personalised: It’s nice to get personalised feedback and comments because you
know that people have actually thought about the person they were assessing when
they fill it in. (T 1)
Consultants provided the most opportunities for the trainees to utilise feedback: It
(feedback) tends to be from Consultants, I think probably from reading them, it’s the
Consultants who tend to actually make the effort to put anything you could improve.
(T 4)
Sometimes feedback just focussed on negative aspects: …actually in an environment
where we work quite hard, and you know it’s important to get things right, you quite
often only get picked up when you’re not doing things right. (T 3)
Some trainees raised some concern that assessors could be reluctant to rate anything
below satisfactory as this merited explanation in free text:
It is a possibility that people could just put that things are satisfactory where perhaps they might not think that because, if you put things are less than satisfactory, you have to comment. (T 4)
Trainees do use the feedback to inform their practice: It’s the one time you actually
get proper constructive feedback and comments written so that you can actually put
them to some use. (T 1); It does give you a bit of an idea what view a few different
disciplines have and you know where you can improve. (T 5)
Nurses reported using the free text to enhance the process in order to improve the
quality of the feedback: I always try to put something because I feel that I should put
something, because to me it’s tick box, and often when you tick mid-line it’s really no
help (N 1); If it was just a tick box people aren’t able to talk about the person’s
attitude … sometimes you get a more overall picture of someone’s ability. (N 4)
Nurses used the free-text box to give positive feedback when trainees were viewed as
“good”:
Whenever I did, and particularly more so if they were good, if people weren’t as good I’d probably have struggled what should I write in the free text? If people were excellent, it was easier, I could sing their praises, so probably more so, they got more free text. (N 7)
Theme 5: grading
Consultants expressed concerns with grading. It was felt that there may be a tendency
to underestimate: Most people tend to be a bit modest in the medical profession. (C
2); From presentations and from my own reflections on it is that self-assessment tends
to undervalue you. (C 5)
Other Consultants felt some people being assessed may over-estimate their abilities:
The outcomes of their work are not what they perceive they are. (C 4) There was also
a view that the process would not always identify poor practice: This all came after
Harold Shipman and it wouldn’t have detected Harold Shipman because everybody
loved him. (C 3)
Nurses also expressed concern about grading. Although nurses appeared during the
interviews to demonstrate they engaged with the assessment process objectively,
benchmarking the doctor’s performance had proved problematic. Nurses were handed
MSF forms to be completed without any specific guidance on how to rate trainees.
This differed from medical professionals who did have online guidance to support
them. Nurses stated that there was no defined standard of performance for the grades
and no direct guidance: I do worry that we haven’t got a defined standard that we
should [be] reaching at this step of their training. (N 1); That’s the part I did find difficult
because there was no direct guidance… (N 3); I’ve never had anyone say to me O.K, by
this stage they should be able to do this, this and this. (N 2) This led nurses to try
to benchmark doctors’ performance, not against any written guidance, but against
their own experiences of working with other doctors, not necessarily of the same
grade: I suppose I compared her a lot with the Consultant I’d worked with… (N 5).
Others used their own expectations of where they thought the doctor’s
performance should be for the grade: I try to judge them to the best of my ability. (N
1); I’m going on my own expectations of where they should be… (N 2); You weren’t
really sure of what they were supposed to be doing at that level because I wasn’t
aware of medical training apart from seeing what I saw ... (N 4).
Nurses also voiced concerns that the process was open to interpretation and that this
could impact on validity: You were basing it on your own judgement and I did feel at
the time that it could be open to interpretation (N 3); The criteria for the ability they
should be working at, I’d question the validity from that point of view. How is it
scored and is it open to interpretation? (N 6)
Discussion
Findings suggest that MSF is a very effective assessment tool when used
appropriately. The concerns raised by trainees and assessors about the MSF process
should be tempered by the fact that it is not a tool that can, on its own, stop a doctor’s
progress through training. It is designed to focus and generate meaningful discussions on evidenced
strengths or weaknesses between the trainee and their Educational Supervisor.
Semi-structured interviews highlight concerns raised amongst assessors and those
being assessed regarding the validity of a process that relies on hand chosen
colleagues to assess performance. Although multi professional opinion was very
much welcomed by all participants, STs recognised they could strategically hand
MSFs only to those colleagues they felt would give a positive opinion. STs were aware
though that this ultimately would be counter-productive.
In the quantitative phase of this study we have shown that Clinical Supervisors
rate trainees’ performance more harshly than hand-picked assessors. The qualitative
investigation highlighted that MSF assessment was open to the interpretation of the
assessor rather than set marking criteria. Nurses expressed concern about the lack of
guidance they received on how they were to rate trainees. Consultants and doctors in
training reported they could refer to online guidelines but according to nurses this was
either unavailable or they were unaware of it. Nurses felt they needed more explicit
guidance that would enable them to make judgements based on what was expected of
a doctor at that level of training. If nurses have no guidance they may not know how
the MSF tool is used to manage the trainee’s development. In other words, assessors
without the necessary guidance may be reluctant to make comments that they feel
may have a direct impact on whether a trainee progresses through their training.
If MSF were widely perceived to be a development tool, constructive free text
comments might be made more freely.
In the mini-PAT exercise, the vast majority of assessors completed free text
comments regarding particular strengths demonstrated by the trainee. Far fewer
commented on areas for improvement. During the interviews free text comments on
the MSF form were reported to be a valuable part of the feedback process. They added
an individualised, personal element that gave assessors an opportunity to explain their
scores and gave trainees constructive and pertinent feedback.
Limitations
Recruitment for the quantitative phase of this study proved difficult. Recruitment was
probably hampered by the number of stages in the process of gathering data that were
reliant on different people: the trainees had to distribute two mini-PAT forms and then
both assessors had to return them – thus there was plenty of opportunity for the
process to be incomplete (a number of participants only had one mini-PAT returned
and therefore could not be included in the study). No data was available on which to
perform a sample size calculation; the decision was made to stop recruiting once 40
participants had returned both mini-PATs. We felt that this was a reasonable number
of participants on which to perform statistical analysis, and that time constraints and
‘study fatigue’ would have resulted in very little further data being collected.
Conclusion
There is a need for increased guidance and training for MSF assessors, especially non-
medical professionals. This could improve consistency in scoring. Hand-picked
assessors (mainly peers) did over-score during the mini-PAT exercise compared to
Clinical Supervisors. It is important that a diverse group of assessors, with
appropriate training and guidance, are chosen to assess. Free text responses should be
encouraged to a greater degree to build a richer picture of the trainee and to aid in the
clarification of outlying scores. It could be argued that assessors would be more
circumspect and objective if free text was a mandatory part of the MSF process. The
appropriate use of the MSF tool is also dependent on how the results are fed back to
the trainee. It is vital that the Educational Supervisor has the training and guidance to
ensure that the evidence presented is dealt with constructively, in the way the tool
was designed to be used.
Acknowledgments
The authors would like to thank all the Specialist Trainees and MSF assessors who
took part in this study.
Funding
Mersey Deanery funded this project.
Declaration of Interest
The authors report no declarations of interest.
Research Ethics
This study received University, Strategic Health Authority and NHS Research Ethics
Committee and local NHS Trust Research & Development approval.
Word count: 4585
References
Al Khalifa K, Al Ansari A, Violato C, Donnon T. (2013) Multisource feedback to
assess surgical practice: a systematic review. Journal of Surgical Education.
70(4):475-86.
Archer J, McGraw M, Davies H. (2010) Republished paper: Assuring validity of
multisource feedback in a national programme. Postgraduate Medical Journal 86
(1019):526-31.
Archer J, Norcini J, Southgate L, Heard S, Davies H. (2008) mini-PAT (Peer
Assessment Tool): a valid component of a national assessment programme in the UK?
Advances in Health Science Education Theory Practice. 13(2):181-92.
Archer J, Norcini J, Davies H. (2005) Use of SPRAT for peer review of paediatricians
in training. British Medical Journal, 330, 1251-1254.
Bullock AD, Hassell A, Markham WA, Wall DW, Whitehouse AB. (2009) How
ratings vary by staff group in multi-source feedback assessment of junior doctors.
Medical Education. 43(6):516-20.
Cohen S, Farrant P, Taibjee S. (2009) Assessing the assessments: UK dermatology
trainees’ views of the workplace assessment tools. British Journal of Dermatology.
161(1): 34-9.
Donnon T, Al Ansari A, Al Alawi S, Violato C. (2014) The reliability, validity, and
feasibility of multisource feedback physician assessment: a systematic review.
Academic Medicine. Jan 20. [Epub ahead of print]
Miles B, Huberman M. (1994) Qualitative Data Analysis. California, Sage
Publications.
Moonen-van Loon JM, Overeem K, Donkers HH, van der Vleuten CP, Driessen EW.
(2013) Composite reliability of a workplace-based assessment toolbox for
postgraduate medical education. Advances in Health Science Education Theory
Practice, 18(5):1087-102.
Norcini J. (2003) The metric of medical education: peer assessment of competence.
Medical Education. 37(6):539-43.
Ritchie J., Spencer L., O’Connor W. (2003) Carrying out qualitative analysis. IN:
Ritchie J., Lewis J. eds. Qualitative research practice: a guide for social
science students and researchers. London, Sage Publications.
Royal College of Psychiatrists (2012) Information about mini-PAT Assessments.
https://training.rcpsych.ac.uk/information-about-mini-pat-assessments. (accessed 10th
July 2013)
Violato C, Lockyer JM, Fidler H. (2003) Multisource feedback: a method of assessing
surgical practice. British Medical Journal. 326: 546-548.
Wright C, Richards SH, Hill JJ, Roberts MJ, Norman GR, Greco M, Taylor MR,
Campbell JL. (2012) Multisource feedback in evaluating the performance of doctors:
the example of the UK General Medical Council patient and colleague questionnaires.
Academic Medicine, 87(12):1668-78.
Figure 1 Mini-PAT (Peer Assessment Tool)

Trainee’s surname: _______________   Trainee’s forename: _______________
Trainee’s GMC No: _______________    Hospital: _______________
Trainee level: _______________       Specialty: _______________

Please complete the questions using a cross. Please use black ink and CAPITAL LETTERS.

Standard: the assessment should be judged against the standard expected at completion
of this level of training. Levels of training are defined in the syllabus.

Rating scale: 1-2 Below expectations; 3 Borderline; 4 Meets expectations;
5-6 Above expectations; U/C Unable to comment.

How do you rate this trainee in their:

Good Clinical Care
1. Ability to diagnose patient problems
2. Ability to formulate appropriate management plans
3. Awareness of own limitations
4. Ability to respond to psychosocial aspects of illness
5. Appropriate utilisation of resources e.g. ordering investigations

Maintaining Good Medical Practice
6. Ability to manage time effectively / prioritise
7. Technical skills (appropriate to current practice)

Teaching and Training, Appraising and Assessing
8. Willingness and effectiveness when teaching/training colleagues

Relationship with Patients
9. Communication with patients
10. Communication with carers and/or family
11. Respect for patients and their right to confidentiality

Working with Colleagues
12. Verbal communication with colleagues
13. Written communication with colleagues
14. Ability to recognise and value the contribution of others
15. Accessibility / reliability

16. Overall, how do you rate this doctor compared to a doctor ready to complete this
level of training?
Table 1: Median (IQ range) total assessment scores of the Clinical Supervisors and
hand chosen assessors for the five mini-PAT domains and the overall impression of
the trainee.

Domain                                            Clinical         Hand-chosen      Paired     P value
                                                  Supervisors'     assessors'       results
                                                  median (IQR)     median (IQR)     analysed
Good clinical care                                22 (20-23)       24 (21-25.25)    30         0.004
Good medical practice                             8.5 (8-9)        10 (8-10)        36         0.01
Teaching and training, appraising and assessing   4 (4-5)          5 (4-6)          36         0.002
Relationship with patients                        12 (12-15)       15 (13-16.25)    23         0.019
Working with colleagues                           18 (16-20)       19 (18-22)       29         0.131
Overall impression                                4 (4-5)          5 (4-6)          35         0.026