
Are Students Good Judges of Their Assessment Performance?

- Laurie Lomas, Yvonne Hill and Janet MacGregor

Paper presented at the European Conference on Educational Research, University of Crete, 22-25 September 2004

ABSTRACT

This paper examines how students’ assessments of their written assignments on two professional programmes at a Higher Education Institution (HEI) compare with the marks awarded by their tutors. Students were asked in individual semi-structured interviews to state the score that they thought they would achieve on an assessed essay and justify their decision using the marking grid. The score predicted by the student was then compared with the actual score awarded. Through reference to relevant literature, explanations were sought for why the different groups varied in their accuracy in predicting their scores. It was concluded that participants with greater employment and academic experience were more confident and therefore less likely to underestimate their abilities. Also, a clear understanding of the assessment grid lexicon helped participants to compare their work with the assessment criteria.

NOTES ON CONTRIBUTORS

Laurie Lomas is Assistant Director of King’s Institute of Learning and Teaching, King’s College London, UK.

Yvonne Hill is Head of Adult Nursing Studies at Canterbury Christ Church University College, UK.

Janet MacGregor is Head of Midwifery and Child Studies at Canterbury Christ Church University College, UK.

ADDRESS FOR CORRESPONDENCE

Dr Laurie Lomas
Assistant Director
King’s Institute of Learning and Teaching
King’s College London
James Clerk Maxwell Building
57 Waterloo Road
London SE1 8WA
UK

E-mail: laurie.lomas@kcl.ac.uk
Phone: 44 (0)20 7848 3941
Fax: 44 (0)20 7848 3481


Are Students Good Judges of Their Assessment Performance?

Introduction

This paper reports on the outcomes of a small-scale study which investigated students’ ability to assess the grade of their essays on two practice-based programmes, nursing and teaching and learning, in a Higher Education Institution.

Assessment is a key process in the learning and teaching of higher education students.

Indeed, Brown (1997) argues that,

‘If you want to change student learning then change the methods of assessment’ (p. 7).

Assessment serves as a tool to rank and grade students and this ultimately leads to a

certification of their academic achievements. Assessment also serves other purposes: when the assessment task is appropriate, it can diagnose learning formatively, evaluate teaching and motivate students (MacLellan, 2001). Assessment should

be seen as a tool for learning as well as simply for accreditation, and it should provide both

formative and summative feedback (Knight, 2002). Dochy et al. (1999) argue that existing

assessment practices need to change if students are to engage effectively in problem-solving,

reflection, developing professional skills and learning in real-life contexts.

As with many other aspects of higher education, reliability is an important factor when

considering any form of assessment. Reliability is synonymous with consistency and the clear

linking of learning outcomes, teaching methods and assessment with little or no ambiguity

over what is to be assessed. Reliability is more likely to be achieved when scores or grades

are not affected by who the assessor is, or by where and when the piece of work is marked

(Race, 2001). Reliability is greatest when there is uncontentious evidence of achievement. For

this reason there is a temptation to assess simple, unambiguous and non-controversial

achievement and avoid judgements of complex learning. For example, the promotion of self-

motivation is often one of the goals of higher education institutions but it is difficult to assess

the achievement of this goal reliably (Knight, op. cit.). Students are more likely to adopt a

‘deep’ approach to learning if they are self-motivated. A ‘deep’ approach involves making

sense of what is to be learnt and not just being able to reproduce the subject matter at a later

date. Learning should be very much more than the superficial reproduction of knowledge

(Rust, 2002). A ‘deep’ approach is also to be encouraged if students believe that they need to

understand the material fully to successfully complete the assessment task (MacLellan, op.

cit.). However, if there is too much emphasis on the reliability of assessment, then there is the

danger that the range of assessment tasks set by lecturers will be limited (Knight, op. cit.) and

it will be the more easily identified ‘surface’ rather than ‘deep’ learning that is assessed.


Throughout debates on the problems relating to the reliability of assessment, it has often

been argued that professionals, lecturers and tutors, are the most appropriate people to assess

students’ work because they are able to make judgements that are largely unaffected by

personal feelings and values (Leach et al., 2001). This paper challenges this assumption by

examining the concept of student self-assessment in a particular HEI.

Self-assessment

Self-assessment can be considered as involving students in identifying appropriate standards

and/or criteria to apply to their work and then making judgements about the extent to which

they have met these (Boud, 1995). Boud argues that students are always self-assessing and

that, before they hand in a written assignment, many of them have a clear idea of how good

they think the piece of work is. Self-assessment requires this active involvement of learners in

making judgements about their own learning. As implied above, all assessment involves an

appreciation of appropriate standards and criteria and also the ability to make judgements

about whether or not the work meets these. Boud claims that staff and students usually

concentrate on the latter, making judgements about whether work meets particular standards.

However, self-assessment is much more than students grading their own work; it should also

require students to be involved in the process of determining what is ‘good’ work.

The success of any self-assessment process relies on the ability of the student to evaluate

accurately his/her performance and strengths and weaknesses. Mature self-reflection is the

recognition that self-assessment is concerned with the student’s performance in terms of

meeting clear criteria. It is an assessment of the performance of the student and not an

assessment of the student as a person (Woods et al., 1988).

There is now greater use of self-assessment in Further Education and Higher Education

Institutions as it is becoming acknowledged that it is a necessary core skill for lifelong

learning and can assist students to become effective and responsible learners. Another

important feature of the move to greater student self-assessment is that it encourages students

and lecturers to examine existing assessment practices (Boud, 1995).

Self-assessment, like other personal and intellectual skills, needs practice, especially in the techniques of judgement and reflection on learning (Boud, 1992). Once mastered, self-

assessment is likely to lead to a deep approach to learning, although it must be used in the

context of a student’s subject domain otherwise only technical competence is adopted and

understood. Assessment criteria need to be absolutely clear, because teachers and students

may interpret criteria differently. Longhurst and Norton (1997) suggest that students value

content whereas teachers value argument. Students also require clarification of the meaning of

criteria on marking grids. These differences need to be addressed and reconciled because,

without active involvement through discussion and debate, the development of a common


view on standards and levels will be problematic for both teachers and students (Rust et al.,

2003).

A major advantage of discussing assessment criteria for written work is that it encourages

students to reflect and this leads to greater understanding of their individual learning

processes (Dochy et al., op. cit.).

Self-assessment helps to give learners greater ownership of the learning that they undertake

and improve their performance (Price et al., 2001). As Brown and Knight (1994) note,

‘Assessment is not then a process done to them, but is a participative process in which they themselves are involved. This in turn tends to motivate students, who feel they have a greater involvement in what they are doing’ (p.52).

However, like any form of innovative assessment, there are risks involved. Although self-

assessment may be more open and transparent, there could be a far greater risk of bias as the

students who are assessing their own work have a vested interest in securing a high mark

(Fleming, 1999). It also has to be taken into account that some students doubt their ability to

assess themselves, and other students think that assessment is the job of the teacher

(Boud, 1992). In addition, Boud (1994) identifies three other risk factors (see Table below)

which would render any self-assessment ineffective. However, Toohey (1996) claims that

problems of bias can be addressed and supports Boud’s view that self-assessment is generally

reliable if criteria are explicit and students are involved in developing assessment criteria and

given opportunities to practise them.

Boud (1994) summarises how self-assessment can be either liberating or oppressive,

depending on a number of factors.

Liberating factors

the motive for its introduction is related to enhancing learning;

it is introduced with a clear rationale and there is an opportunity to question it;

learners are involved in establishing criteria;

learners have a direct role in influencing the process;

it is one of a number of complementary strategies to promote self-directed and interdependent learning.

Oppressive factors

it is related to meeting institutional or other external requirements;

learners are using criteria determined solely by others;

the process is imposed on learners.

Traditionally, it has been lecturers who decide what knowledge is and consequently what is

to be assessed. However, today’s students are able to exert more control over their learning

experiences through, for example, choosing the modules to make up many professional


modular degree or postgraduate certificate programmes. Often students can also choose to

reflect upon their own practice experiences through work-based learning, and these can be the

subject of an assessed option module. Recent government papers have identified students as

key stakeholders in higher education provision; thus there is a need for them to have full

access to information about their academic programmes and performance levels. Their views

on the education and services they receive have routinely been canvassed but have

now become much more influential (QAAHE, 2002; DfES, 2003). The distribution of power

amongst the different higher education stakeholders is changing and self-assessment involves

an increase in the students’ share. However, some students may not want this greater power,

but those that do need to be central to the assessment process rather than reacting to it. They

are now challenging the lecturers’ hegemony in the assessment process (Leach et al., op. cit.).

With many self-assessment schemes, the first step involves encouraging students to decide

how they are to be assessed. This requires student involvement in the setting of intended

learning outcomes and the evidence required to achieve them. In this situation it is important

that students clearly understand the criteria and are schooled in techniques of self-assessment

(Reynolds and Trehan, 2000) so that they have confidence in the assessment process as more

equal partners and do not regard it as something of a lottery (MacLellan, op. cit.). It is the

experience of Reynolds and Trehan (op. cit.) in using self-assessment with postgraduate

management students that it is very rare for a moderator to be called in to adjudicate. Other

schemes may give differing amounts of power but where this largely resides with the lecturer,

the process perhaps cannot be regarded as truly participative or empowering (Race, op. cit.).

The successful introduction of self-assessment is more likely if there is interest and

commitment amongst staff, a facilitative learning environment for students to be autonomous

and regular open discussion of educational issues (Boud, 1994).

The Empirical Study

The small-scale study aimed to investigate students’ ability to assess the grade of their essays

on two practice-based programmes, nursing and teaching and learning, in a particular Higher

Education Institution. The study addressed whether students tended to over-rate or under-rate

their academic performance compared with the ratings given by their tutors. This line of

inquiry is not new as Boud and Falchikov (1989) made this comparison over fifteen years ago

and found that weaker students were more likely to over-value their work whereas stronger

students tended to undervalue their work.

It cannot be assumed that students and tutors are using the same criteria as it could be, for

example, that students are rating effort rather than the product of that effort, as is more likely the case with the tutors (Boud and Falchikov, op. cit.). Common practice is for both groups in

the study to be given the relevant marking grid in the student handbook at the commencement


of their particular module. Feedback on written work thereafter utilises the language of

those grids to advise students of their achievement and guide them on improvement. Although

the use of assessment grids provides common criteria for both parties, the question we ask is

whether just providing the assessment grid is adequate for students to understand the

requirements for this written work, or whether tutors should explain and practise with marking grids so that students may better engage with them.

There are many dimensions of self-assessment which were considered in the study

including the nature of students’ prior essay writing experience, level of study, personal

learning goals (Burton and Jackson, 2003), and self-confidence (Boud and Brew, 1995)

before comparing their marks with those of the lecturers. It is suggested that any student

needs a minimum level of experience in academic work before the notion of self-assessment

is meaningful to them.

GRAPH 1

Pre-Registration Nursing Students – HE Level 2

Although age and gender were not formally recorded, it is known that all but one of the 26 pre-registration students was female.

                    STUDENT    TUTOR    DIFFERENCE
All respondents       50.77    60.69         -9.92
Estimated 40-49%      42.50    60.00        -17.50
Estimated 50-59%      50.42    58.42         -8.00
Estimated 60-69%      61.00    64.40         -3.40
Estimated 70%+        70.00    75.00         -5.00


GRAPH 2

Post-Registration Nursing Students – HE Level 3

                    STUDENT    TUTOR    DIFFERENCE
All respondents       50.00    48.00         +2.00
Estimated 40-49%      44.50    39.50         +5.00
Estimated 50-59%      51.33    49.89         +1.44
Estimated 60-69%      60.00    65.00         -5.00
Estimated 70%+            -        -             -

GRAPH 3

Postgraduate Certificate in Learning and Teaching Students – HE Level 4

                    STUDENT    TUTOR    DIFFERENCE
All respondents       59.09    60.09         -1.00
Estimated 40-49%          -        -             -
Estimated 50-59%      52.00    52.80         -0.80
Estimated 60-69%      62.50    64.25         -1.75
Estimated 70%+        70.00    70.00          0.00


GRAPH 4

All Three Groups of Students in the Sample

                    STUDENT    TUTOR    DIFFERENCE
All respondents       52.35    57.08         -4.73
Estimated 40-49%      43.17    53.17        -10.00
Estimated 50-59%      51.04    54.38         -3.34
Estimated 60-69%      61.50    64.40         -2.90
Estimated 70%+        70.00    71.67         -1.67
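As an illustration of the banded comparison shown in graphs 1 to 4, and of the standard deviations quoted in the discussion below, the short Python sketch that follows groups paired (predicted, awarded) marks by the student’s estimated band and reports the mean student mark, mean tutor mark and mean difference per band. This is a minimal sketch for exposition only, not the authors’ analysis code: the mark pairs and band boundaries are invented for demonstration and are not the study’s raw data.

    from statistics import mean, stdev

    # Hypothetical (student prediction, tutor award) pairs -- not the study's raw data.
    pairs = [(50, 58), (45, 60), (62, 64), (70, 75), (55, 57), (48, 59)]

    # Bands keyed on the student's own estimate, as in graphs 1 to 4.
    bands = {"40-49%": (40, 50), "50-59%": (50, 60),
             "60-69%": (60, 70), "70%+": (70, 101)}

    # Overall means and the spread (standard deviation) of the differences.
    diffs = [s - t for s, t in pairs]
    print(f"All respondents: student {mean(p[0] for p in pairs):.2f}, "
          f"tutor {mean(p[1] for p in pairs):.2f}, "
          f"difference {mean(diffs):.2f} (SD {stdev(diffs):.2f})")

    # Per-band means and differences, mirroring the table rows above.
    for label, (lo, hi) in bands.items():
        band = [(s, t) for s, t in pairs if lo <= s < hi]
        if not band:
            continue  # an empty band is shown as '-' in the tables
        s_mean = mean(s for s, _ in band)
        t_mean = mean(t for _, t in band)
        print(f"Estimated {label}: student {s_mean:.2f}, tutor {t_mean:.2f}, "
              f"difference {s_mean - t_mean:.2f}")

A negative difference indicates underestimation by the student, the pattern reported for most bands in the study.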

Discussion


In the discussion that follows, the figures in brackets refer to the mark the student believes

he/she will receive, followed by the mark that is awarded by the tutor. For example, (50-45)

would mean that the student thought the piece of work would receive 50% but it was actually

awarded 45%.

Examination of the results of this small survey revealed three areas for discussion, and these are now considered below.

Theme 1: Identifying and understanding the meaning of standards/criteria in the assessment of written work

Boud (1995) explains how students find problems in succeeding when criteria are determined

by others. However, when these students were invited to discuss marking criteria before attempting the assignment, the results show that a majority assessed themselves very closely to their teachers’ allocated grade (see graph 4). If one looks at the standard deviations for the three

sample groups, it is the junior group (graph 1) that shows the widest spread of data (standard

deviation 12.46). This suggests that novices need the most support in self-assessing academic

work. By using the criteria grid, many students appear able to steer their essay writing skills

forward, thus seeming to learn through the process of assessment (MacLellan, op. cit.). This

was seen in comments such as ‘use the feedback mark sheets’ (50-75 marks) and ‘try to

improve my writing style’ (50-58 and 50-65 marks). However, one could argue they are

learning to ‘play the game’ and their understanding is superficial rather than deep and, as Rust

(2002) warns, their skills will be lost on completion of their study programme. Their success

in lifelong learning will be judged over a career time frame rather than a three-year

undergraduate programme. Comments such as ‘tend to be descriptive, try to discuss’ (50-45

marks), ‘don’t read enough’ and ‘done level 3 before and know what critical analysis is’

support the view that students are using the criteria language to communicate with their

teachers and learning to use the lexicon of academia.

The students studying at HE level 2 and 3 were all working to the Faculty of Health agreed

assessment criteria for academic work, which are based on the SEEC (2001) level descriptors and the QAAHE (2001) framework of HE qualifications. These are regularly reworded and updated

by teaching staff in the Faculty. However, Boud (1994) describes this process as ‘oppressive’

when it is related to meeting institutional requirements, the criteria are determined solely by others rather than by the learners, and the process is imposed on them. The Faculty of Health agreed

criteria for a pass (40-49%) at HE level 2 are that,

‘work shows a tendency to be descriptive, but some discussion evident. Safe knowledge for practice demonstrated. Relevant references used and focus to the assignment achieved. Word processed with some errors’ and, at level 3, ‘Demonstrates some knowledge and understanding of the issues involved. Some


critical thinking but shows difficulty in balancing and substantiating arguments. Evidence of reading around the subject chosen for the assignment although material may be used uncritically. Relates theory to practice appropriately and work is word processed adequately’.

Some authors such as Boud (1995) and Dochy et al. (op. cit.) would argue that students would

only achieve true understanding of their achievement if they had designed the assessment

criteria themselves, but others (Race, 2001; Knight, 2002) warn that reliability would then be sacrificed between modules and programmes awarding the same credit rating; assessment would therefore be inequitable and would challenge present HEI benchmarks such as those offered by the QAAHE

(2001). However, as Boud has found, most HEIs are wary of such freedom.

It could be suggested from our results that it is the students who are borderline satisfactory

(see graphs 1 and 2, 40-49% band) who are for some reason the most vulnerable in

understanding the assessment process, as they seem to have difficulty in gauging their

learning from given standards. It would be interesting to work with this group to understand

their difficulties, in order to improve their performance. The Postgraduate Certificate in

Learning and Teaching (PGCLT) students shown in graph 3 are members of academic staff.

They are working at HE level 4 and they may have been advantaged because, although they

do not develop their own assessment criteria for this programme, they are well aware of

learning outcomes and assessment criteria through their work as lecturers.

Theme 2: Judgement on own learning ability, strengths and weaknesses

Comments such as ‘I need to enjoy the subject to learn’ and ‘I need a lot of time and have to

make a lot of effort’ (50-45, 40-62) support Brown and Knight’s (op. cit.) view that assessment is a participative process in which students are involved. This challenges the assertion of Woods et al. (1988) that assessment is not of the person but of the performance against set criteria.

These students saw their personal attributes as crucial for success. Learning in higher

education is more complex than understanding learning and assessment strategies (Knight

2002). Personal interests can have varying moderating effects on students and may have

profound implications from one assignment to the next. Students represented in graphs 2 and

3 were studying part-time within a full-time work commitment. Those represented in graph 1

were studying as trainee professionals and being assessed in both theoretical and practical

skills. Many in all the groups were parents of small children who are themselves in full-time education. Thus some students suffer some privation at some point in a three-year programme.

The comparison of the study levels in graphs 1, 2 and 3 shows that those at a higher level, the PGCLT students in graph 3, who have progressed through more HE programmes, have better judgement of their academic achievement, as do those who achieved high grades (70%+) in all groups (graph 3, standard deviation 2.52). Thus we could suggest that ‘practice


makes perfect’ and there is no substitute for experience with assessment criteria and that some

students are better able than others to learn this quickly for a variety of reasons. Students

achieving in the higher criteria bands commented ‘have done quite well at level 2 in the past’,

‘know what the criteria mean to me’ and ‘have got around 55% before and have now

improved my writing style’. It would be interesting to tease out what they mean by ‘done

well’ and ‘improved writing style’ in relation to writing an academic assignment, since these are more subjective appraisals of written work. From our experience in moderating academic

written work, discussion often centres on module objectives, language use and personal

preference. Sometimes it may be that guidelines for assignments do not match the assessment grid. This could improve if students were also involved in assessing the assessment tasks and the criteria for grading.

Theme 3: Who is the judge of ‘good’ work and is it content or argument?

Leach et al. (2001) note that many teachers argue that they are the best judges of student

achievement and Race (2001) supports this notion in explaining how reliability is most likely

to be achieved when scoring is not reliant on a large number of assessors. Thus, individual

students or groups of students assessing their own work could be seen as unreliable.

Alternatively, student input could be seen as strengthening the reliability of assessment

outcomes and criteria as an additional element is being included in the overall process. Boud

(1995) advocates benchmarking to external criteria, supporting our suggestion that a tripartite

arrangement involving tutors, students and external reference points could be the ideal

situation. All students in this study were assessed against Faculty criteria, which are externally benchmarked, and their work was marked by their module teacher. Including their self-assessment certainly provoked discussion for the future.

The assessment criteria regard 70%+ as ‘first-class’ work. At HE level 2 the lexicon of the

assessment grid states that it should,

‘demonstrate an ability to sustain an argument, relate theory to practice and start to use the skill of critical analysis’

and at HE level 3 it should,

‘demonstrate excellent ability to write critically using research material from a wide selection of knowledge bases, synthesize concepts, and relate theory to practice showing creative thought’.

In many cases the students underestimated their ability in relation to the marker (see graph 4).

However, this underestimation is more pronounced in the lower grades. It may have been that the students were

rating their content knowledge as suggested by comments such as ‘did not read enough’, ‘was

not interested in the subject therefore not motivated to get involved in this module’ and ‘could

not get the articles or references for this module’ and the teacher was rating something


different. The teacher was probably rating their work more closely to the assessment criteria

which may not specify actual content requirement other than what should be done with it –

the process of ‘writing critically’ and the module learning outcomes. Longhurst and Norton

(1997) support the notion that students may value content more than teachers, who may look

for facts being manipulated to formulate arguments. This appears to be supported in the

student interview comments above. Rust (2002) warns against awarding high assessment

grades for the correct but superficial regurgitation of facts. This often occurs because of

previous educational experience of providing evidence for learning which values correct

content. Perhaps HEI assessment needs to pay more attention to educational experience as

wider access to higher education study becomes the norm.

Conclusions

This relatively small-scale research project on students’ ability to assess the grade of their

essays leads to the following conclusions which are related to the three themes identified

earlier.

Generally, the students in the sample awarded themselves similar scores to those that were

awarded by the tutors. The biggest gap between the tutors’ and the students’ marks was on the

HE level 2 pre-registration nursing programme. The students tended to underestimate their

score and, because they are relatively new to academic study and the professional practice of

nursing, it is likely that these students in particular will benefit from the tutor’s guidance in

undertaking a thorough study of the assessment criteria grid, thereby coming to understand what the grid means and what is expected of them. In contrast, the

participants on the HE level 4 PGCLT programme, with greater employment and academic experience, were more confident and therefore less likely to

underestimate their abilities.

It is difficult to address the complex social, personal, cultural and educational issues that

lead to students either under- or overestimating their achievements. Nevertheless, with tutors

and students working closely together on the grids, the level of ambiguity will be reduced and

a shared understanding of the meaning of the criteria will develop. This should help to bridge

the gap between what students believe constitutes ‘good’ work and the tutors’ views on this.

There was a tendency for students to emphasise content knowledge whereas tutors were more

interested in whether there was critical analysis. The assessment criteria grids do stress the

importance of critical analysis in ‘good’ written assignments. However clearly this may be stated in the grids, tutors do need to make it explicit when talking to students and be

sure that they are well aware of what is expected in order to achieve a ‘good’ mark. If

students are to take a more active part in their education, they need help from


their tutors to understand the statements in the grids so that they can become better judges of

their performance.

References

BROWN, S. (1997) Assessing Student Learning in Higher Education, (London, Routledge).

BROWN, S. and KNIGHT, P. (1994) Assessing Learners in Higher Education, (London, Kogan Page).

BOUD, D. (1992) The use of self-assessment schedules in negotiated learning, Studies in Higher Education, 17(2) pp 185-200.

BOUD, D. (1994) ‘The move to self-assessment: liberation or a new mechanism for oppression?’ in Armstrong P., Bright, B. and Zukas, M., Reflecting on Changing Practices, Contexts and Identities, Proceedings of the 24th Annual Conference of the Standing Conference on University Teaching and Research in the Education of Adults, University of Hull, 12-14th July. Leeds, Department of Adult Continuing Education, University of Leeds, pp 10-14.

BOUD, D. (1995) Learning through Self-assessment, (London, Kogan Page).

BOUD, D. and BREW, A. (1995) Developing a typology for learner self-assessment practices, Research and Development in Higher Education, 18, pp 130-135.

BOUD, D. and FALCHIKOV, N. (1989) Quantitative studies of student self-assessment in higher education: a critical analysis of findings, Higher Education, 18(5), pp 529-549.

BURTON, J. and JACKSON, N. (Eds) (2003) Work Based Learning in Primary Care, (Oxford, Radcliffe Medical Press).

DEPARTMENT FOR EDUCATION AND SKILLS (DfES) (2003) The Future of Higher Education, Cmd 5735, (Norwich, HMSO).

DOCHY, F., SEGERS, M. and SLUIJSMANS, D. (1999) The use of self-, peer and co-assessment in Higher Education: a review, Studies in Higher Education, 24(3), pp 331-348.

FLEMING, N. (1999) Biases in marking students’ written work: quality?, in: BROWN, S. and GLASNER, A. (Eds) Assessment Matters in Higher Education, (Buckingham, SRHE/Open University Press).

KNIGHT, P. (2002) Summative assessment in Higher Education, Studies in Higher Education, 27(3), pp 275-286.

LEACH, L., NEUTZE, G. and ZEPKE, N. (2001) Assessment and empowerment: some critical questions, Assessment and Evaluation in Higher Education, 26(4), pp 293-305.

LONGHURST, N. and NORTON, L. (1997) Self-assessment and coursework essays, Studies in Educational Evaluation, 23(4), pp 319-330.

MacLELLAN, E. (2001) Assessment for learning: the differing perceptions of tutors and students, Assessment and Evaluation in Higher Education, 26(4), pp 307-318.

PRICE, M., O’DONOVAN, B. and RUST, C. (2001) Strategies to develop students’ understanding of assessment criteria and processes, in: RUST, C. (Ed) Improving Student Learning: Improving Student Learning Strategically, (Oxford, Oxford Centre for Staff and Learning Development).

QAAHE (Quality Assurance Agency for Higher Education) (2001) The Framework for Higher Education Qualifications in England, Wales and Northern Ireland, (Gloucester, QAAHE).

QAAHE (Quality Assurance Agency for Higher Education) (2002) Handbook for Institutional Audit: England, (Gloucester, QAAHE).

RACE, P. (2001) LTSN Generic Centre Assessment Series: A briefing on self, peer and group assessment, (York, Learning and Teaching Support Network).

REYNOLDS, M. and TREHAN, K. (2000) Assessment: a critical perspective, Studies in Higher Education, 25(3), pp 267-278.


RUST, C. (2002) The impact of assessment on student learning, Active Learning in Higher Education, 3(2), pp 145-157.

RUST, C., PRICE, M. and O’DONOVAN, B. (2003) Improving students’ learning by developing their understanding of assessment criteria and processes, Assessment and Evaluation in Higher Education, 28(2), pp 147-164.

SEEC (Southern England Consortium for Credit Accumulation and Transfer) (2001) SEEC Credit Level Descriptors, (Essex, SEEC).

TOOHEY, S. (1996) Implementing student self-assessment: some guidance from the research literature, in: NIGHTINGALE, P. et al. (Eds), Assessing Learning in Universities, (Sydney, University of New South Wales).

WOODS, D., MARSHALL, R. and HRYMAK, A. (1988) Self-assessment in the context of the McMaster problem-solving programme, Assessment and Evaluation in Higher Education, 13, pp 107-127.
