
Academic Standards: The British Experience



Change: The Magazine of Higher Learning
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/vchn20

Academic Standards: The British Experience
Roger Brown, Liverpool Hope University
Published online: 25 May 2011.

To cite this article: Roger Brown (2011) Academic Standards: The British Experience, Change: The Magazine of Higher Learning, 43:3, 65-70, DOI: 10.1080/00091383.2011.569289

To link to this article: http://dx.doi.org/10.1080/00091383.2011.569289



Roger Brown ([email protected]) is professor of higher education policy at Liverpool Hope University. He was previously vice chancellor of Southampton Solent University, and before that he served as chief executive of the Higher Education Quality Council. His book, Higher Education and the Market (Routledge, 2010), is a survey of the worldwide marketization of higher education.

By Roger Brown

ACADEMIC STANDARDS: THE BRITISH EXPERIENCE

Although many of the recommendations in A Test of Leadership: Charting the Future of U.S. Higher Education (the Spellings Commission report) have still to be implemented, policy makers’ insistence that American colleges and universities measure student learning outcomes in a way that permits comparisons seems unlikely to go away. Nor does the recommendation that surfaced in the meetings following the report’s release — that accrediting agencies should start insisting on at least minimal standards for graduation — seem likely to disappear, especially given the development of common standards for K-12 described by Kati Haycock in the July/August 2010 issue of Change.

Proponents of the recommendations argue that being able to compare the learning outcomes of various colleges and universities enables those institutions to improve and students to make informed choices, and that common standards will clarify what it means to earn a college degree. Meanwhile, opponents argue that institutions are too diverse to compare and that common standards are either impossible to develop or not useful, as well as an infringement of institutional autonomy and academic freedom.

Nevertheless, these concerns and pressures seem unlikely to abate. Does the British experience offer a possible means of squaring this particular circle?

This article begins by outlining the British approach to quality assurance. It describes challenges to the system that surfaced in the mid-90s and the institutions’ responses to them. It then looks at the further challenges that have arisen with marketization, challenges that can be expected to intensify when what is in effect a voucher system for funding university teaching is introduced in 2012. It concludes by sketching out a reform program that attempts to put academic control of standards on a more secure and professional footing and that may also be of some relevance to the US.

The UK Quality-Assurance System

By international standards, British universities have quite elaborate (and old—some can be traced back to 1561) internal quality controls. They rely on external examiners, faculty from other universities or colleges employed by, answerable to, and (modestly) remunerated by the institution. External examiners are almost unique to the UK (Denmark and Malta also have them), as is the system whereby the degrees that students are awarded in most subjects are classified—for example, as first class, upper second, lower second, etc. (Australia also has degree classifications, although its degree structures are closer to those of Scotland than England).

The external examiners’ reviews are conducted under the broad jurisdiction of the Quality Assurance Agency for Higher Education (QAA); a sister agency, the Higher Education Academy, is responsible for promoting quality improvement. Both agencies are jointly “owned” by the government and the institutions. Programs in subjects like engineering and accountancy are also subject to accreditation by professional and statutory bodies.


The extent of the scrutiny of outcomes depends on the level at which the external team is operating. Subject-level external examiners will look at individual assessments to see that the student scores are fair and consistent. As well as examining student work, they may meet with students and even participate in oral exams. By contrast, program-level external examiners are more concerned with procedural issues, such as whether the exam board has conducted its business appropriately.

While there is no set format, external examiners’ reports usually comment on such matters as

• the standards met by the students compared to the performance in other comparable programs;
• the strengths and weaknesses of the student cohort;
• the quality of teaching and assessment as disclosed by the students’ performance;
• the extent to which the standards set and achieved are appropriate;
• the design, structure, and marking of assessments;
• the exam procedures; and
• the service provided by the institution to the external team.

They may also comment on factors that might have influenced performance, such as lecturer illness or resource constraints. These reports are taken very seriously, and the institution’s handling and use of them is one of the key pieces of evidence considered in external quality assurance (see below).

The effectiveness of these institutional arrangements for protecting quality and standards is periodically tested through a process of institutional review. This process has a good deal in common with American regional accreditation, although the reports have been published since 1993.

These long-standing quality-assurance arrangements have been subject to two main sets of challenges since the unified system was created in 1992, when the former polytechnics acquired a university title. It is the way in which the system has coped and is coping with these challenges that may be of interest in the US.

The First Challenge

The first challenge came from the British government in the mid-90s. British universities and colleges have been active in recruiting overseas students and developing partnerships with overseas institutions for many years. Britain remains the second-largest recruiter of international students after the US, and it has been estimated that there are a further 200,000 students studying abroad in programs leading to British qualifications awarded by more than 1,600 partner organisations.

In January 1994 the then-secretary of state for education visited Malaysia and Singapore. He was deluged with complaints from his hosts about some of the British institutions’ local entrepreneurial activities. This raised the question of whether, following the absorption of the polytechnics, academic standards were being compromised by the ways in which the universities recruited, educated, and accredited students.

Box 1: British Quality Controls

These internal controls cover

• admissions policies, so that only students capable of benefitting from particular programs are enrolled;
• program validation, monitoring, and review, so that only those that are deemed fit to lead to an institution’s degree are offered;
• assessment regulations and mechanisms, so that only students who reach the required level of attainment receive awards. This includes a significant amount of double marking of students’ work, much of it “blind”;
• monitoring and feedback processes, so that opportunities are taken to improve the quality of what is offered;
• staff selection and development, so that only suitably qualified and trained staff teach students;
• staff appraisal, so that staff receive regular, structured feedback on their performance.

Box 2: External Examiners

In assessing student learning outcomes, the external examiners see that students are fairly assessed and assure some degree of comparability in the awards offered by different institutions in the same subjects. Typically they do this by moderating the planned assessments, scrutinising the assessment outcomes, and reporting to the vice chancellor (president).

Moderation involves considering and approving the form and content of the proposed assessments of student learning. The external examiner’s comments are fed back to the author, who is expected to take them into account and tell the external team of the action taken. In theory, an examiner who is seriously unhappy with any aspect of the assessment process can decline to sign off the exam board report, although in practice this is rare.

In their reports, external examiners are expected to confirm that

• The academic standard of each degree and its component parts (typically, modules) are set and maintained at the appropriate level and that student performance is judged against this standard;
• The assessment process measures student achievement appropriately against the intended outcomes of the program of study and is rigorous, fairly operated, and in accordance with the institution’s policies and regulations; and
• Institutions are able to compare the standards of their degrees with those of other institutions.


The sector’s response, through what was then the Higher Education Quality Council (of which the author was chief executive) and subsequently became the QAA, was to conduct extensive research into academic standards, eventuating in the Graduate Standards Programme (see Resources). Academic standards were defined as the levels of achievement required to gain a specific university degree.

The broad conclusion was that because of the extent to which academic standards depend on tacit knowledge and socialization into assessor groups, written definitions would have only a limited value.

Nevertheless, institutions needed to be more explicit about the knowledge, skills, and understanding that they were aiming to develop in their students and ready to benchmark their expectations of student achievements against those of other institutions. Accordingly, the Council proposed that there should be a set of guidelines about the structuring and nomenclature of degrees and the levels at which they were offered.

As a result of this work—and work by the Council’s successor body, the QAA—the UK now has an “academic infrastructure” with four main elements: a code of practice, a framework for higher education qualifications, subject benchmark statements, and guidelines for program specifications.

Box 3: The UK’s Academic Infrastructure

• a code of practice covering all aspects of quality management, including assessment and course approval and review, as well as external examination;
• a framework for higher education qualifications containing a broad description of the academic expectations associated with each level of degree, together with more detailed descriptors of the skills and competences associated with degree holders;
• subject benchmark statements outlining what can be expected of a graduate in terms of the abilities and skills needed to develop understanding and competence in a particular subject. There are now 57 such statements covering a very wide range of subjects and fields of practice;
• guidelines for program specifications setting out the intended aims and learning outcomes of each program of study (see Resources).

Universities’ and colleges’ use of the infrastructure is evaluated through the periodic institutional reviews already mentioned. It should be emphasised that because institutions alone are responsible for setting the conditions for their degrees, these frameworks can only be reference points. Institutions are nevertheless asked to say how they make use of them and to justify any significant departures. At the very least, the infrastructure requires institutions to reflect on the conditions for awarding their degrees and creates some commonality in curricular requirements across the sector.

When the infrastructure was being devised, there was considerable opposition to it on the grounds of interference with academic freedom, reduction of diversity, and increased bureaucracy. Some twelve years on, the HEQC approach has wide acceptance not only nationally but internationally—for example, through the European Tuning Project described by Barbara Kehm in the May/June 2010 issue of Change.

The Second Challenge

The second and more recent challenge is to the notion of comparability and to the role of external examiners in ensuring it.

As can be seen, there is considerable attachment in Britain to the idea that there should be a genuine equivalence in the levels of learning required of, and achieved by, students following different programs of study at one or more institutions in the same or different subjects and leading to the same or a cognate award. As good a statement as any of the reasons can be found in the report of the 1985 Lindop Committee:

In Britain it has long been thought desirable that all degrees should be broadly comparable in standards regardless of the institution where they were obtained, and in the public sector external validation, backed up by the external examining system, could play an important part in ensuring that no institution’s degrees fell below a certain minimum standard. If it was open to any institution that wished to confer its own degrees, it would be hard to maintain the confidence of the public that, for all that there were informal institutional “pecking orders”, all degrees were broadly comparable. This might seriously undermine the credibility of the qualifications obtained by some students.

Within the UK, students, employers, and others value consistency, which has also been reflected in common undergraduate fee levels (at least up to now). Externally, the UK’s considerable success in attracting international students, partners, and staff has depended very heavily on the continuing currency and standing of — and some consistency between — institutions, subjects, and programs. However, this principle of comparability has now come under pressure from three sources: assessment, expansion, and competition.


There is longstanding and substantial evidence of insufficient professionalism by institutions, departments, and faculty in the practice of assessment (Cox, 1967; Williams, 1979; Warren-Piper, 1994; Yorke, 2009). This has led, among other things, to significant variations in the levels of achievement aimed at and achieved by students.

The QAA summed the problem up in a 2008 report based on the outcomes of the latest cycle of institutional reviews:

Worries include doubts in some cases about the double marking and/or moderation of students’ summative assessment; continuing difficulties with degree classification; departures from institutional practice in the way staff in departments and schools work with external examiners; and generally weak use of statistical data to monitor and quality assure the assessments of all students and degree classifications.

The reviews also found weaknesses in the arrangements in some institutions for detecting and dealing with plagiarism and for providing feedback on students’ assessed work, including feedback to international students. This has consistently been the weakest area of the National Survey of Student Satisfaction, which the UK has used on a subject and institution basis since 2005.

Partly as a result of these concerns, we are now experimenting with an alternative to honours degree classification, the Higher Education Achievement Record (HEAR). Instead of a single overall classification, students will receive a detailed transcript showing their marks module by module, together with information about prizes, extracurricular activities, and so on. However, it is not yet clear whether the HEAR will replace or supplement the honours classification, employers especially being attached to what they see as a cheap and easy way of filtering job applicants.
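To make the contrast concrete, here is a minimal sketch in Python of the two reporting models: collapsing module marks into a single honours class versus carrying them forward in a HEAR-style transcript record. The band boundaries, record fields, and simple averaging rule are illustrative assumptions, not the HEAR specification or any institution’s regulations.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative honours bands only; institutions set their own boundaries and rules.
BANDS = [(70.0, "First"), (60.0, "Upper second"), (50.0, "Lower second"), (40.0, "Third")]

def honours_class(marks: Dict[str, float]) -> str:
    """Collapse module marks into a single classification (the traditional model)."""
    average = sum(marks.values()) / len(marks)
    for boundary, label in BANDS:
        if average >= boundary:
            return label
    return "Fail"

@dataclass
class TranscriptRecord:
    """HEAR-style record (hypothetical fields): marks reported module by module,
    alongside prizes and extracurricular activities, with no single overall label."""
    student: str
    module_marks: Dict[str, float]
    prizes: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)

marks = {"Microeconomics": 68.0, "Statistics": 72.0, "Economic History": 58.0}
print(honours_class(marks))                    # -> Upper second
print(TranscriptRecord("A. Student", marks))   # retains the detail the single label discards
```

The point of the contrast is the one made above: the single label is cheap to filter on, while the transcript preserves the information the label throws away.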

The second factor is the expansion and diversification of the system since the late 1980s and even since the early 2000s. As well as the enormous increase in the number of institutions awarding degrees and the number and range of subjects and students in the system, three developments are of particular significance.

• First, an increase in the categories of work being examined (invigilated exams, coursework, portfolios, projects, internships, etc.) and an associated reduction in the breadth of knowledge and understanding actually being assessed at any one time.

• Second, the growth of joint programs—as well as ones that are inter- and multi-disciplinary and modular (i.e., made up of discrete units of learning, each separately assessed and accredited). Modularity in particular puts considerable demands on external examiners, who have been recruited mainly for their disciplinary expertise.

• Third, the increased importance of such concepts as “enterprise,” “employability,” and “transferable skills”—to which conventional assessment methods, concerned as they mainly are with testing mastery of subject matter, may not be well suited.

The third and most recent factor is the increasing competition among institutions associated with the introduction of significant tuition fees (from 1998), commercial institutional rankings, the National Student Survey, and greater commercialization.

These pressures can be expected to intensify when the cap on tuition is raised from the present £3,290 to £9,000 in 2012. This will almost certainly mean serious price competition in the undergraduate market for the first time, with fees (what is in effect a voucher system) representing nearly all of most institutions’ revenue for teaching. Such competition can be expected to lead to greater variations in standards and resources, even while it challenges the peer-review structures that have hitherto, albeit imperfectly, helped to keep such variability in check.

How then should we proceed?

Responses to Competition

This part of the discussion begins with a paradox. As the system has expanded and the cost to the student has risen, the desire for comparability has increased: Students and parents want to know whether the additional expenditure required to study a particular subject, go to a particular institution, and/or pursue a particular degree is worthwhile as compared with another choice. But by the same token, the ability to achieve such comparability has diminished. How, if at all, can this conundrum be solved?

It may have been a reasonable expectation that program outcomes should be comparable at a time when only a small proportion of the population went to university; when the student population was more equivalent in background, preparation, and ability; and when majors were relatively uniform in structure and content. But today the situation is quite different.

In Britain, nearly half of the young population now participates in higher education; students’ abilities are much more varied; and the purposes, character, and intended outcomes of programs diverge enormously. It would therefore seem to make little sense to aim even for a broad measure of comparability. Instead, institutions should simply state their intentions and expectations as regards student achievement, and students and others should make their choices based on that. In this model, regulation would be confined to periodic checks to verify the truthfulness of institutions’ claims; everything else should be left to “the market.”

However, even if real comparability is a chimera, stakeholders (and even institutions themselves, both as educators and employers) need some confidence that students graduating from any recognised institution have reached some minimum level of attainment and that the degree awarded signifies some reasonable acquisition of knowledge, skills, and understanding. Even if it did not matter nationally, this would be very important for maintaining the international reputation for quality that British universities and colleges enjoy.

But how should it be achieved? The following represents the bones of a possible reform program.

First, institutions should publish more information about the aims and outcomes of programs, what students need to do to attain them, and how the institution will provide them with the necessary facilities and opportunities to do so. Although they have been criticised as reductionist, the learning outcomes now expected for every major should enable everyone connected with it to see what kinds of achievement are being aimed at.

Second, we have to make a determined effort both to improve the quality and consistency of assessment practices and to communicate the limitations of any assessment method, even those more valid and reliable than what we currently have. A 2009 QAA report, Thematic Inquiries into Concerns about Academic Quality and Standards in Higher Education in England, recommended a review of assessment practices “supported by developmental activities aimed at improving the robustness and consistency of assessment of classification practices within and between institutions,” together with clarification and explanation of the reasons for, and meaning of, variation in particular approaches to assessment (see Resources). This would seem appropriate—indeed, long overdue.

Third, a national committee is currently reviewing the role of external examiners with a view to improving the quality of their training and preparation. One of the disadvantages of the system is that institutions tend to rely on the examiners too much whenever questions are raised about the appropriateness of their standards. The QAA and its sister body could between them sponsor the creation of networks of faculty, generally but not exclusively on a subject basis, concerned with different aspects of assessment. These would be separate from, though they could overlap with, external examiners’ fora.

The aim would be to strengthen the ways in which faculty determine the appropriateness of majors and degrees by reference to practice elsewhere. This would involve comparing the quality of student work and the judgments made about it. In this way the sector could build shared understandings of standards in a way that is still relatively uncommon at present, at least outside some of the areas covered by professional accreditation.

It would also be helpful if institutions could develop assessment archives, data, and information to support the monitoring of standards-setting and assessment processes, trends in awards over time, and relationships between university and other awards (at both the local and system levels). Institutions could collaborate to compare and agree on common conventions for assessment—for example, threshold pass marks, the formula used to determine the final standard for the major, whether all or only some modules/assessments count towards the degree, compensation, grade bands for honours, and definitions and protocols for borderline candidates. All of these are subject to immense inter- (and sometimes intra-) institutional variations at present.
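As a rough illustration of why agreeing on such conventions matters, the sketch below shows how the same module marks can yield different final classifications under two plausible but different sets of conventions. The specific pass mark, weights, band boundaries, and borderline rule are invented for illustration and are not drawn from any institution’s actual regulations.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Conventions:
    """A hypothetical bundle of the conventions listed above: threshold pass mark,
    the formula (here, per-module weights), honours grade bands, and a
    borderline-uplift rule."""
    pass_mark: float
    weights: Dict[str, float]     # contribution of each module to the final mark
    bands: Dict[str, float]       # classification label -> lower boundary
    borderline_margin: float      # marks within this distance of a boundary are uplifted

def classify(marks: Dict[str, float], c: Conventions) -> str:
    """Apply one institution's conventions to a set of module marks."""
    if any(m < c.pass_mark for m in marks.values()):
        return "Fail"
    final = sum(marks[m] * w for m, w in c.weights.items()) / sum(c.weights.values())
    for label, boundary in sorted(c.bands.items(), key=lambda kv: -kv[1]):
        if final >= boundary - c.borderline_margin:
            return label
    return "Pass (unclassified)"

marks = {"Year 2": 58.0, "Year 3": 72.0}
inst_a = Conventions(40.0, {"Year 2": 1.0, "Year 3": 1.0},
                     {"First": 70.0, "Upper second": 60.0, "Lower second": 50.0}, 0.0)
inst_b = Conventions(40.0, {"Year 2": 1.0, "Year 3": 3.0},
                     {"First": 70.0, "Upper second": 60.0, "Lower second": 50.0}, 2.0)

print(classify(marks, inst_a))  # equal weighting, no uplift -> Upper second
print(classify(marks, inst_b))  # final-year weighting plus borderline uplift -> First
```

The same student is an upper-second graduate at one institution and a first-class graduate at the other, which is exactly the kind of inter-institutional variation that agreed common conventions would narrow.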

Finally, institutional review needs to be focused much more directly on how institutions maintain their standards and satisfy themselves as to their appropriateness. One means of doing this is for each institution to have its curriculum in each subject periodically reviewed by appropriately qualified academic reviewers from institutions with cognate missions.


The reviewers would ask the following questions:

• What are the aims and purposes of the major(s) and the associated degrees?
• Are those aims and purposes being achieved? Is the institution doing all in its power to enable students to achieve them by affording them the necessary opportunities, resources, and support?
• Are those aims and purposes worthwhile?
• On what evidence does the department/institution satisfy itself on these matters?

The reviewers would report the answers, together with any suggestions for improvement, to the vice chancellor.

In addition, institutional review should be extended to incorporate the scrutiny of institutional governance and management and the ways in which resource allocation and academic decision making interact. A recent parliamentary inquiry heard evidence that in a few universities, institutional management had interfered with examiners’ decisions in order to safeguard the institution’s reputation. Although the number of cases that has become public is so far small, this activity is very likely to increase as institutions face even stronger competitive pressures when the new funding regime is introduced in 2012. This almost inevitably implies a stronger external regulatory regime.

Resources

Brown, R. (2004). Quality assurance in higher education: The UK experience since 1992. London, UK, and New York, NY: RoutledgeFalmer.

Cox, R. (1967). Examinations and higher education: A survey of the literature. Universities Quarterly, 21(3), 292–340.

Department for Education and Science. (1985). Academic validation in public sector higher education (the Lindop Report). London, UK: DES.

Higher Education Quality Council. (1997). Graduate Standards Programme final report. London, UK: HEQC.

Quality Assurance Agency for Higher Education. (2008). Outcomes from institutional audit: Assessment of students (second series). Gloucester, UK: QAA.

Quality Assurance Agency for Higher Education. (2009). Thematic enquiries into concerns about academic standards and quality in higher education in England (final report). Gloucester, UK: QAA.

Quality Assurance Agency for Higher Education. (2010). Consultation on changes to the academic infrastructure. Retrieved from http://www.qaa.ac.uk/news/consultation/AI/academic_infrastructure_consultation.pdf

Warren-Piper, D. (1994). Are professors professional? London, UK: Jessica Kingsley.

Williams, W. F. (1979). The role of the external examiner in first degrees. Studies in Higher Education, 4(2), 136–160.

Yorke, M. (2008). Grading student achievement in higher education. New York, NY, and London, UK: Routledge.

Conclusion

Although America and Britain come from different starting points, it can be expected that in both countries, pressures for comparable standards will intensify before they weaken. In both systems the academic community has a choice. It should of course explain how there really is no alternative to expert peer judgement if we truly wish to know the educational value of a syllabus, an essay, or a degree.

Beyond that, however, there is a strong argument for being more explicit about the assumptions that underlie those judgments, as well as being more professional in how we go about making them. Such professionalism has to involve much greater and more systematic benchmarking of the curricular demands made upon students, as well as much more careful and intensive training and preparation of peer assessors.
