
UNIVERSITY LECTURERS' EXPERIENCES OF THE SELF-EVALUATION

OF THEIR ACADEMIC PROGRAMMES

by

NICOLENE MURDOCH

Research Essay

Submitted in partial fulfillment of the requirements for the degree

MAGISTER PHILOSOPHIAE

in

ADULT AND COMMUNITY EDUCATION

in the

FACULTY OF EDUCATION AND NURSING

at the

RAND AFRIKAANS UNIVERSITY

SUPERVISOR: Prof SJ Gravett

January 2002

ACKNOWLEDGEMENTS

I could not have done this without the following people:

my husband Wilhelm - you complete me;

my family and friends - for always inspiring and supporting me;

Prof Saartjie Gravett - for being such an inspiration and the embodiment

of an adult educator;

all my colleagues at the Center for Higher Education Studies - for always

supporting me and showing an interest in my research;

Dr Ton Vroeijenstijn for sharing his profound knowledge about the

subject;

all the RAU participants in this project who created a friendly and co-

operative working environment;

for the talent and persistence I received from the Creator.

ABSTRACT

The Founding Document of the Higher Education Quality Committee (HEQC) (January,

2001) stresses the importance of quality and accountability in South African higher

education institutions. These institutions will no longer operate in isolation, but are

accountable to governmental bodies. Quality assurance is considered to be an

institution's responsibility, but is not merely an internal activity. The Founding

Document clearly indicates that "once the HEQC is satisfied that demonstrable quality

assurance capacity has been established across a spectrum of higher education

providers, it will use a 'light touch' approach to quality assurance, based on an

increasing measure of reliance on the self-evaluation reports of providers" (HEQC

Founding Document, 2001:15). Based on the above, it is clear that the self-evaluation

reports of institutions will be of major importance in their quest for accountability.

The HEQC will thus, to a large extent, depend on self-evaluation by institutions for the

orderly execution of its responsibilities.

In light of these recommendations, the RAU (Rand Afrikaans University) conducted a

self-evaluation pilot project involving three academic departments. The self-

evaluation that was conducted was not merely the collection of data, fact finding or

check listing, but a frequent, critical self-analysis of strengths and weaknesses of

academic programmes that should "lead to measures which can be taken in order to

improve quality" (Vroeijenstijn, 1995:51). The main purposes of such a self-evaluation

are improvement and accountability. Unfortunately, however, the self-evaluation

process is not without controversy and disagreement. Academics respond to this

process in different ways, and many feel that it infringes on their academic freedom.

The general aim of this study was to explore the experiences of academic staff at RAU

who were participating in the quality assurance self-evaluation process. This

qualitative study employed in-depth interviews and participant observation as data-

collection methods. The data was analyzed by using the constant comparative method

of data analysis.

This study found that there is a wide variety of responses among academics

regarding the self-evaluation process. Most of the lecturers realize the value of and

need for such an exercise, but there are a number of internal factors (e.g. lack of

visible management support, availability of reliable data) and external influences (e.g.

requirements of national bodies) that can hinder the process. The academics also

considered it to be a time-consuming exercise, but expressed their appreciation for

the provision of a structured manual that served as a guideline. They did,

nevertheless, stress the need to adapt and organize the exercise provided in the

manual to suit their individual needs.

Since the national education agenda demands that self-evaluation form an integral

part of the transformation and development of higher education, there exists an

urgent need to assess and explore the validity and reliability of different self-

evaluation methods. This study hopes to contribute to the present body of research by

focusing on the responses to such an exercise at the RAU.

TABLE OF CONTENTS Page

TITLE PAGE

ACKNOWLEDGEMENTS

ABSTRACT

INTRODUCTION 1

STATEMENT OF THE PROBLEM AND RESEARCH QUESTION 2

AIM OF THE STUDY 3

ASSUMPTIONS AND PRESUPPOSITIONS 3

REVIEW OF RELEVANT LITERATURE 4

5.1 Introduction 4
5.2 Quality in Higher Education 5
5.3 Quality, improvement and accountability 8
5.4 Self-evaluation for improvement or accountability 9
5.5 Self-evaluation: An international perspective 11
5.6 Self-evaluation: A national perspective 12
5.7 Institutional responsibility 13
5.8 Experiences of self-evaluation 14
5.9 Conclusion 16

RESEARCH DESIGN AND METHODOLOGY 16

6.1 Research paradigm 16
6.2 Setting of the inquiry 17
6.3 Sampling procedures 19
6.4 Data collection methods 20

6.4.1 In-depth interviews 20
6.4.2 Participant observation 22

6.5 Data analysis and presentation 23
6.6 Validity and reliability 27
6.7 Ethical considerations 30

DISCUSSION OF FINDINGS 30

7.1 Visible management support and institutional culture 30
7.2 Influence of contextual factors 31

7.2.1 Internal factors 32

7.2.2 External factors 33
7.3 Clarity of outcomes, goals and importance of self-evaluation 34
7.4 Provision of an evaluation design 38

RECOMMENDATIONS 39

SUGGESTIONS FOR FUTURE RESEARCH 44

CONCLUSION 45

LIST OF REFERENCES 47

LIST OF TABLES
Table 6.1 Categories, sub-categories and outcome statements 26
Table 6.2 Strategies used to ensure internal validity 28

LIST OF DIAGRAMS
Diagram 10.1 Ripple effect of self-evaluation 46

ADDENDUM 1 Transcript of an interview

ADDENDUM 2 Example of field notes

UNIVERSITY LECTURERS' EXPERIENCES OF THE SELF-EVALUATION

OF THEIR ACADEMIC PROGRAMMES

1. INTRODUCTION

Quality in higher education is becoming increasingly important for all institutions, not

only in South Africa, but all over the world (Lewis & Smith, 1994:2). It is not only an

institution's responsibility to maintain and improve internal quality and exhibit public

accountability, but it is evident that it is also on the agenda of the national bodies

such as the Council on Higher Education (CHE) and the South African Qualifications

Authority (SAQA). There are, however, several factors influencing (and sometimes

hindering) the quest for quality in higher and further education, such as striving for

international recognition or maintaining African relevance, the rising concern about

public expenditure, massification of universities, clarifying of institutions' purposes,

the rapid growth of technology and academic mobility, to name but a few (Lategan as

cited in Strydom, Lategan & Muller, 1997:75-76). Harvey (1995:12) stresses this

demand for increased accountability by saying "the quality agenda will continue to

grow as the nature and purpose of higher education shift to reflect the rapidly

changing world economy". This challenge of accountability involves

rendering some form of account that the educational activities are being carried out

effectively and efficiently. It also implies that universities are moving from the "ivory

tower" image, to a much closer relationship with government and all the other

stakeholders involved (Lategan as cited in Strydom et al. 1997:78). This relationship of

accountability will attract increasing attention as competition between institutions

becomes more severe.

One of the distinct features that can elevate one institution above the rest is the

marketing and delivery of excellent programmes (in this context, a "programme"

refers to a number of learning units, or modules, leading to one or more

qualifications). To enhance the quality of programmes offered by providers,

self-evaluation by staff members can prove to be an extremely valuable tool. This

evaluation involves an academic department critically scrutinizing its own

programmes and outputs, assessing the satisfaction of its

stakeholders and taking corrective actions. The Higher Education Quality Committee

(HEQC) was instituted by the CHE to take responsibility for, among other duties,

external accreditation and evaluation of public providers and learning programmes,

monitoring of certification procedures, auditing and institutional reviews, quality

promotion and the provision of public information. The Founding

Document of the HEQC (January, 2001) stresses the importance of quality and

accountability in South African higher education institutions. "Quality is identified as

one of the principles that should guide the transformation of higher education,

together with equity and redress, democratization, development, effectiveness and

efficiency, academic freedom, institutional autonomy and public accountability"

(HEQC Founding Document, 2001:1). It is clearly indicated that "once the HEQC is

satisfied that demonstrable quality assurance capacity has been established across a

spectrum of higher education providers, it will use a 'light touch' approach to quality

assurance, based on an increasing measure of reliance on the self-evaluation reports

of providers" (HEQC Founding Document, 2001:15). Based on the above, it is clear

that the HEQC, as national quality assurance body, regards the self-evaluation of

academic programmes as being of major importance. It will utilize self-evaluation

when executing its function of evaluating the learning programmes of providers.

This study focuses on the

experiences of academic staff at the Rand Afrikaans University regarding the self-

evaluation of their own academic programmes.

The first section of this research essay focuses on the statement of the problem and

the aim of the study. This is followed by a review of relevant literature that informed

the research. Subsequently, the research design and methodology will be discussed.

Thereafter, the findings will be presented, possible recommendations will be made,

limitations of the study will be stated and possible topics for future research will be

suggested.

2. STATEMENT OF THE PROBLEM AND RESEARCH QUESTION

The Rand Afrikaans University (RAU) is currently conducting a self-evaluation pilot

project, involving three academic departments. The University has contracted an

international expert of the stature of Dr Ton Vroeijenstijn to help establish this

process of self-evaluation at the university. Dr Vroeijenstijn is the senior policy advisor


of the Association of Universities in the Netherlands (VSNU), with external quality

assurance as primary responsibility. According to Lategan (as cited in Strydom et al.

1997:593), self-evaluation deals with "the major issues of an institution. It

reflects the 'story' of the university and the 'hermeneutics' to understand the story".

It is not merely the collection of data, fact finding or check listing, but a frequent,

critical self-analysis of strengths and weaknesses that should "lead to measures which

can be taken in order to improve quality" (Vroeijenstijn, 1995:51). Self-evaluation can

be conducted for the purpose of internal improvement or to provide evidence for

external accountability. The self-evaluation process itself is, however, not without

controversy and disagreement. Academics experience this process in different ways,

and often feel it infringes on their academic freedom. Due to these negative

perceptions, it is necessary to present the self-evaluation of programmes in a positive

light. Academic staff should realize that the exercise will not only result in personal

self-improvement, but also elevate the standards of the institution.

In light of the above, the research question can be formulated as follows: How did the

academic staff at the Rand Afrikaans University experience the quality assurance self-

evaluation process?

3. AIM OF THE STUDY

The general aim of this study was to explore the experiences of academic staff at RAU

who participated in the quality assurance self-evaluation process.

4. ASSUMPTIONS AND PRESUPPOSITIONS

I have been a permanent staff member at RAU for the past two years. I am primarily

involved in the quality related initiatives of the University. This portfolio is located in

the Center for Higher Education Studies. I have been specifically involved in the

administration and initiation of the self-evaluation exercise, where I developed the

perception that many academics do not respond positively to evaluative processes. I

have presupposed that academics experience this controversial exercise in different

ways, often feeling it imposes on their academic freedom. Attending numerous quality

assurance workshops and conferences where the issue of self-evaluation in higher


education was presented in a negative light, strengthened the latter assumption.

Collaborating and conversing with colleagues from other institutions who are also

involved in self-evaluation, created the impression that lecturers experience it as just

another regulatory exercise imposed by government. Consequently, I assumed that

they are not aware of the value and importance of self-evaluation for the purpose of

personal and departmental performance. Therefore, I presupposed they also do not

realize the importance thereof for institutional improvement and accountability.

5. REVIEW OF RELEVANT LITERATURE

5.1 Introduction

Changes in the environment and context of higher education are evident and rapid.

These changes are a worldwide phenomenon and arise due to the increasing

competitiveness among institutions, rising stakeholder expectations, and the challenge

institutions face having to "accomplish more with less" (Lewis & Smith, 1994:x).

Ultimately, however, the important question to be addressed regarding the changing

landscape of higher education, is how members of the academy are responding to

these and other related developments. Lewis and Smith (1994:x) are of the opinion

that academics respond in one of two ways. On the one hand, they can react in a pro-

active and quality-driven manner, or, on the other hand, be defensive and

continuously strive to retain the past at the expense of the future. The latter would

result in failure and the demise of constituencies holding such conventional views. Pro-

active institutions would recognize the need for continuous improvement and

assurance of the quality of products and services delivered.

Quality assurance or improvement is, however, a new and somewhat enigmatic concept

in higher education. Lewis and Smith (1994:2) support this view by saying "the

perception of quality in higher education is increasingly becoming a problem." Higher

education has a longstanding history and tradition of allowing individuals to function in

their own right and encouraging academic freedom and individuality. The challenge is

to change mindsets and misconceptions about the necessity of quality related

activities in higher education and, even more importantly, to view the inevitable change as

an opportunity and not a threat.


Firstly, I will discuss the origins and notions of quality as derived from industry and its

appearance in higher education. Following on this, the focus of the literature review

will move to the issues of accountability and improvement as purposes of quality

assurance. I will then briefly describe the international and national history and

current state of affairs regarding self-evaluation and elaborate on the institutional

responsibility for self-evaluation. Finally, I will discuss some staff responses regarding

self-evaluation, as derived from the literature.

5.2 Quality in higher education

To investigate the "grand entrance" of quality into the field of higher education, we

need to look at its origins in industry, as it is from industrial practices that we derive

the basic concepts and methodology regarding quality (Sallis, 1996:8). Prior to and

early in the 1900's, quality was viewed as an integral element of craftsmanship and it

was controlled by supervisors. They were involved in a process of quality control once

the product was completed. This quality assessment moved towards a more

inspection-based and controlled process, measured against pre-determined standards

in the mid 1900's. Formal quality departments were only established in industry after

1960, and the concept of "total quality management" was introduced after 1980. This

concept proposes a practical and strategic approach to quality by focusing on the

needs of the customers (Sallis, 1996:28). Presently, it has expanded even further, to

include a culture of continuous improvement and organisation-wide quality

management and responsibility.

Regarding the educational context, only a few references to quality management are

found in the literature of the late 1980's. Quality management was initiated in both

the United States and Britain from 1990 onwards. Nationally, some formal structures

regarding quality assurance were only instituted from 1996 onwards (some

international and national perspectives will be elaborated on later on in this section).

Sallis (1996:10) is of the opinion, however, that there has been a "traditional

reluctance" in education to embrace, accept and adapt quality-related methodologies

associated with industry. Even though it is not ideal to relate educational processes


directly to manufacturing products, academics seem to be more receptive to the

concept of quality assurance, given the lessons the service industry learned long ago.

The problem of relating education to industry is even more complex because of the

difficulty of defining and measuring the quality of the service offered and the product

produced within an educational context. Not only are there large numbers of internal

customers involved (e.g. learners, lecturers etc.), but the needs of a variety of

stakeholders (e.g. parents, labour market etc.) have to be considered in the course of

the educational process. Despite the complexity of enhancing and promoting quality

services and products in education, there is a real need to address the issue because

"increasingly in education, quality makes the difference between success and failure"

(Sallis, 1996:1).

To make this difference and be perceived as striving to offer and maintain the highest

educational standards, we need to define quality in the context of higher education.

Quality experts around the world agree that quality is, like beauty, in the eye of the

beholder. It means something different for each person. Therefore, the former Quality

Promotion Unit (QPU) (instituted by the South African Universities' Vice-Chancellors'

Association) promoted the idea of a notion of quality, rather than an abstract,

simplistic, one-dimensional definition of quality (Brink, 1997:5). Within the context of

RAU, the Quality Care Committee (QCC) approved the following notions of quality:

Fitness of purpose refers to whether the mission, vision, values, objectives

and derived statements of the University comply with the dictates of national

policy, and whether they measure up to the expectations of recognised internal

and external stakeholders.

Fitness for purpose measures the relevant attributes of the University

against the explicit and implied dictates of recognised University values, as

embodied in the mission and related statements.

Quality in the transformative sense focuses on the extent to which the primary

activities of teaching and learning, as well as research, develop the

competence of individual learners for personal enrichment, as well as the

requirements of social development and economic and employment growth.


Value for money refers to whether the results of the various activities in which

the University and its subsystems are engaged warrant the resources allocated

to their conduct.

Quality means excellence, which implies striving towards exceptionally high

standards.

RAU decided to adopt the above-mentioned notions of quality to stress the

importance of a common understanding when referring to certain phrases. It is

common practice internationally to use the generic term "quality assurance" to

describe all quality related activities, as witnessed by the name chosen by the international

organization for quality in higher education: the International Network for

Quality Assurance Agencies in Higher Education (INQAAHE). Locally, the Higher

Education Quality Committee (HEQC) also uses the term in its Founding Document

and other documentation to denote all quality related activities. It also naturally

relates to industry, where quality assurance refers to ensuring that the production

system is under statistical surveillance for conformance to specifications, at certain

critical control points. These processes ensure that the final product also conforms to

original specifications and that a consistent conformance to those specifications can

be achieved. It is this constant conformance to specifications or, more specifically,

the constant delivery of high-quality service that should appeal to lecturers from

higher education institutions and should justify the implementation of quality

assurance systems. At RAU, however, it was decided to adopt the term "quality care"

for quality related activities, a term translated from the Dutch "kwaliteitzorg"

(Vroeijenstijn, 1990).

Quality care, in this context, includes four components, namely quality assurance,

quality promotion, programme accreditation and customer care. This "softer" term

was deliberately chosen to convey the impression that the University's quality

structure is not established to police the delivery of quality service on campus, but

is there in a supportive role. The final responsibility for quality remains in the hands of

the responsible line-functionaries.

Quality assurance, the first of the four components, includes all activities that are

intended to ascertain whether legitimate expectations are being met, whereas quality


promotion addresses issues that require improvement. Programme accreditation refers

to the verification that the minimum quality criteria for accreditation by relevant

authorities, professional councils, regional, national and/or international bodies are

being met, while customer care directly focuses on effective and efficient service to all

its customers by the University and its constituent parts.

To summarise, the aforementioned definitions suggest that the higher education

system aims to capture from industry what applies to its specific context in

education and training. It is thus evident that educational institutions should

increasingly become involved in quality assurance in order to contend with rising

competitiveness and stakeholder expectations. Furthermore, it is a professional

responsibility to strive for academic excellence.

Barlosky and Lawton (1995:51) propose four quality imperatives as driving and

motivating forces that challenge institutions to take up the quest for quality. Firstly,

there is the moral imperative, which claims that customers deserve the best possible

education. Secondly, there is the professional imperative of striving for high standards

and commitment. Thirdly, the competitive imperative focuses on the needs of

customers and lastly, the accountability imperative, which implies institutions must "meet

the political demands for education to be more accountable and publicly demonstrate

the high standards of their products (e.g. curriculum, learning, graduates) and

services" (Sallis, 1996:5). For education, as for industry, improvement and

accountability are no longer optional; they are a necessity.

5.3 Quality, improvement and accountability

Improvement and/or accountability are the primary purposes of quality assurance. It

may seem, however, that external bodies, such as the CHE, would determine and

enforce these purposes. The willingness to improve and to be accountable is, however,

ultimately the responsibility of the institution itself. Thus, the institution is the

determinant of its own quality. This sense of responsibility and the task of alleviating

possible public misconceptions that it is not offering quality education are the institution's

prerogative. On the one hand, improvement is a matter of institutional integrity and is

only possible with constructive cooperation. On the other hand, accountability


"involves rendering some form of account that an activity is being carried out

effectively and efficiently" (Loder, 1990:2).

These two purposes are often seen as incompatible, but they are not mutually

exclusive. The challenge, according to Brennan, Frazer and Williams (1995:5), is to

achieve clarity and conformity as to the equilibrium that is sought between

accountability and improvement. Trow (1996:2) has written extensively on

accountability. He defines it as "the obligation to report to others, to explain, to

justify, to answer questions about how resources have been used and to what effect."

The extent to which accountability exists within an institution is determined by the

organisational structure, its culture and procedures, and where decision-making

powers lie. Wellman (2001:48) sees accountability systems in terms of "state-level

indicators of institutional performance, designed to reach public audiences, using

quantitative and qualitative measures that allow comparisons among institutions." Such measures are

also called benchmarks or performance indicators.

An institution can, however, benchmark itself against internally set standards for the

purpose of improvement. Quality assurance can therefore be aimed solely at

internal improvement. It can be achieved by identifying weaknesses and strengths in

the delivery of academic programmes. Improvement may then involve some form

of change. This change includes building on the positive aspects and diminishing or

correcting the weaknesses that have been identified. These changes are not always

linked to increased financial costs, but an improvement-based approach can actually

save costs by identifying duplicated or wasted efforts, and proposing ways to eliminate

or reduce them (Open University, 1999). Sometimes, it does not involve any monetary

costs, but only means changing thinking processes or attitudes about correcting

previously unsuccessful activities. The increasing external pressures for accountability

in Higher Education necessitate urgent attention to the internal quality improvement

activities of all institutions. The quest for quality is equal to a quest for continuous

improvement.

5.4 Self-evaluation for improvement or accountability


As mentioned previously, one of the aspects of quality care being addressed at RAU, is

quality assurance. Self-evaluation is an activity primarily aimed at assuring the quality

of academic programmes. It is therefore an action intended to monitor and ensure the

levels of quality maintenance and identify aspects that can be improved upon. The

process of self-evaluation is not new; it has been an ongoing activity at academic

institutions for a long time, albeit under different terminology. The notion of self-

evaluation is consistent with a range of educational, psychological and management

philosophies, such as 'the reflective practitioner' (e.g. Schon, 1987), 'metacognition'

(e.g. Biggs, 1991) and 'total quality management in education' (e.g. Sagor & Barnett,

1994 and Sagor, 1996). All of these scholars emphasize the capacity of individuals and

organisations to examine and evaluate their own behavior and actions, in order to

improve performance (Hall, Woodhouse & Jermyn in Strydom, et al., 1997).

In the literature, the terms "self-evaluation" and "self-assessment" are used

interchangeably. In higher education, the term "self-evaluation" is more commonly

used. It can be explained as follows. The use of the term "evaluation" in higher

education means to discover by investigation how well a specific educational approach

or activity is functioning. We can also distinguish between internal and external

evaluation. "Assessment" currently refers to the measurement of student learning outcomes. In

terms of quality assurance, "assessment" is used to include self-evaluation as well as

peer review (Open University, 1999). According to Brennan, Frazer and Williams

(1995:5) "self-evaluation is about whether educational objectives are being achieved

and whether current practice can be improved upon."

In an academic context, the question arises as to whether an institution should embark

on self-evaluation merely for improvement purposes, or should also be driven by the

requirements of accountability. Tension arises between these two purposes,

because responses are influenced and determined by the purpose of the exercise. A

research project undertaken by the University of the Free State in 2000 concluded

that institutions viewed the two purposes of self-evaluation as of equal importance

(Fourie, 2000:7). The researchers concluded by saying that internal quality assurance

(aimed at improvement) "cannot retain sufficient momentum" without some kind of

external quality assurance (aimed at accountability). The New Jersey Commission on

Higher Education (2000) supports this view by saying: "Accountability and improved


performance are closely linked. Today's knowledge-based, global economy and society

hold extremely high expectations for colleges and universities and their graduates.

The challenge at hand demands open communication, the broad involvement of

stakeholders, pertinent information about performance, and a commitment to

improvement."

In light of the above, it is clear that the tension between the two purposes cannot be

ignored. According to Woodhouse (1995), this tension should be manageable within the

institution, since accountability and improvement have been simultaneously present in

the academic environment from the earliest times. The fact remains, however, that

the confidentiality of an internal self-evaluation report, and external public reporting,

for the purpose of accountability, will remain a source of strain. Brink (as cited in Strydom, et

al., 1997) foresees that accountability will be formally built into the institutional

quality assurance systems of institutions in future. Nevertheless, it is still uncertain

how this accountability will manifest itself.

5.5 Self-evaluation: An international perspective

The process of self-evaluation has been part of the United States accreditation system

for nearly a century. Critical self-analysis has also been a fundamental element of the

United Kingdom's Council for National Academic Awards validation and accreditation

procedures for over 25 years (Open University, 1999). National quality assurance

systems operate in European countries such as Denmark, England, Finland, France,

Netherlands, Scotland, Sweden and Wales, as well as outside Europe. Countries such

as Australia, Hong Kong and Mexico all have a process of self-assessment as a central

part of their quality activities and procedures.

In the United Kingdom, the Higher Education Quality Council (HEQC) was established in

1992 by universities and colleges in order to contribute to the improvement and

maintenance of quality. The emphasis of quality assurance in the United Kingdom has

been on improvement rather than accountability. Internal validation and review were

well established prior to any external monitoring, which means that institutions had

certain procedures in place prior to external audits (Geall, Harvey & Moon in Strydom


et al., 1997:186). Currently, self-evaluation is encouraged, as is the enhancement of quality through the sharing of good practices.

Together with the United Kingdom and France, the Netherlands was in the "first

wave" of European countries to introduce self-evaluation in higher education

institutions (Westerheijden as cited in Strydom, et al., 1997:259). Much has changed in

institutions in the Netherlands, and the evaluation of quality played a major role in

those changes. Transparency of higher education institutions increased and a

significant change in Dutch academic culture has become evident. It has become accepted that not all institutions are equal, and that the bureaucratic "axiom of homogeneity" can be loosened. According to De Haan and Frederiks (as cited in Strydom et al., 1997:518), the primary goal of the Dutch evaluation system is improvement.

The Australian government states clearly that quality assurance and improvement

should be integrated into the strategic planning activities of universities. This is

ensured, among other methods, by institutional self-evaluation, followed by annual consultations on Educational Profiles (Candy & Maconachie in Strydom et al.,

1997:334). It is evident from the literature that self-evaluation is internationally well-

known and widely utilized.

5.6 Self-evaluation: A national perspective

In South Africa, the change in the political dispensation in 1994 has resulted in

increased access to higher education, demands for the redress of past inequalities,

and calls for curriculum change to increase relevance in the new order. The major

development that necessitated that higher education respond to quality assurance demands was the establishment of a legislative environment that led to the development of the National Qualifications Framework (NQF), administered by the South African Qualifications Authority (SAQA). According to Muller (as cited in Strydom et

al., 1997:38), the NQF represents a "state-initiated, state-controlled and state-

coordinated system of quality assurance that poses challenges to the traditional

academic freedom and institutional autonomy of higher education institutions."


The problem that has faced higher education in South Africa for the past few years is

the absence of an external quality assurance agency for South African universities. The

Quality Promotion Unit (QPU) of SAUVCA (South African Universities' Vice Chancellors'

Association) functioned for a short period (1996 - February, 1999) (Fourie, 2000:3), but

was terminated. This situation was addressed in the Draft White Paper (Department of

Education, 1997), which recommended the formation of the Council for Higher

Education (CHE). The Higher Education Quality Committee (HEQC) was instituted by

the CHE at the beginning of 2000. In its Founding Document (January 2001), the HEQC stressed the importance of quality and accountability in South African institutions.

"Quality is identified as one of the principles that should guide the transformation of

higher education, together with equity and redress, democratization, development,

effectiveness, and efficiency, academic freedom, institutional autonomy and public

accountability" (HEQC Founding document, 2001:1).

It is evident from the literature that self-evaluation practices in South African universities largely comprise a combination of informal, traditional practices, selectively applied as determined by the institution. National developments, the institution of the HEQC as the national quality assurance body, and the introduction of

external accountability and programme and provider accreditation necessitate regular,

structured self-evaluation at universities.

5.7 Institutional responsibility

It is evident that enhancing quality is an institutional responsibility, but it is not merely an internal activity. The locus of responsibility for quality assurance within institutions is continually under debate. The presence of a variety of professional

councils leaves the impression that they are taking responsibility, but the ownership of

quality must be shifted from the external body to the institution itself. Fourie (2000:3)

supports this view by postulating: "regardless of the existence of activities of any

external quality assurance agency, the major responsibility for systematic, structured

and continuous attention to quality, rests with the institutions themselves." This view

is also supported by the Department of Education (1997:22), as they state in the White

Paper on Higher Education that "the primary responsibility for quality assurance rests

with higher education institutions."


However, external systems do influence internal quality assurance activities. The

internal quality management system is determined by external requirements. The

National Commission on Higher Education (NCHE), recommended in its report that,

"Quality is not only an internal institutional concern, but also an essential ingredient

of a new relationship between government and higher education... A comprehensive,

development-oriented quality assurance system provides an essential mechanism for

tackling differences in quality across institutional programmes" (NCHE, 1996:11-12). It

is therefore essential that institutions take cognizance of governmental developments

and requirements when planning and executing internal quality systems.

5.8 Experiences of self-evaluation

Unfortunately, the self-evaluation process is not without controversy and

disagreement (Kimball, 2000). Academics' responses differ widely. On the one hand, they often feel it impinges on their academic freedom; on the other, they realize

the personal and institutional benefits thereof. Current literature reflects some of the

difficulties encountered when introducing self-evaluation processes in higher

education.

According to Frazer (1992), it is never easy to be self-critical and reflective. Self-

evaluation requires a constant, structured self-review. However, it is still necessary to

acknowledge the inherent professionalism of academic staff. Academics are often

suspicious that evaluation will have personal implications (such as losing respect and

credibility amongst colleagues) and that it is measuring personal productivity and

judging them individually. There is often a lack of understanding that it is aimed at

evaluating the system, and not assessing individuals. The lack of, firstly, awareness of

a systems approach to evaluation and, secondly, a quality culture in an institution, can

create these misconceptions. Poor communication of quality assurance, specifically

regarding the self-evaluation process, may lead to a loss of legitimacy for the system

(Jacobs in Strydom et al., 1997:165). Relevant information and good practices should

be disseminated through an effective and open system of communication.

Vroeijenstijn (1995:67) supports this view by saying that "a good self-assessment

(should be) as open and honest as possible."


Furthermore, self-evaluation is often experienced as an extra burden and

administrative responsibility, which academic staff feel should not reside with them.

On the other hand, centralized responsibility for regulatory exercises within the

institution creates a lack of commitment and involvement from staff. Staff often

either shift the responsibility, or are unable to recognize the need for self-evaluation.

Jacobs (as cited in Strydom et al., 1997:155) reinforces this view by saying that

"staff in higher education may see quality as 'that person's job' and distance

themselves from it." He is of the opinion that staff should have ownership of the self-

evaluation process and participate fully in this collegial process, to avoid the above

misconception. These misconceptions are exacerbated when academics perceive the

process as being imposed upon them. Negative responses are often due to a lack of

transparency regarding the goals and aims of the process.

Barnett (1992:131) also argues that frustration is caused by a lack of structure or standardized procedures regarding self-evaluation. Lecturers feel that their efforts

and the time invested in preparing the self-evaluation report are in vain, because it is often not followed up and no well-designed, continuous feedback processes are in place. Many also claim that measures that are taken as a result of the self-

evaluation process are not clear, and that the entire experience is merely a paper

exercise. Furthermore, the absence of a quality management system or a specific

body, unit or committee leading the process, produces a perception of lack of

leadership and initiative. As mentioned earlier, staff often only experience the benefit of the self-evaluation in retrospect, as the following comment suggests: "It is

an anxiety-producing process that is not always easy, but it was good for us" (Kimball,

2000).

A number of suggestions are made in the literature to counteract these negative

perceptions and to better manage self-evaluation exercises. Brennan, Frazer and

Williams (1995:Preface) postulate that if self-evaluation is done only for public relations purposes, it may develop into self-deception. This links with the idea of

creating unrealistic expectations of improvements, which cannot be met. It should be

obvious to staff members that top management supports and is completely committed

to the process of self-evaluation, otherwise, staff will not take the evaluation


seriously. Not only should the university management be convinced of the need for

quality assurance and self-evaluation, but financial and infrastructural support and

resources should also be made available. Staff often perceive it as a top-down

approach, which produces conflict regarding power relations (Open University, 1999).

5.9 Conclusion

To gain a holistic perspective of quality assurance related to the context of higher

education, and the role that self-evaluation plays in this regard, it has been necessary

to discuss the origins and notions of quality adopted in the context of higher

education, and in this instance, specifically RAU. Consequently, the current debate

regarding issues of accountability and improvement as purposes of quality assurance,

was discussed. To conceptualize the research it has been important to focus briefly on

the national and international scenarios regarding self-evaluation and to emphasize

the institutional responsibility for self-evaluation. Lastly, strengthening the purpose of this study, I focused on an aspect of self-evaluation that is often neglected and on which the literature offers only limited information, namely the experiences of academics.

In the next section I will discuss the research design and methodology used in this

inquiry.

6. RESEARCH DESIGN AND METHODOLOGY

6.1 Research paradigm

Each person lives life and experiences reality based on certain pre-constructed and

sometimes unconsciously present paradigms and frames of reference. Guba and

Lincoln (1989:80) define a paradigm as "a basic set of beliefs, a set of assumptions

we are willing to make, which serve as touchstones in guiding our activities." This is

then the philosophical framework, which not only guides the lives and decision-making

of researchers as people living in the world, but more importantly in research,

determines the design, methods and methodology with which they choose to conduct

their research.


These choices of design, methods and methodology are also guided by the aim of the

study they are undertaking. In this case, the aim of the inquiry was to determine and

understand how academics experience the self-evaluation process of their academic

programmes. In determining experiences of the participants, the researcher, as well

as the participants, play an active role in constructing the reality regarding a specific

exercise or intervention (Guba & Lincoln, 1989:84). Thus, it can be postulated that a

constructivist paradigm informed this research inquiry. There will necessarily be

differences in experience amongst both the participants and the researcher. My role as

researcher was then to be "the investigator" and therefore "the primary instrument

for gathering and analyzing data and (...) respond to the situation by maximizing

opportunities for collecting and producing meaningful information" (Merriam,

1998:20). I, as the researcher, then played an active role in constructing reality, by

walking the way with the participants, attempting to experience their world and

reality. Consequently, I used qualitative methods of inquiry, as these are preferred

methods within such a constructivist paradigm.

Since this was a qualitative study, it follows logically that I employed qualitative

methods of data collection. My main method of data collection involved in-depth

interviews with participants and, to a lesser degree, participant observation.

6.2 Setting of the inquiry

As mentioned previously, this study focused on lecturers' experiences of the self-

evaluation process of academic programmes. It was conducted at the Rand Afrikaans

University (RAU) located in the heart of Johannesburg. It is an urban university with

almost 20 000 students (about 6 000 of whom are distance education students). The university has six faculties, with 46 departments spread across them. Three

departments were involved in the self-evaluation process and they were located

within the Faculty of Education and Nursing, the Faculty of Science and the Faculty of

Economic and Business Management.

As explained in the previous section, self-evaluation is one of the primary activities of

the quality initiative at RAU. The quality portfolio is located within the Center for

Higher Education Studies at the University, implying that the Center is responsible for


administering the self-evaluation exercises. The quality portfolio consists of two

individuals (of which I am one) responsible for the quality related activities at RAU.

Due to this limited capacity, RAU contracted an international expert of the stature of Dr Ton Vroeijenstijn to help establish this exercise of self-evaluation at the university. The rationale was that such an intervention would not only make a valuable contribution to quality assurance at the institution by putting it on a sound foundation, but would also put RAU in a favourable position at national level. Dr Vroeijenstijn developed a

self-evaluation manual specifically for RAU, which was implemented for the first time

in the abovementioned three departments.

This manual was specifically developed for academic departments. However, self-

evaluation can (and should) be conducted in non-academic departments as well. For

the purpose of this study, I focused on the self-evaluation of academic programmes

within the three academic departments currently involved in the process. It is evident,

from national developments, that self-evaluation reports of institutions will become

increasingly important in their quest for improvement and accountability (as discussed

in the previous section). National bodies indicate that they would, to a large extent,

depend on self-evaluation by institutions for the orderly execution of their

responsibilities.

When evaluating academic departments, Vroeijenstijn (1995:52) is of the opinion that

self-evaluation should consist of the following subjects regarding the inputs, processes

and outputs of the department:

➢ the place of the department in the organisational structure of the complete system - the goals and aims of the department and its programmes should fit the general mission and vision of the institution;
➢ a description of the students, e.g. success and dropout rates;
➢ a description of the programmes in terms of content, organization, didactic concept, curriculum design and assessment;
➢ the educational process, e.g. teaching methods, utilisation of media etc.;
➢ satisfaction of the students, alumni, labour market, society and staff.


Each of the above-mentioned subjects should be analysed by describing them in the

following manner:

➢ a description of the current situation;
➢ an analysis of the situation;
➢ problem-solving - if the situation is not satisfactory, the way in which it can be changed and improved should be described.

Through self-evaluation, departments should become aware of their strengths and

weaknesses, and start to restructure the weaknesses (Kells, 1995).
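The description-analysis-improvement structure described above lends itself to a simple sketch. The following Python fragment is my own minimal illustration of how each subject's three-part analysis could be recorded; the class name, fields and sample data are hypothetical and not taken from the RAU manual:

```python
from dataclasses import dataclass

# Hypothetical sketch: each self-evaluation subject is analysed in three
# parts, mirroring the structure above (current situation, analysis,
# problem-solving if the situation is unsatisfactory).
@dataclass
class SubjectAnalysis:
    subject: str           # e.g. "students", "educational process"
    description: str       # the current situation
    analysis: str          # interpretation of the situation
    improvement: str = ""  # only filled in if the situation is unsatisfactory

    def is_complete(self) -> bool:
        # An entry is complete once the subject is described and analysed.
        return bool(self.description) and bool(self.analysis)

# Invented example entry for the "students" subject.
students = SubjectAnalysis(
    subject="students",
    description="Dropout rate of 18% in first-year modules.",
    analysis="Dropout concentrated in distance-education intake.",
    improvement="Introduce structured first-year mentoring.",
)
print(students.is_complete())  # → True
```

A structure like this makes it easy to see at a glance which subjects have only been described but not yet analysed.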

6.3 Sampling procedures

Miles and Huberman (1994:27) are of the opinion that qualitative researchers study

small samples of people, situated in their context, comprehensively. I used purposive (sometimes also referred to as purposeful) sampling in my inquiry. This implies that "the

investigator wants to discover, understand, and gain insight and therefore must select

a sample from which the most can be learned" (Merriam, 1998:61). Thus, eight

academic staff members from the departments involved in self-evaluation were

selected purposefully, to ensure a variety within the sample (Maykut and Morehouse,

1994:45). Initially, I planned to interview nine participants, three from each

department, but by the fifth and sixth interviews recurring patterns and themes became evident, and after eight interviews a point of redundancy (Guba & Lincoln,

1989:64) was reached.

The selection of the sample was based on the "information-richness" criterion (Patton, as cited in Merriam, 1998:61) and, furthermore, on a combination of typical and convenience sampling. Convenience sampling was utilized in terms of the participants' time and availability for interviewing, and typical sampling with regard to the

academics' active involvement in the process. The participants were not necessarily

on the appointed self-evaluation committee (to drive and manage the process within

the department), but in some instances, they were specifically chosen for not being on

the committee. This resulted in collecting opinions from both perspectives. Males and females were represented almost equally, as were a variety of age groups and academic positions.


Before the inquiry, all the relevant departments agreed to participate in the study.

Lecturers from the different departments were contacted individually, and

appointments were made. I experienced enthusiasm and willingness among the

lecturers to participate in the research project. The participants chose the setting where the interviews were to be conducted, which in all cases was their own office environment. Due to the familiar surroundings, most of the respondents felt

relaxed and confident to talk about their experiences. As the interviewer, I felt more

comfortable interviewing some of the respondents than others, because some of the

participants were much more willing to disclose very personal experiences, whereas

others were at first more hesitant. But after I explained my role as researcher and the

aim and purpose of the study, the respondents were willing to participate and

contribute in a co-operative manner.

6.4 Data collection methods

For the purpose of this study, as mentioned briefly, I have made use of in-depth

interviews and, to a lesser extent, participant observation. Using multiple sources of

information is valuable "because you want to use different methods ... to corroborate

each other so that you are using some form of methodological 'triangulation'" (Mason

in Silverman, 2000:98). The use of in-depth interviews and participant observation, as

data collection methods, will now be discussed.

6.4.1 In-depth interviews

The primary method of data collection was personal interviews. An in-depth interview

takes the form of a social conversation between two people. De Vos (1998:300) is of

the opinion that an in-depth interview "enables an interviewer to obtain an 'insider

view'." Patton (1987:109-114) distinguishes between three approaches to qualitative

interviewing. One of the approaches is defined as the "informal conversation

interview" (also see Merriam, 1998:71). This approach relies entirely on the

spontaneous generation and formulation of questions within the natural flow of

interaction.


Eight academic staff members were interviewed individually. The reason for using

individual interviews as opposed to focus group interviews, was the objective of

eliciting truthful responses and eliminating the threat of a group situation. The length

of the interviews varied from individual to individual. The shortest interview took

approximately twenty minutes, and the longest, almost one hour. The interviews were

scheduled over a period of a month. The different departments were at different

stages of the self-evaluation process, which contributed to the "information-richness"

of the data (Patton, 1990:172). The interviews were conducted in the language of

preference of the participant.

During the preparation phase, before the interviews were actually conducted, I

studied the relevant literature and also the specific instrument for evaluation, as

developed by the consultant. Initially a total of nine interviews were tentatively

planned, but the actual number of eight was determined by the data and information

obtained from the various participants.

The interviews were planned in terms of three phases, namely the beginning, middle

and terminating phase. The beginning phase of the interview was informally structured

along the following broad lines. Firstly, I explained that confidentiality would be

maintained since the conversation took place only between two individuals. The

respondents were thanked for their willingness to contribute and it was also explained

that their input was regarded as valuable for further self-evaluation exercises in

future. As a researcher, I assured the participants that no information would be attributed to them personally, or to the department they represented, in any way. I also obtained all the respondents' permission to tape-record the interviews. A disadvantage

of recording interviews, which I also experienced, was that some of the respondents were constantly aware of the recorder and saw it as a form of "intrusion." I assured

the participants that the recording was merely for my own personal use, as I was not

able to make notes and remember all the important aspects discussed. After some

initial discomfort, in some instances, all the respondents expressed themselves freely

about their experiences.

The next step was to explain the goal and purpose of the study to the participants. It

was necessary to clarify my role as researcher, because I was also involved in


administering the self-evaluation process at the university at the time. It was stated

clearly that the purpose of the interview was not to determine the current status or

progress of self-evaluation within the department.

The middle phase of the interview was introduced by asking participants to tell me

about their experiences regarding the self-evaluation process. The rest of the

interview questions spontaneously developed as part of the interaction. As the

interviewer, I had a list of issues at hand to refer to, when specific direction was

needed. Probing was used to further clarify uncertainties and to get additional

information when the responses were not satisfactory. According to Nachmias

(1997:243), probes motivate respondents to elaborate or clarify responses or explain

reasons behind certain answers. These were derived from answers or comments to

previous questions. An example of a probe would be, "So, you hope that this exercise

will solve some of those problems?", following up on a comment the respondent made

about a solution to certain problems regarding academic programmes. Another would

be, "Are people actually involved?", responding to an unclear comment made about

the role of fellow academics.

The last phase was to terminate the interview, where the participants were thanked

for their availability and willingness to participate. I also mentioned the possibility of

future interviews and indicated that I would contact them if necessary.

An example of a transcription is attached as Addendum 1.

6.4.2 Participant observation

Participant observation was used as a secondary method of data collection. The

observations were done during the preliminary discussions and information sessions

prior to the self-evaluation exercise and during other supporting or information-

seeking informal conversations with the departments involved. Field notes were made (an example of the field notes is attached as Addendum 2) when the consultant, Dr

Ton Vroeijenstijn, visited the six faculties in March 2001, and again when he visited

the participating departments in June of the same year. It was quite easy to take on

the role of observer, because I did not directly participate in the conversations


between the consultant and the members of the department. Consequently, this

created the ideal opportunity to observe in a non-threatening manner. Verbal questions and concerns were raised regarding possible problems in the future.

Another important aspect observed during these sessions, was the non-verbal

communication and messages conveyed by participants. This not only provided

background information regarding the structure and internal politics of the

department involved, but also about the attitude of the participants to the planned

exercise. Short notes were made during the sessions, which were expanded upon as

soon as possible after each session. According to Spradley (as cited in Silverman,

2000:141-142), this improves the reliability of the field notes.

The sessions were noted mainly (according to the checklist in Merriam, 1998:97-98) in terms of:

➢ the physical setting - naming the venue and time allocated for the session;
➢ the participants - not individually labeled, but whether it was the complete department, or only the appointed self-evaluation working group;
➢ the conversation - I especially noted the inputs regarding the participants' experiences and the information the consultant provided for clarification;
➢ subtle factors - such as the attitudes or approaches where departments had to introduce themselves to Dr Vroeijenstijn, non-verbal messages and evidence of interest of members who were present.

The findings, from studying these field notes, strengthened and confirmed the results

that were derived from the interviews. I, as the researcher, had the advantage of

experiencing the process first-hand. However, according to Merriam (1998:95), this

contains an unreliability disadvantage, because it is based only on the perceptions of

the researcher. As with the interviews, this was a time consuming exercise, but

nevertheless made valuable contributions and advancements to the data collected.

6.5 Data analysis and presentation

De Vos (1998:336) states that analysis is "a reasoning strategy with the objective of

taking a complex whole and resolving it into its parts." It is the process of organizing

the data into more manageable categories. Bailey (1996:89) is of the opinion that data


analysis starts at the moment the research is thought of, and continues up to the final

report writing. Merriam (1998:178-179) on the other hand, explains the level of

analysis as "data (that) are compressed and linked together in a narrative that conveys

the meaning the research has derived from studying the phenomenon." The interviews

were all recorded and transcribed verbatim by myself for the purpose of analysis.

These transcripts were analyzed by means of the constant comparative method of data analysis (as discussed by Maykut & Morehouse, 1994:126-144). This method consists of

a "continuous developing process" (De Vos, 1998:339) of comparing incidents and

searching for recurring themes and patterns (Merriam, 1998:159).

The data collection and data analysis were done simultaneously. I read through the

initial interview transcription and field notes made during the observed sessions. I

highlighted important pieces of information and made notes of main ideas in the

margin. I started again with the first interview and identified the smallest units of meaning. A unit of meaning consists of the smallest sentence or paragraph that can stand on its own, remain meaningful, and reveal information relevant to the study (Merriam,

1998:179-180). Examples of units of meaning from one of the interviews are "I definitely think it is necessary" and "It is time consuming." A unit of meaning does

not necessarily consist of only one sentence, or part of a sentence. Sometimes it is a

discussion of half a page regarding a specific idea. In the margin next to the first

example, I would write the need for self-evaluation, and next to the second example

negative experiences. Each unit of meaning was coded, to be able to know where it

was derived from. An example of a code would be BW1 /1. BW being an identity

assigned to a specific person, "1" being the first (of a possible two) interviews, and

lastly, reference to the page number of the transcription being analyzed. The

transcriptions were then photocopied and literally cut to pieces. Each unit of meaning

was cut out. These units were compared with each other and then placed into preliminary categories or classes, as constructed through reading the data. These tentative

categories were refined from the literature and during the data collection phase as

ideas were conceptualized and concretized. These categories related to the purpose

of the research. In further readings of the data (as interviews were conducted), the

notes were compared to what was derived from the first interview. Some of these

tentative categories were, for example:

➢ positive experiences;
➢ negative experiences;
➢ culture or awareness;
➢ context;
➢ management support and commitment;
➢ the instrument etc.

Data that did not fit into one of the provisional categories was categorized

separately, for later inclusion in a newly created or tentatively named category. The

names assigned to these preliminary categories were merely for the organization of

the units of meaning. Further development of the names of the categories was

constructed during the course of data analysis. The next phase was to compare and

integrate the categories. In some instances new categories were born, and others were

merged. Some of the categories were also renamed. A rule for inclusion was written

for each category to determine what would be included and what would be excluded

from that specific category. The units of meaning were scrutinized to determine

whether they "qualify" for inclusion within a specific category. These rules for

inclusion were refined and rewritten several times to finalize them. An example of a

rule for inclusion for the preliminary category "positive experiences" would be "Some

lecturers realize the value and need for such an exercise." As the interviews were

conducted, they were transcribed and categorized in the described manner until a

point of redundancy was reached.
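The coding and categorization procedure described above can be sketched as follows. This Python fragment is a minimal illustration only: the sample units are taken from the examples quoted earlier, but the keyword-based stand-ins for the rules for inclusion and the parsing helper are my own inventions, not the actual analysis instrument:

```python
import re

# A code such as "BW1/1" encodes participant identity, interview number
# and transcript page, as described in the text.
CODE_RE = re.compile(r"([A-Z]+)(\d+)/(\d+)")

def parse_code(code):
    participant, interview, page = CODE_RE.match(code).groups()
    return participant, int(interview), int(page)

# Crude keyword-based stand-ins for the "rules for inclusion" (invented).
RULES = {
    "positive experiences": lambda text: "necessary" in text or "value" in text,
    "negative experiences": lambda text: "time consuming" in text,
}

def categorize(units):
    # Place each coded unit of meaning into the first category whose rule
    # it satisfies; data fitting no category is held separately for later.
    categories = {name: [] for name in RULES}
    uncategorized = []
    for code, text in units:
        for name, rule in RULES.items():
            if rule(text.lower()):
                categories[name].append((parse_code(code), text))
                break
        else:
            uncategorized.append((code, text))
    return categories, uncategorized

units = [
    ("BW1/1", "I definitely think it is necessary"),
    ("BW1/1", "It is time consuming"),
    ("CX1/3", "The manual arrived late"),  # fits no rule yet
]
cats, rest = categorize(units)
print(len(cats["positive experiences"]), len(rest))  # → 1 1
```

In the actual constant comparative method the "rules" are of course analytic judgments refined over many readings, not keyword matches; the sketch only shows the bookkeeping of coded units moving into tentative categories.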

Outcome propositions were prioritized in light of their importance in contributing to the purpose of the study and their prominence in the data (Maykut & Morehouse, 1994:158). These outcome statements produced the

foundation of the findings.

The following indicate the final categories, sub-categories and outcome statements:


Table 6.1: Categories, sub-categories and outcome statements

Category: Visible management support and institutional culture
Sub-categories: Management commitment and support; Institutional culture
Outcome statement: There are uncertainties about the support and commitment of management, and therefore the success of the self-evaluation exercise. This does not only include moral support, but also practical, visible, monetary and infrastructural support. A culture of quality and a need for continuous improvement should be established.

Category: Influence of contextual factors
Sub-categories: External factors; Internal factors
Outcome statement: Internal and external contextual factors influence the self-evaluation. Lack of clarity of external requirements regarding self-evaluation and other national initiatives hinders the process of self-evaluation. Integrated, reliable data availability is an essential aspect of successful self-evaluation. Lack of co-operation of fellow academics often hinders the quality of the process.

Category: Clarity of outcomes, goals and importance of self-evaluation
Outcome statement: The academic staff realizes the importance, necessity and value of self-evaluation as an improvement mechanism. Although the need for self-evaluation is evident, academics still experience it as an additional time consuming, frustrating and highly administrative burden. The expectations, goals and consequences of the self-evaluation were not clearly identified and communicated.

Category: Provision of an evaluation design
Outcome statement: A well-structured instrument needs to be provided as a guideline, but academic freedom should still be recognized and allowed in the adoption and tailoring of the instrument for the specific context.

The efficacy of categories was determined by using the following guidelines, as provided by Merriam (1998:183):

➢ each of the categories reflected the purpose of the study;

➢ the categories were exhaustive, in that they made provision for all data;

➢ the categories were mutually exclusive - a unit of data could not be placed in more than one category;

➢ the names assigned to categories were as sensitive and descriptive as possible;

➢ they were all conceptually congruent, thereby indicating a similar level of description in all categories.

The above-mentioned findings will be discussed in detail in the following section.

6.6 Validity and reliability

It is necessary to provide evidence that what the researcher has examined is noteworthy and can be trusted. To this end, I examined the data collection and analysis for internal validity, external validity and reliability, thereby establishing confidence in the investigation and its results.

Merriam (1998:201, 207) distinguishes between internal and external validity. Internal

validity deals with the coherence between the results and the dynamic reality. She

(1998:204) proposes six strategies to enhance internal validity. The following table

indicates which of the six strategies were used in this study:

Table 6.2: Strategies used to ensure internal validity

Strategy: Triangulation
Description: Multiple sources of data and methods of data collection are used, or one or more research methods are combined in one study (Myers, 2001).
Application: For the purpose of this study two methods of data collection were used, namely in-depth interviews and participant observation.

Strategy: Member checks
Description: People who served as data sources were consulted to clarify uncertainties.
Application: An open communication channel was established between the researcher and participants, which provided the opportunity for regular member checks.

Strategy: Long-term observation
Description: Observation took place over a period of time.
Application: The study was conducted over a period of one year. There was opportunity for development with regard to academic staff's initial and subsequent experiences.

Strategy: Peer examination
Description: Also referred to as "peer debriefers" (Byrne, 2001); the researcher consults experts and colleagues to ensure the validity of interpretations.
Application: I had regular contact with the consultant, my colleague and trusted mentor, and colleagues at other institutions who were also in the process of self-evaluation.

Strategy: Collaborative mode of research
Description: Involving participants in all stages of the research.
Application: The participants were observed in the beginning stages, and interviewed at the latest possible stage of the project.

Strategy: Researcher's biases
Description: Clarifying the assumptions of the researcher.
Application: I had definite biases at the onset of the project, but these were clarified after conducting my first interview.

External validity refers to the possibility of applying the findings to other situations.

Hoepfl (1997:13) describes this transferability to other situations as depending "on the

degree of similarity between the original situation and the situations to which it is

transferred." Researchers should therefore provide the reader with sufficient

information to prove the external validity or transferability to other situations.

Two strategies (as proposed by Merriam, 1998:211) were used to enhance

external validity. Firstly, I provided, as far as possible, detailed information and

descriptions to help the reader to decide whether their situation is similar to the

described setting. Secondly, I described, where information was available, how other

institutions or departments experienced self-evaluation. I am of the opinion that the findings would be transferable not only to similar (discipline-specific) departments, but also to

other departments involved in self-evaluation, due to the generic aspects of such an

exercise.

The problems regarding reliability in qualitative research are discussed by Hoepfl

(1997:13) and Merriam (1998:205). On the one hand it is problematic because human

behaviour is dynamic. On the other hand, qualitative researchers are often focused on

proving or providing evidence for validity, rather than concerning themselves with

reliability issues. In understanding reliability, Guba and Lincoln (as cited in Merriam,


1998:206), suggest the use of the terms "dependability" and "consistency." The

following techniques were used to ensure reliability by providing information

regarding:

➢ the context, the researcher's position and the relationship with the selected participants;

➢ triangulation, as discussed previously; and

➢ providing a clear and defensible link for each step from the raw data to the reported findings - in other words, a clear and logical "audit trail."

6.7 Ethical considerations

As explained earlier, as a researcher, I was fulfilling a two-fold role in the self-

evaluation project. On the one hand, I was part of the administration of the process,

and on the other hand, the researcher of the process. With regards to the above-

mentioned duality of roles, ethical considerations had to be taken into account

concerning the gathering of information in a non-threatening and non-"policing"

manner. I also had to guard against using information obtained by means of interviews to the advantage or disadvantage of a specific department. Confidentiality had to be

ensured and roles had to be explained. The fact that no information would be

attached to individuals or specific departments had to be clarified. Codes were

assigned to each individual for use in the transcribed interviews, so that no person was

in any way linked to a specific interview.

7. DISCUSSION OF FINDINGS

7.1 Visible management support and institutional culture

Respondents do not consider the intentions, visible commitment and support from

management as being sufficiently clearly defined. Lewis and Smith (1994:285) are of

the opinion that any quality initiative can only succeed if there is visible commitment

from management. Management intentions are questioned in that they "should not

support the activity to look good in government circles" and that it should not be seen

as a control mechanism. They should show that they are serious about quality and

commit themselves, not only in terms of verbal agreement, but also in terms of


practical provision of essential resources. The interviewees acknowledge that it is not

always merely a matter of providing money. Sometimes, more significantly, a change

in attitude is required. But in the instances where money is needed, management

should provide financial support.

According to Dr Vroeijenstijn (2001a:4) the issue of the involvement of management is

a matter of international concern and importance. Self-evaluation depends, on the one hand, on support from management; on the other hand, management cannot enforce it: the responsibility and ownership should lie within the department. Kells (1995:6)

indicates that higher education institutions often do not display "vital management."

In addition, they are often hesitant to change, and characterized by reactive quality

management. This distinctiveness of academia creates "stumbling blocks to

formulating and completing a useful self-evaluation process and to initiating and

sustaining ongoing, internally motivated cycles of such activity" (Kells, 1995:6).

The respondents indicated that visible involvement of management is essential in

creating an institutionally wide culture for continuous improvement. This general

creation of awareness will result in the necessary sensitivity that ultimately creates

quality. Some faculties are more open to a culture of improvement, especially where

they have been exposed to accreditation of professional bodies on a regular basis.

Management and staff need to acquire a sound knowledge base of quality. National

and internal tendencies are instrumental in fostering this philosophy.

The respondents realize that the creation of a culture of continuous improvement is a

long-term process. One of the participants mentioned, "I am not convinced in my

heart that they are completely committed to the process." Kells (1995:24) supports

this view by explaining that "building the culture will take time and, of course, much

progress can be achieved through a good experience with a self-evaluation process."

7.2 Influence of contextual factors

The respondents are of the opinion that contextual factors and influences must be

understood and analyzed to get meaningful participation with regards to self-

evaluation. They mentioned "we must be aware of national developments" and


"internally, changes and happenings should be communicated." They feel there should

not only be an internal consciousness of the organizational environment, but also an

awareness of the larger context regarding governmental and international trends

(Lewis & Smith, 1994:2). Academics need to be informed about global and local

initiatives and tendencies, to indicate that what is expected of them is in line with

international practices. A distinction can be made between internal and external

factors influencing self-evaluation.

7.2.1 Internal factors

Each and every institution should take responsibility for quality in its own environment

and context. Knowledge of the organizational context includes the internal

relationship between departments and faculties, making comparisons and creating

realistic openness. Issues like internal restructuring and reformations affect the

success and feasibility of self-evaluation. There is also a definite need for South

African institutions to raise the levels of their internal quality, due to rising

competitive challenges.

In terms of the internal context, the participants experience the dedication and

commitment of co-academics as problematic and sometimes literally hindering the

process. This applies especially to those not serving on the departmental self-

evaluation committee. If staff members could not see the personal benefit of the

exercise, they did nothing until some concrete contribution was expected of

them. The "passive observing and unwillingness to share information" and best

practices amongst departments are impeding the natural flow of the process.

Respondents felt that "people's hearts weren't always in it" and "they didn't go the

full nine yards." Another problem was a degree of dishonesty regarding the personal

information provided, which impacted on the accuracy of the report. Some individuals

felt they had to protect themselves and therefore did not disclose certain information.

Kells (1995:30) is of the opinion that "self-evaluation has often been seen to improve

communication patterns, trust, listening, and group functioning in facing and solving

problems." It should be used to the advantage of the department, and not have a

detrimental effect.


The foreseen problem of insufficient provision and a lack of reliable data within the internal context of the institution was confirmed by the research results. This was due

to problems regarding the information management system operating at RAU. Kells

(1995:4) postulates that this is an international problem by claiming that, in general

the infrastructure needed to support planning, decision-making, self-evaluation and

regulatory activities, is weak. The respondents experienced frustration and a rupture

in the momentum because of the availability of fragmented, incomplete and outdated

data. They felt that there was nothing "reliable and overarching" available. Within

some faculties or departments their own initiatives with regards to data capturing

helped to some extent, but overall "it took a lot of time to collect all the data." This

frustration mounted because it is a hindering factor and the departments had no

control, and had to "constantly question the validity and reliability of the data". This

unfortunately shows deficiencies in the data management system of the institution,

because "information is not readily available." The tack of data also resulted in open

spaces, where answers were not available, which may create an incomplete picture.

7.2.2 External factors

Kells (1995:6) is of the opinion that institutions venturing into self-evaluation are not

only affected internally, but also by circumstances and aspects beyond their own

organizational borders. As mentioned in the problem statement, the Higher Education

Quality Committee (HEQC) will rely heavily on self-evaluation reports. It is definitely

problematic to continue with the process before "clear guidelines are provided and

established by the HEQC." However, the departments realize they cannot wait for

governmental interventions; they should get their own houses in order. The relevance

of self-evaluation against external changes and restructuring is always in question,

but inaction will not be beneficial to the institution.

Some departments are used to being accredited by and adhering to requirements of

external bodies and professional councils within the broader context: "we are used to

this, we've been doing it for years" and "for us, this is nothing new." According to

Webbstock and Ngara (in Strydom, et al. 1997:556) "external quality monitoring of

university programmes has been the preserve of professional bodies which accredit

career-oriented programmes." They have the power to determine minimum


requirements for students entering into certain professions. Dr Vroeijenstijn (2001a:4),

however, warns departments involved in such procedures to beware of merely adhering to the standards set by professional or other external bodies, since such accreditation does not always include an element of evaluation and analysis. Departments should be critical of what is

required and add on to that, in order to make a complete self-evaluation. Academic

standards are dynamic and not static, and must be adapted regularly to remain

internationally competitive and comparable.

7.3 Clarity of outcomes, goals and importance of self-evaluation

It was evident that the participants realize the importance, need and value of self-

evaluation. It was stressed that it is important and valuable for internal development.

Pekar (1995:38) supports this view by saying that "everyone can benefit from such an

exercise because no one can ignore themselves." The respondents indicated that in

most cases they were aware of the problem areas, and that the self-evaluation

provided a medium to communicate the realistic or real state of affairs. Some other

comments regarding the need and value of the exercise were expressed, for

example:

"I really think it is necessary."

> "I think it is a sound principle."

"The necessity of self-evaluation is evident."

"We need to continuously evaluate ourselves."

The respondents were of the opinion that you cannot rectify or improve on the current

situation if you do not know what your weaknesses are. One participant made the

remark: "It is after all self-evaluation for ourselves, it is not forced upon us by

management. They will not put a red cross next to your name." Academics generally

agreed that we should have started walking this road a long time ago, and that it is no

longer a question of if we should, but when and how we should embark on self-

evaluation. According to Dr Vroeijenstijn's experience, most departments that

exposed themselves to self-evaluation responded that although it is an extensive

exercise, they were always glad that they had engaged in it. The value of the exercise

always overshadows the negativity once the process is completed.


Given my presuppositions and assumptions as the researcher, I was gratified by the

positive responses to self-evaluation. One of the respondents stated "it made me

realize how important it is for us to think about what you want to achieve with your

programmes, and to what extent you are successful." They also indicated that it is

worthwhile to sit down and investigate and discuss the underlying coherence of course

contents. They were able to identify long-overlooked overlaps, which provided for better

co-ordination of different modules in the future. The academic staff indicated that

they often fail to see the bigger picture and underlying relationships, which the self-

evaluation exercise highlighted.

They indicated that "it was not so bad as we thought it would be, as soon as you get

started you're fine." The importance of having a willing attitude to change and

improve was thought to be a vital part of what they considered to be a "learning

experience." With regards to handling the exposure to criticism, they indicated that

"you have to evaluate the merit of the criticism" and use it for improvement. They

realized that "it is not personal, it is a way to say how you are going to improve."

Lastly, one of the respondents indicated that "self-evaluation came at exactly the

right time when we realized we had to do something in terms of legislation to ensure

quality," which is also an indication of the realization of the importance of self-

evaluation.

The respondents felt that the goals and expectations of the exercise were not clear

and communicated thoroughly. Some insecurity was experienced because of the

unclear expectations:

> "It wasn't clear what was expected of us."

➢ "It was not completely clear. What are we aiming for? What are we trying to

get out of this?"

> "One of the problems was we did not know what the outcome will be."

Kells (1995:30) explains that it is difficult to align "staff members' perceptions of the

purposes, structure and activity of self-evaluation." This definitely emphasized the importance of stressing the improvement goal of self-evaluation, rather than presenting it merely as part of, or preparation for, governmental or external initiatives.


Even when the goals and needs for self-evaluation are clear, as with any type of

criticizing or regulatory exercise, there are some negative or problematic experiences.

First and foremost is the fact that self-evaluation is an extremely time-intensive

activity. Some academics felt that it is not the core business of the institution and

that it distracts them from time that should rather be spent on the core educational

activities.

Other negative experiences included that of frustration and resistance. Academics felt

it was "just another thing that landed on our desks." They were also not convinced

that the advantage of the exercise would be realized or would filter down to all members

involved. One respondent verbalized it as follows, "One danger I see is that this

exercise will not result in wider enrichment."

Again, with regard to exposure to criticism, responses were: "Nobody likes to be exposed," "It is never easy to be criticized." For some it was really a "painful process." To counteract the threat of criticism, Dr Vroeijenstijn advised that the

evaluation should by no means be individual in nature, but that a collective approach

should be followed.

The consultant, Dr Vroeijenstijn (2001a:6), explained from the outset that self-

evaluation would be a costly and time-consuming exercise. According to Kells

(1995:4), it is often expected of higher education professionals who are specialists in a

specific field, to perform in other areas where they often have "little or no training."

Frequently having to deal with an enormous workload, the self-evaluation was in some

instances seen as an additional administrative burden. Academics referred to the fact

that it was not this initiative specifically that they objected to. They had to deal with

a number of other national and internal initiatives and were having to cope with more

and more administrative tasks. They expressed this frustration and irritation in the

following manner:

> "If it wasn't for the heavy workload, we could have spent a lot more time on

it."

> "I thought, why on earth do we have to do this?"

> "Where are we going to find time to do this?"

> "I felt we are doing the job of business consultants."


Some academics, on the other hand, acknowledge that everyone must help to carry

the administrative load of the university. Others feel that their colleagues use their

workload as an excuse, and that the importance of self-improvement programmes

should be realized and internalized. They feel that academics should make time for

priorities and matters of importance. There was also the realization that the cycle is

initially time-consuming, but that it will become easier once the culture and routine have been established.

Some colleagues still see quality assurance, and specifically self-evaluation as a

bureaucratic action and an inhibitor of academic freedom. They realize that for many

of people "academia is a refuge" and safe haven. Again, some of the respondents felt

that academic freedom is often misused as an excuse. Academics often felt they have

certain personality characteristics and foster a philosophy to achieve high standards.

They also claimed that they should have the knowledge, skills and attitudes to adapt

to changing environments.

The outcome of such a self-evaluation consists of a report as product. The process has

to stop at a certain point in time; otherwise it could continue without an end-point. The

consequences and results of this report are of crucial importance for the success of

the exercise. According to a participant, it should not be an "action the department

has to go through just to put a document on the table, to get the job done, to

continue with their normal processes." The questions and concerns regarding the

report that were raised by the participants concerned, in the first place, whether the advantages and value of producing such a report would filter through to everyone.

The main concern was that the departmental reports would culminate in a faculty

report, which eventually would be used in an institutional report. According to staff,

certain aspects would be "watered down" and rendered meaningless, and the whole effort

would result in a mere paper exercise. Their second concern was that the report

would not be effectively applied and utilized in the institution.

Staff stressed that recommendations stemming from the report should be implemented to change and improve the current situation. This does not necessarily mean only financial investment; certain changes can be made without cost. Weusthof


(1994:216) refers to "passive or active utilization" of results. "Passive utilization"

refers to no immediate action being taken or merely the making of recommendations.

"Active utilization," on the other hand, refers to actively acting on the available

report, operationalized as taking measures and monitoring the implementation

thereof. Weusthof (1994:216) comes to the interesting conclusion, (which also links to

one of the previous findings about the importance of contextual influences), namely

that "in those faculties (departments) where self-evaluation procedures have been

started in response to external developments, in general the results are used passively

at both the central and decentralized level. When these processes are initiated in

response to internal conditions, however, the majority of the faculties show both

passive and active use of the results at both levels. This conclusion implies that

external incentives can be perceived as a necessary but not a sufficient condition for

successful internal quality assurance ... within faculties (departments)" (Weusthof,

1994:222).

7.4 Provision of an evaluation design

The consultant explained (Vroeijenstijn, 2001a:3) that the self-evaluation manual that

was developed should be seen as an instrument which the department could use to

ensure and improve the quality of the programmes they provide. It must be utilized to

identify strengths, and not only to point out weaknesses. The instrument provides

flexibility, serving only as a guideline and allowing it to be adapted to individual departmental needs. It should not be seen merely as a questionnaire in which questions must be answered. The questions need to be open-ended, stimulating original

responses on a variety of issues.

It is important that the department must experience a feeling of ownership regarding

the development and refinement of the instrument. The development requires "close

coordination between all members" (Pekar, 1995:57). Dr Vroeijenstijn (2001a:3) used

the example of a lost person having two options, wasting time wandering around, or

buying and using a map to navigate the route to the destination. The instrument must

be seen as a map to guide the department through the self-evaluation. It helps to

direct thinking processes, but provides the freedom to formulate and determine

individual goals and aims. The aim of the manual is to structure the exercise. It was


interesting to note that one of the participants was of the opinion that most staff

members who received the manual did not read it and simply put it aside. Only when some kind of input was required of them did they really make an effort to interact

with the document.

The respondents all felt that the instrument that was provided in the form of a

"manual" was suitable, especially for undergraduate programmes. For postgraduate

programmes, it would need to be adapted, to a certain extent. They described it as a

broad document and they "had to ask (them)selves in how much detail (they) think

(they) should go into it to get valuable results." They found it to be useful, because it

"forces you to think more widely". They all indicated that they did not "religiously

followed the model," but used it as a guideline. This correlates with the

recommendations made by Dr Vroeijenstijn (2001a:3).

8. RECOMMENDATIONS

In light of the findings of the study, the following recommendations could be implemented to ensure a more effective process and to create a more positive experience of self-evaluation. They may also be meaningful to quality assurance managers within higher education who are engaged in self-evaluation and wish to improve existing processes.

➢ The goals, purpose and outcomes of self-evaluation should be determined

and communicated to all involved parties.

Mechanisms should be put in place to ensure that the goals, purpose and outcomes of

self-evaluation are communicated to all stakeholders involved in the process.

Departmental sessions should be held to provide the opportunity for reaction and input

from all the members involved. The entire department should also participate in the

development and refinement of the instrument, so that they will have ownership and

feel part of the process. A further reason for emphasizing the importance of

involvement is to establish an environment of working together towards a set of

common goals and to get total commitment from all participants (Pekar, 1995:24).

Allowances must be made for disagreements between members of a department, but,


ultimately, they must strive to reach a consensus between the different points of

view.

Kells (1995:16) supports the above recommendation by saying that we must

understand and clarify the goals, purpose and outcomes of self-evaluation. He argues

that it is possible that there can be numerous reasons to conduct self-evaluation, but

that the clarification thereof is a "significant matter." In the RAU context, Dr

Vroeijenstijn (2001b:5) maintains that the aim of this self-evaluation exercise was not

intended to close down any department, but to analyze strengths and weaknesses with

the aim of improving the department's functioning. In some instances, self-evaluation

is aimed at accountability, but then it should be communicated clearly to the involved

parties (Pekar, 1995:viii). The ultimate value of the exercise and the expectations

should be clearly explained.

➢ Mechanisms should be put in place to address issues and recommendations

stemming from the self-evaluation report, as well as follow-up procedures

and activities.

Issues raised during the self-evaluation should be addressed to ensure future

participation and success. The process should not just be, or perceived to be, a paper

exercise with no concrete results. Management's role in the reaction to the results

should be evident in addressing concerns and recommendations. Finances and

infrastructure should be provided where necessary, even though as mentioned

previously, some recommendations can be addressed without financial aid, but by

means of changing attitudes and thinking processes. Management (referring not only to top management, but also to line functionaries) should enable and motivate staff to participate. Kells (1995:34) postulates as a prerequisite "adequate

levels of interest on the part of the leaders of the institution" who should stress that it

carries a high priority.

Follow-up procedures should be clearly explained and outlined. The self-evaluation

should serve as preparation for external peer reviews. The external assessment should

be seen as an instrument to check the internal quality as reported upon in the self-

evaluation. Departments should not, however, undertake the internal evaluation only as preparation for the external review; it should be a valuable exercise in itself. Kells

(1995:138) is of the opinion that "a major use of the self-study report is as a basis for

the peer review." External evaluation or peer reviews are a completely different "ball

game" with their own challenges (see suggestions for future research).

The completion of the self-evaluation should also result in the formulation of a

strategic plan based on the findings of the exercise. The strategic plan "is a

thoroughly documented set of aims for quality definition, quality assurance and

quality improvement together with a thoroughly documented set of quality assurance

procedures" (Jacobs as cited in Strydom et al. 1997:163). This plan can be updated

and changed annually. Weaknesses should be prioritized to determine which are most

important and should be addressed immediately. The format and style should be well-

organized, concise and focused on key issues. It should provide an honest and unbiased

picture and be suitable for a variety of practices. Kells (1995:95) emphasizes the

importance of the usefulness of the report. The unit administering the self-evaluation

process should play a role in diminishing the frustration of participants and providing

guidelines and proposed formats for the report.

The self-evaluation should be combined with other, similar initiatives. Other

initiatives may include the annual report or accreditation by an external or

professional body. Management should rethink the role of the annual reports, and how

they can be incorporated in the self-evaluation. The self-evaluation should be

absorbed and integrated into other activities, and not be an additional burden or

administrative exercise.

➢ Internal and external contextual factors should be taken into account when

planning and conducting self-evaluation.

It is evident from the findings that certain internal and external contextual factors

influence the success of self-evaluation exercises. Internally, integrated, reliable,

regularly updated data should be available to enhance the accuracy of the self-

evaluation. Access to data and necessary information is a major cause of frustration. It

is recommended that a template of the required information be designed, so that it is

clear what is required. Faculties and departments may choose to manage their own,


internal databases. Consideration should be given to the views of participants, an

aspect which is often neglected. Kells (1995:74) however reminds us that this source

of information should not become the only focus area and that other initiatives and

ways of gathering data should be explored.

Furthermore, best practices should be shared amongst departments and faculties. The

sharing of templates, formats, instruments and best practices amongst similar

departments, and even among institutions, is strongly recommended. Kells (1995:8) is

of the opinion that "acting together can protect individual institutions, can put

pressure on them to act on needed improvements, and can detect potentially

destructive incursions by government."

In terms of external factors, the requirements of external bodies should be integrated

and acknowledged in the self-evaluation process. Any external requirements should be

incorporated in the self-evaluation initiative. Departments exposed to such activities

should, however, be critical about the requirements and standards of such councils. If

faculties or departments already have some systems in place, the self-evaluation

exercise should be incorporated into existing practices. The importance of a healthy

relationship with external bodies cannot be stressed enough. Dr Vroeijenstijn

commented (2001b:7) that producing material required by external bodies is not

always total self-evaluation, even though it is often perceived as such. There should

always be the opportunity to reflect and make recommendations for future actions.

➢ A culture of evaluation for continuous improvement should be created.

By being dedicated to continuous improvement and striving for excellence, institutions

will constantly improve their internal performance, customer service and quality

(Pekar, 1995:21). A climate must be established and structures need to be formalized

to manage the process of evaluation before government formally requires it. However,

staff motivation is often problematic (especially during the implementation phase) and

there is a need to create a positive atmosphere. Staff should be motivated to identify

and rectify problem areas and to take appropriate actions. It should be clear to staff

members that the goal is also to improve their own working conditions and levels of

satisfaction. The value of such a culture should not be underestimated. Dr


Vroeijenstijn reminds us of "small quality", e.g. being on time, preparing for lectures, the quality of printed material, etc. Such practices should be an "accepted part

of professional life" (Kells, 1995:7).

This culture should be established at faculty and departmental level by establishing

internal quality assurance systems. This view is supported by Jacobs (as cited in

Strydom et al. 1997:161, 165) who proposes that institutions should be committed to

conducting self-evaluation and other quality related initiatives at departmental and

faculty level. Heads of Departments and members of faculty need to have knowledge

of and training in quality assurance. Kells and Kirkwood (as cited in Kells, 1995:7)

support this view by providing evidence that there is a direct correlation between the

knowledge and attitudes of leaders about evaluation and the level of perceived

success after the self-evaluation process. This would also assist with creating a culture

of quality and continuous improvement. Different forms of evaluation should be

instituted in departments, such as the evaluation of study guides and teaching practices, determining the satisfaction of all stakeholders, and so on. Differences within faculties

regarding thought processes should be recognized and dealt with.

➢ A model or guideline should be provided, but with the option for adaptation

and development.

There is no single tailor-made instrument that can be used by all institutions, faculties or departments. As mentioned previously, the parties involved should have a feeling of ownership and should develop the instrument according to their specific environment. A guideline, however, is helpful and necessary. It reduces frustration and serves as a starting point. A generic

format or template could be provided, where possible, with the added advantage of

easing standardization for comparisons and analysis.

➢ The role and responsibilities of a unit for quality assurance should be established or formalized, to steer the self-evaluation process.

The level of support and involvement provided by a quality assurance unit differs from

department to department. Those who are used to professional body accreditation, for

example, often need a different level of support. The role of the unit includes


facilitating and monitoring the process and motivating staff. It should also provide

training where necessary. Jacobs (as cited in Strydom et al. 1997:169) is of the opinion

that training must be categorized to provide for different levels of staff that are

involved in the process. According to Dr Vroeijenstijn, the unit should organize

training sessions and provide support as far as possible or necessary, but should have

no say in the evaluation itself. The unit should evaluate and analyze its own

functioning and activities.

It is recommended that a self-evaluation exercise should be conducted according to

project planning principles. These principles include elements such as formulating a

realistic project plan, with deadlines, and having a project manager or leader. The

responsibility should not, however, reside only with the leader. He or she should be able

to delegate and assign tasks and focus areas to specific people. The establishment of a

project team or group is, according to Kells (1995:32-33), the first step in the process

of self-evaluation. This group should diagnose the situation and suggest a plan of

action. The members need to know the institution and the departmental structures

well. "The quality of the group's analysis, report, and suggestions will depend on the

leadership provided" (Kells, 1995:93). If possible, an investment should be made in

human resources, to assist with the administrative burden. The person/s appointed

should have technical expertise, not only about the methods and techniques used, but

also in terms of computer skills and logistical arrangements, to simplify the process.

9. SUGGESTIONS FOR FUTURE RESEARCH

The study revealed a number of aspects of self-evaluation and related activities that suggest future research. These include:

the role, influence and involvement of management in the self-evaluation

process;

the debate regarding the administrative versus the academic responsibilities of

academic staff during self-evaluation, and how it can be addressed to enhance

productivity;

the self-evaluation conducted at support (non-lecturing) departments and units in parallel with the academic departments; their situation offers numerous research possibilities, such as the development of a manual that can be adapted to their unique context;

research regarding the format, consequences and follow-up of self-evaluation

reports;

the utilization of data and the investigation of additional sources of data for

the purpose of evidence collection for self-evaluation;

practical strategies to create or enhance a culture of self-regulation;

the role and function of a self-evaluation committee or team and the

leadership issues involved;

the process of preparing for external evaluations / peer reviews.

It is clear that, especially in the South African context, ample research opportunities remain to be investigated and compared with international trends regarding self-evaluation.

10. CONCLUSION

This research essay focused on answering the following research question: How did

lecturers at the RAU experience the quality assurance self-evaluation process? Data

collection was done by means of in-depth interviews with lecturers, and to a lesser

extent participant observation. After analyzing the data, it was found that there are a

variety of responses regarding the self-evaluation process, not all of which

corresponded with my initial assumptions. Most of the lecturers realize the value of and need for such an exercise, but there are certain definite internal and external

influences that can hinder the process. Because they also experience it as a time-

consuming exercise, they appreciate the provision of a structured manual that serves

as a guideline. They do however want the freedom to adapt and organize the exercise

according to their individual needs.

It is evident from the study that self-evaluation should be presented in a positive light

to make academic staff realize that it not only adds value to their personal work

performance, but also has a ripple effect within the broader institution. It results in

improved standards within the department, and ultimately the institution. Thus, the

improvement will be evident, firstly, on an individual level. The individual will be able


to present higher quality programmes, in terms of content and coordination with other

programmes. This increased confidence of academic staff members will influence the

functioning at departmental level. Departments will function efficiently, optimally

utilizing resources, both human and otherwise. The benefits of the constant analysis

and refinement of programmes range from identifying and correcting weaknesses to being in sync with market demands. Lastly, this will have a ripple effect on the high

standards offered by an institution. The institution would have a reputation as being

committed to continuous improvement. This is reflected in the following diagram:

Diagram 10.1: Ripple effect of self-evaluation

Improved individual performance → Effective departmental functioning → High standards on institutional level

This study included a review of some of the available literature on the topic of self-evaluation; it explained the processes of data collection and analysis, and also

suggested some possible recommendations for consideration by departments

undertaking the same exercise. Future research into the area of quality delivery at

tertiary institutions and the implementation of well-planned self-evaluation exercises

will ensure that we maintain the academic standards and values required by the

national and international community.


Eating an elephant

Why on earth do we have to do this? I just don't have the time! I am an academic. You're imposing on research time.

Strengths Weaknesses Opportunities Threats Just let me be...

Give me a format! Where's that manual? What will they do with this? Only now I realize why.

Some say it's like riding a wave Highs lows highs lows Or how do you eat an elephant You go about it bit by bit.

Nicolene Murdoch

(Composed at a Qualitative Research Methodology workshop while exploring alternative methods to data analysis)

LIST OF REFERENCES

Bailey, A.C. (1996). A Guide to Field Research. Thousand Oaks: Pine Forge Press.

Barlosky, M. & Lawton, S. (1995). Developing Quality Schools. Toronto: Kodak Canada

Inc and the Ontario Institute for Studies in Education.

Barnett, R. (1992). Improving Higher Education: Total Quality Care. The Society for

Research into Higher Education & The Open University.

Biggs, J.B. (1991). Teaching for Learning: the View from Cognitive Psychology.

Hawthorn: ACER.

Brennan, J., Frazer, M. & Williams, R. (1995). Guidelines on Self-evaluation. London:

Open University Validation Services.

Brink, J.A. (1997). Quality Promotion Unit: Quality Audit Manual. Pretoria: SAUVCA.

Byrne, M.M. (2001). Evaluating the Findings of Qualitative Research. AORN Online:

Research Corner. Available from: http://www.aorn.org/journal/2001/marrc.htm

(Accessed on 3 October 2001).

Department of Education. (1997). Education White Paper 3: A Programme for Higher

Education Transformation. Pretoria: Department of Education.

De Vos, A.S. (Editor). (1998). Research at Grass Roots: A Primer for the Caring

Professions. Pretoria: Van Schaik.

Fourie, M. (2000). Self-evaluation and external quality control at South African

universities: quo vadis? Paper presented at the 22nd annual EAIR Forum, Berlin.

Frazer, M. (1992). Quality Assurance in Higher Education. In Craft, A., Quality

Assurance in Higher Education: Proceedings of an International Conference, Hong

Kong, 1991. London: The Falmer Press.


Guba, E.G. & Lincoln, Y.S. (1989). Fourth Generation Evaluation. Newbury Park,

London: Sage Publications.

Harvey, L. (1995). (Editorial). Quality in Higher Education, 1(1):8-12.

Higher Education Quality Committee (HEQC). (2001). Founding Document. Pretoria:

Council on Higher Education.

Hoepfl, M.C. (1997). Choosing Qualitative Research: A Primer for Technology

Education Researchers. Journal of Technology Education, 9(1):1-17. Available from:

http://scholar.lib.vt.edu/ejournals/JTE/v9n1/hoepfl.html (Accessed 5 October 2001).

Kells, H.R. (1995). Self-study Processes: A Guide to Self-evaluation in Higher

Education. Washington, D.C.: ORYX Press.

Kimball, E. (2000). Departmental Reviews Place Emphasis on Self-evaluation.

Heraldsphere News. Brown University. Available from:

http://www.browndailyherald.com/stories.cfm?S=2fID=943 (Accessed 31 May 2001).

Lewis, R.G. & Smith, D.H. (1994). Total Quality in Higher Education. Florida: St. Lucie

Press.

Loder, C.P.J. (1990). (Editor). Quality Assurance and Accountability in Higher

Education. London: Kogan Page.

Maykut, P. & Morehouse, R. (1994). Beginning Qualitative Research. A Philosophic and

Practical Guide. London, Washington, DC: The Falmer Press.

Merriam, S.B. (1998). Qualitative Research and Case Study Applications in Education.

San Francisco: Jossey-Bass Publishers.

Miles, M.B. & Huberman, A.M. (1994). Qualitative Data Analysis. Second Edition.

London: SAGE Publications.


Myers, M.D. (2001). Qualitative Research in Information Systems. Available from

http://www.misq.org/misqd961/isworld/ (Accessed on 24 October 2001).

Nachmias, C. (1997). Social Statistics for a Diverse Society. California: Pine Forge

Press.

National Commission on Higher Education. (1996). A Framework for Transformation.

Pretoria: Department of Education.

New Jersey Commission on Higher Education. (2000). Accountability in Higher

Education. The 4th Annual System wide Report. New Jersey Commission on Higher

Education. Available from: http://www.state.nj.us/highereducation/reports.htm

(Accessed 2 June 2001).

Open University. (1999). Self-evaluation in Higher Education: The Guide. Centre for

Higher Education Research and Information. United Kingdom: Hobbs the Printers, Ltd.

Patton, M.Q. (1987). How to use Qualitative Methods in Evaluation. London: SAGE

Publication.

Patton, M.Q. (1990). Qualitative Evaluation and Research Methods. Second Edition.

London: Sage Publications.

Pekar, J.P. (1995). Total Quality Management: Guiding Principles for Application.

Philadelphia: American Society for Testing and Materials.

Sagor, R. & Barnett, B.G. (1994). The TQE Principal: A Transformed Leader. Thousand

Oaks: Corwin Press.

Sallis, E. (1996). Total Quality Management in Education. London: Kogan Page.

Schon, D.A. (1987). Educating the Reflective Practitioner. San Francisco: Jossey-Bass.


Silverman, D. (2000). Doing Qualitative Research: A Practical Handbook. London:

SAGE.

Strydom, A.H., Lategan, L.O.K. & Muller, A. (1997). Enhancing Institutional Self-

evaluation and Quality in South African Higher Education: National and International

Perspectives. Bloemfontein: The University of the Free State.

Trow, M. (1996). On the Accountability of Higher Education in the United States. The

Princeton Conference on Higher Education, March 1996.

Vroeijenstijn, A.I. (1990). Autonomy and assurance of quality: two sides of one coin.

The case of quality assessment in Dutch universities. Higher Education Research and

Development, 9(1):21-36.

Vroeijenstijn, A.I. (1995). Improvement and Accountability: Navigating between Scylla

and Charybdis. Guide for External Quality Assessment in Higher Education. London:

Jessica Kingsley Publishers.

Vroeijenstijn, A.I. (2001a). Fieldnotes - Visit to RAU 26 February 2001 - 3 March 2001.

Vroeijenstijn, A.I. (2001b). Fieldnotes - Visit to RAU 11 June 2001 - 14 June 2001.

Wellman, J.V. (2001). Assessing State Accountability Systems. Change, March / April

2001:46-52.

Weusthof, P.J.M. (1994). De Interne Kwaliteitszorg in het Wetenschappelijk

Onderwijs: Een Onderzoek naar de Kenmerken en het Gebruik van Zelfevaluatie aan

Nederlandse Faculteiten. Enschede: CSHOB.

Woodhouse, D. (1995). Audit Manual: Handbook for Institutions and Members of Audit

Panels. Wellington: New Zealand Universities Academic Audit Unit.



ADDENDUM 1 ID 1

N — Let me just tell you more or less what the study is about, Prof. It concerns the self-evaluation that we are now doing here at RAU, but it focuses on experiences of it, so that in future we can eliminate negative feelings and bring positive feelings to the fore, so that the process will be easier. Other institutions are also interested in the study, because everyone is now starting with this exercise and because there is still a great deal of uncertainty at this stage. The HEQC (Higher Education Quality Committee) has also indicated that it is going to lean very heavily on institutions' self-evaluation reports, so that simply means it is no longer a choice, it is now a requirement; but the fact remains that there are fairly strong feelings around it. So the plan with the study is to conduct a short interview now, and again a little later in the process when you are closer to submission. I cannot wait until the reports are in, because I have to submit at the end of the year, but again a little later, when it is more finalized. Just a short interview now, and then later, if you don't mind, Prof. So what I want to know is simply, in general, how do you experience self-evaluation, what do you think about it, any feelings about it? Do you think it is necessary?

I — [silence] Well, uh, [silence] I think it is definitely necessary. [silence] Um, especially as regards postgraduate training, and by that I mean master's and doctoral study, [silence] in the past not many guidelines were laid down, either by the department or the faculty. So people who came into the system new were largely left to themselves to train a master's or doctoral student. Now, when you start looking at the criteria that apply to good training and you start thinking carefully about all the facets involved in the training, you discover that one actually just [silence] takes in a student and hopes that he will acquire the skills over the course of time. There is no deliberate evaluation of the student's progress. Mind you, by that I do not mean only progress with the research, but also progress in skills, those SAQA outcomes. So it really brought home to me again how important it is to think afresh about what you want to achieve with your training, and to what extent one is then successful. It very often happens that students progress well with one aspect of the research, but that those other skills fall behind.

N — It is not necessarily evaluated at any stage.

I — It is never evaluated. In our department only the dissertation or the thesis is examined. The other facets of examination are very subjective. We look at a student's attitude, we look at the presentation of seminars, but a mark is never really allocated to these. In earlier years we still had a system in which, apart from the dissertation or the thesis, there was also an oral examination. That was very good. There you could test the student's subject knowledge; you could evaluate his ability to transfer and share his knowledge and skills. You could gauge his depth.

N — Yes, wider than just what he submits.

I — It very often happens that what is submitted is a product of the student as well as the supervisor. It is not purely the student's own.

N — They need a lot of guidance.

I — So, weak students and good students [silence], let me put it this way: a weak student can get a better mark than a good student, because the supervisor pushed him.

N — Yes, in the end everyone goes through the system.

I — Everyone gets the same degree, but some [silence] come through the system better in terms of the outcomes. So it was meaningful for me to go through the whole exercise. My share in the department's evaluation is the research. We have already done a situation analysis from undergraduate up to honours level, and we have done the SWOT analysis. Even there [silence] there were clearly areas in which we can improve a great deal.


N — Things one does not always think about or give attention to.

I — Yes, yes. Especially as regards the mutual coherence of the course contents. Um [silence], in our setup one lecturer usually teaches one full semester course. And that is then usually a specific subdivision of the subject. So [silence] how that subdivision is presented, and how its points of contact with the other aspects of the subject are shown to the student, one can never fully appreciate. So I think one of the things we will definitely have to look at in future is better coordination between the different modules.

N — Yes, because it is often one lecturer teaching one semester, doing very good work, but it still does not cohere with the other courses.

I — Yes, and then at the end of the day, when the student has to get that overall picture of the subject, the foundation is not so strong.

N — He does not see the bigger picture. Yes, where it fits in.

I — Um [silence], well, so the necessity for such a self-evaluation, I think, is there. The benefits that can come out of such a self-evaluation are very good, I think. [silence] How do I experience it? It is time-consuming, it is frustrating. Well, we had to start from scratch to compile a database. There were small, fragmented databases, but nothing overarching, nothing complete. So that was one facet we also had to look at.

N — But Prof, it has now more or less been set up, and can be used again in future. It is the type of thing you do once, and then it can be adapted.

I — We can now just supplement it on an annual basis. Then the cooperation of colleagues is also a problem. In this document Dr Vroeijenstijn says it must be a joint effort, and that is very true, because [silence] otherwise [silence] the person or persons compiling it experience the frustrations of a lack of cooperation, and the benefits attached to the evaluations do not flow over to the greater department.

N — Do you think, Prof, that the awareness lies with the small core group running it, who realize the importance or walk the road with the project, while those who have to be drawn in from the sidelines do not always realize the value or importance?

I — That is very true. And I think it is extremely necessary, because it does not help that [silence] a few individuals gain enrichment from the evaluation; it must be the department, because that is, after all, what it is about. So, [silence] it has unfortunately coincided with our NRF applications, which must be in at the end of this month, so the staff are very busy and that brings frustrations; but I can say, um, they could give better cooperation. One cannot push them too hard, because the teaching and research must take priority; there are examination papers that have to be marked.

N — They do have their other duties.

I — Yes, so I [silence], one danger I see in this whole exercise is that it will not bring wider enrichment. That it is just an action the department goes through in order to put a document on the table and get the work done, so that they can carry on with their normal old processes.

N — That it will not really filter in, that the benefit of it will not filter through.

I — I see that as a possible danger; but we are planning to hold a departmental summit soon, at which the whole department will then give input on the final document. But these summits also have their limited consequences. [silence] What were the other questions?

N — I just want to quickly pick up on something you said, Prof. The department was one of the few departments that involved the whole department in the initial visit by Dr Vroeijenstijn. The other departments involved only the core committee, or some only the chairperson. But do you think it had an influence; did that session perhaps not help them, or were they still half out on the sidelines even though they... it is as if they still did not grasp the seriousness of the matter... I thought it would be different here because they were present when he was here.

I — It is difficult to generalize, because we are a very diverse department. In A they have four subdivisions, and then we have B as a parallel department, and of the staff, I would say 80% attended it, the information session, but at that stage they did not really realize that somewhere in the future input would be expected from them. They were more passive...

N — observers.

I — Yes, [silence] well, this document was handed out, and they received it and put it down. I am sure that, apart from the people launching the actions, the others have not yet taken the trouble to really go through it.

N — Yes, nothing concrete was immediately asked of them.

I — Yes, so [silence], but I must also say, that is just how it goes with various actions here at the University. The staff are truly overloaded, so that any additional activity is... is regarded as extra work, and there is a sort of feeling against it. A resistance.

N — It is just one more thing that has been added.

I — Which just takes more time away from research, the core business.

N — We experience that especially at the centre, because with "skills development" and that type of thing that comes from the government and is required, we find that we, as the messengers, always get shot. Because this comes in the same way; like "skills development", people hate the forms, they just see it as yet another thing; they are questionnaire-exhausted at this stage. But I do not know whether people realize that these are things we have to do; this is something we must do, I think, if we can increase the awareness in the University. We just want the HEQC to put concrete guidelines on the table and say this is what they are going to ask, and if they do not have that... We can more or less read what is coming, like this; they have confirmed it. They cannot get to every institution, to every department, to do an external audit. They are going to take these reports, which will be consolidated into an institutional audit, and that is what they are going to evaluate or assess RAU on.

I — What I have wondered about: every department must now compile such a document, so presumably a document will then be compiled per faculty. Will many of the aspects that came to the fore not be watered down or blurred in the process? So that such a university-wide document will actually say very little.

N — We also want to guard against it becoming a very general, as you say, Prof, meaningless paper exercise. That is the greatest danger of self-evaluation: that in the end there is a document on the table, and that it will not hold benefit for the whole department or the whole University. But the HEQC has said they are going to use departmental reports; they are not only going to do it institutionally...

Interruption

N — So we hope it will not be like that, and that they will give clear guidelines and... but they themselves do not yet know how it will be; that is the problem. We wish they would just sort themselves out. We attend all their forums and summits, because we try to find out what they want, what they are going to do. And they are very clear that it is not going to be a paper exercise, and there is definitely going to be... And now there are so many other things, like the cooperation between the institutions with C and D, which again has an influence.


I — I must say, in some of my conversations with the staff it also came to the fore that we are now going through this exercise, but how relevant is it, seen against the other changes...

N — That they are also busy with. We have now, with C and D, not only with C and D, there was a quality person from every institution, and the HEQC said that specifically where "mergers" have been recommended, they will make use of self-evaluation reports to decide. But that makes it difficult, because we want... the purpose of self-evaluation is, after all, on the one hand "accountability" or, on the other, "improvement". And "accountability" means "we can close you down on the basis of this". We do not want departments to walk around with the fear that they will be closed down on the basis of what they put in it, because then they will not be honest. For us it must rather be an "improvement" type of exercise. They did say, and they say it over and over, that they are going to lean heavily on self-evaluation reports. So that just shows us we are on the right track, and they are very interested in what RAU is doing, because we are one of the first... the technikons are already considerably further along in the process, with SERTEC and so on, but in terms of universities... There is self-evaluation in place at some of the... but it is not real self-evaluation.

I — About two years ago I was one of two on the committee that had to evaluate the E department of F University. [silence] It was also self-evaluation on their part, but not at all modelled on this approach. What they presented to us were merely examples of how they do things, not that they had asked themselves the question: "is this the right way to do it?" or "can we perhaps do it better?". It was presented to us; I am sure the negative aspects were kept away from us.

N — It is so easy to do that.

I — On the face of it everything was good; it was different from RAU's, and it was different from the University of G's, but I could not really offer criticism of it. Self-criticism is actually good. Um, I drew up a questionnaire before I started working on the research structure, and I asked questions such as: do the staff find the research in the department satisfying? On the one hand, all of them said yes, research is extremely critical for them, and that you cannot really be a lecturer in the H faculty without also being a researcher. I was then very surprised to see how negatively the staff actually experience research, and then especially the senior staff. The young people who are still busy with their own studies and so on, they do not yet know the system so well and they are still somewhat more optimistic, but the more senior people in the department were almost without exception very, very negative in their experience of research. So on the one hand it is an aspect of which everyone says "yes, it is extremely critical for me as a lecturer, for my job satisfaction", but on the other hand they experience it very negatively. But those are the frustrations brought about by bureaucracy, funding, lack of infrastructure and lack of equipment. So I think it is a [silence] situation that will have to be addressed, because you surely cannot experience nothing but frustrations all day with such an important leg of your work component.

N — Later it is going to influence the other components.

I — It is, it is. I am sure that by the time the final report goes through to the dean, this aspect will be very watered down. It will not be stated as strongly in that report, because we do not want to upset the dean. He tries, from his side, to do what he can. [silence] Yes, so, I don't know. Aspects of the academic exercise: there it is easy to say we must improve here or there, and we can carry it through and expect results.

N — You can measure it and see it is better.

I — Research is more difficult. I am in fact busy right now drawing up a questionnaire for the students about how they experience, how satisfying, the "satisfaction" component is that they get out of the training. There I also want to ask rather [silence] in-depth questions, to really be able to determine how satisfying the training is for them. I also think that, as far as they are concerned, they do not really think about what they ought to get out of the training.

N — They don't really know.


I — One of the questions, for example, is: do you think that the training you receive at RAU equips you to get a good position in South Africa, and the B part of the question, to get a possible post-doc overseas. There one is playing off the market demand in South Africa against the depth of training needed to go into a post-doc overseas. [silence] Perhaps we are guilty of not giving enough attention to both streams. That we are perhaps training students, or training researchers, with the idea that they are going to be very good researchers, while the market demand in South Africa is more for people with an honours degree.

N — And not such high-level, in-depth researchers.

I — Yes, and then also [silence] what I want to mention in the report is the outcomes; there must be continuous evaluation of these outcomes. And if a student does not meet a certain category of outcomes, then the supervisor must discuss it with him or her. Um [silence] because, once again, we have intakes, we have this black box, and we have students coming out the other side. We do not know what goes on inside the black box.

N — Yes, they go through the process and they come out the other side, but it is what happens during the process and the skills; one wants to know whether they acquire skills other than merely the writing of a research report.

I — Yes, it is; the criterion we use is that if, say, two research articles come out of a master's, that is then better than a master's that produces only one article. And that is not necessarily so. [silence] A student can, during his studies, be treated as a technical assistant, which is actually wrong, or you can really try to train him as a scientist, which is actually the right thing. Yes...

N — So prof hopes that this exercise is going to address those types of problems. That it will not be watered down in a general report, and again fall through the cracks or ...


I — Yes, I think if we take it that the department's report one day reaches the faculty, then hopefully there will be [silence], from the faculty's side, an awareness of making these aspects of postgraduate training more formal.

N — That they will react to it.

I — To use another example: overseas there is a committee that looks after students. It is not just a single supervisor. So that committee usually consists of at least three people, senior, experienced researchers, who then monitor that student's progress.

N — It is not a subjective, one-person supervision.

I — And, uh, [silence] the committee then evaluates the student's progress a few times, and before the student may submit, the committee must give permission. And here with us there is often pressure from the student's side: "I have to submit now", while you are hesitant. Is the student really ready?

N — So an outcome of this will hopefully be a more formal system...

I — That the result will be a more formal system, measured against the outcomes that are expected.

N — And this model that RAU bought into, does prof think it was a good idea, or does prof think it was a good model, or was it too prescriptive, or... because one wants a type of guideline, but we still also want to try to allow academic freedom.

I — Uh, I think the model is good; for undergraduate it is very good, for postgraduate there [silence] there could be somewhat stricter criteria... We decided in the department that we are not going to follow it strictly.

N — And that is completely fine.


I — We are going to adapt it a little according to our needs, and where there are areas where we feel the model is not one hundred percent applicable, we will simply say so and move on.

N — The idea is also, if there are things that are not in place in the department, for example determining the satisfaction of the market, then one states that there is nothing of the kind in place at this stage, but that we are busy doing something about it. Because in some departments, and we hear this from other institutions too, there is fierce resistance to any form of a model, leaning very, very strongly on academic freedom. The problem is just that then all aspects are perhaps not addressed.

I — What is useful to me in this model is that it forces you to think across a much wider field.

N — Things one would not necessarily have thought of oneself.

I — Yes, yes. If you follow this matrix and, point by point for each [silence] aspect, do the situation analysis, discuss it among yourselves, and then do a SWOT analysis and say what must improve, then I think [silence] you touch on... then you get depth. You touch on aspects that often, you have momentum, but that get washed over. Things [silence] happen for a reason, taking place in that pattern, but again, it is not necessarily the desired pattern, and it often does not address what it should.

N — We hear that a lot. They have been doing it for twenty years, so it must be right, when in fact, if you have been doing it for twenty years, something is wrong. But they resist... I must say, in this department we really found an openness to self-criticise and self-evaluate, because at other institutions we also see that the self-evaluation that is done is merely a ticking-off of a questionnaire; there is no evaluation and reflection and that situation analysis of what are we now going to do to make it better.


I — I think perhaps, uh, [silence] we as scientists often have to evaluate things and the meaningfulness of a set of data, so there is such a disposition, and we do not accept things at face value.

N — Yes, they are afraid to criticise, afraid to criticise themselves.

I — In our work setting, when you submit research results for publication, they are evaluated through so-called "peer review". And that evaluation is sometimes very strict and very vicious. So we are used to criticising others, and sometimes ourselves too. It is not always pleasant to hear criticism. Yet you do see the merit of the criticism.

N — One must of course also evaluate it, evaluate the criticism, whether there is value in it, yes.

I — And often there is, and then that criticism is to the benefit and improvement of your own research. So that is the culture we come from, so perhaps that is why we are more open to the process. If it were not for the high workload, we could really have put much more time into it.

ADDENDUM 2

DETAILED NOTES ON THE VISIT OF DR TON VROEIJENSTIJN 26/02/2001 — 03/03/2001

MEETING 1

Date: Monday, 26 February 2001 Time: 12:00 Venue: CHES — conference room Group: CHES, Chairperson: Quality Care Committee

Our expectations of this introductory visit:

This visit is mainly seen as a "head turning" exercise, to sensitise colleagues regarding the self-evaluation (SE) initiative. Some colleagues still see quality assurance / SE as a bureaucratic action, and an inhibitor of academic freedom. They need to be informed about the global and local initiatives and tendencies in this regard, to show we are in line with international practices. The faculties must be made aware of the SE initiative and some initial uncertainties need to be addressed.

The program:

The meeting with the Quality Care Committee (QCC) is especially important because:

They must be able to feed a positive attitude back to the faculties; They need to be provided with an opportunity to raise questions and concerns, as they have been part of this journey from the beginning.

The faculty meetings are important because: They are often not informed and "converted" as yet; Some of the faculties are more problematic/negative than others; and They have to buy into and support the concept of SE for this to be an effective exercise.

Non-academic departments: The manual provided is, at this stage, strictly written for academic departments. Each support unit is unique and distinctive. Some support units also have an academic focus e.g. CHES. There will be a special meeting to address their needs.


Dr Vroeijenstijn's expectations:

To play a facilitating role. To convey the important message that this exercise is costly and time-consuming. To prepare the staff of RAU for what to expect. To get feedback on the draft SE-manual. To identify departments to take part in the pilot project of SE. To call for the election of small work groups within participating departments. To sensitise staff that this will eventually be expected by the government and Higher Education Quality Committee (HEQC), and prepare them for an external audit. To stress the importance of the training of staff who will be responsible for managing the process. To keep other departments informed. To identify problem areas. To stress the importance of SE for self-improvement, and not just as part of a governmental or external initiative.

The role of the QCC and CHES:

To develop the instrument. To facilitate the implementation of the instrument within faculties. To organise the process on a central level. The responsibility lies within the faculties to implement the SE.

Foreseen problems:

The IT system, because of the insufficient provision of records and the lack of reliable data.

Preparations for Dr Vroeijenstijn's next visit in May:

Preparation of departments that volunteered to take part in the SE exercise. Dr Vroeijenstijn would train the internal work groups within the identified departments. Exploration of the appropriateness of the manual for the different contexts. Provision of feedback about the manual to Dr Vroeijenstijn. The departments must experience a feeling of ownership regarding the instrument.


MEETING 2

Date: Monday, 26 February 2001 14:00

Venue: Conference room — Faculty of Education and Nursing Group: A

Dr Vroeijenstijn addressed the following issues:

This is not a bureaucratic, top-down approach. We, as an institution, have to personally take responsibility for quality, in our own environment and context. The SE-manual must be seen as an instrument departments can use to ensure and improve quality. It must be used to identify strengths, not only to point out weaknesses. The manual is not to be forced upon any department, it must be seen as a guideline and the department should decide what is appropriate for their situation. The manual must be adapted to be program, department and university specific. The whole department must be involved in this Icustomisation'-process. The SE-manual will also be adapted for the non-academic departments / support units according to their specific situations and the nature of services offered. The higher education context becomes more competitive each day. Dr Vroeijenstijn used the example of a lost person having two options, wasting time wandering around, or buying and using a map to navigate the route to the other side. The instrument must be seen as a map to guide us through this evaluation. A valuable SE-exercise needs involvement and insights from external sources. Departments sometimes assume they are doing the right things, and therefore they need an external validation of this belief. But it is important that the criteria for the internal and external evaluation are comparable. SE is context-based. The criteria for evaluating a 5-star restaurant is not the same for evaluating a fast food outlet e.g. McDonalds. Experts in the field of study should participate in selecting and determining the criteria for the specific area/department. The department would therefore play an active role in determining criteria for their own SE. All the stakeholders involved have different and diverse expectations, attitudes and demands regarding SE. Benchmarking should be done to compare the institution to other, similar national and international constituencies. 
This is not a bureaucratic, top-down approach. We, as an institution, have to personally take responsibility for quality, in our own environment and context. The SE-manual must be seen as an instrument departments can use to ensure and improve quality. It must be used to identify strengths, not only to point out weaknesses. The manual is not to be forced upon any department; it must be seen as a guideline, and the department should decide what is appropriate for their situation. The manual must be adapted to be program, department and university specific. The whole department must be involved in this 'customisation' process. The SE-manual will also be adapted for the non-academic departments / support units according to their specific situations and the nature of services offered. The higher education context becomes more competitive each day. Dr Vroeijenstijn used the example of a lost person having two options: wasting time wandering around, or buying and using a map to navigate the route to the other side. The instrument must be seen as a map to guide us through this evaluation. A valuable SE-exercise needs involvement and insights from external sources. Departments sometimes assume they are doing the right things, and therefore they need an external validation of this belief. But it is important that the criteria for the internal and external evaluation are comparable. SE is context-based. The criteria for evaluating a 5-star restaurant are not the same as those for evaluating a fast food outlet e.g. McDonalds. Experts in the field of study should participate in selecting and determining the criteria for the specific area/department. The department would therefore play an active role in determining criteria for their own SE. All the stakeholders involved have different and diverse expectations, attitudes and demands regarding SE. Benchmarking should be done to compare the institution to other, similar national and international constituencies. As we all know, quality is not static or easy to measure. Dr Vroeijenstijn used the example of wine. You can measure whether it contains all the necessary ingredients, but that does not necessarily mean it is a quality wine. The external panel of evaluators should be experts in the specific field; the ideal group is usually 5 people.


Students should also be involved and aware of the initiative. They should be allowed to air their views for a specific part of the evaluation. A student should also be included in the external evaluation committee. The issue of internationalisation vs. African relevance should be considered. We should deliver students that are trained for a specific employer. The input from the employers should also be considered when determining criteria for SE. The goals and objectives of a department should be aligned with the mission and vision of the university. A synthesis must be reached between a bottom-up and top-down approach. A SE-exercise must be conducted every x number of years in a cyclical manner. The HEQC will have their own cyclical process, and RAU would have to adapt accordingly. Client satisfaction is a major part of any SE-initiative. Departments should establish what information they require from their 'clients' and they should develop their own unique instruments to gather this information. This is a pattern of thought that must be established throughout the university. The issue of centralisation vs. decentralisation is an international problem. This initiative depends on support from management, but on the other hand, they cannot enforce it; the responsibility and ownership must lie within the department.

MEETING 3

Date: Tuesday, 27 February 2001 Time: 8:00 Venue: Conference room — Faculty of Education and Nursing Group: B

Dr Vroeijenstijn gave a brief introduction to the proposed project. (See meetings 1 & 2)

The Nursing department volunteered to participate in the initial SE-exercise. They are used to accreditation from a professional body. The Health Services are quite far advanced in this regard, and use a 3-step model, namely:

Formulation / provision of standards; Measuring / evaluating against it; Reaction to it.

This evaluation is done in a measurable format, indicating whether it is compliant, partially compliant or non-compliant. It is important that the department take responsibility and ownership for the activity.


A problem that the Nursing department experiences is the focus of the professional body, which is professional accreditation. But, what about quality? They want to consolidate Dr Vroeijenstijn's model with the existing accreditation exercise. The department of Nursing is used to and prepared for evaluation. They would like to critically investigate the model to see how it can be included and adapted according to the system they are used to. The department will provide Dr Vroeijenstijn with the necessary documents regarding their accreditation (Attached as Addendum 1). Dr Vroeijenstijn felt that the models are compatible. People will always experience the idea of formal measurement as bureaucratic. The department of Nursing feel that they are used to this form of evaluation, and do not experience it as being threatening. The instrument will be adapted specifically for the department and is not prescriptive in any way. No matter what instrument or model is used for SE, there are certain basic questions which have to be included and reflected upon. The professional body have some standards, but academic standards are dynamic and must be adapted regularly to stay internationally competitive and comparable. The Quality Assurance Association (QAA) of England (www.qaa.ac.uk) developed benchmarking standards for 28 disciplines. These can be used as examples when we have to formulate our own standards. The South African Qualifications Authority (SAQA) is also in the process of formulating standards by means of the National Standards Bodies. But the central idea is that a faculty or department sets its own standards and measures itself against them. We must strive not to have diverse and separate systems; they should be consolidated to complement each other. The terminology needs to be clarified at the beginning of the exercise. The QCC can learn from Nursing in that they have come a long way in this regard. It is in the interest of RAU and its clients.
The committee also expressed their appreciation to the department for their willingness to participate. The academic community needs to be kept informed and a culture of evaluation must be created. We do not have a choice, so the climate must be established and structures need to be formalised to manage the process. A committee must be appointed within the department of Nursing to manage and implement the process. The question of follow-up procedures was raised. Recommendations should be made for changes and improvements. It does not necessarily only mean an investment of money; some things can be changed without cost. It is not a punishing act, and departments should not feel threatened. The evaluation will also not be individual in nature, but a collective approach will be followed.


MEETING 4

Date: Tuesday, 27 February 2001 Time: 9:00 Venue: Conference room — Faculty of Economic and Business Management Group:

Prof D provided the following introduction regarding the faculty: The Faculty of E have approximately 4000-5000 students (± 40% of the university's students). The faculty also indicated that they are predominantly output and result driven. They also have the lowest student-lecturer ratio. They are mainly concerned about the quality of the results, and concentrate less on the quality aspects of the process.

Dr Vroeijenstijn made the following remarks: How can I know my product is good, if I don't know that the production process is good? But he stressed that it is up to the faculty to decide how they want to ensure quality. Faculties often have questions and comments like: "Why should we do it?", "We don't have time", "It is bureaucratic nonsense", "It is a threat to my academic freedom", "Who do you think you are to come and tell me what to do?" etc. Most faculties, who expose themselves to such a SE, usually respond: "It was a hell of a job, but we are glad we did it!" Management should support the initiative, but they should not be seen as a control mechanism. South Africa should take a lead in this regard, seeing that government in the near future would enforce it. Faculties often blame low budgets or limited finances for low standards in quality, but it is not always a matter of money. Sometimes it is only attitudes that must be changed and thinking that needs renewal. But in the instances where money is needed, management should provide financial support as far as possible. Staff motivation is often problematic and a positive atmosphere should be created. During the implementation, staff should be motivated to rectify problem areas and to take corrective actions. Make it clear to staff members that it is also to improve their working conditions and levels of satisfaction. The aim of the exercise is not to close down any department, but to identify or analyse strengths and weaknesses. The focus is not on the negative, but to build and expand on the positive issues. An internal evaluation should be followed by an external assessment. The two would be linked and the external assessment should be seen as an instrument to check the internal quality. Departments should not do internal


evaluation merely for preparation for the external, but they should be aware of the departmental value of the SE. External advice and input are necessary to determine whether the department output meets the demands of the labour market. External forces should then influence, on a continuous basis, curriculum design and development. The department will play an active role in nominating the experts in the specific field. The programmes offered in a specific department are the starting point for assessment. All contributors to the program should be involved in the SE. Evaluation of research is also important, but we have to start somewhere, and quality in teaching and learning would filter through to research projects. The SE-exercise should be followed by a strategic planning session, based on the self-evaluation. This plan needs to be updated each year. The environment is changing so rapidly that a SE needs to be done at least every 5 years. The strategic plan will then be adapted accordingly. Management should rethink the role of the annual reports, and how the SE can be incorporated. The SE should be absorbed into it, and not be an additional burden and administrative exercise. It should be integrated in one report. Some departments within the faculty already have certain procedures in place. That should also be incorporated and used in the SE. Internal quality assurance systems within faculties should be put into place to determine, for example, student satisfaction. The evaluation of teaching by students should also be refined to be incorporated in the SE. The opinion of the labour market and alumni should also be determined and used for strategic planning. Departments should not underestimate the importance of "small quality" e.g. being on time, preparation for lectures etc. Regarding the matter of undergraduate students being incompetent to evaluate lecturers, they are the 'clients', and their perceptions and experiences influence their studies and decisions.
The SE-manual is not a questionnaire with certain questions that needs to be answered. It is more of a qualitative approach of reporting on a certain issue prompted by means of formulated questions.

MEETING 5

Date: Tuesday, 27 February 2001 Time: 12:00 Venue: Conference room — Faculty of Science Group: F


SE should not be seen as a bureaucratic system that is out of the control of the faculty or department. It is important and valuable for internal development. It is time-consuming and demands a lot of effort, but eventually it will be made compulsory from government side. We need to prepare ourselves internally for this. Faculties and departments must establish an internal quality assurance structure. Each department can adapt the instrument provided to suit their specific needs and situation. It will not be enforced and must be seen as a guideline. The exercise should be repeated every five years. It should also be followed up by the formulation of a strategic plan. Strengths and weaknesses must be identified. The weaknesses would be prioritised to determine which are the most important. The instrument should not be seen as a questionnaire, but as an aid to guide a department to reflect qualitatively on certain activities and issues within the department. The faculty already have some systems in place and the SE should be incorporated with the existing practices. The support and commitment from management is stressed once again. The role of the QCC should also be explicit and their activities visible on campus. The departments which are taking part should appoint an internal work group. They must have access to the necessary information. A draft report should be presented to the whole department for information and comments. Their experiences should be shared with other departments. The recommendations made must be followed up. It does not necessarily cost money to do this. Sometimes a change in attitude is required, but management must be committed to provide funding where possible. The tension between decentralisation vs. centralisation was discussed. A balance between the two is needed, but the responsibility lies within the department to establish their own criteria for SE, so that the staff can accept ownership for the process.
Other institutions in South Africa are already actively involved in SE, e.g. Stellenbosch and Pretoria. We are not in the position, as yet, that government forces it upon us. The Netherlands would never have engaged in SE had it not been forced on them from the outside. Institutions in South Africa should take the initiative and prepare themselves for the inevitable. The HEQC promised in the Founding document that they will have a "light touch" if institutions are actively involved in audit activities.

MEETING 6

Date: Tuesday, 27 February 2001 Time: 14:00 Venue: Conference room — CHES


Group: Task Team 7 & QCC secretariat

Regarding the document of Task Team 7, Dr Vroeijenstijn had the following comments:

The context should be explained from an international, national and RAU perspective. The development of policy documents is a priority. Examples can be found on the website of the QAA (Quality Assurance Agency) of the United Kingdom. It is also important that it should be implemented on faculty level and that the QCC is not seen as a centralised bureaucracy.

MEETING 7

Date: Wednesday, 28 February 2001 Time: 9:00 Venue: Boardroom, A ring 3 Group: FOTIM

The notes of the workshop as provided by FOTIM are attached as Addendum 2. The slide presentation is attached as Addendum 3.

The following additional comments were made regarding SE:

SE should be done holistically, not in a fragmented manner. Certain problems and questions are experienced when engaging in a SE-exercise, e.g.:

Why should we do it? We never had complaints before. We don't have time. SE is seen as a firing or closing down tool. It is seen as a threat. The involved parties do not know why they are doing it. The main aim should be internal quality improvement. It is not a punishment. Dishonesty. What would the consequences be? Weaknesses should be discussed thoroughly. We need money to improve. Management does not support us. All aspects need to be evaluated, including management structures. It is seen as individual appraisals. Unexpected outcomes. Lack of data results in 'white spots' where answers are not available. Provision should be made for the development of an instrument to address the 'white spots'.


There should not be two systems of SE in one country for internal and external assessment. The following are important perspectives captured in the January Founding document of the HEQC:

The importance of the relationship with professional bodies regarding accreditation procedures. The idea of 'learnerships' to build capacity. Even without external assessment, SE is necessary for personal development and planning. External assessment is often used as a political instrument. It is meant for the assessment of quality and should be independent of policies. Can we convince the external bodies that this is the correct instrument to use, so that there is a correlation between internal and external assessment procedures? Public and private providers should all be subjected to accreditation.

The main aim for internal and external assessment is improvement. The process and the product need to be assessed. If you take care of the process, the product will be successful. But the desired product needs to be determined beforehand. The problem in higher education (HE) is that we need to define clearly what kind of graduate or output we expect, which is not an easy task. We are too process-orientated. The output should be clearly defined, and then we must determine what kind of skills, knowledge and attitudes we need to achieve that specific output. The labour market is only interested in the product. Quality SE is group motivated with clear explanations about what is expected. Certain instruments should be in place to collect and access the required data. The SE report is addressed to and discussed within the department. It is also important to direct it to the faculty or body on central level. Recommendations should be addressed. The role of the Quality Unit (QU) is to monitor the process, motivate staff and facilitate the process. They should provide assistance regarding the process to follow. They should organise training sessions, but have no say in the assessment itself. The department set their own criteria. The QU should also assess and analyse its own situation. It is definitely problematic to continue with the process before clear guidelines are established by the HEQC, but on the other hand, institutions can't sit around and wait for governmental intervention. Quality depends on the setting of criteria; the department or institution decides what is acceptable for them. There can also be conflict between stakeholders, but they must strive to reach a balance between the different requirements and points of view.


The support needed from central management is stressed once again. They should make it possible and motivate the staff to participate. Quality is made at grassroots level, and the departments are still responsible for their own initiative, but central management should provide the necessary instruments. They should be involved in the follow-up procedures, providing support and resources where possible. The idea of a 'contract' between the department and management was introduced, as to who plays which role in the follow-up procedure. The setting of criteria should be done within a certain framework. Benchmarking should also play a primary role in SE. Departmental workshops should be planned by the departments themselves and not prescribed. The QU should assist in training and support. Why do we use models when venturing into SE?

It helps us to direct our thinking, but leaves the opportunity to formulate and determine our own goals and aims. It helps us to discover 'white spots'. It helps the work group to structure the exercise.

MEETING 8

Date: Thursday, 1 March 2001 Time: 8:00 Venue: Faculty of G Group: Faculty G

The SE must not be seen as a bureaucratic process, but as something for personal benefit and improvement. Higher education becomes more and more competitive. SE is a cyclical process and the instrument can be adapted for the specific department or faculty. It must be seen as a guideline, and not a prescriptive questionnaire.

A professional body is accrediting the Faculty of G comprehensively. This will take place in August this year. The outcomes-based format will specifically be investigated this year. The external panel consists of members from the academic sector and industry. Dr Vroeijenstijn commented that producing material for an external body is not SE; there is no opportunity to reflect and make recommendations for future actions. The professional body also looks only at undergraduate courses. Dr Vroeijenstijn will have a discussion with the Faculty of G in May to prepare for and assist with the accreditation taking place in August.


MEETING 9

Date: Thursday, 1 March 2001 Time: 10:00 Venue: Boardroom — H Group: H

SE is done for personal benefit, not only because of external pressure. It is not a bureaucratic process, and does not inhibit or threaten academic freedom. The faculty or department must determine where it fits within the broader university strategic plan. It is sometimes a painful process, but must be seen against the background of competing institutions and the importance of internationalisation. There is a definite need in South Africa to raise the levels of quality. A paradigm shift is needed: a change in mentality from the idea of the individual academic towards the notion of a business-like operation. Higher education will not survive without making this conceptual shift, and it needs to be attended to in a structured manner. SE is a formalised quality 'test', and we need to do it on our own initiative and not wait for the Ministry to force it upon institutions.

A new I-programme was instituted in 1997. The success of the course needs to be determined, and the faculty feel that the manual can assist them in this evaluation. The faculty also have a journal that is losing subscriptions, and they want to determine why. This is definitely a quality issue that needs to be addressed, also in terms of subsidy. The faculty want to initiate an internal assessment, and the manual can be adapted for their specific needs. The faculty must have ownership and the process must be internalised. The role of the annual report needs to be re-evaluated in this regard.

MEETING 10

Date: Thursday, 1 March 2001 Time: 12:45 Venue: Boardroom — Faculty of J Group: Faculty of J

Higher education institutions are becoming increasingly competitive in terms of national and international developments.


SE should be introduced as a continuous process and an instrument to assess the quality of programmes and departments. The faculty should participate in adapting the manual as a thinking instrument about quality. It is actually nothing new, merely a more structured way of reflection. It is also not a questionnaire that needs to be completed. The focus needs to shift from the idea of the individual professor to a completely joint enterprise including all stakeholders. There are certain problems and concerns involved, e.g. the reasons why it should be done, resistance, lack of time, overloaded personnel, bureaucracy, support from management, workload, etc. A corporate identity should be promoted and programmes must also be viewed as joint enterprises. The definition we accept for quality needs to correspond with that of the national bodies, and benchmarking should be used in this regard. The Faculty of Arts consists of a variety of diverse programmes and departments. The SE would be done by investigating programmes, not departments. The focus is not only on determining weaknesses, but on building on and promoting strengths.

MEETING 11

Date: Thursday, 1 March 2001 Time: 15:00 Venue: C les 303 Group: Non-academic department heads (Support units)

Non-academic departments are diverse and unique in nature, and a manual specifically for them has not been developed; it would have to be tailor-made for each unit. The support units (SU) have a large impact on and role to play in the overall quality of an institution. SE will eventually be forced on the institution by government, but it is also valuable and beneficial for self-improvement. The nature of higher education is becoming increasingly competitive, with more international institutions entering the South African context. With regard to SU, it is very important to determine the satisfaction of the clients, and whether goals are achieved. The manual for SU will mainly focus on five areas to evaluate:

Goals and objectives; Resources used to achieve these goals; The procedures followed; The task performance; and Feedback from and satisfaction of clients.

Questionnaires must be compiled to determine the satisfaction of the clients. This can be done in a combined effort among different SUs. It is basically doing a SWOT analysis. Sometimes an objective facilitator is needed to manage the process, or the unit itself can do it. The following SUs volunteered for the pilot SE project:

The Library, Public Relations and the Sport Bureau.

A holistic approach must be followed. The academic departments and SU influence each other and have a major impact on the overall quality of the institution.

MEETING 12

Date: Friday, 2 March 2001 Time: 8:00 for 8:30 Venue: A les 103 Group: RAU workshop

The slide presentation from the workshop is attached as Addendum 4.

MEETING 13

Date: Friday, 2 March 2001 Time: 14:00 Venue: C les 303 Group: CHES

Dr Vroeijenstijn will train the first two external teams. The CHES is responsible for the training of the other external committees and departments. The manual for SU will mainly focus on five areas to evaluate:

Goals and objectives; Resources used to achieve these goals; The procedures followed; The task performance; and Feedback from and satisfaction of clients.

An internal work group needs to be established. The staff of the university are our clients; we serve students via lecturers. The centre must reflect on its value for the RAU: what would they miss if the centre were no longer in existence?


MEETING 14

Date: Friday, 2 March 2001 Time: 15:00 Venue: Tearoom — Department K Group: Department K

The staff had to adapt to the merging of the two departments. They felt that it was meaningful to combine the departments because there were overlaps between programmes. They also feel that they have not reaped any rewards or seen any follow-up regarding the initial SE exercise of the university. Dr Vroeijenstijn pointed out that it was not really a SE, but merely a questionnaire that had been completed. An internal committee must be established to manage the process. The committee must have access to information and will be trained when Dr Vroeijenstijn visits in May. It is preferable to have a student on the committee. The department enquired whether it is possible to obtain an example of a SE report done in a similar department. Dr Vroeijenstijn will investigate this matter.

MEETING 15 - Debriefing

Date: Saturday, 3 March 2001 Time: 15:00 Venue: 133's Midrand Group: CHES

A meeting must be organised with the seven volunteer departments. They need to appoint their internal work groups and convenors. We must investigate the manual and send suggestions to Dr Ton, who will complete the manual for SU.
