Feasibility study on impact of cross-border quality assurance

Don F. Westerheijden



    Contents

1 Rationale and research question
  1.1 Quality assurance in cross-border higher education
    1.1.1 Higher education institutions and provision of cross-border higher education
2 An overview of literature on impact of quality assurance
  2.1 What is impact or effect of quality assurance?
  2.2 Literature on impact of quality assurance
    2.2.1 United Kingdom
    2.2.2 The Netherlands
    2.2.3 United States
    2.2.4 Hong Kong
    2.2.5 Other examples
    2.2.6 Good practices and sharing
3 An overview of literature on cross-border education
  3.1 National regulation of recognition and quality assurance in cross-border higher education
  3.2 Guidelines regarding cross-border higher education
  3.3 Higher education institutions in cross-border higher education
4 Taking Stock: What is known and what not?
5 Proposed research design
  5.1 Selection of cases
  5.2 Research methods
  5.3 Estimate of study effort and final remarks
  5.4 Plan B: What if access to higher education institutions is insufficient?
    5.4.1 B1: Telephone interviews
    5.4.2 B2: Quality assessment agencies' reports and underlying materials
    5.4.3 Estimate of study effort
References


    1 Rationale and research question

Quality assurance has become a standard instrument of higher education policy since, approximately, the late 1980s. The policy theory underlying the introduction of quality assurance policies involves the expectation (hypothesis) that higher education institutions will use the instrument to improve or enhance their quality, whether out of compliance with external pressure (from the quality assurance agency or funding authority), out of self-interest (to attract more and better students[1] as well as research contracts) or out of professional ethos (higher education ought to strive for excellence, which is one of the possible conceptualisations of quality (Harvey & Green, 1993)). The different motivations (compliance, interest and ethos) might lead to different degrees and ways of higher education institutions reacting to external quality assurance. In the literature, mention is often made of a superficial compliance culture (van Vught, 1994) as opposed to a genuine quality culture (EUA, 2006).

Many quality assurance agencies (QAAs) have some (formal and informal) information on their impact, but unfortunately these data are not published. The INQAAHE now wants to underpin the policy theory with evidence, and it chooses to do so in the critical case of cross-border higher education, which for several reasons might be the place where the risk of low-quality provision is highest. The research aim is formulated as establishing empirically:

the impact that cross-border external quality assurance processes of agencies have on the policies and management, practices and outcomes of the institution regarding their operations abroad.

The feasibility study aims to establish whether this issue can be studied empirically and, if so, how. The policy theory can be presented graphically as in Figure 1.

Several terms need further definition. We shall refrain from trying to define quality of education in a substantive sense, because that is done by and between quality assurance agencies and higher education institutions.[2] External quality assurance and internal quality work will likewise not be defined by us; we shall take at face value whatever the quality assurance agencies and higher education institutions take them to be, although a critical examination of different understandings of quality assurance and quality work should be part of the case studies, to gain a better understanding of potential impacts and of the lack thereof due to communication issues. The term education quality work was borrowed from Scandinavian quality assurance agencies (via Massy, 2003) to denote the activities inside higher education institutions to monitor, enhance and assure the quality of education, without assuming that it has to be a full-fledged quality management model.

[1] Students will be used here as the traditional term denoting all learners.

[2] If a quality assurance agency and a higher education institution hold different conceptions of quality, that may cause problems for the impact of external quality assurance; but that is a different matter from our having to define quality, and might have to be addressed in the empirical study as one of the obstacles to achieving impact.

    Figure 1 Policy theory of quality assurance impact

Use of external quality assurance has for the moment been put in terms of a scale from no use via superficial compliance to quality culture. This is a different operationalisation of use than in some of CHEPS's earlier work on the subject (Frederiks, 1996; Frederiks, Westerheijden, & Weusthof, 1994; Jeliazkova & Westerheijden, 2000). We will come back to this crucial term later in this feasibility study. In any case, use is about the internal response of the higher education institution to external quality assurance; it concerns the activity (and change) of internal quality work. The final major term in need of definition is impact. This too will be the subject of further conceptualisation as a result of the literature study. As a working definition, impact is the changes made to the education offered. It is the result of use in internal quality work. In all probability, it will be proposed to limit the empirical study to the white boxes and arrows in this figure and thus not to assess independently the impact of quality assurance on the education offered. Finally, and even further off the map for our empirical study, are the outcomes of all of this for education as experienced by students.

For delimiting the field of the study, we also need to define some further terms. In particular, what are operations abroad? ENQA defines transnational higher education as any higher education provision (including distance education programmes) available in more than one country (Bennett et al., 2010). For all practical purposes, transnational education and cross-border higher education can be treated as synonyms (Stella, 2006).

    There are three options for transnational or cross-border education:

- demand travels (student mobility, or GATS Mode 2),
- supply travels (GATS Mode 4 if only individual teachers travel, Mode 3 if the higher education institution sets up a branch campus, etc.), or
- the education service travels (distance education, GATS Mode 1).

The practice of cross-border education is not well framed in the GATS modes: they do not easily accommodate e.g. joint study programmes (combining Modes 2, 3 and 4: students travel, but a new programme is offered by different institutions, sometimes with teachers moving as well). The most common situation, as David Woodhouse said (personal communication, Oct. 2010), is an institution employing staff overseas but falling far short of establishing a branch campus (in between Modes 3 and 4), while according to Coleman (2003), franchising is the most popular yet least monitored mode of trans-national education (Lim, 2010), which would fall in between Modes 1 and 3, perhaps. Nevertheless, branch campuses and the like are not in short supply, from the University of Nottingham's well-advertised locations in China to RMIT's building towering over the countryside just outside Hanoi.

The more common mode of operation is often called supported distance education. Reasoning pragmatically, we take operations abroad to cover the range from supported distance education to setting up a full offshore campus, but exclude foreign students studying in an institution's normal programmes as well as pure distance education (the latter because it is indistinguishable from within-country distance education).

    1.1 Quality assurance in cross-border higher education

At the international level, quality assurance in cross-border or transnational higher education (we take these terms as equivalent for all practical purposes) has been the subject of extensive work in the INQAAHE and in UNESCO, particularly. UNESCO's Guidelines are the point of departure in this discussion, almost against their professed non-normative character (UNESCO, 2005):

The Guidelines were conceived as being voluntary and non-binding in character and as providing orientation for developing national capacity and international cooperation in this area. They are neither a normative nor a standard-setting document.

UNESCO's guidelines aim to collect international best practices in order for authorities to protect students and other stakeholders from low-quality provision and disreputable providers (UNESCO, 2005). The operation of providers cross-nationally, and the worry that low quality may be more prevalent in this area than in any other, form the lead theme in the international discussions.


    Whereas the focus of the UNESCO Guidelines is on the quality of provision, INQAAHE, as the network of quality assurance agencies, published guidelines on the operation of external quality assurance, which might include review of cross-border higher education (INQAAHE, 2007).

Both sets of guidelines will be discussed in somewhat more detail below (§ 3), from the perspective of how they can help structure an empirical study of impact on the quality of an institution's operations abroad.

    1.1.1 Higher education institutions and provision of cross-border higher education

Providers is used in this discussion to denote the suppliers of cross-border higher education, rather than higher education institutions, to show awareness that much of it is provided by other types of organisations than the traditional (public) higher education institutions whose core business consists of on-campus, face-to-face education of adolescent students. The field is populated with open universities that specialised in distance education from before the ICT revolution of the late 20th century, as well as new online providers, which are often private and even predominantly for-profit. However, more traditional higher education institutions have also become active in the field of cross-border higher education, mainly in response to the need to acquire additional income after home (public) funding became more restricted. The economic rationale may not be the only reason for traditional higher education institutions' interest in cross-national higher education (older notions of academic cooperation keep playing a role as well), but the ascendance of the economic drive has given a major boost to the volume of cross-border higher education (Altbach & Knight, 2006; van Vught, van der Wende, & Westerheijden, 2002), as well as having led to increasing worries about low-quality provision. In terms of risk profiles, then, cross-border higher education has risen markedly on the international agenda of quality assurance agencies.


    2 An overview of literature on impact of quality assurance

    Two sets of literature converge in the research question for this study:

- Literature on impact of quality assurance
- Literature on cross-border higher education

It is expected that the intersection of the two sets may be (well-nigh) empty of empirical studies, even though many reports on the formal structures, processes and intentions of systems, regulations, and proposals for a better future may have been published. The empirical question of how higher education institutions, and the teaching staff as well as students within them, may have reacted is the challenge of the proposed project. The current section treats the first of the two sets of literature; the next one the second.

    2.1 What is impact or effect of quality assurance?

As a preliminary to studying impacts, it should be established what is understood by the impact of quality assurance: which of the many changes in higher education institutions can be ascribed to the influence of external reviews (quality assurance and accreditation)?

Methodologically, we are confronted with the question of how much of the ensuing change in the higher education institutions depends on the internal and external quality assurance process, if any change can be detected at all (Harvey, 2006). From evaluation studies we take the consideration that it is more feasible to discuss the contribution of certain factors to outcomes in a multi-causal organisational process than to attribute (parts of?) impacts strictly to a certain (single) cause. In a critical tone, it was stated recently that "[t]he literature on quality assurance is replete with examples of insiders talking among themselves rather than engaging with the public, governments or students" (Massaro, 2010), which might serve as a reminder that, in the end, impacts must be visible to these external stakeholders.

    2.2 Literature on impact of quality assurance

Impacts of quality assurance have been researched within national higher education systems, in particular in the UK and in the Netherlands. In this section, the literature is summarised to develop a list of types of impacts actually found and, wherever given, also the correlates of impacts: are they different in, e.g., subsectors of higher education institutions or knowledge areas? Do they depend on the assessment methodology? Etc. Some examples of studies are mentioned below.

    2.2.1 United Kingdom

In the UK, external quality assurance was introduced in the 1980s in a confrontational situation between government and the rest of the higher education system. Hence, one of the themes in studies of the system was the stress produced on academe by this instrument in this situation (Brennan, 1990; McNay, 1997). Much of the attention in the UK went to the quality assessment procedure with the most tangible consequences, i.e. the Research Assessment Exercises (RAEs), and to their positive as well as negative effects on research focus and productivity (Westerheijden, 2008), in the form of refocusing research on short-term projects and salami publishing, but also of producing more publications that were cited more often (indicating impact, hence quality of those publications). Another effect of the RAE was that academic staff followed the money, in Deep Throat's words (Bernstein & Woodward, 1974), and devoted more attention to their research productivity than to teaching quality (Jenkins, 1995). Assessment of teaching quality was to redress the balance between education and research, amongst other objectives.

Regarding the UK's Teaching Quality Assessment (TQA) of the 1990s, findings included, for instance, that in a former polytechnic (the most teaching-oriented type of institution, one would expect), nearly two-thirds of those interviewed considered audit and quality assurance mechanisms a bureaucratic practice that had little impact on their work. Only about one-third found the audit useful for improving undergraduate classroom teaching practice, particularly for increasing academics' awareness of the importance of good teaching (Cheng, 2010). One study concluded that the TQA failed to produce meaningful impact because it was not supported by the academic community (Laughton, 2003); however, as this study was based on views of academics volunteered in a newspaper, bias against the TQA is to be expected. Views on the TQA have been critical from the beginning (Pollitt, 1987, 1990; Race, 1993), and to this day it remains difficult to distinguish genuine and sustained academic arguments from feelings against the government that was seen to have forced these New Public Management tools (Pollitt & Bouckaert, 1995) upon an unwilling community.

An additional element of the quality policies in the UK, starting from the year 2000, concerned the introduction of nationwide subject benchmarks in a range of disciplinary areas to set out the characteristics and standards of study programmes, explicating the implicit understanding of the gold standard that external examiners (a phenomenon hardly known outside the UK and Commonwealth countries) traditionally had been expected to uphold among British higher education institutions. There is a national recognition scheme to ensure their sufficiency, distinction and connectedness to the subject (and professional) community (QAA, 2010b). Development of subject benchmarks was decried at the time as a potential infringement of academic freedom or institutional autonomy (Hargreaves & Christou, 2002; Hodson & Thomas, 2003; Trowler, 2004), and even more nuanced commentators feared the neo-vocational turn (Yorke, 2002) implied by the concentration on graduates' competencies. Yet, as I heard informally on a couple of occasions years later, with the benefit of hindsight they were seen as beneficial for raising awareness of what good education meant per field.

In a way, these subject benchmarks can be seen as precursors of the sets of learning outcomes defined in, until now, 24 Europe-wide Tuning projects (Tuning Educational Structures in Europe, see http://tuning.unideusto.org/tuningeu/). This also goes to show that impacts of quality assurance policies may differ markedly between the short and the long run.

A contributing factor to the later softening of the fears about subject benchmarks may be that the TQA, in which subject benchmarks were going to play a role, was abandoned suddenly at the moment that they were being developed. Consequently, in their application, subject benchmarks lived in a different environment, with more room for institutional autonomy (reviewed in institutional audits), than when the fears were first voiced. Thus, two more points that can be made by looking at subject benchmarks in the UK are (1) that methods of external quality assurance can make much difference for the reception of quality assurance among higher education institutions and teaching staff, and (2) that policies may influence each other, strengthening or de-emphasising different elements of each other.

    2.2.2 The Netherlands

The Dutch studies on the impact of quality assurance date mostly from the 1990s, when the country had a soft evaluation system for the education function of higher education, i.e. without accreditation-type consequences. The studies focused on the question whether recommendations from external reviews were used for decision-making inside higher education institutions (Frederiks, Westerheijden, & Weusthof, 1993; Frederiks et al., 1994; Scheele, Maassen, & Westerheijden, 1998; Westerheijden, 1990, 1997; Westerheijden & Frederiks, 1997). Later, when accreditation was introduced, this went along with monitoring of its impacts as well (Goedegebuure, Jeliazkova, Pothof, & Weusthof, 2002; Inspectie van het Onderwijs, 2002, 2005a, b; NVAO, 2009; Westerheijden et al., 2008). Elements drawing much attention were that programme accreditation was more expensive in direct and indirect costs (a very real impact for higher education institutions) and required more bureaucracy in comparison with quality assessment of the same programmes. What this meant was largely that there were higher demands on documentation of quality assurance policies on the one hand, and that the serious consequences of not being accredited made higher education institutions take implementing and documenting their quality management much more seriously than before. In addition, the more official nature of accreditation (which might eventually be challenged in court) did indeed lead to more paperwork than the softer evaluation processes existing previously. Correlated with that point, there were also worries about the costs of the system rising significantly compared with before.

Such worries did not subside much with the experience of the first round of programme accreditation in 2003-2010 (Inspectie van het Onderwijs, 2005a, b, c, 2006a, b, c; NVAO, 2009) and led to proposals for a lighter touch in the second round of accreditation (starting 2011): institutional accreditation focusing on sound internal quality assurance would, if successful, decrease the burden of external review on individual study programmes.

Policy intentions from the beginning were that quality improvement/enhancement would have to be stressed as much as accountability goals. Placing ownership of the external quality assurance process in the hands of the umbrella bodies of the higher education institutions was to enhance the improvement orientation, by focusing on peer review rather than an inspector's examination through a state-related agency. Nevertheless, comments from the higher education community persisted throughout that accountability and ticking off checklists remained an important part of the process, and this type of comment certainly did not diminish after external assessments were changed into accreditations. These impressions persisted even though several ways were tried to emphasise the improvement orientation. For instance, over the generations of evaluation protocols, increasing focus was put on the institution's own quality management as an internal mechanism for improvement. This was often interpreted at the shop-floor level, however, as taking attention away from the teaching and learning process and requiring more bureaucracy.

    Some of the same studies also noted additional impacts of introducing quality assurance (Westerheijden, 1997), which we want to mention here before going into the issue of use of external quality assurance:

- more cooperation regarding the organisation of teaching (which often had been lacking as a consequence of academic freedom ideas)
- more cooperation in producing research
- increasing power for institutional managers

The external quality assurance system's reports and recommendations were extensively used in the higher education institutions. In brief, it was found (Frederiks, 1996; Frederiks et al., 1994; Westerheijden, 1997) that higher education institutions never showed complete neglect of external evaluation reports (no cases of no use), and passive use quickly became a standard procedure: it became a normal reaction to table external evaluation reports for the next meeting of the appropriate programme-level committees and afterwards to consider them at higher hierarchical levels in the higher education institutions (trickling up to faculty deans and central-level decision-makers). Besides these formal reactions, it appeared that the external evaluation reports were regarded as unbiased, externally validated information, leading to external legitimation of the reputations of persons and units.

Research attention then shifted to further distinctions of active use, i.e. making decisions regarding behaviour or policy in higher education institutions based on external quality assurance reports. In a revisited version of the conceptual model, further distinctions were made along the type of active use (instrumental or conceptual) and the time dimension (Fig. 1). There may be a correlation between time and type of use: instrumental use can take place immediately or in the long run, but conceptual changes in the frameworks of thought of actors are more probable in the longer term.


    Fig. 1 Dimensions of active use

Instrumental use was meant to address the question of whether recommendations made in external evaluation reports were implemented by the higher education institution. The majority of recommendations were indeed implemented (Frederiks, 1996); not a surprising finding, as (1) part of the recommendations were the external visiting teams' reflections of remarks and plans in the self-evaluation reports on which they based themselves, and (2) the Dutch government was among the first to initiate a formal follow-up process to monitor if and how higher education institutions reacted to the external quality assurance (Scheele et al., 1998).

Instrumental use flowed over into other decisions in higher education institutions: temporary staff places were awarded to well-performing units, badly-performing units were reorganised. We called this a halo effect: the externally validated quality reputations gave institutional management an objective tool to justify differential decisions, also in matters not directly related to the issues reviewed in the evaluation process.

In a further study into use (Jeliazkova et al., 2000), a conceptual model was applied to help explain when an institution might adopt a recommendation, or when such a recommendation would lead to prolonged debate while a decisive outcome would be postponed, sometimes indefinitely. The complicated flow chart was based on the work of Fischer (Fischer, 1995; Fischer & Forester, 1993), focusing on the communication process involved in organisational decision-making.


    Fig. 2 Communicative model of follow-up of external quality assurance

The flow chart is intended to illustrate a number of things. First, the coloured boxes on the left (A to D) indicate that recommendations can be of different types, from simple technical solutions to a problem recognised inside the higher education institution, to calling into question the goals and values defining the programme. Second, whether a recommendation is of type A, B, C, or D depends not only on the content of the recommendation but also on how it is read by the receivers of the message. Third, as long as the debate moves at the system level (associated with boxes C and D), the expected outcome of the debate will be rejection of the recommendation, while acceptance is only possible once the debate has cooled down (symbolised by the colours of the boxes between blue and red) to the programme level. Fourth, there are connections between all of the levels: a decision-making process can start at any level and end at any level. The dynamics of the process can be charted but not predicted with such a model.

It was found in this study, by the way, that study programmes felt strong pressure from external quality assurance (or its perceived threat) to improve education to meet threshold quality, but for the programmes easily meeting the threshold standards there was little incentive to engage in continuous further improvement (Jeliazkova et al., 2000), even though improvement orientation had always figured highly in public statements about the Dutch quality assurance system.


    Moving beyond instrumental use, conceptual use was also distinguished. Conceptual use concerned the frameworks that actors in higher education institutions used to think about their work. Interviews (Westerheijden, 1997) brought to light that decision-makers in higher education institutions, even if they said that the external quality assurance had had little (instrumental) impact on them, framed their answers in terms and categories that would have been unheard of before the introduction of external quality assurance. Thus, both the research and education processes were since then seen as matters that could be managed; quality was an operational category rather than only an ideal of excellence; and administrators began to turn into managers. Above all, in the egalitarian Dutch culture, excellence began to be allowed to be visible. In that sense, quality assurance has had a pervasive impact on the Dutch higher education system.

Admittedly, the gamesmanship of managers in Dutch higher education institutions did not develop as quickly and in as sophisticated a manner as in the UK, where especially the RAE required much managerial involvement (Westerheijden, 2008). In both countries, however, the institutional management acted as a buffer between external influences (quality assurance, funding) and internal life (Westerheijden, 2008).

    2.2.3 United States

The United States has the longest history of formal external quality assurance, in the form of accreditation. Accordingly, long-term impacts should be visible here. However, the character and functions of accreditation have changed considerably in recent decades, mainly under the influence of federal legislation requiring more evidence of student learning, in reaction to political attention to an increasing rate of loan defaults after graduates failed to obtain the types of jobs (and associated salaries) they had expected.

US accreditation consists of two main types: institutional accreditation, needed for obtaining access to several funding sources at the federal level, both for higher education institutions and for students (grants), and specialised or programme accreditation, which applies only to fields where professions organise themselves for this purpose, often to control access of graduates to the profession. This contrasts with the European approach, where as a rule national quality assurance or accreditation systems must be applied to all institutions or all programmes.[3]

[3] Still, it should be remarked that there may be loopholes in European ideas of all: in some countries, all private higher education institutions were accredited but not public ones, or the other way around, public higher education was governed by quality assurance but not private providers.

    Institutional accreditation has played a role in state and federal policies regarding higher education at least since the 1944 GI Bill, but was made into a gatekeeper with the Higher Education Act of 1965 (Ewell, 2007). Based on the US experience, Ewell (2007) proposed six propositions on conditions to make external quality assurance effective:

    1. The likelihood that state interests will be served increases as quality approaches convey a clear and carefully delimited message about what the state values, and when consequentiality visibly reinforces this message



2. The likelihood of institutional engagement with quality initiatives increases with consequentiality [in a neutral sense of what is at stake, DFW] (but this reaction may not always be consistent with state interests)

    3. The likelihood that state interests will be served increases when quality approaches allow significant institutional discretion, and are implemented flexibly to empower local leadership and recognise significant differences in institutional circumstances

    4. The likelihood that institutions will be meaningfully engaged increases when quality approaches are implemented by quasi-governmental third party organisations (but state interests are served only if such organisations pursue an agenda that is consistent with state objectives)

    5. The likelihood that public interests will be served increases when quality approaches are open, transparent and provide meaningful public information

    6. The likelihood that all interests will be served depends on the level of trust accorded to higher education institutions by states (and their agents undertaking quality reviews), and upon the level of respect accorded to quality reviewers by the academics under review

Regarding impacts of the external reviews in the USA, a few years later Ewell (and similarly Banta, 2010; Houston, 2010) wrote that "the goal of providing adequate evidence of student learning remains elusive" (Ewell, 2010). Concerning impacts of professional accreditation on US higher education, the articles found in a literature search all concerned business studies and engineering.

Notwithstanding the long traditions, US teaching and research staff struggle with much the same issues as their colleagues in more recently established quality assurance systems, as a study on accreditation in business schools testified (Roberts Jr., Johnson, & Groesbeck, 2004):

Obtaining AACSB accreditation is a major undertaking. It takes time, diverts a lot of administrative and faculty time from other activities, is fraught with uncertainty, and takes money. A fundamental question is whether or not it is worth the effort and expense.

    The total costs for even a small school might rise to over US$ 500,000 per year (Roberts Jr. et al., 2004).

Roberts et al., on the other hand, listed benefits as well, such as easier access to graduate schools, appreciation by employers knowledgeable about AACSB accreditation, attractiveness to better-qualified students, higher salaries for teaching staff and more emphasis on research, even after research was de-emphasised in the new, mission-driven AACSB criteria of the 1990s. Their study focused on reactions among the teaching staff themselves, who saw as impacts, on average:

- More time devoted to research[4,5]
- More stress and less work satisfaction
- No pay increase except for newly-appointed teaching staff
- Obtaining promotion and tenure becomes more difficult
- Few changes to education, though classroom instruction may have deteriorated and time devoted to students was reduced
- Staff impressions are that students and employers benefit from accreditation status, notwithstanding the previous points

[4] Additionally, it was found that marketing department chair holders valued research publications from AACSB-accredited business schools higher than others (Heischmidt, K. A., & Gordon, P. 1993. Rating of Marketing Publications: Impact of Accreditation and Publication History. Journal of Education for Business, 68(3): 152-158).

[5] Similar situations pertain in all fields of knowledge in the USA: only in community colleges has the time commitment to teaching remained stable (Dill, D. D., Massy, W. F., Williams, P. R., & Cook, C. D. 1996. Accreditation and academic quality assurance -- Can we get there from here? Change: The Magazine of Higher Learning, 28(5): 17-24).

Even despite the sometimes undesired consequences, the respondents, on average, indicated strongly that accreditation was worth the effort (Roberts Jr. et al., 2004).

Beyond these direct impacts on the programme as a whole and on staff, the authors questioned whether the new, mission-driven criteria were really affecting the business schools. Whether this means that teaching staff (and business school leadership?) did not know about the new criteria, believed that a hidden agenda remained focused on research, or whether a prestige race among staff members operating through research and publication is a positive feedback loop that is almost impossible to stop once it has started, is not known.

In another article on AACSB's accreditation, Hedin et al. started from the fact that in business studies the AACSB aims to support quality improvement, but they found that the AACSB's accreditation criteria fell short of that aim, because many are not process-based (Hedin, Barnes, & Chen, 2005). There is no core curriculum for, or minimal level of provision of, for example, ethics education (Lowrie & Willmott, 2009). Lowrie & Willmott commented that the exclusion of issues of content from the pyramid-style, peer-administered architecture of the AACSB's mission-linked approach stems as much from its pursuit of expansionary ambitions as from its case for diversity, innovation and inclusion (Lowrie & Willmott, 2009). And indeed, the AACSB was becoming more inclusive as a result of its less research-focused, mission-based criteria, including more than before business schools without doctoral programs, with lower enrollments, and with more part-time faculty; the trend has also been to accredit schools with more women and minority students and lower GMAT scores (Jantzen, 2000). Nevertheless, on an international scale, Lowrie & Willmott saw AACSB promote the US model of business education, thus reducing diversity and strengthening elitism in the field.

In engineering, another major area of professional accreditation, ABET, the organisation for accreditation of engineering programmes, significantly changed its criteria (EC2000) because of dissatisfaction even within ABET with the situation existing in the 1990s (Prados, Peterson, & Lattuca, 2005):

Unfortunately, the ABET process of engineering accreditation had become an impediment to reform. As the number of accreditation visits multiplied and, in the litigious atmosphere of the late twentieth century, the prospect of legal challenges to unfavorable accreditation decisions increased, ABET review criteria became more quantitatively focused and less dependent on professional judgment. Despite its best intentions, the pre-1990 ABET could well be characterized as a protector of the status quo:

- Relations with engineering schools were adversarial and arm's-length.
- Evaluation criteria focused on measurable inputs, e.g., numbers of faculty members, curricular distribution (i.e., seat time in given subject areas), etc.
- Criteria were increasingly prescriptive, from less than one page of general criteria in 1959 to more than nineteen pages of smaller type in 1999.
- Program criteria provided additional prescription, with additional seat time requirements beyond the general criteria that specified almost the entire curriculum in a few disciplines, as well as degree requirements for faculty, etc.

The new criteria were more focused on continuous quality improvement, putting strong emphasis on defining program objectives (program differentiation rather than cookie-cutter uniformity) and learning outcomes (intellectual skills of graduates rather than subject-area seat time). The specification of curricular content was significantly reduced (Prados et al., 2005). In an evaluation after some years of experience, the key changes reported by programme chairs and teaching staff included (Lattuca, Terenzini, & Volkwein, 2006; emphasis added, DFW):[6]

- Increases in programmes' emphasis on professional competencies, such as communication, teamwork, use of modern engineering tools, technical writing, lifelong learning, and engineering design (confirmed by graduates, while no loss was reported in basic science and mathematics competencies)
- Increased use of active learning methods, such as group work, design projects, case studies, and application exercises
- EC2000 also requires that engineering programs assess student performance on the learning outcomes and use the findings for program improvement; the use of student assessment data, however, appears to lag behind support for assessment efforts (Prados et al., 2005)
- Overall, the support of teaching staff for the improvements was high
- Opinions on whether staff incentive schemes were changed towards more rewards for education were mixed

[6] Respondents to the questionnaires differed in attributing the changes to the new criteria; the issue of attribution vs. contribution was alluded to at the beginning of this report (§ 2.1).


Regarding student experiences, the researchers compared graduates from before and after the introduction of the new criteria and found the following significant differences (Lattuca et al., 2006):

- More active engagement in their own learning;
- More interaction with instructors;
- More instructor feedback on their work;
- More time spent studying abroad;
- More international travel;
- More involvement in engineering design competitions;
- More emphasis in their programs on openness to diverse ideas and people.

    2.2.4 Hong Kong

Hong Kong's higher education system has been the subject of elaborate external quality assurance since the 1990s, possibly due to the area's unique international position. Besides the accreditation of the non-university subsector and private postsecondary education by HKCAAVQ (previously HKCAA[7]), the university sector, under the auspices of the UGC, has gone through several quality assurance exercises (Massy, 2003; Massy, 2010; TLQPR Review Team, 1999). These were each designed with a somewhat different aim in mind, and all made use of lessons learnt from a wide range of international examples. In brief, the first Teaching and Learning Quality Process Review (TLQPR) series, until 1998, focused on universities having their internal quality assurance processes in place; the second one, in 2002-2003, on their being actually applied; while the third round of university audits, in 2009-2011, emphasises their effectiveness for improving student learning. Having taken part, in some way, in all three of these rounds of external quality assurance, my impression is that, starting from an in some cases already strong base (a legacy of the UK's CNAA influence on parts of the system until the early 1990s), Hong Kong's universities have established on the whole fairly strong internal quality assurance systems, and their application has become part of the organisational routines. The extent to which these measures have helped to improve student learning is a question that awaits an evaluation of the third round of audits.

[7] An interesting long-term impact of HKCAA's quality assurance procedures (and of its further predecessor, the British CNAA) was the promotion of several of its institutions to the university sector, e.g. Polytechnic University, Chinese University and Baptist University. Moreover, those institutions' well-formalised internal quality assurance arrangements can be traced back to the tutelage of CNAA and HKCAA.


    2.2.5 Other examples

The EUA, which established the first Europe-wide quality assurance process with its Institutional Evaluation Programme (Amaral, Rovio-Johansson, Rosa, & Westerheijden, 2008; van Vught, 1991; van Vught & Westerheijden, 1996), has undertaken several monitoring exercises to ascertain the impact of its IEP (partly internal and not published, but see also Hofmann, 2005; Tavares, Rosa, & Amaral, 2010). This process was a voluntary engagement by individual higher education institutions (unless national governments contracted the EUA to undertake national reviews, as in e.g. Ireland, Portugal and Slovakia), which emphasised its character of supportive peer review and concomitant improvement orientation. Follow-up was stimulated through several measures, such as follow-up visits by the external review team, alumni meetings to present institutions' use of the IEP, and other measures (e.g. I was invited to an institutional quality assurance committee for two years after the review). Tavares et al. (2010) summarised their findings as follows:

IEP evaluations generally give a precise account of problems faced by each university, identifying its strong and weak points, opportunities and threats, and presenting clear recommendations and suggestions for improvement. If properly discussed inside the university, these evaluations can form the basis for an improvement plan.

    That summary, with its conditional statement, echoes the Dutch findings detailed above: impact of external quality assurance mainly depends on internal follow-up, on decision-making within the higher education institution after the evaluation has taken place.

Recommendations have also been analysed in other countries: in Australian dental programme accreditation, the themes of recommendations proved to remain stable over a decade even in a changing environment; they mostly concerned staff, external relationships, funding, structure, documentation, curriculum, and communications (Arena, Kruger, & Tennant, 2007).

    Private and public higher education institutions respond differently to accreditation: it was found that market niche and ties to an accrediting organization affected the responsiveness of both types of organizations. However, technical factors (potential economic gains from accreditation) had a greater effect on the responsiveness of private organizations, and institutional factors (diffusion through both social cohesion and structural equivalence) had a greater effect on the responsiveness of public organizations (Casile & Davis-Blake, 2002).

A system of external quality assurance that had to deal with both public and private higher education institutions, namely Chile's, was seen to have affected the higher education system in several ways (Lemaitre, 2004). Some additions to what has been mentioned in this overview already:

- Fewer private higher education institutions were opened, due to increased quality requirements
- Resistance against external oversight was overcome as institutions learned what external quality assurance involved
- Higher education institutions much below the quality threshold were closed; others upgraded in several ways to meet threshold standards, which were however seen as not high: most of the mediocre ones survived, became autonomous and then were free to act as they wished, in many cases offering very low quality programmes
- Collection of information on student attrition etc. helped change the attitude to drop-out in higher education institutions and act on it

A study looking into the inner life of higher education institutions undergoing external reviews in Argentina summarised its findings as follows:

[U]niversities faced problems when they attempted to implement changes to adjust curricula to quality criteria due to individual and organisational resistance to change. The sources of resistance identified are structural inertia, resistance to resource [re-]allocation between teaching departments, lack of consensus and threats to expertise and teaching habits. However, as the accreditation process was mandatory and institutions responded to peer review, the accreditation process had a significant impact on programmes because it enabled universities to implement curricula[r] changes. (Coria, Deluca, & Martínez, 2010)

A survey among institutional representatives, students and other stakeholders on the diverse methods of external evaluation used simultaneously in Norway (where audit, evaluation and accreditation apply to different situations of disciplines or institutions) came to the conclusion that, whatever the method used, views on and impacts of the processes on the higher education institutions were broadly similar (Stensaker, Langfeldt, Harvey, Huisman, & Westerheijden, 2010): "There is a strong tendency among the respondents to perceive the aim of the process as one associated with control, independent of the formal purpose of the process." Few respondents (between 6% and 13%) perceived that any of the methods were mainly improvement-oriented. And: "most of the respondents perceive the impact of the process as moderately positive irrespective of the type of evaluation." Moreover, there was no correlation between respondents' perceptions of aim and impact. In the Scandinavian context, a bit of control is not all that bad. This might be explained, the authors reasoned, by the generally positive outcomes of even the most control-oriented evaluation method, institutional accreditation, which in Norway is an often successful, once-in-a-lifetime procedure to upgrade higher education colleges to university status. Nevertheless, the type of beneficial effects most frequently observed by respondents differed somewhat across the evaluation types (see Table 1): institutional accreditation clearly affected the reputation of the institution, while other evaluation types affected the education process and the quality work surrounding it (this includes some of the new routines and procedures) more directly. It is also interesting to note which types of effects were not mentioned that often: internal resource allocation, governance structures, involvement of staff and students in education matters, and the quality of the educational offer.

The Norwegian study's relevance is that it shows the importance of the policy context in which evaluation methods are operated and the rewards or incentives it implies for the persons involved.


Table 1 Most-mentioned effects of evaluation types in Norway

Evaluation type                                    Most-mentioned categories of effects (% of respondents)
Audit of internal quality assurance system         New routines/procedures (51%); quality assurance of education/teaching (43%)
Institutional accreditation                        Scholarly reputation of the institution (63%)
Study programme (re-)accreditation                 Quality assurance of education/teaching (40%)
National evaluation of quality [in a discipline]   New routines/procedures (46%); the scholarly discussion on learning and teaching (44%)

Source: Stensaker et al. (2010), summarised from table 5

    2.2.6 Good practices and sharing

Further details on quality practices in higher education institutions, learnt during external quality assurance processes, can be shared and disseminated through purpose-built publications and databases, for instance in the UK (QAA, 2008, 2010a), in Hong Kong (Editorial Committee, 2005), in Australia (www.auqa.edu.au/gp/search/index.php) and based on the EUA's Institutional Evaluation Programme (Hofmann, 2005). Some of these practices may have existed independently of external quality assurance; it is only claimed that they became public through the external reviewing, so there is no necessary causal link between their introduction and quality assurance. It was an effect of quality assurance that they were published, though.

The collections are large and wide-ranging, covering all aspects reviewed: quality management within study programmes and whole institutions, strategic management of higher education institutions (including, for instance, the rising popularity of benchmarking), as well as educational innovations in delivery and assessment.

An overarching outcome of the UK quality audits was that the higher education institutions audited in this period were working strenuously to meet the continuing demands of changes in many areas (QAA, 2008); we should not forget that quality assurance is only one among many factors influencing the institutions. Globalisation is another, and it figures in the next chapter of this report.

Quality assurance in several countries was found to affect the cooperation among staff members (stimulating team formation around teaching; sometimes stimulating cooperation, sometimes competition in research) and to increase the power of management in higher education institutions (Harvey, 2004; Westerheijden, 1997). The increased stress on staff members, for instance through the need to perform ever better in publications and citations, may have negative consequences: Educational quality is being threatened by a "hollowed collegiality" in which faculty members' pursuit of discretionary time and academic specialization and their assertions of academic autonomy undermine the campus-based quality assurance upon which the entire structure of voluntary self-regulation rests (Dill et al., 1996).

Reporting on a discussion among an international assembly of INQAAHE members, Harvey listed a number of impacts following the introduction of external quality assurance in higher education (Harvey, 2006):

    1. Compliance with recommended changes is visible in follow-up reviews;

2. Various performance indicators show improvements, although the group was adamant that any evaluation of the impact of external quality assurance should not be judged on the basis of available metrics, as they have considerable potential to distort reality;

    3. Internal quality assurance processes and units have been instituted;

    4. Student satisfaction surveys show increases;

    5. Graduates are more reflective and better attuned to the labour market;

6. Employers hold positive opinions of graduates, larger numbers notwithstanding.

    Regarding student learning, Harvey (2006) detailed a number of impacts:

    1. Institutions see students as their responsibility and attrition rates have been reduced;

    2. Curricula have been adjusted;

    3. Course evaluation has become a practice;

    4. Appeals and complaints procedures have been set up;

    5. Pedagogy has been reviewed in many study programmes;

    6. Standards of student achievement (knowledge, academic skills, work-related competencies) have been raised in many countries;

7. Graduate employment has risen.[8]

At the same time, the discussion showed that quality assurance agencies saw little impact of their evaluations on research. And although some thought that a cost-benefit analysis would show a very positive balance for quality assurance, staff and institutions complained about lack of time, much bureaucracy and managerialist control (Harvey, 2006).

[8] As an example of the growing interest in programme outcomes, Australia has developed a voluntary graduate skills test, but few graduates take it, and it is still a snapshot rather than a measure of progress (Massaro, V. 2010. Cui bono? The relevance and impact of quality assurance. Journal of Higher Education Policy and Management, 32(1): 17-26).


There is a fear in some places that the application of systems and models may lead to compliance but would stifle creativity, which is seen as an essential element of high-quality higher education; the EUA organised a project to see how higher education institutions under modern governance conditions could stimulate creativity (QAHECA consortium, 2009).


    3 An overview of literature on cross-border education

    This section gives an overview of findings in the literature about issues related to quality assurance of cross-border higher education.

    3.1 National regulation of recognition and quality assurance in cross-border higher education

A prominent theme in the literature on cross-border education is the acceptance or recognition of foreign providers of higher education in receiving countries. Often the receiving countries are Asian or African rather than European or North American countries. This literature has at least indirect bearing on quality assurance issues, in that a major theme is under which conditions and how foreign higher education providers can be accepted in the receiving country (McBurnie & Ziguras, 2001), and quality assurance procedures are given prominence in that discussion. A moot point is whether the sending country, the receiving country, or both should be responsible for quality assessment (van der Wende, 1999), and who has the capacity to do so (Altbach et al., 2006):

Many countries (lacking capacity or political will) do not have the regulatory systems to register or evaluate out-of-country providers. Regulatory frameworks for quality assurance or accreditation, even when they exist, usually do not apply to providers outside the national education system. This loophole permits both bona fide and rogue foreign providers to avoid compliance with national regulations in many countries and makes monitoring their activities difficult. [...]

Historically, national quality-assurance agencies did not assess the quality of imported and exported programs, with notable exceptions. But Hong Kong, Israel, Malaysia, and South Africa, as receivers of cross-border education, have developed systems to register and monitor the quality of foreign provision.

Among Asia-Pacific countries, by 2004 a minority had regulated the import or export of higher education; only Australia, Malaysia and New Zealand had regulated both (Stella, 2006). Also, a recent master's thesis, following UNESCO's remark that "the diversity and unevenness of the quality assurance and accreditation systems at the national level create gaps in the quality assurance of cross-border higher education" (UNESCO, 2005), holds that cross-border higher education has long remained a "no man's land" in terms of formal quality assurance (Karapurath Jayaprakash, 2010); this might apply with most force to providers who are not part of the regulated (public?) higher education sector in their home country (Altbach et al., 2006), which points to private for-profit providers as the most risk-prone sector. Besides, even if cross-border education is included in national regulations and quality assurance systems, these systems differ from one country to the other (UNESCO, 2005).

The same thesis (Karapurath Jayaprakash, 2010) also addresses the issue that governments sometimes overreact to the "no man's land" by issuing very strict policies. Thus South Africa curbs the number of cross-border provisions in its country "due to strict new government regulations and accreditation processes" (Altbach et al., 2006). Some of the policies to protect against low-quality provision (better safe than sorry?) may not be making full use of available knowledge in the international higher education research/quality assurance networks. For instance, recent legislation in India encourages cross-border higher education provision from higher education institutions figuring in the top of global rankings while discouraging other providers (Karapurath Jayaprakash, 2010). The hypothesis of India's policy theory (top rank in global rankings → high-quality education provision) appears at least debatable in the light of critiques of global rankings' methodologies (van Vught & Westerheijden, forthcoming) and of the underperformance in undergraduate education noted in some top producers of research (Bok, 2005). In contrast, Hong Kong's approach has been presented as proceeding from the simple rule "as good as": "That is, the transnational education delivered locally must be of the equivalent, but not same, standard as the home course" (Cheung, 2006).

The "no man's land" metaphor can also be understood in another way: some degree mills seem to play smart games of providing degrees to students in country A from a base in country B, claiming to be recognised or accredited in country C, while country C gives recognition based on the fact that the provider is allowed to operate in country B, which does not have regulation about organisations that do not provide education in country B. Loopholes invite innovation, it seems, and institutions operating in no man's lands. Policy initiatives (CHEA & UNESCO, 2009) and web sites listing rogue providers, maintained by bona fide quality assessment agencies and the like (e.g. www.chea.org/degreemills/default.htm), have emerged to help learners protect themselves against such cases; here too, the free market seems to be filling gaps left by official agencies (e.g. who maintains www.degree.net?). "Mechanisms to monitor diploma mills and accreditation mills need strengthening" (Stella, 2006).

To address the issue of different demands and pressures on higher education institutions engaged in cross-border higher education, especially institutions operating franchised study programmes, it has been suggested that "overcoming these challenges requires more and better communication among the main stakeholders in the quality assurance of transnational education" (Lim, 2010). While acknowledging the importance of transparent information, I am not so sure that communication is a feasible solution: the number of parties to communicate with grows as the number of cases increases, and they differ from case to case.

    3.2 Guidelines regarding cross-border higher education

The alternative to national-level policy borrowing regarding quality assurance is internationalisation of quality assurance itself:

The accreditation process is becoming internationalized and commercialized. Bona fide national and international accreditation agencies now work in many countries. For instance, U.S. national and regional accreditors provide or sell their services in more than 65 countries. Accreditation bodies of the professions, such as ABET (engineering) from the U.S. and EQUIS (business) from Europe, also offer their services abroad. (Altbach et al., 2006)


Accreditation or other forms of recognition of the providers of higher education would seem to be most appropriate for sending countries – and in the interest of the providers themselves.

    In the introductory section of this paper, two sets of guidelines were mentioned concerning quality of provision of cross-border higher education (UNESCO, 2005), and external quality assurance world-wide (INQAAHE, 2007), respectively. Both would help national authorities to gear up their national quality assurance systems, and both could give guidance in including aspects of cross-border higher education.

UNESCO's Guidelines recommend that governments organise quality assurance (or at least registration) on both the sending and receiving sides, with good information to the public, and linking with the regional conventions on recognition of qualifications. They hold open the option of bilateral recognition agreements – useful in the case of large and regular streams of students or credits between two countries.

Higher education institutions are first of all, in UNESCO's eyes, to:

Ensure that the programmes they deliver across borders and in their home country are of comparable quality and that they also take into account the cultural and linguistic sensitivities of the receiving country. It is desirable that a commitment to this effect should be made public;

This guideline immediately shows the dilemma between comparable quality on the one hand and sensitivity to local sensitivities on the other. It might have been more pertinent to talk about "local relevance" rather than "sensitivities", for the issue has nothing to do with cultural habits, let alone political taboos, but rather with graduates being able to use the knowledge and skills acquired in the local economy and society.

Also, higher education institutions ought to have internal quality management in place, to respect the local quality assurance arrangements in the receiving country, to support recognition of their qualifications, and to provide clear information about all of that (UNESCO, 2005).

For the quality assurance agencies, UNESCO recommends including cross-border higher education in their evaluation procedures (focusing on consistency and appropriateness of student assessment guidelines, standards and procedures), linking between sending and receiving countries, and informing the public about all of this. International reviewers on panels, international benchmarking of standards and procedures, as well as joint assessment projects were recommended as well. The INQAAHE Guidelines have little to add to UNESCO's, as they only demand that a quality assurance agency "have policies relating to both imported and exported higher education. These policies may be the same as those for domestic providers and domestic provision" (INQAAHE, 2007). Other sets of standards for quality assurance agencies do not seem to address cross-border higher education to any significant extent either (Aelterman, 2006). However, the APQN, in cooperation with UNESCO, developed a toolkit for quality assessment agencies for this very purpose, to complement the UNESCO-OECD Guidelines (UNESCO & APQN, 2007).

Engineering is an area with special regulation regarding recognition of degrees, in that 13 signatories have agreed in the Washington Accord to recognise "the substantial equivalence of such programs in satisfying the academic requirements for the practice of engineering at the professional level" (www.washingtonaccord.org) once these programmes have been accredited within their respective national or territorial boundaries. Accordingly, the Washington Accord does not apply to cross-border higher education directly, although the documents contain principles for quality assurance agencies in the area working internationally (cf. also Prados et al., 2005).

In a second form of internationalisation, public-policy-led internationalisation of quality assurance is being experimented with in Europe, through the establishment of the European Standards and Guidelines for quality assurance in higher education (ESG) in the framework of the Bologna Process (European Association for Quality Assurance in Higher Education, 2009) and, based on them, a European register (EQAR) for bona fide quality assurance agencies (Westerheijden et al., 2010). The register explicitly includes registration possibilities for quality assessment agencies from outside the EHEA (European Association for Quality Assurance in Higher Education, 2009). Yet attention specifically to cross-border higher education is not a feature of the demands on quality policies that higher education institutions should fulfil (Part I of the ESG), beyond the basic principle that institutions should be able to demonstrate their quality at home and internationally (European Association for Quality Assurance in Higher Education, 2009).

The regulatory gap left by the official quality assessment agencies has been filled by other groups, in rapid attempts to build international credibility in the fast-growing and, for some, lucrative market of cross-border higher education:

But new, self-appointed networks of institutions and organizations also accredit their members – a positive development when academic quality improves. But some of these networks and organizations may not offer objective assessments and may be more interested in racing for accreditation stars than in improving quality. A related, more worrisome development: the growth of non-recognized, illegitimate accreditation mills that sell accreditation without any independent assessment. (Altbach et al., 2006)

While their relationship with quality is questionable, global rankings also play a role in establishing international credibility, to the point that, for example, QS now commercially licenses higher education institutions to use a "stars" rating, of the kind predicted some years before by Altbach & Knight.

    3.3 Higher education institutions in cross-border higher education

Although regulations and guidelines emphasise the nation-state level and other public authorities, the actual units engaging in cross-border higher education are higher education institutions and their basic units (van Vught et al., 2002). Their engagement in cross-border higher education may proceed from different motivations, ranging from traditional academic (or religious), broadly altruistic motives (Cheung, 2006) to income generation. The economic rationale is gaining ground (van Vught et al., 2002), leading also to increasing fear of – and actual numbers of – rogue higher education providers (Stella, 2006). Further distinctions, especially among private higher education provision, may be necessary along modes of operation: institutions operating for-profit or philanthropically (distinctions building on Geiger, 1985) may develop different motives, leading to different operations of cross-border higher education and possibly to different approaches to quality of education. It has to be kept in mind that private provision of cross-border higher education may come from institutions that operate as public entities in their home country: the strict separation of public provision and for-profit activities has been fading due to the growing need for public higher education institutions to find part of their income outside traditional governmental grants.

Empirical case studies of how higher education institutions were affected in their cross-border higher education activities by reviews by quality assessment agencies are rare. The largest body of knowledge that we can build on, it seems, consists of the experiences gained – and probably only partially published in review reports – by quality assessment agencies that monitored cross-border provision regularly; cases in point include, among other agencies, the UK's QAA, Hong Kong's UGC and HKCAAVQ, South Africa's HEQC, and Australia's AUQA. In professional fields, AACSB, EQUIS and ABET come to mind.

Institutions active in cross-border higher education in Malaysia and Singapore adopt certifications for their quality management, such as ISO 9000, and are increasingly scrutinised by the sending countries' quality assessment agencies, e.g. AUQA (Lim, 2010). Nevertheless, the image of private providers in Malaysia and Singapore is not positive, as Lim (2010) summarised from previous publications:

the general perception in Malaysia and Singapore appears to be that private tertiary education providers are incapable of self-discipline, unlike self-accredited universities. The intense competition developed over the recent decades in these two countries has seemingly proven this assumption correct, manifested by the increasing number of complaints from students and other stakeholders regarding the quality of imported degree programmes as well as the deteriorating quality of private higher education providers… Many complaints have related to the overselling of programmes and schools; the use of soft marking; the provision of poor school facilities; and the use of unethical business practices, particularly those that appear to collect school fees without providing students with any value in return…

More stringent quality assurance processes were then introduced in both countries. Commenting on the combined complex of receiving countries' quality assessment agencies, those of the sending countries, and professional external quality assurance, however, Lim (2010) observed:

Unfortunately, the sum of the parts is less than the whole. First, none of the quality assurance systems discussed here has mechanisms for examining teacher training or monitoring teaching effectiveness. Although all the systems call for student feedback and evaluation and place great importance on these forms of assessment, the data collected thereof are not applied in a manner that would allow lecturers to reflect on their teaching techniques. […]

Second, the different perceptions of quality in transnational education have been translated into different standards for private higher education providers. For example, while an Australian partnering university may approve the employment of a lecturer to teach within a transnational degree in Malaysia, LAN [Malaysia's quality assessment agency at that moment – DFW] might disapprove his or her employment. […]

Singapore private higher education providers often find themselves coping with the many standards of many different schemes, as well as the different standards adopted by their partnering universities… As compliance with these requirements is often mandatory, the interviewees expressed feeling a lack of control in managing quality…

On the whole, Lim's conclusion was that "[a]lthough the current mechanisms have successfully deterred the entrance of dubious operators into the private education market, the interviewees reported that these mechanisms have also unfairly penalised legitimate operators by forcing them to invest extensive resources that could better be diverted elsewhere in complying with varied standards" (Lim, 2010).

The shop-floor-level measures to assure equivalence of the education quality provided focused on student assessment. However, "[a]lthough interviewees from both Malaysia and Singapore private higher education providers cited standardisation of the setting and marking of examination papers as the most common control measure, they described being granted differing amounts of authority by their partnering universities" (Lim, 2010). Indeed, "in practice, geographical distance and contextual constraints limit a university's ability to monitor and review all aspects of delivery, leaving private higher education providers to define quality and set their own standards in its assurance" (Lim, 2010).

Tight control of materials and pedagogy delivered in teaching was piloted in another case, involving an Omani higher education institution adopting (and adapting) a study programme from New Zealand (O'Rourke & Al Bulushi, 2010). While this may have been helpful to kick-start a new programme, local teaching staff raised the issue of academic freedom, possibly also expressing in this way that there was at first little room for local and cultural contextualisation (O'Rourke et al., 2010). However, in another Arab country local adaptations were not wanted: Dubai established an innovative quality assurance system focused on "validating the equivalency of institutions and programmes in Dubai with their place-of-origin, rather than on assessing these programmes against local standards. The fundamental premise is that the foreign institutions and programmes were brought to Dubai precisely because they are valued in their original form" (Rawazik & Carroll, 2009).


    4 Taking Stock: Priorities for Research

    4.1 Introduction

This section summarises and combines the previous two chapters and focuses on issues that need to be included in empirical research. This overview has made clear that the literatures on impacts of quality assurance and on cross-border higher education are indeed two separate bodies of literature, without overlap. Empirical research on the impacts of quality assurance on cross-border higher education is therefore the only way to learn more about it. While some general conclusions are given below regarding areas of impact to be studied, it has of course to be borne in mind at all times that quality assurance operates within a certain social and temporal context, and may thus have different functions and impacts even though the instrument may look the same (Välimaa & Mollis, 2004); in the causal model, contextual variables will have to be taken into account.

It appears that the distinction that is almost classical in the quality assurance literature, i.e. that accreditation goes with accountability while quality audits go with improvement, is not so clear-cut in reality: audits might have no impact on the quality culture whatsoever, and accreditations may fuel improvements (Danø & Stensaker, 2007). External quality assurance may become more focused on impact on the education process – and thus more closely related to quality improvement or enhancement – in two ways. First, through the use of criteria that can only be fulfilled by establishing a strong internal quality improvement process (e.g. criteria associated with proven increases in learning outcomes, or proof of effective internal quality management), and/or through inciting internal decision-making processes that lead to (an increased focus on) quality improvement. Second, it should be remembered that the formal external quality assurance event, usually including a site visit, is not the only pressure that external quality assurance agencies exert on higher education providers: in advance, there is the quality assurance agency's threat to visit and assess the institution and/or its programmes, which may spur institutions to start quality management and improvement activities; after the culmination of the visit and judgement/accreditation, there is in several cases a formal follow-up procedure to maintain the momentum. Moreover, there is the sheer presence of a quality assurance agency, which may be made more effective by conscious activities to support the professionalisation of higher education institutions' quality management.

    4.2 Catalogue of variables

The issues studied regarding impacts of quality assurance, and where applicable the findings of those studies, suggest a catalogue of possible impacts on cross-border higher education providers. These will be helpful in defining the dependent variables of an empirical study. The literature on cross-border higher education may lead to the definition of some independent, intermediary and contextual variables (see Figure 2); a sketch of how this catalogue might be operationalised as a case-coding template follows at the end of this section.


    Figure 2 General scheme of causal model

    4.2.1 Independent variables: characteristics of the quality assessment agency and its procedures

    Does the quality assessment agency represent the sending or the receiving country, or is it international?

Does its remit normally include cross-border higher education?

Is the cross-border higher education provider engaging in the external quality assurance process voluntarily, or is it required to do so by an authority in the receiving or sending country?

    What are the consequences of a positive evaluation in the receiving country? E.g. does the quality assessment agency offer accreditation or similar kite marks associated with official recognition in the receiving country?

    If the quality assessment agency is not located in the receiving country: What are the consequences of a positive evaluation in the sending country of the cross-border higher education provider or in the quality assessment agencys home country?

Clarity of quality assurance's goals and requirements in the eyes of the evaluated actors

    Does the cross-border higher education provider experience contradictory messages from quality assessment agencies with which it works (e.g. different requirements in sending and receiving countries)?

    Does the quality assurance process include follow-up activities in order to keep the provider focused on quality improvement?

    To what extent can the quality assurance process be characterised as a peer review process?

    4.2.2 Intermediary variables I: characteristics of the cross-border higher education provider

If the quality assurance procedure concerns the cross-border higher education provider or its establishment in the receiving country, the provider's characteristics may play a role, moderating the relationship between the quality assurance process and the impacts of quality assurance.


    Is the provider an established higher education institution in its home country, or is it a new type of provider?

    Is it operating as a public, private non-profit or private for-profit provider (at the location of the cross-border higher education, whatever its official status in the sending country)?

    Is it offering higher education courses leading to degrees of the sending or the receiving country?

Is it operating on its own, through cooperation with a higher education institution in the receiving country, or with another type of local partner (e.g. a non-higher-education franchiser)?

    4.2.3 Intermediary variables II: characteristics of the study programme

    If the quality assurance process involves judgements of study programmes instead of, or next to, evaluations of the cross-border higher education provider, the following intermediary variables must be taken into account.

Is the study programme of a professional nature? If so, is it a controlled profession in the receiving and/or sending country? Which non-standard quality assurance requirements apply to such study programmes?

    4.2.4 Dependent variables: impacts of quality assurance

Whether directed at a single study programme or at the cross-border higher education provider as a whole, the following impacts may occur (with different emphases for programme and institutional assessment procedures). Their presence should therefore be checked in all cases.

Institutional decision-making: does the external evaluation lead the institution to consider consequences for decision-making structures and processes, e.g.:

    Follow-up of recommendations and other statements in external quality assurance reports (active use, short term and/or long term)?

    Management structures

    Increased power of managers over the education processes and content, and over teaching staff?

    Tighter process control

    Increase of internal use of performance indicators

Does the provider's leadership view the external quality assurance as helpful to the institution, or as something they must comply with but that is not helpful ("gamesmanship" needed)?


    Representation of teaching staff and/or students/learners on democratic bodies at school/faculty or institutional level

Reconsideration of the allocation of funding and other resources

    Reorganisation or closure of weakly performing programmes or units

    Establishment and professionalization of internal quality management

    Evaluation procedures for study programmes or institutional units

    Increased involvement of learners/students in evaluation

    Increased involvement of teaching staff in evaluation and follow-up

    Establishment/upgrading of quality management support offices

    Aspects of costs

    Is more money being spent on internal quality management/quality assurance?

    Does external quality assurance require (paper-)work that was not done for other (quality assurance) purposes?

    Impacts on teaching staff

    Are staff members cooperating more than before on curriculum design and delivery?

    Are they spending a larger proportion of their time than before on education rather than on research?

    Are they spending a larger proportion of their time than before on quality assurance and associated bureaucracy rather than on education and research?

    Have incentive schemes changed to give more prominence to good education (and how is that conceived in the practice of shop-floor level personnel (HR) management: just positive evaluation results on course unit evaluation questionnaires, or more complex understandings)?

    Do they, on the whole, support the quality assurance process or do they see it as a burden and/or as an incursion into their academic freedom?

    Impacts on education/curriculum

    Raising of educational standards

Curriculum change towards more consideration of learning outcomes, (professional) competencies or graduates' qualifications


More focus on appropriateness of student assessment to the curriculum's design

    Do students report improved study experience (inside/outside curriculum)?

    Outcome indicators

    Improved retention rates

    Improved time to degree

    Improved graduate employment (quantitatively and qualitatively)

    Improved employer satisfaction with graduates

    External effects

    Review by provider of its public information to make it more objective or truthful

    More information available on quality assurance arrangements

    Increased reputation or prestige of the institution or study programme

    Do national and/or international rankings play a role for the cross-border higher education provider in this respect?

    4.2.5 Contextual variables

    Does the receiving country have an established quality assurance system for its regular higher education?

    Is the type of steering of higher education in the receiving country usually of a more stringent control type or of a self-regulation type?

    Does the receiving country welcome cross-border higher education or is it rather trying to restrict it?

Does the receiving country apply policy rules like "as good as" (p. 23)?

    Does the receiving country put stringent requirements on newly establishing foreign cross-border higher education providers, e.g. financial conditions?
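Purely as an illustration, and not as part of the proposed design, the catalogue above could be turned into a structured coding template for case-study fieldwork, following the causal model of Figure 2. The minimal sketch below is in Python; all class and field names are hypothetical, invented for this example, and any actual operationalisation would follow the final research protocol.

    # Minimal sketch of a case-coding template for the causal model of Figure 2.
    # All names are illustrative, not prescriptive.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AgencyCharacteristics:       # independent variables (section 4.2.1)
        represents: str                # "sending", "receiving" or "international"
        remit_includes_cbhe: bool      # does its remit normally cover CBHE?
        participation_voluntary: bool
        has_follow_up: bool

    @dataclass
    class ProviderCharacteristics:     # intermediary variables I (section 4.2.2)
        established_at_home: bool
        status: str                    # "public", "private non-profit", "private for-profit"
        degree_of: str                 # degrees of "sending" or "receiving" country
        local_partner: Optional[str] = None

    @dataclass
    class ObservedImpacts:             # dependent variables (section 4.2.4)
        decision_making: List[str] = field(default_factory=list)
        internal_quality_management: List[str] = field(default_factory=list)
        teaching_staff: List[str] = field(default_factory=list)
        curriculum: List[str] = field(default_factory=list)
        outcome_indicators: List[str] = field(default_factory=list)
        external_effects: List[str] = field(default_factory=list)

    @dataclass
    class Case:
        agency: AgencyCharacteristics
        provider: ProviderCharacteristics
        impacts: ObservedImpacts
        receiving_country_has_qa_system: bool  # contextual variable (section 4.2.5)

Such a template would mainly serve to keep the coding of qualitative case material comparable across cases; it does not replace the in-depth, contextual interpretation that the case study approach requires.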


    5 Proposed research design

In this final section, several alternatives for an empirical study will be presented. First, following up on INQAAHE's first ideas for such a study, a case study approach will be detailed, which serves two purposes: one is to develop the arguments for selecting cases, the other is to show how extensive and expensive a full-fledged case study approach would be. Next, an alternative design will be proposed, which may make compromises regarding the depth of the study but which may be feasible.

    5.1 Case study proposal

    5.1.1 Selection of cases

I agree with INQAAHE's assumption that a case study logic should be followed: in-depth study of cases is needed to assess use or impact and to take the different circumstances of cases into account, which rules out more superficial research methods focusing on verbal behaviour, such as surveys. The disadvantage of case study research is obviously that large numbers of cases cannot be included, which leaves us without the main strength of large-N studies, i.e. statistical testing of correlations and disturbance factors. The small number of cases also rules out random sampling. How, then, should cases be selected?

For a good test of empirical assertions in case study research designs, cases should be spread along the independent variables. Two main independent variables come out of the discussion, simplifying the questions of providers and of modes of cross-border education to dichotomies:

Status of the higher education provider (public vs. private)

Mode of education provision (face to face vs. distance/online)

These variables make for a space of cross-border higher education provision to be studied. We have four types of provision, as in Table 2 below.

Table 2 Types of cross-border education providers

                        Public    Private
    Face to face          A         B
    Distance/online       C         D


For further delimitation of the selection, a convincing argument is to look for "most likely" and/or "least likely" cases (Eckstein, 1975) or, in more recent terms, to work with risk profiles of institutions: which type of higher education institution is least likely to show impact of quality assurance? Those considerations would lead to preferring cases of types A and D, leaving out the hybrid types B and C. A complication arising from the risk-profile considerations is that the most at-risk institutions (type D) may be the least willing to cooperate in an empirical study.
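A small sketch may make this selection logic concrete; it is illustrative only (the labels A–D are those of Table 2, and the final filtering step expresses the preference for the "pure" types argued above):

    # Enumerate the provider-type space of Table 2 and keep the extreme
    # (most-likely / least-likely) types A and D.
    from itertools import product

    modes = ["face to face", "distance/online"]
    statuses = ["public", "private"]
    labels = dict(zip(product(modes, statuses), "ABCD"))  # A..D as in Table 2

    for (mode, status), label in labels.items():
        print(f"{label}: {status}, {mode}")

    selected = [lab for lab in labels.values() if lab in ("A", "D")]
    # -> ["A", "D"]: public face-to-face and private distance/online providers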

The next step is to consider, at an abstract level, the differences in quality assurance regimes that apply to the provision of cross-border higher education:

Coverage of the provision in a national quality assurance system of the sending country

Coverage of the provision in a national quality assurance system of the host country

    This again leads to a 2x2 matrix (Table