
The International Education Journal: Comparative Perspectives Vol. 16, No. 4, 2017, pp. 29-46 https://openjournals.library.sydney.edu.au/index.php/IEJ


What contributes more to the ranking of higher education institutions? A comparison of three world university rankings

Ya-Wen Hou
National Taichung University of Education, Taiwan: [email protected]

W James Jacob
University of Memphis, US: [email protected]

Recently, world university rankings have drawn the attention of many universities; such rankings reflect international competition among universities and represent their relative status. This study does not set out to reject global university rankings but calls for an examination of how their indicators affect the final ranking of universities. Using regression analysis, this study investigates the contribution of each indicator to the ranking of universities in three world university ranking systems: the Academic Ranking of World Universities (ARWU), the Times Higher Education World University Rankings (THE), and the QS World University Rankings. Results show that in the ARWU system, three indicators predicted the ranking of universities: staff winning Nobel Prizes and Fields Medals, papers published in Nature and Science, and papers indexed in Science Citation Index and Social Science Citation Index journals. For the QS and THE systems, the more powerful contributors to the ranking of universities were expert-based reputation indicators.

Keywords: world university rankings; ranking indicators; indicator contribution; ranking of universities; university position

INTRODUCTION

Driven by globalization and massification in higher education (Altbach, 2012), university rankings and league tables are having an ever greater impact on higher education institutions (HEIs). Like the pursuit of accountability and objective evaluation, university rankings are now ubiquitous (Wildavsky, 2010). Both national and international university rankings are growing explosively and becoming more specialized by, for example, focusing on research performance or institutional reputation (see Rauhvargers, 2011; Shin & Toutkoushian, 2011). In particular, world university rankings, which are our concern in this paper, are considered by many to be a means of representing academic excellence and increasing the prominence of HEIs in both local and global contexts.


Improving global rankings [1] in league tables is a priority goal for many universities. World university rankings serve as a reference point for students' choices of universities and for scholar mobility across the globe; they guide public policies, inform decision-making by funding agencies and university leaders, and even play a role in positioning and measuring the performance of higher education institutions in domestic and global contexts (Altbach, 2006, 2012; Bastedo & Bowman, 2011; Hazelkorn, 2009, 2014; Huisman & Currie, 2004; Marginson & van der Wende, 2007; Salmi & Saroyan, 2007; Williams, 2008). In light of the positive relationship between Web links and ranking orders found by Lee and Park (2012), universities themselves endeavour to participate in global ranking activities and pursue higher ranks to obtain greater visibility and resources from multiple stakeholders (Hazelkorn, 2014). Thus, global rankings are often regarded as "a mechanism of agenda setting" with soft power (Lo, 2011, p. 216) and "an integral part of [the] status culture" of higher education competition (Marginson, 2014, p. 45). The higher the ranking, the more visibility and opportunities HEIs generally gain within their respective countries and across the globe.

[1] In this article, the term global rankings refers to world university rankings produced on an annual basis by several leading ranking systems.

World university rankings are an attractive and often competitive measurement of institutional performance by bibliometric methods (van Raan, 2005). To some extent, global rankings value stakeholder choices and investments, set institutional benchmarks, reorganize higher education institutions that work ineffectively, determine institutional priorities, and boost faculty members' professional academic reputations (Hazelkorn, 2009; Shin & Toutkoushian, 2011). An empirical study by Bastedo and Bowman (2011) links college rankings with an institution's ability to gain greater financial resources. In terms of student recruitment, global rankings play an important role in student preferences and choice. A report by the QS Intelligence Unit (2015) notes that over 70% of surveyed students consider global rankings more important in their university selection process than national or regional rankings. Rankings are thus a crucial factor in institutional selection for many students (Roberts & Thomson, 2007), because students tend to associate higher-ranked institutions with better reputations and academic excellence.

World university rankings also influence the strategic direction and decisions of senior higher education administrators, including how they interact with leaders of other HEIs (Hazelkorn, 2009). Higher-ranked universities are like institutional sponges that generally have greater opportunities to gain sustained public funding and private investments. Institutional reputation linked to global rankings also makes it easier for top-ranked HEIs to attract scholars and students from domestic and international locations. World university rankings serve as an important underpinning of institutional reputation (Bowman & Bastedo, 2011) and provide greater perceived "credibility" (Vieira & Lima, 2015, p. 63). Rankings also have a non-negligible effect on graduates' wages (Carroll, 2014). Many HEIs strive to align strategic plans and institutional performance with the criteria of world university rankings to solidify and boost their position among the top institutions.

However, world university rankings have raised controversy, including their neglect of audiences' needs, the preference for English-language publications as a key indicator, an overemphasis on the fields of science and medicine, the subjectivity of survey indicators, arbitrary weighting, the variability of ranking results, and the gap between ordinal/numeric representation and the actual quality of university education (Bastedo & Bowman, 2011; Dill & Soo, 2005; Fidler & Parsons, 2008; Frey & Rost, 2010; Marginson, 2014; Proulx, 2007; Saisana, d'Hombres, & Saltelli, 2011; Taylor & Braddock, 2007; Tofallis, 2012; van Vught & Westerheijden, 2010; Williams, 2008). Furthermore, overemphasizing world university rankings is a risky venture; for instance, rather than basing their decision on which institution to attend on outstanding academic performance, students often make their choice on institutional reputation (Taylor & Braddock, 2007).

Another common critique of world university ranking indicators is the preference for research publications from the Science Citation Index (SCI) and the Social Science Citation Index (SSCI). SCI and SSCI are widely recognized in academic circles for defining success, boosting institutional reputation, and justifying university rankings. But overvaluing the SCI and SSCI indicator may give rise to what Su (2014, p. 51) calls an "I-idolization," or an overemphasis on the leading publication indices. Other scholars list several related shortcomings concerning academic recognition, the marginalization of the humanities and social sciences, and institutional image (Deem, Mok, & Lucas, 2008; Delgado & Weidman, 2012).

There is also an invisible pressure in the pursuit of greater institutional reputation that often intensifies the competition between HEIs and between countries. Moreover, Proulx (2007) argued that ranking results based primarily on SCI and SSCI research outputs are likely to persuade many leaders of ranked universities to over-emphasize greater research publication output rather than to focus on developing relevant strategies to become world-class universities. Thus, world university rankings often carry an inherent risk of competition that ultimately excludes many flagship universities (Amsler & Bolsmann, 2012; Douglas, 2016); HEIs compete internationally for human and financial resources, and their competitive institutional behaviours are simultaneously reinforced by the global ranking results (van Vught & Westerheijden, 2010).

Although many studies have documented various issues surrounding global university rankings, few studies have demonstrated the relationship between the indicators used and the ranking of universities in a particular ranking system; that is, which indicators have a greater impact on determining the ranking of universities. Understanding the contribution of the indicators of global rankings is fundamental to understanding the role of global rankings and their methodologies, as well as HEIs' strategies for pursuing global rankings.

This paper reports on a study investigating indicator contributions to the ranking of institutions in three of the most prominent world university ranking systems: the Academic Ranking of World Universities (hereafter ARWU), developed by Shanghai Jiao Tong University in China; the Times Higher Education World University Rankings (hereafter THE), developed by Times Higher Education in partnership with Thomson Reuters; and Quacquarelli Symonds' World University Rankings (hereafter QS). In other words, our study sought to explore whether the effective weights of the indicators in these three global ranking systems differ from the assigned weights shown in their methodologies, and whether some indicators matter more than others. We first describe the characteristics and methodologies of the three ranking systems and common criticisms of their indicators. We then analyse the indicators' contribution to the ranking of HEIs to better understand the implications global university rankings have in practice.

Page 4: What contributes more to the ranking of higher education ...next year, Times Higher Education and the Quacquarelli Symonds Company co-published their own ranking systems, which is

Hou & Jacob

32

THREE WORLD UNIVERSITY RANKING SYSTEMS: FEATURES AND CRITICISM

The ARWU, THE, and QS rankings are the "big three," according to Hazelkorn (2014, p. 17), being among those most frequently used by scholars, administrators, policy makers, and students. The first global ranking system, the ARWU, was developed in 2003. The next year, Times Higher Education and the Quacquarelli Symonds company co-published a ranking of their own, usually referred to as THE-QS (Liu & Cheng, 2011). However, in 2010, THE and QS ended their collaboration and separated into two distinct ranking systems.

Features of the selected ranking systems

Table 1 shows the background of these three systems.

Table 1: Main characteristics of the three university ranking systems (ARWU / QS / THE)

Issued by: Academic institution (Shanghai Jiao Tong University) / Media (Quacquarelli Symonds) / Media (Thomson Reuters)
Years: 11 (since 2003) / Since 2004, co-published with THE; independent since 2010 / Since 2004, co-published with QS; own methodology since 2010
Specific target audience: No / No / No
Criteria/dimensions: 4 / 0 / 5
Number of indicators: 6 / 6 / 13
Conducts reputation survey: No / Yes / Yes
Data sources: Thomson Reuters' Web of Science database, resources of national agencies / Scopus database, university portfolios, survey / Thomson Reuters' Web of Science database, university portfolios, survey
Results published on the web: Yes / Yes / Yes
Ordinal results: Top 500 (single ranks to 100, then groups) / Top 700 (single ranks to 400, then groups 401–410, 411–420, 421–430, 431–440, 441–450, 451–460, 461–470, 471–480, 481–490, 491–500, 501–550, 551–600, 601–650, 651–700) / Top 400 (single ranks to 200, then groups 201–225, 226–250, 251–275, 276–300, 301–350, 351–400)

Source: Authors.

ARWU is created by an academic institution (Shanghai Jiao Tong University), while the other two are developed by the mass media. All the selected ranking systems focus on the evaluation of research-led universities worldwide, even though their methodologies differ.


Rather than specific target groups, the intended audience is all individuals and stakeholders engaged in higher education. These global rankings are likely to influence and drive the perceptions and behaviours of individuals and organizations, such as students, parents, faculty and staff members, public authorities, employers, and community members (Thakur, 2007; van Vught & Westerheijden, 2010).

Apart from surveys, resources from national agencies, and university profiles, all the selected ranking systems use databases to analyse research publications and citations through bibliometric methods. QS uses the Scopus database, and the other two collect information from Thomson Reuters' Web of Science database. The ARWU system also collects data from select websites (e.g., for the SCI, SSCI, Nobel laureates, and Fields Medals), and THE and QS additionally conduct reputation surveys.

The three ranking systems commonly publish their results online as ordinal lists. ARWU publishes a list of the top-ranked 500 institutions, in which universities are ranked from one to 100 and then grouped as 101–150, 151–200, 201–300, 301–400, and 401–500. The QS system uses the methodological framework that served as the original version of the THE-QS rankings (Quacquarelli Symonds, 2014) and publishes a list of the top 700 universities online, of which the top 400 institutions are singly ranked and the latter 300 are grouped. THE releases an online league table of the top 400 universities, which are singly ranked up to 200 and then grouped as 201–225, 226–250, 251–275, 276–300, 301–350, and 351–400. The ARWU system has better ordinal proportionality than the QS and THE systems (Marginson, 2014) because of its fixed proportion of ordinal results.

Evaluative standards of the selected ranking systems

Each system uses its own standards for evaluation and weighting. In 2010, THE ended its collaboration with QS and developed a new methodology with a different partner, Thomson Reuters. Instead of the old six indicators that QS still uses, the new THE methodology consists of 13 indicators ranging from teaching and research to knowledge transfer (Thomson Reuters, 2010). Table 2 illustrates the indicators and their assigned weights in these ranking systems.

The ARWU ranking system includes six indicators across four dimensions (Shanghai Jiao Tong University, 2014). First, the dimension of education quality is determined by one indicator, the number of alumni who have won Nobel Prizes and Fields Medals (coded as Alumni), which contributes 10% to the overall score. Second, the dimension of faculty quality is evaluated by two indicators: one is related to Nobel Prizes and Fields Medals granted to faculty members (coded as Award), and the other is HiCi, a measure of highly cited researchers in 21 subject categories. These two indicators account for 20% each. Third, the research output dimension is also determined through two indicators: papers published in Nature and Science (coded as NS) and papers indexed in the SCI and SSCI (coded as PUB), which again account for 20% each. Finally, the per capita performance of an institution (abbreviated PCP) contributes 10% to the overall score.


Table 2: Indicators and assigned weights of selected university ranking systems

ARWU
- Quality of Education: Alumni (10%)
- Quality of Faculty: Award (20%); HiCi (20%)
- Research Output: Nature and Science (20%); PUB: SCI & SSCI (20%)
- Per Capita Performance (10%)

QS
- Academic reputation (40%)
- Employer survey (10%)
- Citations per faculty (20%)
- Faculty-student ratio (20%)
- International students (5%)
- International faculty (5%)

THE
- Teaching (30%): reputation survey for teaching (15%); staff-student ratio (4.5%); doctoral-bachelor's ratio (2.25%); PhDs awarded (6%); institutional income per faculty member (2.25%)
- Research (30%): reputation survey for research (18%); research grants (6%); papers in peer-reviewed journals (6%)
- Citation impact (30%)
- Industry income (2.5%)
- International outlook (7.5%): ratio of international to domestic students (2.5%); ratio of international to domestic staff (2.5%); publications with international co-authors (2.5%)

Source: Created by the authors with criteria from Quacquarelli Symonds (2014), Shanghai Jiao Tong University (2014), and Thomson Reuters (2014).
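To make these weighting schemes concrete, the sketch below computes an overall ranking score as the weighted sum of indicator scores, using the top-level weights from Table 2. This is a minimal illustration: it assumes indicator scores already normalized to a 0–100 scale, as the published rankings report them, and the dictionary keys and the example institution are our own illustrative choices, not official data.

```python
# Minimal sketch: an overall ranking score as a weighted sum of indicator
# scores (assumed pre-normalized to 0-100). Weights follow Table 2; the
# key names and example values are illustrative assumptions.

ARWU_WEIGHTS = {"Alumni": 0.10, "Award": 0.20, "HiCi": 0.20,
                "NS": 0.20, "PUB": 0.20, "PCP": 0.10}
QS_WEIGHTS = {"academic_reputation": 0.40, "employer_survey": 0.10,
              "citations_per_faculty": 0.20, "faculty_student_ratio": 0.20,
              "international_students": 0.05, "international_faculty": 0.05}
THE_WEIGHTS = {"teaching": 0.30, "research": 0.30, "citation_impact": 0.30,
               "industry_income": 0.025, "international_outlook": 0.075}

def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalized (0-100) indicator scores."""
    # Each scheme's weights should sum to 100% of the overall score.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

# A hypothetical institution scored on the ARWU indicators:
example = {"Alumni": 45.0, "Award": 60.0, "HiCi": 55.0,
           "NS": 62.0, "PUB": 70.0, "PCP": 50.0}
print(round(overall_score(example, ARWU_WEIGHTS), 1))  # 58.9
```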

The THE system uses 13 indicators across five dimensions (Thomson Reuters, 2014). First, the teaching dimension is assigned a weight of 30% and is determined through five indicators: a teaching reputation survey, the staff-to-student ratio, the doctorate-to-bachelor's ratio, doctorates awarded by an institution, and institutional income scaled against academic staff numbers. Second, the research dimension has a 30% share and is established through a research reputation survey, research grants, and the number of papers published in academic journals. The third dimension, citation impact, is given a weight of 30%. The fourth dimension is research funding from industry, contributing 2.5% to the overall score. Finally, the international outlook dimension is assigned a weight of 7.5% and is determined through the international-to-domestic student ratio, the international-to-domestic staff ratio, and the number of internationally co-authored research papers.

The QS ranking system still uses its original methodological framework (Quacquarelli Symonds, 2014). Among the six indicators included in the QS system, the most important is the academic peer reputation survey, with a weight of 40%. Another reputation survey addresses employers and contributes 10% to the ranking. The two indicators of citations per faculty and the faculty-student ratio contribute 20% each to the overall score. Finally, the international students and international faculty indicators have a weight of 5% each.

It is important to note the similarities in these ranking systems; the differences lie in the weights of the indicators used. University rankings, according to Proulx (2007), should represent the characteristics of the ranked universities and "avoid a one-size-fits-all typology" (p. 76). All the indicators of these ranking systems can be grouped into five categories: teaching, research, service mission, reputation management, and internationalization of higher education institutions (Table 3). These five categories result from several multifaceted and interactive factors; the last two, in particular, seem especially relevant in knowledge-based economies. However, the weights assigned to the indicators reflect the emphasis of each global ranking system and carry some biases. For instance, the ARWU system focuses on research and omits teaching, service mission, and internationalization; and the indicators of the QS and THE systems, though both have some measures for all five categories, are incomplete in assessing the effectiveness of teaching and learning and research funding (Marginson, 2014).

As shown in Table 3, the ARWU system emphasizes research excellence, while the QS system focuses more on universities' reputation (Aguillo, Bar-Ilan, Levene, & Ortega, 2010; Huang, 2011). For instance, the ARWU system assigns a weighting of approximately 90% to impressive research performance, such as Nobel Prizes and Fields Medals granted to alumni and faculty members and publication in famous English-language journals. By contrast, the QS system depends greatly on university reputation, which represents nearly 50% of the total score.

Table 3: Priorities of selected university ranking systems

Category               ARWU   QS   THE
Teaching
Research                *           *
Service
Reputation                     *
Internationalization

Note: * refers to the category given the greatest assigned weight in the system. Source: Authors.

In addition, the dimensions evaluated by the QS and THE ranking systems are similar, but the indicators of the THE system are more detailed and complex (Marginson, 2014). In the THE system, more than one-third of the overall score is associated with research outcomes such as research grants, publications, and citations, whereas in the QS system less than one-fourth of the total weight is allocated to research outcomes. Both ranking systems measure university reputation, but the THE system assigns approximately 33% to reputation surveys, while in the QS system they account for 50% of the overall score. Both ranking systems give approximately 10% to the internationalization of higher education institutions and include some indicators for the teaching mission, although the QS system employs only one such indicator, the faculty-student ratio.


Debates over the criteria of the selected ranking systems

As already noted, the three ranking systems have been extensively criticized (see Marginson, 2014; Taylor & Braddock, 2007). The incomplete databases used by the three ranking systems tend to be biased against non-English-language publications and the fields of the social sciences and humanities (Amsler & Bolsmann, 2012; van Raan, 2005). The biases in favour of hard-science and English-language publications also result from the different publication cultures and citation habits of diverse fields (Frey & Rost, 2010; Saisana et al., 2011; van Vught & Westerheijden, 2010).

Another shared criticism of the three ranking systems concerns their selection of indicators and arbitrary weighting. The numeric, comparable, and standardized league tables produced by global university rankings often lead the uninformed public to take the information at face value: through league tables, anyone can easily interpret and compare the quality of certain universities. Those who publish rankings also believe that the ranking results reflect the position and quality of a university through a rigorous and objective process of evaluation (Rauhvargers, 2011). However, as argued by Williams and Van Dyke (2008), "the objectivity does not ensure that the measures actually chosen are always appropriate" (p. 2). Taylor and Braddock (2007) argued that the weights of indicators depend on the significance of the set of indicators suggested by higher education consultants, and most ranking systems have chosen "suitable" indicators rather than negative ones that might jeopardize institutional or national interest (Marginson, 2014, p. 46). For instance, because of the initial purpose of understanding the global standing of top Chinese universities (Liu & Cheng, 2011), the ARWU system relies heavily on research performance without considering teaching, social service, internationalization, or the employability of university graduates (Marginson, 2014; Saisana et al., 2011).

In the ARWU ranking system, the focus on research-oriented indicators is frequently criticized. Aside from the biases in favour of hard-science and English-language journals, the Nobel indicator penalizes universities located in countries with few Nobel Prize and Fields Medal winners, and it underrepresents the diversity of academic fields and other scholarly achievements (Huang, 2011; Marginson, 2014).

For the QS and THE systems, the major criticisms involve the subjectivity of the reputation surveys, the teaching indicators, and the instability of the rankings. Employing expert-based reputation surveys as a ranking indicator is subject to the bias of human opinions and judgments about a university (Salmi & Saroyan, 2007; Williams & Van Dyke, 2008); in other words, the subjectivity of survey indicators is unavoidable even in otherwise relatively objective ranking schemes (Dill & Soo, 2005; Taylor & Braddock, 2007). In terms of teaching criteria, Trigwell (2011) stated that the QS system's use of a single indicator, the staff-to-student ratio, to assess the teaching performance of a university is questionable; this indicator depends on class size and cannot accurately reflect teaching quality or the diversity of teaching and learning activities. As with the QS system, it is difficult to evaluate actual teaching effectiveness in the THE system, even though it adds other indicators of teaching performance, such as PhDs awarded, the doctoral-to-bachelor's ratio, and the facilities and income of an institution. Finally, the variability and fluctuation of the rankings result from frequent changes in methodology and the use of surveys in the QS and THE systems (Marginson, 2014; Saisana et al., 2011). The empirical study by Aguillo et al. (2010) indicated that the dissimilarities between the THE-QS rankings for different years are high; in other words, the THE and QS systems are less stable than the ARWU system.


Intuitively, because of their different indicator frameworks, the ranking systems have different impacts on ranking results, both on the overall score and on the ordinal ranking of universities. It is therefore worth investigating which indicators of the three world university rankings best predict the ranking of universities.

RESEARCH METHOD

Data sources

All data were obtained from the ARWU, THE, and QS world university rankings. As the source of data, we chose the top 100 universities from each selected ranking system in accordance with their 2013-14 world university rankings released on their websites. For each ranking system, the data we collected included the scores for every criterion and the overall scores, as well as the ranking of the 100 institutions. However, because Harvard University is the benchmark in the ARWU system, it was excluded from our ARWU data.

Data analysis

This study used secondary data and regression analysis, which explores the effects of independent variables on an outcome variable (Treiman, 2009). For each ranking system, we used bivariate regression to examine the effect of each single indicator on the ranking separately, and multiple regression to investigate the impact of the whole set of indicators.
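The following sketch illustrates (but does not reproduce) this two-step procedure with statsmodels, assuming a pandas DataFrame with one row per institution, a numeric 'rank' column (1 = best), and one column of 0–100 scores per indicator; all column names here are our illustrative assumptions, not the rankings' official labels.

```python
# Sketch of the study's two-step analysis: bivariate OLS of rank on each
# indicator, then multiple OLS on the full indicator set. Assumes a pandas
# DataFrame with a numeric 'rank' column (1 = best) and one score column
# per indicator; the names below are illustrative, not official.
import pandas as pd
import statsmodels.api as sm

def fit_ranking_models(df: pd.DataFrame, indicators: list[str]):
    # Bivariate regressions: effect of each single indicator on the rank.
    for ind in indicators:
        X = sm.add_constant(df[[ind]])
        model = sm.OLS(df["rank"], X).fit()
        print(f"{ind}: b={model.params[ind]:.3f}, adj R2={model.rsquared_adj:.3f}")
    # Multiple regression: the whole indicator set at once.
    X_all = sm.add_constant(df[indicators])
    final = sm.OLS(df["rank"], X_all).fit()
    print(final.summary())
    return final

# Example call with hypothetical ARWU indicator columns:
# fit_ranking_models(arwu_top100, ["Alumni", "Award", "HiCi", "NS", "PUB", "PCP"])
```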

Limitations and contributions

The major limitation of this study is that the universities shown on the lists of the three world university rankings change over time. We chose the 2013-14 world university rankings as our data set and studied only the top 100 ranked institutions; because the composition of the top 100 changes from year to year, this can introduce bias. However, the variation of ranked universities in the top 100 lists of these three global rankings is smaller than in other ordinal categories. To limit the bias resulting from this uncertainty and variation, we therefore focused on the top 100 in the selected world university rankings.

However, two features of our study should be emphasized. First, although every ranked university receives an overall score and a respective ranking, it is the ordinal ranking that is likely to represent the position of a university. Thus, in our study, we paid more attention to the ranking of universities than to the final scores these HEIs received. Second, we selected only the top 100 universities instead of all ranked schools shown in the rankings (e.g., the top 500 in the ARWU rankings) because these top 100 universities are given unique, ungrouped rankings. Moreover, receiving a first-tier ranking implies that these universities have more opportunities and better competitiveness than others in terms of marketing.

RESULTS AND COMPARISON

Overall, all Pearson correlation coefficients between each single indicator and the ranking of a university in each system were negative. The reason is that a higher indicator value reflects stronger performance on that indicator, whereas a numerically smaller rank value (such as 1 for the top institution) denotes better performance. Rather than indicating the relative significance of positive versus negative correlations, this simply shows that indicator values and the rank of a university in each system move in opposite directions (Treiman, 2009). According to the results of our study, all correlation coefficients were statistically significant with respect to the ranking of universities in the ARWU and QS ranking systems, while some were nil in the THE system. The following sections describe the regression analysis of each selected ranking system in detail.

ARWU system

Table 4 shows the regression summary of the ARWU rankings. The final model (model 7), which includes the six indicators, explained 83% of the variance in the ranking of universities (F(6, 92) = 73.403, p < .001), with an adjusted R2 of 82%. As shown in Table 4, three indicators were significantly and inversely related to the ranking of universities: Award (b = -.542, p < .001), NS (b = -.770, p < .001), and PUB (b = -.728, p < .001). By comparing their standardized weights (beta-weights), we found that the Award indicator had the most substantial impact on the ranking of universities (β = -.405), more than the NS (β = -.362) and PUB (β = -.294) indicators.
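The beta-weights compared here are the usual standardized coefficients; as a reminder (our notation, not the authors'), each is the unstandardized coefficient rescaled by the sample standard deviations of the indicator and of the rank:

$$\beta_j = b_j \cdot \frac{s_{x_j}}{s_y},$$

which is what makes the coefficients of indicators measured on different scales directly comparable.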

Table 4: Regression analysis for the ARWU system

Model (with indicators)                 R      R2     Adjusted R2
1  Alumni                              .615   .378    .372
2  Award                               .697   .486    .481
3  HiCi                                .766   .587    .583
4  NS                                  .838   .702    .699
5  PUB                                 .590   .348    .341
6  PCP                                 .453   .205    .197
7  Alumni, Award, HiCi, NS, PUB, PCP   .910   .827    .816

b-weight (beta-weight)
Indicator   Each indicator and the rank   Final model
Alumni      -1.066*** (-.615)             -.066 (-.038)
Award       -.932*** (-.697)              -.542 (-.405)***
HiCi        -1.646*** (-.766)             -.200 (-.093)
NS          -1.783*** (-.838)             -.770 (-.362)***
PUB         -1.462*** (-.590)             -.728 (-.294)***
PCP         -1.091*** (-.453)             .141 (.058)

Notes: (a) ***p ≤ .001, **p ≤ .01, *p ≤ .05. (b) Alumni = alumni winning Nobel Prizes and Fields Medals; Award = staff winning Nobel Prizes and Fields Medals; HiCi = highly cited researchers; NS = articles published in Nature and Science; PUB = articles indexed in the Science Citation Index and Social Science Citation Index; PCP = per capita academic performance.

Source: Authors.

Using bivariate regression, each indicator had statistical significance in explaining its effect on the ranking of universities. The top two contributors to institutional ranking were the NS and HiCi indicators (models 4 and 3), each of which could individually explain more than 50% of the variance in the ranking of universities; however, the HiCi indicator was not statistically significant in the final model (model 7). The third most influential contributor was the Award indicator, with an adjusted R2 of 48%; in other words, 48% of the variation in the ranking of universities could be explained by Award alone. The Alumni and PUB indicators each explained approximately 35% of the variance in the ranking, but only the PUB indicator retained statistical significance in the final model. Finally, the PCP indicator had an adjusted R2 below 20% and thus contributed relatively little to the ranking of universities.
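As a quick arithmetic check of the adjusted values in Table 4, the standard adjustment formula with n = 99 institutions (Harvard excluded) and k = 6 predictors reproduces the reported figure for the final model:

$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-k-1} = 1 - (1 - .827)\times\frac{98}{92} \approx .816.$$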

THE ranking system

The overall model (model 6), which includes the five criteria, explained 90% of the variance in the ranking of universities (F(5, 85) = 164.782, p < .001). As the results in Table 5 illustrate, four indicators were significantly and inversely related to the ranking of universities: teaching (b = -.697, p < .001), research (b = -.775, p < .001), citation (b = -.490, p < .001), and international outlook (b = -.124, p < .05); the industry income criterion had no significance. In comparing their standardized weights (beta-weights), we found that the research indicator had the most substantial impact on the ranking of universities (β = -.506), followed by teaching (β = -.400). Both were more than twice the citation indicator (β = -.206) and more than four times the international outlook indicator (β = -.084).

Table 5: Regression analysis for the THE system

Model (with indicators)                                     R      R2     Adjusted R2
1  Teaching                                                .904   .817    .815
2  Research                                                .902   .813    .811
3  Citation                                                .410   .168    .160
4  Int'l Outlook                                           .127   .016    .006
5  Industry                                                .113   .013    .002
6  Teaching, Research, Citation, Int'l Outlook, Industry   .952   .906    .901

b-weight (beta-weight)
Indicator       Each indicator and the rank   Final model
Teaching        -1.586*** (-.904)             -.697 (-.400)***
Research        -1.398*** (-.902)             -.775 (-.506)***
Citation        -.991*** (-.410)              -.490 (-.206)***
Int'l Outlook   -.193 (-.127)                 -.124 (-.084)*
Industry        -.142 (-.113)                 .014 (.011)

Notes: (a) ***p ≤ .001, **p ≤ .01, *p ≤ .05. (b) Int'l Outlook = international outlook; Industry = industry income.

Source: Authors.


When analysing the relationship between each single indicator and the ranking of universities, we found that three of the five criteria had statistical significance in explaining their effect on the ranking; the exceptions were the international outlook and industry income criteria. The top two contributors to the ranking were teaching and research (models 1 and 2), each of which could individually explain approximately 80% of the variance in the ranking of universities. The third contributor was the citation indicator, with an adjusted R2 of 16%. The other two indicators had adjusted R2 values lower than 1%. Interestingly, the international outlook indicator was statistically significant in the final model but not important when we assessed its single effect on the ranking.
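The reported F statistics for the three final models can be recovered, up to rounding of the published R² values, from R², the number of predictors k, and the sample size n (inferred here from the published degrees of freedom). A small sketch:

```python
# Recover a final model's F statistic from its R-squared:
#   F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
# n is inferred from the reported denominator degrees of freedom
# (df2 = n - k - 1); gaps from the published F values reflect the
# rounding of R^2 to three decimals.

def f_from_r2(r2: float, k: int, n: int) -> float:
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

print(f_from_r2(0.827, k=6, n=99))   # ~73.3  (reported 73.403, ARWU)
print(f_from_r2(0.906, k=5, n=91))   # ~163.9 (reported 164.782, THE)
print(f_from_r2(0.970, k=6, n=100))  # ~501.2 (reported 497.673, QS)
```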

QS ranking system

The regression results for the QS ranking system are illustrated in Table 6.

Table 6: Regression analysis for the QS system

Model (with indicators)                                                R      R2     Adjusted R2
1  Peer                                                               .746   .556    .552
2  Employer                                                           .535   .286    .279
3  F/S ratio                                                          .630   .396    .390
4  Int'l faculty                                                      .205   .042    .032
5  Int'l student                                                      .327   .107    .098
6  Citation                                                           .336   .113    .104
7  Peer, Employer, F/S ratio, Int'l faculty, Int'l student, Citation  .985   .970    .968

b-weight (beta-weight)
Indicator       Each indicator and the rank   Final model
Peer            -2.035*** (-.746)             -1.456 (-.533)***
Employer        -.974*** (-.535)              -.255 (-.140)***
F/S ratio       -.751*** (-.630)              -.596 (-.500)***
Int'l faculty   -.192* (-.205)                -.101 (-.108)***
Int'l student   -.355*** (-.327)              -.161 (-.148)***
Citation        -.506*** (-.336)              -.499 (-.332)***

Notes: (a) ***p ≤ .001, **p ≤ .01, *p ≤ .05. (b) Peer = academic reputation by peer review; Employer = employer survey; F/S ratio = faculty/student ratio; Int'l faculty = proportion of international faculty; Int'l student = proportion of international students.

Source: Authors.

The final model (model 7), which includes the six indicators, provided a very strong explanation, accounting for 97% of the variance in the ranking of universities (F(6, 93) = 497.673, p < .001). As shown in Table 6, all indicators were significantly and inversely related to the ranking of universities. By comparing their standardized weights (beta-weights), we found that the top two indicators with the most substantial impact on the ranking of universities were peer review (β = -.533) and the faculty-student ratio (β = -.500). The third most influential contributor was the citation indicator (β = -.332). The influence of these three indicators was more than twice that of the other three indicators: the employer survey and the numbers of international students and faculty members.

According to the bivariate regressions, each indicator had statistical significance in explaining its effect on the ranking. The most influential contributor to the ranking was peer review (model 1), which could explain 55% of the variance in the ranking of universities. The next most influential contributor was the faculty-to-student ratio, which could explain 39% of the variation in the ranking. The third was the employer survey indicator, with an adjusted R2 of 28%; in other words, 28% of the variation in the ranking of universities could be explained by the employer survey. The other three indicators had adjusted R2 values of 10% or less.

Most indicators of these three ranking systems made substantial contributions to the ranking of the top universities, except two indicators of the THE system: international outlook and industry income. In the ARWU system, three indicators (Award, NS, and PUB) had statistical significance in predicting the ranking of universities. Even though much of the variance in the ranking of universities could be explained by HiCi alone (adjusted R2 of 58%), it was not statistically significant in the final model. That is to say, the ARWU ordinal ranking could be determined by Award, NS, and PUB. In the QS and THE systems, the more powerful contributors to the ranking of universities were expert-based reputation indicators, including the peer review survey in the QS system and the teaching and research criteria in the THE system. These findings are consistent with the results of several previous studies (e.g., Huang, 2011; Marginson, 2014; Saisana et al., 2011). They seem to imply that universities have opportunities to receive higher rankings if they have tangible, popular, and customer-appreciated research products and an excellent reputation.

The regression analyses of these three ranking systems suggest that not all indicators in each ranking system contribute equally to the prediction of the final ranking of universities. In other words, several indicators could explain their respective rankings, while other indicators might not make an authentic contribution to the ranking prediction. Moreover, the importance of the most influential indicators to the final ranking of universities can differ markedly from their assigned weights in the methodologies. For instance, according to the ARWU methodology, the NS indicator is assigned a weight of 20%, but this single indicator can explain approximately 70% of the variance in the ARWU ranking of universities. Although the final ranking of universities results from multiple factors, the effect of a single indicator on the final ranking results cannot be neglected.
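This gap between assigned weight and explanatory power is easy to verify for the bivariate case, where R² is simply the squared Pearson correlation; with the NS correlation of -.838 from Table 4 (our worked check):

$$R^2_{\text{NS}} = r^2 = (-.838)^2 \approx .702,$$

that is, roughly 70% of the variance in the ARWU ranking, despite an assigned weight of only 20%.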

CONCLUSION

Facing increasing competition between HEIs in domestic and global contexts, the number of ranking systems at the national and international levels keeps growing. University rankings are seen as a meaningful representation of academic excellence and institutional reputation, and in pursuit of these goals most HEIs make a concerted effort to participate in institutional ranking activities rather than escape from them. In particular, world university rankings have gradually drawn greater attention in international and comparative higher education. The basic goal of global university rankings is to provide information to inform students' choices of universities, but the impact and use of global rankings have changed. Global university rankings now serve as tools for evaluating universities' outstanding performance as well as for marketing and positioning within countries and around the world. They have become politically exclusionary instruments (Amsler & Bolsmann, 2012) and a tool of status control (Marginson, 2014); in other words, the ordinal numbers shown in the global rankings imply the position and competitiveness of a university. Thus, our intention in the current study is not to deny global university rankings but to argue that international ranking systems should be carefully examined and that their results should be deliberately interpreted.

The selected global ranking systems are not perfect in measuring higher education institutional performance and in awarding ordinal statuses across the globe. After analysing the contributions of the indicators to the final ranking of universities in the three ranking systems, we obtained several findings. First, most indicators of the three ranking systems were significantly (negatively) correlated with their overall ranking, except the international outlook and industry income indicators of the THE ranking system. The reason that these indicators were not statistically significant might involve their lower assigned weights and whether universities were willing to provide accurate information on financing and internationalization. Thus, we highlight the need for HEIs and those who publish the rankings to be aware of and sensitive to these methodological issues and to the transparency of institutional financial and internationalization data.

We also note that, in the three ranking systems, not all indicators contribute to the ranking of universities. This seems to imply that the ranking of HEIs might be determined by only a few indicators. The various methodologies of different ranking systems may introduce vulnerabilities into the seemingly objective evaluations and redundant evaluative criteria. However, as stated by Rauhvargers (2011), readers seldom receive and understand the actual information regarding how the final ranking is calculated; too often, readers may be misinformed. The audience might also overestimate or underestimate the contributions of some indicators to the final ranking. Hence, we caution higher education stakeholders at all levels against uncritical use and interpretation of the surface ordinal numbers and against making public decisions based solely on global ranking systems. We also suggest that the indicators chosen for each ranking system be regularly examined to avoid redundant biases.

REFERENCES

Aguillo, I. F., Bar-Ilan, J., Levene, M., & Ortega, J. L. (2010). Comparing university rankings. Scientometrics, 85(1), 243–256. doi: 10.1007/s11192-010-0190-z

Altbach, P. G. (2006). The dilemmas of ranking. International Higher Education, 42, 2–3.

Altbach, P. G. (2012). The globalization of college and university rankings. Change: The Magazine of Higher Learning, 44(1), 26–31. doi: 10.1080/00091383.2012.636001

Amsler, S. S., & Bolsmann, C. (2012). University ranking as social exclusion. British Journal of Sociology of Education, 33(2), 283–301. doi: 10.1080/01425692.2012.649835

Bastedo, M. N., & Bowman, N. A. (2011). College rankings as an interorganizational dependency: Establishing the foundation for strategic and institutional accounts. Research in Higher Education, 52(1), 3–23. doi: 10.1007/s11162-010-9185-0

Bowman, N. A., & Bastedo, M. N. (2011). Anchoring effects in world university rankings: Exploring biases in reputation scores. Higher Education, 61(4), 431–444. doi: 10.1007/s10734-010-9339-1

Carroll, D. (2014). An investigation of the relationship between university rankings and graduate starting wages. Journal of Institutional Research, 19(1), 46–54.

Deem, R., Mok, K. H., & Lucas, L. (2008). Transforming higher education in whose image? Exploring the concept of the 'world-class' university in Europe and Asia. Higher Education Policy, 21(1), 83–97. doi: 10.1057/palgrave.hep.8300179

Delgado, J. E., & Weidman, J. C. (2012). Latin American and Caribbean countries in the global quest for world class academic recognition: An analysis of publications in Scopus and the Science Citation Index between 1990 and 2010. Excellence in Higher Education, 3, 111–121. doi: 10.5195/ehe.2012.73

Dill, D. D., & Soo, M. (2005). Academic quality, league tables, and public policy: A cross-national analysis of university ranking systems. Higher Education, 49(4), 495–533. doi: 10.1007/s10734-004-1746-8

Douglas, J. A. (Ed.). (2016). The new flagship university: Changing the paradigm from global ranking to national relevancy. New York: Palgrave Macmillan.

Fidler, B., & Parsons, C. (2008). World university ranking methodologies: Stability and variability. Higher Education Review, 40(3), 15–34.

Frey, B. S., & Rost, K. (2010). Do rankings reflect research quality? Journal of Applied Economics, 13(1), 1–38.

Hazelkorn, E. (2009). Rankings and the battle for world-class excellence: Institutional strategies and policy choices. Higher Education Management and Policy, 21(1), 55–76.

Hazelkorn, E. (2014). Reflections on a decade of global rankings: What we've learned and outstanding issues. European Journal of Education, 49(1), 12–28. doi: 10.1111/ejed.12059

Huang, M.-H. (2011). A comparison of three major academic rankings for world universities: From a research evaluation perspective. Journal of Library and Information Studies, 9(1), 1–25.

Huisman, J., & Currie, J. (2004). Accountability in higher education: Bridge over troubled water? Higher Education, 48(4), 529–551.

Kehm, B. M. (2014). Global university rankings—Impacts and unintended side effects. European Journal of Education, 49(1), 102–112. doi: 10.1111/ejed.12064

Lee, M., & Park, H. W. (2012). Exploring the web visibility of world-class universities. Scientometrics, 90(1), 201–218. doi: 10.1007/s11192-011-0515-6

Liu, N. C., & Cheng, Y. (2011). Global university rankings and their impact. In P. G. Altbach (Ed.), Leadership for world-class universities: Challenges for developing countries (pp. 145–158). New York and London: Routledge.

Lo, W. Y. W. (2011). Soft power, university rankings and knowledge production: Distinctions between hegemony and self-determination in higher education. Comparative Education, 47(2), 209–222. doi: 10.1080/03050068.2011.554092

Marginson, S. (2014). University rankings and social science. European Journal of Education, 49(1), 45–59. doi: 10.1111/ejed.12061

Marginson, S., & van der Wende, M. (2007). To rank or to be ranked: The impact of global rankings in higher education. Journal of Studies in International Education, 11(3/4), 306–329. doi: 10.1177/1028315307303544

Proulx, R. (2007). Higher education ranking and league tables: Lessons learned from benchmarking. Higher Education in Europe, 32(1), 71–82. doi: 10.1080/03797720701618898

QS Intelligence Unit. (2015). How do students use rankings? The role of university rankings in international student choice. London, UK: QS Intelligence Unit.

Quacquarelli Symonds. (2014). QS world university rankings 2013/14: Methodology. Retrieved January 19, 2014, from http://www.iu.qs.com/university-rankings/world-university-rankings/?__hstc=238059679.bbf5e5a93ba9268580a1ed3ca7d6e2d7.1389930394437.1389930394437.1389930394437.1&__hssc=238059679.2.1389930394438&__hsfp=3348744073

Rauhvargers, A. (2011). Global university rankings and their impact. Brussels, Belgium: European University Association.

Roberts, D., & Thomson, L. (2007). Reputation management for universities: University league tables and the impact on student recruitment. Leeds, UK: The Knowledge Partnership.

Saisana, M., d'Hombres, B., & Saltelli, A. (2011). Rickety numbers: Volatility of university rankings and policy implications. Research Policy, 40(1), 165–177. doi: 10.1016/j.respol.2010.09.003

Salmi, J., & Saroyan, A. (2007). League tables as policy instruments: Uses and misuses. Higher Education Management and Policy, 19(2), 31–68.

Shanghai Jiao Tong University. (2014). Academic ranking of world universities 2013: Methodology. Retrieved January 19, 2014, from http://www.shanghairanking.com/ARWU-Methodology-2013.html

Shin, J. C., & Toutkoushian, R. K. (2011). The past, present, and future of university rankings. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 1–16). New York: Springer.

Su, S.-W. (2014). To be or not to be: Impact of "I" idolization from the perspectives of humanities and social sciences faculty in Taiwan. In C. P. Chou (Ed.), The SSCI syndrome in higher education: A local or global phenomenon (pp. 51–80). Rotterdam, Netherlands: Sense Publishers.

Taylor, P., & Braddock, R. (2007). International university ranking systems and the idea of university excellence. Journal of Higher Education Policy and Management, 29(3), 245–260. doi: 10.1080/13600800701457855

Thakur, M. (2007). The impact of ranking systems on higher education and its stakeholders. Journal of Institutional Research, 13(1), 83–96.

Thomson Reuters. (2010). World university rankings 2010-2011: Methodology. Retrieved January 19, 2014, from http://www.timeshighereducation.co.uk/world-university-rankings/2010-11/world-ranking/methodology

Thomson Reuters. (2014). World university rankings 2013-2014: Methodology. Retrieved January 19, 2014, from http://www.timeshighereducation.co.uk/world-university-rankings/2013-14/world-ranking/methodology

Tofallis, C. (2012). A different approach to university rankings. Higher Education, 63(1), 1–18. doi: 10.1007/s10734-011-9417-z

Treiman, D. J. (2009). Quantitative data analysis: Doing social research to test ideas. San Francisco, CA: Jossey-Bass.

Trigwell, K. (2011). Measuring teaching performance. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 165–181). Dordrecht: Springer.

van Raan, A. F. J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133–143. doi: 10.1007/s11192-005-0008-6

van Vught, F. A., & Westerheijden, D. F. (2010). Multidimensional ranking: A new transparency tool for higher education and research. Higher Education Management and Policy, 22(3), 31–56. doi: 10.1787/hemp-22-5km32wkjhf24

Vieira, R. C., & Lima, M. C. (2015). Academic ranking—from its genesis to its international expansion. Higher Education Studies, 5(1), 63–72. doi: 10.5539/hes.v5n1p63

Wildavsky, B. (2010). The great brain race: How global universities are reshaping the world. Princeton, NJ: Princeton University Press.

Williams, R. (2008). Methodology, meaning and usefulness of rankings. Australian Universities' Review, 50(2), 51–58.

Williams, R., & Van Dyke, N. (2008). Reputation and reality: Ranking major disciplines in Australian universities. Higher Education, 56(1), 1–28. doi: 10.1007/s10734-007-9086-0