
When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings

NANCY J. ADLER
McGill University

ANNE-WIL HARZING
University of Melbourne

“Not everything that can be counted counts, and not everything that counts can be counted.”—Albert Einstein

Has university scholarship gone astray? Do our academic assessment systems reward scholarship that addresses the questions that matter most to society? Using international business as an example, we highlight the problematic nature of academic ranking systems and question if such assessments are drawing scholarship away from its fundamental purpose. We call for an immediate examination of existing ranking systems, not only as a legitimate scholarly question vis-à-vis performance—a conceptual lens with deep roots in management research—but also because the very health and vibrancy of the field are at stake. Indeed, in light of the data presented here, which suggest that current systems are dysfunctional and potentially cause more harm than good, a temporary moratorium on rankings may be appropriate until more valid and reliable ways to assess scholarly contributions can be developed. The worldwide community of scholars, along with the global network of institutions interacting with and supporting management scholarship (such as the Academy of Management, AACSB, and Thomson Reuters Scientific) are invited to innovate and design more reliable and valid ways to assess scholarly contributions that truly promote the advancement of relevant 21st century knowledge, and likewise recognize those individuals and institutions that best fulfill the university’s fundamental purpose.


“Measurement of scientific productivity is difficult. The measures used . . . are crude. But these measures are now so universally adopted that they determine most things that matter [to scholars]: tenure or unemployment, a postdoctoral grant or none, success or failure. As a result, scientists have been forced to downgrade their primary aim from making discoveries to publishing as many papers as possible—and trying to work them into high impact-factor journals. Consequently, scientific behaviour has become distorted and the utility, quality, and objectivity of articles have deteriorated. Changes . . . are urgently needed . . .”
—Peter Lawrence, Cambridge University (2008: 1)

Truly great universities are one of society’s greatest assets (Garten, 2006). The world’s first university, Nalanda, founded in 427 C.E. (almost a millennium before the most prominent universities of Europe and the Americas) near the Nepalese border in what is now India, boasted over 10,000 students and 2,000 professors. Former Dean of Yale’s School of Management Jeffrey Garten (2006), in supporting Asia’s 21st century revival of Nalanda, raised a fundamental question: Do societies understand that real power comes from great ideas and from the people who generate them? Do today’s universities, operating more than sixteen centuries after the founding of Nalanda, remember that their primary role is to support scholarship that addresses the complex questions that matter most to society?

There is no way that this paper would have been born without the thoughtful and consistent support from the wide range of people who care passionately about issues of academic process, and who took the time to read and reread the manuscript as it went through its 150 iterations prior to being published. We would especially like to thank James Bailey, one of AMLE’s founding editors, who first saw the potential of the paper to help shape the field. Similarly, we would like to profusely thank Rich Klimoski, who defines what it means to give generative feedback (as opposed to the much more common evaluative feedback). With the addition of an institutional perspective, Rich’s suggestions took the paper far beyond its initial, more circumscribed appreciation of the dynamics at play. We thank Eleanor Westney, who guided us in our growing appreciation of how institutional theory, and more important, institutional entrepreneurship could explain the dynamics within our field. We also owe a huge debt of gratitude to Troy Anderson for his multiple readings and detailed suggestions on what had to change to make the paper work. In addition, we would like to thank the many people who read various iterations of the paper and generously offered their perspective, suggestions, and encouragement, including Ariane Berthoin Antal, Art Bedeian, Howard Browman, Michelle Buck, Christina Cregan, Lorraine Eden, Axèle Giroud, Carol Kulik, Yadong Luo, Steve Maguire, David Merrett, Niels Noorderhaven, Ron van der Wal, Jim Walsh, Sri Zaheer, and Tatiana Zalan. We are grateful to Ben Arbaugh, Myrtle Bell, and Jonathan Doh, the new AMLE editorial team, for their expert choices in how best to present the set of ideas to the public.

Many leaders both inside and outside academia fear that universities today are no longer fulfilling their fundamental mission; they fear that university scholarship has gone astray. The Financial Times, for example, recently asked “why business ignores business schools” and concluded that business views business school research as irrelevant, pointedly highlighting the fact that “[c]hief executives . . . pay little attention to what business schools do or say” (Skapinker, 2008; see also Pfeffer & Fong, 2002). Writing in the Academy of Management Journal, one of academia’s most prestigious journals, Professor McGrath (2007: 1372) noted that “Most of what we publish isn’t even cited by other academics.”

Underscoring the importance of returning to a broader and more relevant appreciation of university scholarship, the prestigious 18,000-member Academy of Management chose “The Questions We Ask” as its 2008 conference theme. The highly regarded Academy of International Business similarly fostered a prominent search to identify the most important 21st century question(s) (Adler, 2008a, 2008b; Buckley, 2002; Buckley & Lessard, 2005; Butler, 2006; and Peng, 2004, among others). London Business School convened a conference in 2007 on “Conducting Research with Relevance to Practice,” honoring Sumantra Ghoshal (see Ghoshal, 2005; Pfeffer, 2005; Donaldson, 2005), a leading international business scholar who committed his career to “maintaining academic respectability by combining rigor with relevance” (Rynes, 2007b). Following the conference, the Academy of Management Journal (AMJ) convened an Editor’s Forum on “Research with Relevance to Practice” (Rynes, 2007b), highlighting some of the best thinking to date on how to recombine important, interesting, integrative, and globally relevant research with rigor (see Gulati, 2007; McGahan, 2007; Tushman & O’Reilly, 2007; and Vermeulen, 2007).

In this article, we question whether the academic assessment systems currently used to rank scholars, universities, and journals undermine, rather than foster and reward, scholarship that matters. We invite scholars from around the world, along with the global network of institutions interacting with scholars and supporting management scholarship, to create new systems that would support the advancement of knowledge by encouraging the types of contributions that matter most to the broader society.

THE PROLIFERATION OF ACADEMIC RANKING SYSTEMS

“The result is an ‘audit society’ in which each indicator is invested with a specious accuracy and becomes an end in itself.”
—Peter Lawrence (2003: 259)

In stark contrast to the goal of asking and researching questions that matter, the past decades have witnessed a growing competition among individual scholars, universities, and journals to achieve high rankings (see, e.g., MacDonald & Kam, 2007; Segalla, 2008a). According to Cambridge University scholar and journal editor Peter Lawrence (2003: 259), “[s]cientists are increasingly desperate to publish in a few top journals and are wasting time and energy manipulating their manuscripts and courting editors. As a result, the objective presentation of work, the accessibility of articles, and the quality of research itself are being compromised.”

All professions (and organizations), of course, use metrics and benchmarks to assess their progress. The very concept of measurement became central to the ascendancy of scientific methods during the Enlightenment, and thus has been viewed as a primary contributor to human knowledge for centuries. Given its primacy, it is incumbent on all professions to regularly reconsider whether their key metrics—their key performance indicators—are accomplishing what they are intended to accomplish. Whereas management is no exception, no field is potentially better equipped than our own to set an example of what effective, thoughtful, and encompassing performance indicators could be. Management must therefore reconsider the metrics it most pervasively uses to make certain that it has not created a web of “unintended consequences” (Merton, 1936: 897) by having fallen into the “folly”, as labeled by former Academy of Management President Steven Kerr (1975: 769), “of rewarding A [publications in a narrow set of top-listed journals] while hoping for B [scholarship that addresses the questions that matter most to society]”. Our profession needs to guard against turning rankings into, as James March described such metrics in his 1996 (p. 286) Administrative Science Quarterly article, “magic numbers” invested with a sacred, but false, aura of truth.

While the most publicized rankings are media rankings of undergraduate and MBA programs (see, e.g., Gioia & Corley, 2002; Morgeson & Nahrgang, 2008; Zemsky, 2008), the most aspired-to rankings claim to measure what is labeled as research productivity, with the definition of productivity often reduced to simply counting publications in high impact-factor journals along with citations in the limited set of journals that such systems recognize (see Rynes, 2007a). Rather than genuinely fostering relevant knowledge, the emphasis on ranking seems to be driven by a desire to identify winners and losers in a game of academic prestige. The stated reasons for creating and using such academic ranking systems usually include (a) “fairness” in universities’ hiring, promotion, and tenure decisions, and (b) accountability and value-for-research-dollars in the grant-awarding processes of governments and other providers of research funds (Murphy, 1996).

Governments worldwide, all of which have mandates to foster society’s best interest, have introduced formal rankings-based research assessment processes. Although the British Research Assessment Exercise is probably the oldest and best known of such systems, it is by no means unique. These national research evaluation systems reinforce universities’ proclivity to systematically rank journals, scholars, and academic institutions.

Some disciplines embraced academic ranking much earlier and more extensively than did others. Within business, rankings first became prominent in accounting, economics, and finance. In a comprehensive evaluation going back to 1956, Macri and Sinha (2006) reviewed nearly 50 ranking studies in economics alone. In just the past year, a single journal (Review of Quantitative Finance and Accounting) published no fewer than four articles on rankings in finance and accounting. Following their lead, other management disciplines created their own systems and studies.

Using international business (IB) as an example, we illustrate how rankings of both scholars and universities are subject to a range of arbitrary decisions, including those related to choice of publication outlet, choice of time period, weighting of data, and aggregation of individuals to an institutional level. In reviewing the arbitrary nature of these commonly used but questionable ranking practices, we present the problematic nature of journal rankings and citation analyses. We express reservations about the validity and reliability of current academic rankings, and question whether such systems fundamentally undermine the very purpose of universities and university scholarship. In reviewing the currently enacted system, we invite the field to identify alternatives to the present situation that will result in both the creation of more research that is of greater use to society, and the development of more accurate and fair ways to recognize researchers and their contributions to research that matters.

RANKING SYSTEMS: THE ARBITRARY NATURE OF DECISION CRITERIA

“What has rank to do with the process of creative discovery in science? Very little. What has rank to do with the politics of science and the allocation of credit for discoveries? Almost everything.”
—Peter Lawrence (2002: 835)

Once the decision is made to rank individual scholars or universities, the most important decision is the set of criteria on which the rankings will be based. Should individuals and universities be ranked according to productivity (based on such measures as the total number of publications or publications in “prestigious” journals), impact (based on such metrics as the number of people citing the author’s work or citing work in the journal in which an author’s article is published), and/or some surrogate for quality (such as an expert reading of the article, publication in a journal that has a very low acceptance rate or that is led by a highly respected editorial board, selection as an “editor’s choice,” best publication of the year, or other quality-based recognition)?

Published assessments vary substantially in terms of the publication outlets and weightings on which they are based. Each choice leads to different outcomes, and thus the appearance—if not the reality—of arbitrariness. In international business, for example, where most IB rankings have been based on publication counts (as opposed to citation counts), usually in a subset of IB journals, no IB ranking currently enjoys a consensus regarding its superiority over competing approaches. Whereas each system adds value within its own circumscribed domain, none constitutes an adequate basis for the important decisions universities make concerning hiring, promotion, tenure, and grant making, or for the ranking of individuals or institutions.

Which Publications to Include: The Need to Become More Global and Comprehensive

“Many . . . articles would be better appreciated, published more quickly, and perhaps have more impact if they were published in specialised journals. However, because [specialized] . . . journals tend to have lower citation impact, or are less well known, they are avoided by young researchers trying to build an impressive promotion file. This is an understandable strategy, but one that ultimately slows the diffusion of ideas into the research literature and stifles academic dialogue.”
—Michael Segalla (2008a: 126)

Why Only Journal Articles?

To date, most rankings privilege articles in select journals (such as those appearing on the Financial Times list of top-40 journals in economics and business [FT40] and the University of Texas at Dallas list of the top-24 journals in business [UTD]). There are, however, convincing arguments for incorporating a more encompassing set of publications, including books, book chapters, conference proceedings, and a much wider range of journals.

Books, for example, often offer a depth of analysis that is impossible in a limited-length article. Moreover, the impact of books is often greater than that of articles, even those published in the best journals. Based on their analysis of emerging research themes, Griffith, Cavusgil, and Xu (2008: 1233) acknowledged that “some of the most important contributions to the international business literature have been advanced through non-journal outlets.” In their Delphi study, Griffith et al. (2008) asked prolific authors to nominate recent influential books: Nine of the nominated books received more than 100 citations in Google Scholar, with some receiving considerably more citations than influential articles. Similarly recognizing the importance of books, a recent study of citations in the top three IB journals revealed that scholarly books made up more than 40% of the most cited works. New cross-disciplinary concepts that address the multifaceted aspects of most societal issues rarely find a home in traditional disciplinary journals. Such contributions are most likely to be conveyed in the form of books, book chapters, or multidisciplinary journals.

Conference proceedings are more likely to communicate research in a timely fashion than are journals. Today, the growth in open-source publishing is beginning to challenge all traditional, limited-access, slow-turnaround (and often highly prestigious) journals (Cohen, 2008). The field of computer science provides a compelling, but troubling, example. Conference proceedings constitute the most important publication outlet in computer science, due primarily to the rapid advances in the field. The world’s most cited computer scientist, Hector Garcia-Molina (see http://www.cs.ucla.edu/~palsberg/h-number.html) has gathered nearly 30,000 citations in Google Scholar, with most of his papers having been published and cited in conference proceedings. In the Science Citation Index, however, Garcia-Molina only receives slightly more than 250 citations, as Thomson Reuters ISI fails to appreciate the importance of timeliness, and chooses not to recognize citations in conference proceedings. As the rate of societal change quickens, cycle times in academic publishing, which have lagged behind those in industry and technology, become crucial. In a world of instant communication in which 70 million blogs already exist and 40,000 new blogs come on line each day—the majority of which are not in English (Lanchester, 2008)—academia cannot continue to rely on a venerated journal-publishing system that considers publication delays of up to 2 years to be both acceptable and normal.


Taking a leadership position, Harvard announced in 2008 that it plans to begin publishing all research articles free on-line, so that the ideas will be immediately available to the wider public (Cohen, 2008). Rather than waiting to be published in often highly expensive, limited-circulation journals, web-based open-access publishing guarantees that new articles are available in a timely fashion to a broad audience. As Carroll (2008) indicates, the discussion about copyright is not just technical, “it reflects a difference of opinion about the value of public access to scholarly thought and research.” Will the same authors who choose web-based publishing also continue to publish in peer-reviewed journals? The answer is not clear. What is clear is that the goal of fostering knowledge by making scholarly research accessible to as many people as possible, free of cost, could well trump more arcane publishing processes, and simultaneously, the ranking systems that have relied on restricted publishing in “elite” journals. No ranking system that chooses to ignore work published on the web will remain meaningful. No scholar who chooses to limit his or her intellectual base to work that, while published in top-ranked journals, is already 2–3 years out of date, can remain relevant.

Why English Only?

If rankings are to be relevant to 21st century scholarship, they need to expand beyond work published in English. In management, the five journals consistently identified as top (AMJ, AMR, ASQ, JAP, and SMJ; see Table 1 for abbreviations) are all published in English and are dominated by the U.S. research community and its particular definition of scientific rigor (Singh, Haddad, & Chow, 2007). Singh et al. (2007: 320) suggest that editors and editorial board members of these journals “tend to emphasize technical thoroughness and refinement over the advancement of less technically developed but potentially more fundamental ideas (Swanson, 2004).” Similarly, the Thomson Reuters ISI citation indices are derived almost entirely from journals published in English. Making this bias explicit, Thomson Reuters Scientific states that “English is the universal language of science at this time in history. . . . Thomson Scientific [therefore] focuses on journals that publish full text in English or at the very least, their bibliographic information in English” (Testa, n.d.).

By contrast, Google Scholar, rarely used by those constructing rankings, includes on-line scholarship from around the world, irrespective of language of publication. Publishing in accounting illustrates the dramatically different results achieved using Google Scholar versus Thomson Reuters ISI as a data source. French accounting scholar Gérard Charreaux accumulated some 30 citations in ISI-listed journals over his lifetime, while Google Scholar credits him with over 1,000. The reason is both easy to understand and unacceptable: Most of Charreaux’s citations occur in French journals that are not ISI listed. It would be extremely unfortunate if we were to conclude (based on ISI data) that Charreaux has had no impact on his field.

Why Only These Particular Journals?

Most rankings evaluate individuals and universities based on articles published in a subset of journals. Journals in the selected subset are generally labeled as being high-impact, core, A-list, or top, based on a combination of explicit and implicit criteria. Although currently widely accepted as being valid indicators of quality, the most commonly used lists of A-level journals neither claim to comprehensively include “the best of the best” nor do they inadvertently succeed in such a task. The journals included in the FT40 and the UTD lists, for example, are merely a sample of high-quality journals; they do not even attempt to represent (let alone equitably and comprehensively include) all 13 (AACSB-defined) disciplines associated with business. While these lists include more journals from accounting and finance, they fail to list all top-quality journals within most business fields. Similar to other well-known lists, the FT40 does not reveal how the 40 journals on its list are selected; however, given that the FT40 includes both academic and practitioner journals, it is clear that top-quality scholarship is not its only criterion.

Like the broader field, a brief review of several IB studies illustrates the arbitrary nature of journal selection for academic rankings.

TABLE 1
Journal Abbreviations

Academy of Management Journal: AMJ
Academy of Management Learning and Education: AMLE
Academy of Management Review: AMR
Administrative Science Quarterly: ASQ
Asia Pacific Journal of Management: APJM
International Business Review: IBR
International Journal of HRM: IJHRM
International Marketing Review: IMR
Journal of Applied Psychology: JAP
Journal of International Business Studies: JIBS
Journal of International Marketing: JIMar
Journal of World Business: JWB
Management International Review: MIR
Strategic Management Journal: SMJ


Early studies either included only JIBS (Inkpen & Beamish, 1994) or included JIBS and JWB along with up to seven additional top-ranked functional journals such as AMJ, Journal of Finance, and Journal of Marketing (Morrison & Inkpen, 1991). More recent rankings have added two general IB journals, MIR (Kumar & Kundu, 2004) and IBR (Chan, Fung, & Lai, 2006), while dropping all journals that focus only on a single discipline (such as marketing or finance).

Whereas arguments can be made both for the inclusion of broader multidisciplinary journals and for more narrowly focused disciplinary journals, the two alternatives produce significantly different rankings, and implicitly convey different appreciations of what scholarship is most valuable. Xu, Yalcinkaya, and Seggie’s (2008) addition of two disciplinary (marketing) journals to the more standard set of core IB journals provides a good example. An analysis of publications in the two added international marketing journals (IMR and JIMar) reveals that the university that ranked first in Xu et al.’s (2008) assessment, Michigan State, out-published all other universities in these two journals, with nearly three times more articles than 2nd-ranked University of Texas and 3rd-ranked Old Dominion University. As measured by publications in these two journals, Michigan State is clearly the top university in international marketing. But is Michigan State the overall top university in IB, as implied by their place at the top of Xu et al.’s ranking? When only the more recognized core ISI-listed IB journals (JIBS, JWB and 2 years of IBR) are analyzed over the same period, the highest ranked universities are the Chinese University of Hong Kong (ranked 6th by Xu et al.), the University of Texas (ranked 12th [Dallas] and 16th [El Paso], respectively, by Xu et al.), and the University of South Carolina (tied for 17th by Xu et al.). First-ranked Michigan State drops to a tie for 6th with two other universities. Whereas each of these universities should be recognized for their contributions to scholarship, especially when viewed through the prism of journal-based productivity measures, their specific positions in the rankings are highly dependent on which journals are included and on the larger issues of breadth and multidisciplinary perspective.

Why View International Scholarship as a Subset of Domestic Scholarship?

The distinction between general IB journals and those journals focusing more narrowly on a particular discipline obfuscates a more fundamental distinction. IB research—and practice—should not be considered a subset of “general” or “mainstream” business research (which traditionally, yet implicitly, has been defined as U.S. domestic research).

Rather, domestic research is always a specific case (and thus a unique subset) of international research. Using geography as an analogy, a single country is a subset of the world, not vice versa. By excluding “mainstream” journals from IB rankings, which is still common today, IB scholars who publish in such journals are rendered invisible: their rankings as IB scholars are diminished.

Fortunately, the internationalization of the “mainstream” has already begun. Although rare in the past, IB research is increasingly published in top “mainstream” disciplinary journals (Werner & Brouthers, 2002). Using a very broad definition of international, nearly half of the articles published in AMJ, for example, are now classified as international (Kirkman & Law, 2005; see also Tung & van Witteloostuijn, 2008; Tsui, 2007; Adler & Bartholomew, 1992, and Boyacigiller & Adler, 1991). The influence of IB research published in top “mainstream” journals is also increasing. Griffith et al. (2008), for example, found that the 15 most cited IB articles in AMJ, AMR, and SMJ received nearly twice as many citations as did the 15 most cited articles in all six top IB journals. Similarly, the most highly cited works in articles published in the three leading IB journals were more likely to come from “mainstream” management journals than from IB journals. Whereas not true in the past, “mainstream” is becoming the definition of international scholarship in the 21st century.

Choosing journals on which to base a ranking clearly establishes who the field identifies as the winners and losers. Small changes in the set of journals lead to dramatic changes in the rankings. Definitional discrepancies, most strikingly between mainstream and specialist journals, and between multidisciplinary journals and those more narrowly focused on within-discipline scholarship, skew the resulting rankings to the point of meaninglessness. Moreover, as will be discussed in the following sections, the lack of clarity and consistency in assessing the distinct dimensions of quantity, quality, and impact undermines the value of all current ranking systems, no matter how broadly or narrowly defined. Both through implicit and explicit influence, competition among universities to improve their rankings-based reputations compels individual scholars to direct their research toward A-listed journals. Whereas reputations may be enhanced, scholarship suffers. Given the arbitrary nature of the range of decision criteria used to construct our current ranking systems, the field must seriously question if such metrics are capable of accurately and fairly recognizing researchers, and of supporting the types of scholarship that matter most to the world.


Assessing Quality: Journals Fail as Proxies for Article Quality

“Using journal rank as a proxy for quality can lead to substantial judgment errors: Many top articles are published in non-top journals, and many non-top articles are published in top journals.”
—Singh et al. (2007: 321)

How does one measure the quality of scholarship, and, by extension, the accomplishments of individual scholars? Management, like other disciplines, has identified a set of top journals to serve as proxies for quality (see, among many others, Singh et al., 2007; Franke, Edlund, & Oster, 1990; Johnson & Podsakoff, 1994; Podsakoff, MacKenzie, Bachrach, & Podsakoff, 2005). Many scholars argue that the quality of a journal reflects the quality of the articles published in that journal. It has become common to refer to a scholar’s worth by saying that he or she has two AMJs, three JIBSs and one ASQ. Without ever mentioning the content, quality, or impact of the article itself, the implication is that the scholar must be good.

Starbuck (2005: 180), in an extensive statistical analysis, found “that there is much overlap in [the quality of] articles in different prestige strata” of journals. Based on a comparison of the quality of articles with that of the journal in which they are published, Starbuck (2005: 197) discovered that over 50% (ranging from 29% to 77%) of articles published “in the first quintile of journals do not belong among the highest-value 20% of manuscripts.” “Although higher-prestige journals publish more high value articles, editorial selection involves considerable randomness. Highly prestigious journals publish quite a few low-value articles, low-prestige journals publish some excellent articles . . .” (Starbuck, 2005: 196). Starbuck (2005: 196) concluded that “Evaluating articles based primarily on which journal published them is more likely than not to yield incorrect assessments of the articles’ value.”

Based on an analysis of 7 years of citations to every article in 34 top management journals published in 1993 and 1996, Singh et al. (2007: 327) drew the same inescapable conclusion: “[U]sing journal ranking . . . can lead to substantial misclassification of individual articles and, by extension, the performance of the faculty members who authored them.” Type I and Type II errors are rampant. Based on Singh et al.’s (2007) use of the median number of citations in the top 34 management journals as a threshold, nearly half of the articles published in the top-34 management journals failed to meet the standards for being classified as top articles (48% in 1993 and 45% in 1996). If the mean, rather than the median, number of citations had been used, the proportion of non-top articles published in the top-34 management journals would have risen to over two thirds (68% in 1993 and 69% in 1996). The consequences of using journal quality as a proxy for article quality are a matter of concern for both the field and individual scholars.
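One reason the mean threshold reclassifies so many more articles is that citation counts are strongly right-skewed, so the mean sits well above the median. The short sketch below uses an invented citation distribution, not Singh et al.’s data, simply to illustrate the mechanism.

```python
import statistics

# Invented, right-skewed citation counts for 12 articles; not Singh et al.'s (2007) data.
citations = [0, 1, 1, 2, 3, 4, 5, 7, 10, 18, 40, 95]

median_cutoff = statistics.median(citations)  # 4.5
mean_cutoff = statistics.mean(citations)      # 15.5 (pulled up by the few highly cited articles)

non_top_by_median = sum(c < median_cutoff for c in citations)  # 6 of 12, roughly half by construction
non_top_by_mean = sum(c < mean_cutoff for c in citations)      # 9 of 12

print(median_cutoff, mean_cutoff, non_top_by_median, non_top_by_mean)
```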

Based on their research, Singh et al. (2007: 319) warn that “both administrators and the management discipline will be well served by efforts to evaluate each article on its own merits rather than abdicate this responsibility by using journal ranking as a proxy for quality.” This is a particularly important recommendation today when an increasing number of universities either require publications in the top three to five journals in a scholar’s discipline or completely disregard publications in journals outside of those few identified as “top” (Singh et al., 2007; Van Fleet, McWilliams, & Siegel, 2000; AACSB, 2003). Whereas there may be reasons to use journal rank as one indicator of article quality, there is no reason to use it as the only or even the best measure.

Lawrence (2003: 261) unambiguously recommends that “we can all start to improve things by toning down our obsession with the journal. The most effective change by far would be . . . to place much less trust in a quantitative audit that reeks of false precision.” Lawrence (2002: 835) urges academia to “stop measuring success by where scientists publish and [to] use different criteria, such as whether the work has turned out to be original, illuminating and correct.” Starbuck (2005: 196) likewise concludes that those evaluating scholars for promotion and tenure need to stop ignoring the randomness of article placement in journals, and more importantly, stop basing evaluations “on one myopic measure.” Bennis and O’Toole (2005) similarly worry that the current emphasis on journal rankings is directly responsible for retarding the publication of relevant management knowledge. Scholars seeking to publish in top journals “tend to tailor their choice of topics, methods, and theories to the perceived tastes of these journals’ gatekeepers. A likely result . . . is stagnation in the advancement of management knowledge and a disconnection from the needs of the business community” (Bennis & O’Toole, 2005).


Assessing Influence: Being Prolific Doesn’t Guarantee Impact

Do “we now consider the journal to be more important than the scientific message”?
—Peter Lawrence (2003: 259)

The confusion between productivity (number of publications) and impact (citation counts) provides another conundrum. The two dimensions are different and yet they are rarely viewed as distinct or appropriately weighted in rankings. For example, of the ten most cited articles in JIBS between 1996 and 2006, only two were written by IB scholars identified as most prolific by Xu et al. (2008). Peng and Zhou’s (2006) work on global strategy research and Harzing’s (2005) research on Australian academics similarly concluded that being prolific does not necessarily equate with having an impact.

Most rankings also falsely assume that having an impact is based on publications in top journals. To expose this illusion, we use our own records. Among Adler’s more than 100 publications, her single most cited work—with well over 2,000 citations in Google Scholar—is not a journal article, but a book, International Dimensions of Organizational Behavior (2008c). Similarly, none of Harzing’s three most cited publications are articles in A-listed journals. All three—an edited textbook (with over 300 citations), a research monograph (with over 150 citations), and an article in IJHRM¹ (with over 130 citations)—would be rendered invisible by the currently used, ISI-based assessment systems. Such occurrences, often falsely labeled as aberrations, again force us to question if the metrics we are using are capable of accurately and equitably recognizing significant research or supporting research that would matter most to society.

¹ Although IJHRM is now ISI-listed, it was not at the time of publication.

Choice of Time Period: Finding Logic Beyond the Arbitrary

Another crucial aspect of rankings is the time period on which they are based. To date, there has been little discussion and no consensus as to what constitutes an appropriate time period on which to base rankings (especially when assessing universities). Few studies provide a substantive justification for the time period(s) studied.

Several IB researchers (Inkpen & Beamish, 1994; Kumar & Kundu, 2004; Chan et al., 2006), for example, have attempted to examine developments in the field by splitting an overall time period into subperiods. While potentially interesting as an indicator of change, and perhaps progress, such subperiod comparisons usually lack sufficient stability to provide valid results, primarily due to relatively few within-period observations. These comparisons are further confounded by the many scholars who change institutions within the studied time frames.

Rankings are problematic if they are based on relatively few observations (as when a single journal is studied or the time period is short). Inkpen and Beamish’s (1994) institutional-ranking study provides an example of this potential instability. The authors chose to divide their 25-year study into two periods (1970–1982 and 1983–1994). The rankings produced by these unequal time periods were substantially different. Based on adjusted numbers of publications, Columbia University ranked 1st in the initial period, but only 27th in the later period. The University of Wisconsin, which ranked 3rd in the initial period, failed to have a single publication in the later period. The University of South Carolina, which ranked 1st by a wide margin in the later period, only tied for 8th in the earlier period. The University of Western Ontario ranked 2nd in the later period, but was 2nd to last in the earlier period.

Inkpen and Beamish are not alone in having uncovered instability. Chan et al. (2006), for example, in splitting their time frame into two 5-year periods (1995–1999 and 2000–2004), also discovered significant variability. The University of South Carolina, for example, improved its ranking from 30th in the initial period to 3rd in the subsequent period; The University of Western Ontario dropped from 3rd in the initial period to 36th in the ensuing period. The University of Texas at Austin dropped dramatically from 10th to 155th, while The University of Hong Kong gained 100 places, rising from 121st to 21st place. Such dramatic instability forces us to question the extent to which such rankings are meaningful.

Reviewing institutional rankings not only reveals instability across arbitrarily set time periods, it also exposes a more fundamental problem. Rankings reported as institutional generally only reflect the productivity of each university’s most prolific scholar(s). To highlight just one of many examples, in Kumar and Kundu’s (2004) decade-long study, The University of Bradford, the highest ranked non-North American institution, placed 6th out of 40 in the initial 5-year period (1991–1995), and yet failed to make the top 40 after 1995. Rather than being an aberration, this striking result most likely reflects prolific author Peter Buckley’s move from Bradford to Leeds in the second half of 1995. Not surprisingly, given Buckley’s move, the University of Leeds appeared from seemingly out of nowhere to be ranked 12th in the later 5-year period.

Whereas these radically fluctuating results in part reflect the expected variability within most disciplines, they more ominously expose the flaws in the underlying ranking systems. In most cases, rankings that purport to measure universities’ productivity are actually false aggregations from the individual to the institutional level, rather than a measure of the university’s current or future ability to contribute to the discipline or society. Data with such extreme variability cannot reliably be used to meaningfully rank anything, and certainly should not be used to assess the worth, or lack thereof, of one university relative to another.

Weighting the Data: A Confounded and Contentious Business

“As so often happens with indicators of performance, the indicator has become the target . . . All but forgotten in the desperation to win the game is publication as a means of communicating research findings for the public benefit.”
—Macdonald & Kam (2007: 702)

Those who construct ranking systems must not only choose which categories of publications to include, but also decide on how to weight publications based on journal prestige and authorship criteria.

Prestige: What Is Most Valuable?

Is publication in a field’s top-ranked journal(s) more valuable than publication in a journal of lesser rank? Some IB scholars, for example, argue that an article in JIBS is more valuable, and therefore, should be more heavily weighted, than articles in other journals. Both Dubois and Reeb’s (2000) ranking and Harzing and van der Wal’s (2008a) citation-based analyses demonstrate that JIBS is in a league by itself in terms of influence in international business. JIBS’s elite status is further confirmed by its standing as the only IB journal recognized as “mainstream” in the most prominent lists of A-level management journals (i.e., in the FT40 and UTD lists). Based on its 2007 ISI journal impact factor, JIBS was ranked as one of the top-7 business journals in 2008.

To date, no study has chosen to employ a graduated, prominence-based weighting system for its rankings. Most current rankings continue to be winner-take-all systems, with authors receiving equal credit for articles published in selected “A-list” journals and receiving no credit for articles published in nonlisted journals or for books, book chapters, or conference proceedings. As a field, we must ask if such binary metrics (on or off the A-list) enhance or detract from the discipline’s ability to recognize and support research that matters or even to accurately and equitably assess the contributions of individual scholars and institutions.

Invisibility: Research Published in New, Innovative and Specialized Journals

Many subdisciplines are underrepresented in the databases used to calculate the most commonly used, and supposedly “objective,” measure of journal “quality”: the ISI journal impact factor (which actually measures influence, not quality). This is particularly true for journals in accounting, management, marketing, strategy, and IB (Harzing & van der Wal, 2008b).

Perhaps more important, new (often highly innovative) journals are systematically excluded from the rankings. The ISI Social Science Citation Index enforces a mandatory 3-year “waiting period” for all new journals. Once accepted for inclusion, there is a subsequent 3-year “study period.” Thus, at the earliest, a journal can receive its first official impact factor (IF) in the Journal Citation Reports 6 years after its inception.

Because articles published in new journals remain invisible to most citation indices, they also remain invisible to almost all ranking systems. Such invisibility dramatically skews scholarship, as it implicitly encourages conservative research that asks familiar questions using accepted methodologies rather than research addressing new, often controversial questions that are investigated using innovative methodologies. This bias is particularly unfortunate today, when understanding the rapid, discontinuous changes that characterize business and society demands constant innovation. The contention that much academic research is rigorous but irrelevant is fostered by what the existing ranking systems leave out.



The consequences of the current winner-takes-all system and its explicit bias against new journals can be appreciated through the experience of one of this article’s authors. Adler published an article in 2006 in Academy of Management Learning and Education (AMLE). Launched in 2002, AMLE only qualified to be included in the Thomson Reuters ISI Journal Citation Reports in 2008. After Adler’s article was selected as one of the top-3 articles published by AMLE, she was “amused” to learn that her university’s 2007 Merit Committee had chosen not to credit her with the publication, and had docked her pay accordingly. And yet, the 2007 impact factor for this new journal (which was first released in June 2008) was 2.796, ranking it 7th in management, in the same league with such consistently A-listed journals as Strategic Management Journal (2.829) and Administrative Science Quarterly (2.912).² Only after presenting the Merit Committee with an “as if” ISI impact factor for this new and at that time as-yet-unrated journal did the committee suddenly become capable of “seeing” the article and appreciating its value; only then did they grudgingly grant the legitimacy of a merit-based pay raise. Such systems make it dangerous, especially from the perspective of pay, promotion, tenure, grants, and prestige, to publish research, no matter how good, in new and innovative journals. It is therefore difficult to understand how such a metric could do anything other than undermine the system’s ability to render accurate and fair assessments or to support research that matters.

Weighting Single Versus Multiauthored Articles: No, It Is Not All the Same

Another decision is whether to assign equal weight to authors of single- versus multiauthored articles. Whereas there are advantages to both approaches, bibliometricians now generally agree that fractional equivalents are more equitable and appropriate (Gauffriau & Larsen, 2005). Fractional equivalents are calculated by assigning 1/n, where n is the number of authors, for each article an author has written. Fractional equivalents are now favored in most, but not all, ranking studies, and an increasing number of universities recognize that promotion-and-tenure decisions are most justifiably based on fractional equivalents rather than simple article counting.
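To make the 1/n convention concrete, the sketch below compares whole counting with fractional equivalents for a small set of papers; the author names and paper list are hypothetical, not data from any study cited here.

```python
from collections import defaultdict

# Hypothetical publication records: (paper id, list of authors).
# Invented for illustration only; not data from any of the studies cited above.
papers = [
    ("P1", ["Alvarez"]),
    ("P2", ["Alvarez", "Baker"]),
    ("P3", ["Alvarez", "Baker", "Chen"]),
    ("P4", ["Chen"]),
]

total_counts = defaultdict(int)         # whole counting: 1 per paper for every author
fractional_counts = defaultdict(float)  # fractional counting: 1/n per paper, n = number of authors

for _, authors in papers:
    share = 1.0 / len(authors)
    for author in authors:
        total_counts[author] += 1
        fractional_counts[author] += share

for author in sorted(total_counts):
    print(f"{author}: total = {total_counts[author]}, fractional = {fractional_counts[author]:.2f}")
# Alvarez: total = 3, fractional = 1.83
# Baker: total = 2, fractional = 0.83
# Chen: total = 2, fractional = 1.33
```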

Overall, differences in productivity assessments among universities are generally smaller when fractional equivalents are used. In the most recent IB ranking, for example, Xu et al.’s (2008) use of fractional equivalents for authors, but total publication counts for institutions, illustrates how such an unconventional choice can skew and potentially confound the rankings. As shown in Figure 1, compared with Xu et al.’s approach, using fractional weightings would have significantly reduced the lead of 1st-ranked Michigan State University over 2nd-ranked Leeds University. The reason is evident: Michigan State has the highest average number of authors per article (2.5) among the top-10 ranked universities. Similarly, with fractional weightings, The University of Reading would move up from a tie for 4th to 3rd place, reflecting Reading’s tendency toward single-authored articles (average: 1.4 authors per article). More dramatically, The University of Miami would move up from 9th to 5th place, primarily due to the large number of single-authored publications by Miami’s one prolific IB scholar, Yadong Luo.
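The same logic extends to institutions: whether articles are counted whole or fractionally can reorder a ranking. The sketch below uses invented institutions and author counts, not the Xu et al. (2008) data, and assumes for simplicity that all authors of an article share one affiliation.

```python
from collections import defaultdict

# Invented articles: (institution, number of authors). Not the Xu et al. (2008) data set;
# for simplicity, every author of an article is assumed to share one affiliation.
articles = [
    ("University A", 3), ("University A", 3), ("University A", 2),
    ("University B", 1), ("University B", 1),
    ("University C", 2), ("University C", 1),
]

total = defaultdict(int)
fractional = defaultdict(float)
for institution, n_authors in articles:
    total[institution] += 1                   # whole counting: every article counts as 1
    fractional[institution] += 1 / n_authors  # fractional counting: 1/n per article

print(sorted(total, key=total.get, reverse=True))
# ['University A', 'University B', 'University C']  (A leads on raw article counts)
print(sorted(fractional, key=fractional.get, reverse=True))
# ['University B', 'University C', 'University A']  (A falls back once co-authorship is discounted)
```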

Weighting the First Author: Recognizing Leadership

A similar decision is whether to give more weight to the first author of a multiauthored article. Most ranking studies, unfortunately, continue to systematically ignore whether an author of a multiauthored article is listed first. Authors (and their affiliated universities) who are frequently listed last on multiauthored articles therefore could be viewed as prolific, even though they have never provided research leadership. At the individual level, this is particularly serious as all advances in scholarship, including frame-breaking research, depend on individuals assuming leadership. While it can be argued that this problem manifests itself only at the individual level, as aggregating and averaging across individuals at the institutional level mitigates the effect, this reasoning is flawed, as institutions vary widely in the average number of authors per published article. Given the arbitrariness of current weighting decisions, we must question how meaningful the resulting rankings are, or could ever be.

² The journal impact factor, or JIF, is the mean number of citations received in a particular year to articles published in a journal in the preceding 2 years. Thomson Reuters ISI does not make the exact formula for calculating JIFs public. AMLE’s reported JIF was within .033 of that of SMJ and .116 of ASQ, meaning that approximately only one in 30 articles in SMJ and one in 9 articles in ASQ received one additional citation. Many observers would consider such differences to be negligible.
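As a rough illustration of this two-year arithmetic, the sketch below divides the citations a journal received in one year to items from the two preceding years by the number of items published in those years. The counts are invented, and, as the note above says, Thomson Reuters’ exact procedure (including which items count as citable) is not public.

```python
# Hypothetical counts for one journal in the 2007 JCR year; invented numbers, and only
# the basic arithmetic, since the exact ISI calculation rules are not public.
items_2005 = 40                   # citable items published in 2005
items_2006 = 45                   # citable items published in 2006
citations_in_2007_to_those = 238  # citations received during 2007 to those items

jif_2007 = citations_in_2007_to_those / (items_2005 + items_2006)
print(round(jif_2007, 3))  # 2.8
```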


Aggregating to the Institutional Level: Institutional Rankings Don’t Reflect What They Purport to Reflect

A final important decision, specifically for institutional rankings, is the way in which individual productivity is aggregated to the institutional level. We have seen striking examples of how the move of a single prolific scholar, such as Luo or Buckley, can catapult a university into or out of the top ranks. Of the 81 institutions listed in the most recent IB rankings, for example, less than a quarter have more than one prolific author (Xu et al., 2008). The nomadic behavior that is commonplace among prolific scholars makes institutional rankings unstable at best and meaningless at worst. In addition to the problems caused by prolific nomads, at least five other serious challenges threaten the validity of institutional rankings.

Multiple Name Variants Undermine Rankings: A Transnational Challenge

Today, just at the time when scholars most need to understand research contributions from around the world, name-variant biases systematically undervalue the work of scholars located in non-English-speaking countries. Scholars working in languages other than English frequently publish under multiple name variants of their university or department. Researchers affiliated with Wirtschaftsuniversität Wien, for example, may publish under either their university’s German or English name (Vienna University of Economics and Business Administration). Ranking systems, however, often do not recognize the two names as representing the same institution. Mangematin and Baden-Fuller’s (2008) recent ranking used a time-consuming manual verification process to ensure correct affiliation and aggregation. Most other rankings skip this inefficient process and rely on raw ISI or Google Scholar data. As a result, they risk seriously underestimating the contributions of universities with multiple, non-English names. The higher ranking of Asian and European institutions in Mangematin and Baden-Fuller’s (2008) study is likely due in part to their more careful multilingual-affiliation-attribution methodology. The challenge is to ensure that scholarship from around the world is not only fairly and accurately reflected in the rankings, but also, and more important, that it is accessible and recognized as valuable. Given today’s global business and societal dynamics, the illusion that only English-speaking scholars and institutions produce valuable, trend-setting research is completely misleading, and ultimately dysfunctional.
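One common remedy is to normalize affiliation strings against a hand-maintained alias table before aggregating, along the lines of the purely illustrative sketch below; the alias list is hypothetical and far from exhaustive, and is not how any of the cited studies implemented their verification.

```python
# Hypothetical alias table mapping affiliation strings, as they might appear in
# bibliographic records, to a single canonical institution name.
ALIASES = {
    "wirtschaftsuniversitat wien": "Vienna University of Economics and Business Administration",
    "wirtschaftsuniversität wien": "Vienna University of Economics and Business Administration",
    "vienna university of economics and business administration":
        "Vienna University of Economics and Business Administration",
}

def canonical_institution(raw_affiliation: str) -> str:
    """Return the canonical name if the string is a known variant, else the string itself."""
    return ALIASES.get(raw_affiliation.strip().lower(), raw_affiliation.strip())

print(canonical_institution("Wirtschaftsuniversität Wien"))
# Vienna University of Economics and Business Administration
```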

Who Gets Credit? Aggregating Across Multiple Campuses of the Same University

Are campus-by-campus or overall-university rankings more appropriate at universities with multiple campuses? As yet, this question has no agreed-upon answer. Xu et al.’s (2008) IB ranking provides an example of the complexity.

FIGURE 1
Top-10 Institutional Ranking Based on Total Versus Fractional Number of Articles
(number of papers: total / fractional)
Michigan State University: 66 / 26.6
University of Leeds: 50 / 25.6
Rutgers University: 33 / 20.7
Temple University: 29 / 14.8
University of Reading: 29 / 21.2
Chinese University of Hong Kong: 26 / 11.9
Uppsala University: 25 / 12.1
Melbourne Business School: 24 / 11.8
University of Miami: 23 / 16.8
New York University: 22 / 10.2


Although Thomson Reuters ISI’s Web of Knowledge aggregates all University of Texas campuses, Xu et al. (2008) separated them. As a result, The University of Texas at Dallas ranked 12th and The University of Texas at El Paso ranked 16th. If these two campuses had been combined, The University of Texas would move up to 4th place. The University of Melbourne also appears twice in Xu et al.’s (2008) ranking, once as Melbourne Business School (MBS) and a second time as The University of Melbourne (referring to Melbourne’s Department of Management and Marketing in the Faculty of Economics and Commerce). Whereas rankers could easily make a case for separating the two departments, The University of Melbourne would obviously prefer that they be combined, arguing that the University houses one major IB center with professors working at two locations. Using the University’s preferred approach, Melbourne would move up from 8th to a shared 4th place, and could rightfully claim to be the highest ranked university in the Asia Pacific region. The inherent problem with such influential, and often politicized, decisions is that they render the rankings arbitrary, and thus meaningless.

Who Gets Credit? Scholars With Multiple Affiliations

Another aggregation conundrum is how to assign credit for academics with multiple affiliations. Surprisingly, most rankings credit the full number of articles published to each institution with which an author is affiliated. A more equitable system than double counting would favor the same type of weighting scheme as is used to assign partial credit to the multiple authors of a single article. However, focusing on how to equitably credit universities with the productivity of scholars with multiple affiliations masks an underlying, and much more serious, problem. For many universities, the rankings have unfortunately—but, from certain perspectives, understandably—become an end in themselves. One need only observe the recent pattern of increasingly frequent "visiting professorships" to recognize this dysfunctional dynamic. Some universities now offer extremely generous packages to highly productive scholars to attract them as special visiting professors. The only proviso is that the visiting scholars agree to list their new dual affiliation on all their publications, even if they spend only a few weeks each year at the host university.

How does the ability to appropriate a scholar's productivity affect that scholar's choice of research topic and focus? Does it support or undermine scholarship that matters? Can "bought productivity" meaningfully be handled in the rankings? How should the field respond if we see a further escalation of affiliations, in which particularly prolific scholars hold five or ten simultaneous appointments? Should guest-scholars who only spend a short time at a host university have their publications used to enhance that university's ranking, and thus its reputation? What algorithm for fractional weighting would be fair? Without addressing these questions, institutional rankings will continue to move toward meaninglessness (if they are not already there), and individual scholars will increasingly be drawn toward research programs that are most likely to produce the most enticing set of multiple affiliations. In Merton's (1936: 897) terms, this is a clear case of a dysfunctional "unintended consequence," or, in Kerr's (1975: 769) terms, a case of having fallen prey to the "folly" of "rewarding A [publications in a narrow set of top-listed journals] while hoping for B [scholarship that addresses the questions that matter most to society]."
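To make the fractional-weighting question concrete, the following minimal Python sketch shows one possible rule (our illustration only, not a scheme used by any of the rankings discussed here): each article carries a single unit of credit, split equally across its authors, and each author's share is split equally across the affiliations he or she lists.

    from collections import defaultdict

    def fractional_credit(articles):
        """Split one unit of credit per article across author-affiliation pairs.

        `articles` is a list of papers; each paper is a list of authors; each
        author is represented by the list of institutions listed on that paper
        (a hypothetical input format, for illustration only).
        """
        scores = defaultdict(float)
        for authors in articles:
            author_share = 1.0 / len(authors)             # equal split across co-authors
            for affiliations in authors:
                share = author_share / len(affiliations)  # equal split across listed affiliations
                for institution in affiliations:
                    scores[institution] += share
        return dict(scores)

    # Example: one single-authored paper listing a dual affiliation, and one
    # two-author paper with a single affiliation each.
    papers = [
        [["University A", "University B"]],
        [["University A"], ["University C"]],
    ]
    print(fractional_credit(papers))
    # {'University A': 1.0, 'University B': 0.5, 'University C': 0.5}

Under the full ("total") counting that most rankings currently use, each institution above would instead be credited with a whole article for every paper on which it appears at all, which is exactly the double counting, and the incentive to collect affiliations, described above.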

Who Gets Credit? Current University or University at Time of Publication?

A similar conundrum is how to assign university affiliation: based on the scholar's current affiliation or on the affiliation at the time a paper was written and/or published. Unlike most economics rankings (Macri & Sinha, 2006), most IB rankings credit the scholar's university at the time of publication. Using current affiliation purports to identify institutions that are most likely to produce the greatest academic impact in the future. However, it implicitly assumes that prolific authors will cease to be nomads and will not move on to yet another university. The choice of how to assign credit has important implications for the majority of universities with few prolific researchers. The University of Miami, for example, would not have achieved a high ranking from Xu et al. (2008), who chose to use current affiliation, if Luo, Miami's only prolific IB researcher, had had his earlier publications, produced while he was in Hawaii, counted toward Hawaii instead of Miami. The central issue, of course, is whether such rankings, based on scholars' nomadic behavior, are meaningful, let alone helpful.

Inclusivity: All Authors or Just the Most Prolific?

A final decision is whether to include all scholars working at a given university or just the most prolific.


Whereas one could certainly make an argument for focusing only on scholars who have published a substantial body of work, such a choice would not reflect the breadth and stability of researchers that most universities need to maintain major research programs (and, by consequence, their place in the rankings). Xu et al.'s (2008) inclusion of only prolific authors highlights the risk of such metrics producing highly idiosyncratic rankings. Although Wharton ranked 4th and INSEAD 6th, based on the total number of articles published in JIBS for the assessment period, neither institution appeared among the top 81 universities when Xu et al. included only the publications of prolific authors. Such variability underscores the fact that, given the "right" choice of journal(s), time periods, and rules for weighting and inclusion, almost any university can be crowned a winner or relegated to loser status. When any level of ranking becomes possible, no ranking remains meaningful.

DISCUSSION

". . . when we, as academics, plead powerlessness in choosing what we research . . . because of incentive and reward systems . . . , we dehumanize our careers and our lives."

—Sara L. Rynes, Editor-in-Chief, Academy of Management Journal (2007b: 747)

The proliferation of, and increasing reliance on, volatile, arbitrary assessment systems portend an ominous future for academic rankings and their ability either to accurately and equitably assess individual or institutional performance or to recognize and support scholarship that matters. Based on the present situation, we recommend that academia (a) institute a temporary moratorium on institutional rankings; (b) attempt to better understand and subsequently address the macrolevel dynamics that implicitly collude in keeping such dysfunctional ranking systems in place; (c) redesign individual rankings to render them more globally inclusive, accurate, and equitable; and (d) create environments that foster and appreciate excellent scholarship on the questions that matter most to business and society. In all four areas, scholars and institutional actors from around the world are invited to offer their global and multidisciplinary insight and experience to help design and implement approaches that will be more effective than the leftover 20th-century systems that, unfortunately, remain in place today.

A Temporary Moratorium on Institutional Rankings

We recommend instituting an immediate, temporary moratorium on institutional rankings. With universities increasingly resorting to buying resumes and paying significant publication bonuses (at times exceeding $20,000 per A-listed journal article), academia needs to guard against rewarding tactics designed primarily to score well on national and international rankings. To witness the dysfunction of the current system, one need only consider the academic job market in the United Kingdom immediately prior to the 2008 Research Assessment Exercise, when the scramble for rankings eclipsed scholarly purpose. The UK job market was likened to a transfer market for soccer players, with top scholars rumored to have accepted their next position even before joining their prior "new" institution. It is not just that ranking systems are inconsistent, volatile, and in many ways inherently unfair; it is also that the motivation systems they engender—including encouraging blatant individual self-interest and a consequent lack of loyalty to any particular university or broader societal mission—undermine the very essence of good scholarship. In addition, such motivation systems lead universities to systematically and corrosively undervalue the importance of teaching and learning. Given the rewards for A-listed publications, they subtly, and not-so-subtly, pressure professors to minimize their commitment to teaching and maximize the time they spend on research activities that generate the highest reputational and financial rewards. Rather than continuing to defend the current system, universities and granting agencies should invest the same time, money, reputation, and energy in designing systems that broadly encourage learning and education through the creation and dissemination of research that matters. In the following sections, we offer a way to understand and approach this transformation.


Understanding the Embedded and Mutually Reinforcing Nature of Academic Ranking Systems: An Institutional Perspective

Given the blatant dysfunction of current ranking systems, why do they continue to exist?


The answer, of course, is that academic rankings are not isolated phenomena. They are embedded within mutually reinforcing organizational and societal environments. Viewed through the lens of institutional theory (DiMaggio & Powell, 1983; Scott, 1987; Westney, 2005; and Zucker, 1987, among others), academic rankings can be understood to persist because the individuals and organizations that rely on them "adopt patterns that are externally defined as appropriate to their environments, and that are reinforced in their interactions with other organizations" (Westney, 2005: 47).

According to institutional theory, "Organizations, and individuals within organizations, are moved toward isomorphism, the adoption of structures and processes prevailing in other organizations within the relevant environment" (Westney, 2005: 48). Among the network of institutions supporting academic rankings, there appears to be a very high degree of isomorphism. Dominant institutional players form what is referred to as a well-organized field in which each player is influenced to adhere to similar, mutually reinforcing types of ranking and assessment behaviors. Viewed from this perspective, the current system of academic rankings predictably emerges as an anthropological artifact from within a well-organized field.

Institutional theory allows us to appreciate the highly entrenched, self-reinforcing network of influences and relationships that embed the system of academic rankings and render it, among other things, extremely difficult for isolated individuals and organizations to change. Isomorphism abounds in the systems supporting academic rankings, with the behavior of each category of institution insidiously determining the behavior of organizations in most other categories. There is a very high degree of institutional alignment supporting the current pattern of academic rankings—an alignment that, unfortunately, has evolved into a form of often subconscious, dysfunctional collusion. Major institutional players engage in mutually reinforcing roles. The dominant dynamics reifying the current system include (a) the need for assessment systems to appear accurate, objective, and fair; (b) the desire for systems that will not overwhelm the time constraints of the already extremely busy adjudicators; (c) the quest for credibility and prestige along with the benefits each brings; and (d), given ever-intensifying market-driven competitive pressures, the need to publicly distinguish oneself and one's institution. Below is a range of institutional examples highlighting the ways in which the network of organizations that influence academic rankings has adopted "patterns that are externally defined as appropriate to their environments, and that are reinforced in their interactions with other organizations" (Westney, 2005: 47); that is, how the network of institutions involved in academic rankings exhibits the high levels of isomorphism commonly found in highly collusive, well-organized fields.

Research-Granting Agencies

Primarily due to the very public nature of the funds they distribute, national and international research-granting agencies know that they must respond to the public's insistence that their money be well spent. In most cases, they respond by claiming "scientific" (meaning to them "quantifiable") objectivity—what March (1996: 286) would refer to as "magic numbers." The agencies also know that they must be responsive to the demands for an efficient process from the tremendously busy grant adjudicators (most of whom serve as unpaid professional volunteers). Counting publications in A-listed journals meets both criteria; it is an easily understood and quickly quantifiable metric that appears objective to the broader public.

Further reinforcing the current system is the proclivity of granting agencies to select prior grant winners to adjudicate current grant proposals—scholars who, not surprisingly, are most inclined to support the very rules that led each of them to receive their own, frequently sizable, grants. The winners in a particular system, having reaped the benefits of that system (in this case, the money, prestige, and career advancement that come with receiving research grants), are most likely to consciously or subconsciously internalize the system's underlying assumptions and values, and are, therefore, least likely to question the efficacy of the system they have publicly benefited from and have agreed to uphold. More broadly, many scholars who need continued funding for their own research fear the consequences of rejection if they question or deviate from established norms.

University Promotion-and-Tenure (P&T) Committees

P&T committees operate under pressures similar to those influencing research-grant agencies. P&T committees are generally staffed by "busy volunteers" (professors who generally are not given extra pay for the hours they spend reviewing their colleagues' files), who seek ways to evaluate candidates that appear (to them and to others) objective and fair and which do not consume inordinate amounts of time.


Counting the number of publications a candidate has in A-listed journals (as opposed to the much more time-intensive and inherently subjective process of reading and evaluating each candidate's entire portfolio of work) superficially achieves both requirements. Assessing books is particularly problematic, as they are generally too long to reasonably expect most committee members to read. Moreover, even the presses recognized for publishing books with some of the best scholarship are not consistent in their review and selection procedures. In addition, given that in most cases the same system anointed the members of the P&T committee themselves with the prestige, career security, and financial rewards of promotion and tenure, they are unlikely to judge the system they personally benefited from as illegitimate.

The isomorphism becomes equally evident when one considers that many P&T committees, as well as annual-merit-review and selection-and-hiring committees, include professors' research-grant records as one of the indicators of academic performance. Having adhered to very similar underlying assumptions, most P&T committees have no trouble accepting the decisions of the research-granting agencies as legitimate input into their own decision-making processes.

In its more pernicious form, senior faculty members support their own status and control the behavior of junior colleagues by forming their own A-lists that, not by coincidence, often include journals in which they personally publish, serve as editorial board members, and/or support the publication records of favored candidates. The authors learned of a revealing example of this apparent behavior at a management school in a top-ranked research university in North America. At this particular university, a journal that had had an impact factor below 0.5 for the first 4 years of the decade was included on the university's newly created 2008 A-list of management journals. A JIF below 0.50 means that, on average, articles in this newly A-listed journal received fewer than one citation per two articles from publications in ISI-listed journals in the 2 years following publication. Over the past 2 years, the newly included journal's JIF has risen markedly, peaking recently at approximately 2.5. Thanks to the journal's higher JIF, it had achieved top ranking in its category in each of the past 2 years, after having hovered around 40th for much of the earlier period. Such variability is not only one of the inherent problems with using the JIF to assess individuals, journals, and universities, but also one of the reasons that such metrics are so vulnerable to manipulation.
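For readers unfamiliar with how the metric is constructed, the 2-year journal impact factor is, in essence, a simple ratio; nothing in the calculation distinguishes journal self-citations or strategically encouraged citations from any other kind:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]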

Whereas the above description initially appears to present a success story (and therefore an appropriate choice for citation-based A-lists worldwide), closer inspection reveals that much of the increase in the newly listed journal's JIF appears to have come from an extremely rapid increase in journal self-citations; that is, from articles in the journal citing other articles in the same journal. While the average proportion of self-citations in the newly listed journal was 13% in the earlier period, it rose to 65% in the later period. In comparison, an established, highly ranked journal in the same category had a self-citation rate of 7% for the earlier period and 13% for the later period. The newly listed journal thus had five times the proportion of self-citations as the more established journal. Although the specific reason for the escalation in self-citations is not publicly available, such dramatic increases are often symptomatic of the editor and/or editorial board members playing the rankings game; that is, strongly encouraging authors during the review process to cite other articles in the same journal. Although researchers have yet to document how pervasive the practice of requiring journal self-citation is as a subtle or explicit precondition for publication, informal reports of such behavior are increasing. The fact that none of the research-committee members who selected this particular journal for the university's new 2008 A-list questioned the process exposes the power of the current norms, the isomorphic pressures, and the institutionalized vulnerability to manipulation.
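The self-citation proportions cited above are presumably computed along the following standard lines (stated here for clarity, not taken from the original ranking data), over the same citation window used for the JIF:

\[
\text{self-citation rate of journal } J \;=\; \frac{\text{citations from articles in } J \text{ to other articles in } J}{\text{total citations received by } J}
\]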

Academic Journals

Journals also benefit from winning in the rankings, as being labeled as A-level confers prestige on the journal and its editor. The flurry of congratulatory e-mail exchanged among editorial board members when their journal receives a high JIF exposes the underlying dynamic. Similar, but more disturbing, is the pressure exerted on editors to explain drops in their journal's JIF, even when their editorial policy explicitly targets broader populations than just academics. For example, the editor of Human Resource Management, a journal whose goal is to publish high-quality research with an impact on practice, found herself required to explain why HRM's JIF had dropped from a miscalculated (see Harzing & van der Wal, 2008b) average of 2.0 in 2002–2006 to 0.64 in 2007.

An equally important dynamic, given the pressure on scholars to publish in A-listed journals, is that the journals achieving A-level status attract ever-increasing numbers of submissions.


The large number of submissions relative to available space for publication leads to high rejection rates, which is yet another measure frequently used to grant status to journals and their editors. Success, as recognized by the avalanche of submissions, presents editors with challenges similar to those faced by research-grant agencies and P&T committees. How do editors and their editorial board members appropriately assess the increasing volume of submitted manuscripts? Given the public nature of acceptance-versus-rejection decisions, their career-defining consequences for individual authors, and status-defining consequences for the authors' universities, along with the unpaid ("professional volunteer") nature of academic-journal reviewing, the pressure has mounted to use assessment methods that appear accurate, objective, and fair, but are not too time consuming. Unfortunately, as has been documented previously, the output of the current manuscript-reviewing system is inconsistent (including relative to impact): Some articles that are published in A-listed journals receive relatively few citations, whereas a proportion of the articles rejected by A-listed journals and published in lower ranked journals eventually receive a substantial number of citations. Thus the input into P&T and research-grant assessments is seriously flawed.

In addition, and perhaps more serious in its consequences, since A-listed journal status is generally conferred based on journal impact factor (a flawed measure of influence) and not on any measure of quality, journal editors and their editorial board members are tempted, if only subconsciously, to favor those articles that are most likely to be cited (as opposed to those qualitatively deemed to be the best or most important). Having personally been overwhelmed by the number of review requests, including having once received 24 requests in one month, we are sympathetic to the pressures on journal editors and editorial board members in terms of perceived objectivity, efficiency, quality, and impact. The field, however, needs to find better ways to address the challenges these pressures present.

Academic Journal Publishers

Publishers also implicitly collude with the chase for rankings, but based on slightly different pressures than those experienced by editors and editorial board members. For publishers, achieving high rankings is primarily a marketing tool. Many publishers of highly ranked journals now prominently feature the ISI journal impact factor and ranking on each journal's website. Many publishers have also introduced the on-line listing of prepublication versions of accepted papers (e.g., Springer's and Sage Publications' Online First and Palgrave MacMillan's Advance Online Publication). Although their stated aim is early research dissemination, an important side benefit for publishers is the potential for additional early citations (and hence higher JIFs3). The name of Emerald's recently introduced service (EarlyCite) is particularly telling. Neither editors nor authors challenge this practice as they too benefit from the higher circulation generated by such marketing and prepublication in the form of greater exposure and, as a consequence, larger numbers of submissions and citations. There is little incentive for the winners to challenge the current system, and thereby risk reducing the benefits it confers on them. Why would a current A-listed journal, for example, suggest moving from using Thomson Reuters ISI's more parochial ranking system to Google's more globally inclusive system if that very choice could potentially threaten its own standing in the rankings? More threatening to the current lucrative system of publishing academic journals is the move by several leading universities to open-source publishing. Could it be that, while the players in the present system are defending current rankings, free access will substantially undermine their market and thus both their financial and academic power?

Doctoral Programs and Academy-Based Doctoral Consortia

Like the elders of any tribe, academic elders pass on the wisdom and "tricks" of the culture to the next generation. Rankings provide easy shortcuts to transfer the norms and with them the implicit advice that following the norms leads to success. Many young scholars grow to believe that individual creativity, especially in the form of deviance from academia's current norms, must be delayed not only until after having earned a doctorate and landed one's first faculty position, but, most prudently, until after having received tenure.

3 At present Thomson Reuters does not include citations to prepublication versions in the calculation of its JIFs. However, given the publishing delays at most journals, nearly all authors are able to change the reference from the prepublished to the published version in their paper's final page proofs. As long as the referring paper is published after the referenced paper (which is true in almost all cases), the citation will count in the JIF (which is based on citations in the preceding 2 years) and/or the immediacy index (which is calculated based on citations in the same year).


After having worked for more than a decade perfecting their skills at earning academia's rewards by adhering to its current assessment-based ranking norms, it is unlikely that many scholars will risk "throwing it all away" by deviating from traditional expectations. The poignancy of this situation is reflected in the fact that most people enter management doctoral programs fervently desiring to address questions that are of the utmost importance to them, not merely to fill a gap in the extant literature using rigorous, but conventional, means (Bartunek, Rynes, & Ireland, 2006; Vermeulen, 2005). Although to our knowledge no study similar to those focusing on medical students has been conducted on management doctoral students, one would suspect that similar dynamics apply. Students enter medical school passionately committed to idealistic and humanitarian goals. Unfortunately, by the time they graduate, in all too many cases their aspirations have been reduced to attempting to survive a system that regularly makes inhuman demands on them (Becker & Geer, 1958; Greger, 1999).

Instead of socializing doctoral students into the current chase for A-listed journal publications, why not attempt to fuel their natural desire to make a difference? PhD programs that allow students to combine research training with working on their own research agendas from the beginning of their doctoral studies (as is generally the case in Australia, Canada, and Europe) might have an advantage over programs that require students to complete years of course work prior to embarking on their own research.

Accreditation and Ranking of Business Schools

In addition to the academic institutions described above, the accreditation bodies for business schools, along with the news and business magazines that publish business-school rankings, all experience isomorphic pressures to conform to the current rankings-based processes. Beyond conferring prestige, rankings have become a key marketing tool for business schools. Most business schools assume that high rankings will increase both the quantity and quality of applicants along with enhancing their prospects for more and higher profile donations. Given the increasing number of business schools worldwide, and the resulting intense competition among them for top students, professors, and funds, business-school rankings are perceived to matter (Khurana, 2007).

One has to question the metrics on which such rankings are based. Do they encourage excellence in business schools' fulfilling their unique purpose in society? Do the increasing resources that business schools use to improve their rankings—such as hiring more career-placement personnel, selecting more media-savvy deans, recruiting students from low-salary countries and placing graduates in higher salary parts of the world, and promoting publication in A-listed journals (Segalla, 2008a)—act to undermine the university's fundamental purpose? In the isomorphic world of academic ranking, actions such as these are the rational responses of business schools seeking to meet the demands of the market (Segalla, 2008a; Khurana, 2007). Beyond being rational, however, the question such behaviors raise is this: "In which ways, if at all, do such rankings-maximizing behaviors help to recognize and support the research that matters most to society?"

How might rankings, assuming they continue, be used to more positively impact the programs they evaluate? Could, as Segalla (2008b: 3–4) suggests, rankings measure business-school graduates against such criteria as "the number [of graduates] who go to jail for accounting fraud, [the] percentage [of graduates] who manage firms with high product safety recall rates, [the] number working for companies with the lowest carbon emissions, the percentage working for firms with high social responsibility scores, or the correlation between firm bankruptcies and the alma mater of the CEO? [ . . . ] Many people, certainly investors, product safety advocates, environmentalist[s], government attorneys, and other concerned citizens, might find these questions much more pertinent" than many of those currently used to assess and rank business schools.

Business-School Accreditation Bodies

Accreditation bodies such as AACSB also experience pressure from the public to objectively differentiate between the best, good, and not-so-good business schools. As the cost of business school education rises (dramatically so in the United States), public pressure for rankings that distinguish between programs offering the most and the least value-for-money also rises. Accreditation bodies exert power over business schools, which increasingly feel the need for the public sanctioning that such accreditation offers, by requiring them to meet accreditation-board standards and metrics. Adhering to the same isomorphic pressures as other institutional players in this field, accreditation bodies have adopted publication counts, including publications in A-listed journals, as one of their seemingly objective measures. This choice is understandable, and yet not laudable, given that accreditation bodies have neither the time nor the expertise to individually assess every research paper published by each professor.


Media-Driven Ranking Systems

The benefits to news magazines—such as BusinessWeek, the Economist, Financial Times, Forbes, Maclean's (Canada), U.S. News and World Report, and the Wall Street Journal, among a plethora of others—that publish, and thus publicize, academic rankings are straightforward (Sauder & Espeland, 2006). Sales of the issues in which the rankings are published are often among the magazines' highest. As described previously, the public wants to know and is willing to pay for what it perceives to be expert opinions identifying those management programs that offer greater value-for-money. In addition, as with any circulation-based dynamic, advertisers are willing to pay more when news magazines achieve higher circulation. Thus both the public and the news magazines superficially receive direct benefits from continuing the current rankings-based assessment systems.

Individual Scholars

Scholars who seek the visible reputational and financial rewards that come from a successful research-based university career experience extremely strong pressure to play by the rules. Acting alone, no individual scholar or institution is likely to change such a well-aligned system. Coordinated action across individuals and institutions holds a much greater possibility of being effective in altering the highly embedded, self-reinforcing network of influences and behaviors in which academic ranking and assessment has become entrenched. The question facing academia today, therefore, is what form of co-created and/or coordinated action will allow the current rankings-focused system to reinvent itself into a system that is most able to foster research that matters.

Moving Beyond Dysfunctional Academic-Ranking Systems

Institutional fields that are as embedded and mutually reinforcing as that of academic rankings are known to be extremely difficult to change. Based on a review of previous research, Sauder (2008) summarizes four processes for bringing about change in such well-defined fields. First, a jolt or exogenous shock "can disrupt the equilibrium of [the] field, [thus] creating opportunities for new field-wide norms, boundaries, and hierarchies to emerge" (Sauder, 2008: 209). As recently as 1988, BusinessWeek's publication of its first ranking of business schools based on customer—primarily student, alumni, and corporate recruiter—satisfaction, rather than selected deans' assessments, functioned as a strong exogenous shock to the field (Khurana, 2007). Almost immediately, business schools started to reallocate resources in attempts to achieve "customer" satisfaction (and thus high rankings; Khurana, 2007). In the 21st century, the exogenous shock to the academic-ranking system might be triggered by a dramatic increase in intense competition brought about by emerging economies or the impact of rapid (and relevant) advances in technology. The increasing prominence of Chinese business and scholars, for example, makes it extremely unlikely that today's English-language, American-dominated systems will remain unchanged throughout the 21st century. Likewise, the similarly rapid and discontinuous advances in globally networked, primarily Internet-based interconnectivity make the continuance of traditional journal-publishing systems and ranking protocols (with their 6-year time-lag on admittance to "the game") almost unthinkable. The question we invite the scholarly community to consider is "How can we take advantage of both predicted and unpredictable 'exogenous shocks' to guide academia toward producing scholarship that is of most value to 21st-century society?" Whereas institutional theory suggests that exogenous shocks can unfreeze and change embedded fields, it is relatively silent on the leadership needed to leverage such shocks in directions that are most coincident with the fundamental purpose of universities.

According to institutional theory, a second potential source of fieldwide transformation is "changes in organizational logics on a field's established practices and conventions" (Sauder, 2008: 209). The choice to publish this article in AMLE, a highly respected journal with a broad and very public commitment to advancing learning systems that enhance the broader society, is, in part, an attempt to begin to alter the field's established practices and conventions. AMLE's readership includes a broad audience consisting of many of the key stakeholders who have helped to maintain the current system—the very people and institutions that could be central to designing and implementing a new, more efficacious system. No change will occur unless the dialogue among stakeholders is compelling enough to begin to alter the underlying organizational logic. AMLE therefore explicitly invited a range of stakeholders to initiate the discussion. Our invitation, however, reaches out to a much larger group beyond the initial set of voices (our own included).

Institutional theory cites evolution as a third way in which embedded fields can change.


Gradual destructuring and restructuring of fields may "be influenced by more general institutional changes over time" (Sauder, 2008: 209). "This comprehensive view of field change . . . [suggests] how modifications in regulative and normative elements, institutional logics, and the types of actors constituting the field all contribute to a field's evolution" (Sauder, 2008).

A fourth, and potentially very promising, way in which fields can change is with the introduction of a new and influential field-level player—what has been labeled an institutional entrepreneur (Hardy & Maguire, 2008). Sauder (2008) documents how the introduction of U.S. News and World Report's rankings of law schools significantly transformed the entrenched institutional field for law schools. After decades of virtual "monopoly" by Thomson Reuters Scientific in the field of citation analysis, several new players have emerged on the scene. Elsevier's Scopus provides a new source of citation impact measurement that does not fully overlap with Thomson Reuters' Web of Knowledge (see Bosman, Mourik, Rasch, Sieverts, & Verhoeff, 2006). More important, the introduction of Google Scholar has made the assessment of academic impact more inclusive of research in languages other than English and of policy-oriented research. Free software programs such as Publish or Perish (Harzing, 2008b), providing an easy way to calculate citation metrics based on Google Scholar data, have made impact measurement accessible to everyone with an Internet connection. Although these developments have not yet changed the way academic impact is measured in most institutions, the initial signs of change are evident. In France, for instance, the National Center for Scientific Research (CNRS) has requested that all academic researchers provide Google-Scholar-based impact data (using Publish or Perish) in addition to impact data based on Thomson Reuters Web of Knowledge data. What additional new institution-level "entrepreneurs" might enter the field for ranking business schools and scholars? How might such new players be able to shift business schools back toward their fundamental purpose?
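To illustrate the kind of metric such tools put within everyone's reach, the short Python sketch below computes one widely used author-level indicator, the h-index: the largest h such that the author has h papers each cited at least h times. The citation counts are invented for illustration; Publish or Perish derives its inputs from Google Scholar searches and reports the h-index alongside several other statistics.

    def h_index(citation_counts):
        """Largest h such that h papers each have at least h citations."""
        counts = sorted(citation_counts, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank    # the paper at this rank still clears the threshold
            else:
                break
        return h

    # Hypothetical citation counts for one author's papers (illustration only).
    print(h_index([48, 23, 12, 9, 7, 4, 1, 0]))  # -> 5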

Beyond the influence of new players on the field, field-changing institutional entrepreneurship can be exerted by existing players. As Hardy and Maguire (2008) have shown, even the most embedded fields can change if the appropriate rationale, resources, and relationships are established. As summarized in this article and elsewhere, the rationale for changing the current system has already been clearly articulated by a wide range of stakeholders, from individual authors, to deans, journal editors, the business press, and leading CEOs. All have explicitly expressed the need for management research to be relevant to the problems of the world. The question is, will the field move beyond such eloquent rhetoric and commit resources to action? Will the Academy of Management, for example, invite leaders from each stakeholder group to a search conference designed to redefine and reoperationalize research quality as a multidimensional construct capable of distinguishing well-done relevant research from its shadow opposite? Will the field not only re-examine the relationship it has to real-world problems and opportunities, but also move to collectively support research that matters? These and other questions need to be explored, not simply by a couple of authors from opposite sides of the world, but also by members of the entire community involved in the creation, dissemination, and use of management research.

Improving Individual Ranking Systems

Although individual rankings are slightly less flawed than institutional rankings, they remain highly problematic. To begin to create a more robust system, designers need to base assessment systems on the quality and overall impact of each scholar's comprehensive body of work. Academia cannot continue to rely on a citation-index system that excludes most ideas presented in languages other than English as well as most scholarship published in books, book chapters, conference proceedings, nonlisted journals, and most new Internet-based outlets. Moreover, in management, as in other professional disciplines, impact needs to be assessed not only among scholars, but also within both the academic and professional communities of discourse. Although certainly not perfect, Google Scholar provides a database that is more inclusive and appropriate to the distinctively global environment of 21st century scholarship than does Thomson Reuters ISI (Harzing & van der Wal, 2008a,b).

Whereas productivity (counting publications) and impact (counting citations) are easier to quantify than quality, quality is potentially more important. Rather than continuing to ignore quality altogether by inappropriately subsuming it into measures of individual- and journal-level impact, or continuing to hope that a single metric might adequately reflect quality, scholars need to collectively generate a wider array of appropriate approaches to recognize quality. Scholars from around the world are therefore invited to begin to identify a range of indicators that could be used to more accurately reflect the quality of their work.



During the transition to a more robust system, we would applaud a proliferation of rankings using varied assessment measures. The resulting abundance would support scholars in combating the current bureaucratization of science caused by an overreliance on a few limited metrics to determine the supposed worth of all scholars and their work. If enough rankings and citation metrics are created, virtually everyone would be able to identify the areas of their greatest strength and contribution.4 The field, however, needs to be careful not to draw strong conclusions from any studies based on such transitional assessments and rankings.

Creating Environments That Support Research That Matters

Former Academy of Management President Bill Starbuck (2007: 24) captured the challenge facing academia:

. . . researchers have exceptional capabilities, many years of training and freedom to choose how to spend their time. Yet, only 5 to 10 percent of them are trying to benefit someone or something other than themselves. . . . [M]any more . . . could be making work more enjoyable or productive, . . . facilitating more equal distributions of resources within organizations or around the globe, . . . mitigating environmental degradation, . . . [and] inventing better ways to take account of long-term goals in the short term.

What types of environments might universities design to inspire and support scholars conducting research that has the highest potential to significantly advance knowledge and improve society? Would we see more scholastic coaching (the academic equivalent of executive coaching; Morgan, Harkins, & Goldsmith, 2005; Detsky & Baerlocher, 2007), more globally networked research teams, and more multidisciplinary and innovative co-creation? Would we see discussions among scholars—both those held visibly at conferences as well as those kept hidden behind the anonymity of review processes—shifting from vocabularies of evaluation and critique to vocabularies of appreciation and support? Would universities that offer the most generative environments attract more top scholars than those that continue to entice candidates through more traditional means?

Rather than allowing assessment and ranking systems to continue to consume disproportionate amounts of universities' attention and resources, academia needs to shift to designing and implementing environments that purposefully encourage research that matters. The new theories emerging from positive organizational scholarship (POS) offer exciting and heretofore rarely tried approaches to initiating such a transformation (Cameron, Dutton, & Quinn, 2003; Seligman, 2003). Scholars, for example, might use positive deviance (Spreitzer & Sonenshein, 2003) to identify the research groups worldwide that have had an exceptionally positive impact on the practice and understanding of management or have been recognized for the extremely high quality of their work. Once identified, other universities could seek to learn from these outstanding outliers—these positive academic deviants—in order to begin to magnify the strengths within their own departments.

Similarly, using the positive lens of appreciative inquiry (Cooperrider, Whitney, & Stavros, 2003), universities might invite scholars to identify the times when their research was going spectacularly well; times when they felt best about the potential for their scholarship to significantly contribute to their discipline and to society (Adler, 2008a,b). Then, after identifying the conditions that allowed them to reach such outstanding levels of performance, the community of scholars could collectively ascertain what it would take for them to more regularly produce at such an extraordinary level. It is perhaps not a coincidence that the organizational behavior department recognized as best by the Financial Times over the past 5 years is dominated by an appreciative approach, rather than traditional ranking- and deficit-based evaluations.5

4 The collated Journal Quality List (Harzing, 2008a; www.harzing.com/jql.htm) and the Publish or Perish citation analysis program (www.harzing.com/pop.htm) were created based on the same philosophy: provide so many (journal) ranking and citation metrics that everyone can find one that highlights his or her greatest contribution.

5 The Organizational Behavior Department at the Weatherhead School of Management at Case Western Reserve University was ranked number one for a composite of the last 5 years.


When Peter Drucker asserted that "the task of leadership is to create an alignment of strengths so as to make people's weaknesses irrelevant," he did not suggest that his organizational advice was applicable to all institutions except academia.6

University leaders, including chancellors, presidents, research provosts, deans, and their equivalents, while still constrained by the current institutional environment, have unique opportunities to initiate and leverage change. Taking advantage of his visible position of influence, the new Deputy Vice Chancellor for Research at Melbourne University—stem-cell researcher Peter Rathjen—proclaimed that the main goal of university research is to tackle society's problems (Buckridge, 2008). Rather than emphasizing the university's need to achieve top rankings, he underscored its research mission: to contribute to "both the store of human knowledge and the innovation that will underpin economic advance[s]." He went on to implicitly expose the dysfunctional institutional pressures on the university, including the "challenging research environment made more demanding by the threat of the now defunct Research Quality Framework (RQF)." Policies and statements such as these do much to embed the university's role within the wider societal context.

Whereas all ranking systems implicitly assume that rankings-based competition motivates academics to produce more and better scholarship, no one knows if such narrowly defined competition actually fosters or inhibits good scholarship. Competitive pressure—especially to publish only, or primarily, in A-listed journals—may, in fact, foster attempts to boost scores on assessment metrics, but not necessarily to maximize the quality and significance of the underlying research. Might it be that more generous, collaborative environments inspire and support higher quality research than do environments defined by rankings-based competition? Although we all intuitively suspect that we know the answer, our competitive, deficit-based culture continues to blind us to the consequences of the choices we have been making.

While no individual scholar can change the overall system, each of us can make a contribution by, at the minimum, starting to change the framing of our research conversations from vocabularies of individual success to vocabularies of contribution and significance. Rather than asking colleagues where something was published, we could ask how their research has made a difference and why they continue to be passionate about it. Perhaps at the next Academy meetings, we could describe a newly met colleague as "a professor who has made a significant contribution by showing that . . ." rather than repeating the often-used shorthand-of-success and referring to the person as "the professor who has an AMJ, two AMRs and an ASQ."

The Future of Scholarship: Reclaiming the Patrimony of Nalanda

The 21st century needs more international, integrative, interesting, and important research. There is no question that now is the moment in the history of scholarship when we need to return to the fundamental question and ask ourselves, individually and collectively, "What is our scholarship actually contributing?" Historically, the purpose of university-based scholarship was to ask important questions in rigorous ways so that the results could reliably guide society and future research. Tenure, in fact, was institutionalized as a way to support scholars in asking important (and often controversial) questions and reporting even their most unpopular findings without jeopardizing their livelihood. As we create new approaches to assessing how well individual scholars and universities are doing at achieving that historic and worthwhile purpose, we need to be careful not to entrap ourselves in simplistic reductionist approaches. Whether using metrics for counting publications or citations, the question always remains: "Has the scholar asked an important question and investigated it in such a way that it has the potential to advance societal understanding and well-being?"

REFERENCES

AACSB International (Association to Advance Collegiate Schools of Business). 2003. Ranking the research. Biz Ed, March/April: 11.

Adler, N. J. 2008a. Global business as an agent of world benefit: New international business perspectives leading positive change. In A. Scherer & G. Palazzo (Eds.), Handbook of research on corporate citizenship: 374–401. Cheltenham, UK: Edward Elgar.

Adler, N. J. 2008b. International business scholarship: Contributing to a broader definition of success. In J. Boddewyn (Ed.), International business scholarship: AIB Fellows on the first 50 years and beyond (Volume 13 of Research in Global Strategic Management): 333–343. UK: Emerald Group.

Adler, N. J. (with A. Gundersen). 2008c. International dimensions of organizational behavior. Mason, OH: Cengage.

Adler, N. J. 2006. The arts and leadership: Now that we can do anything, what will we do? Academy of Management Learning and Education, 5(4): 466–499.

Adler, N. J., & Bartholomew, S. 1992. Academic and professional communities of discourse: Generating knowledge on transnational human resource management. Journal of International Business Studies, 23(3): 551–569.

6 Conversation between Peter Drucker and David Cooperrider in Claremont, California, shortly before Drucker passed away.


Bartunek, J. M., Rynes, S., & Ireland, D. I. 2006. What makes management research interesting, and why does it matter? Academy of Management Journal, 49: 9–15.

Becker, H. S., & Geer, B. 1958. The fate of idealism in medical school. American Sociological Review, 23(1): 50–56.

Bennis, W., & O'Toole, J. 2005. How business schools lost their way. Harvard Business Review, 83(5): 96–106.

Bosman, J., Mourik, I. van, Rasch, M., Sieverts, E., & Verhoeff, H. 2006. Scopus reviewed and compared: The coverage and functionality of the citation database Scopus, including comparisons with Web of Science and Google Scholar. Utrecht: Utrecht University Library, http://igitur-archive.library.uu.nl/DARLIN/2006-1220-200432/Scopus doorgelicht & vergeleken - translated.pdf, accessed 15 October 2008.

Boyacigiller, N., & Adler, N. J. 1991. The parochial dinosaur: The organizational sciences in a global context. Academy of Management Review, 16(2): 262–290.

Buckley, P. 2002. Is the international business agenda running out of steam? Journal of International Business Studies, 33(2): 365–373.

Buckley, P. J., & Lessard, D. R. 2005. Regaining the edge for international business research. Journal of International Business Studies, 36(6): 595–599.

Buckridge, C. 2008. New research head puts focus on tackling society's problems. The University of Melbourne Voice, 2(4): 17 March–14 April 2008.

Butler, K. C. 2006. Finance and the search for the big question in international business. Academy of International Business Insights, 6(3): 3–9.

Cameron, K. S., Dutton, J. E., & Quinn, R. E. (Eds.). 2003. Positive organizational scholarship. San Francisco: Berrett-Koehler.

Carroll, M. 2008. NIH and Harvard—It's about values. Carrollogos: A blog about law, technology, and music, Wednesday, February 20, 2008, http://carrollogos.blogspot.com/2008/02/nih-and-harvard-its-about-values.html.

Chan, K. C., Fung, H., & Lai, P. 2006. International business research: Trends and school rankings. International Business Review, 15(4): 317–338.

Cohen, P. 2008. At Harvard, a proposal to publish free on web. New York Times, February 12.

Cooperrider, D. L., Whitney, D., & Stavros, J. M. 2003. Appreciative inquiry handbook. Bedford Heights, OH: Lakeshore Publishers.

Detsky, A. S., & Baerlocher, M. O. 2007. Academic mentoring—How to give it and how to get it. Journal of the American Medical Association, 297(19): 2134–2136.

DiMaggio, P. J., & Powell, W. W. 1983. The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2): 147–160.

Donaldson, L. 2005. For positive management theories while retaining science. Academy of Management Learning and Education, 4(1): 109–113.

Dubois, F. L., & Reeb, D. M. 2000. Ranking the international business journals. Journal of International Business Studies, 31(4): 689–704.

Franke, R., Edlund, T., & Oster, R. 1990. The development of strategic management: Journal quality and article impact. Strategic Management Journal, 11(3): 243–253.

Garten, J. E. 2006. Really old school. New York Times, Op-Ed, December 9.

Gauffriau, M., & Larsen, P. O. 2005. Counting methods are decisive for rankings based on publication and citation studies. Scientometrics, 64(1): 85–93.

Gioia, D. A., & Corley, K. G. 2002. Being good versus looking good: Business school rankings and the Circean transformation from substance to image. Academy of Management Learning & Education, 1: 107–120.

Ghoshal, S. 2005. Bad management theories are destroying good management practices. Academy of Management Learning and Education, 4(1): 75–91.

Greger, M. 1999. Heart failure: Diary of a third year medical student. Available online at http://www.just-think-it.com/heartfailure.pdf.

Griffith, D. A., Cavusgil, S. T., & Xu, S. 2008. Emerging themes in international business research. Journal of International Business Studies, 39(7): 1220–1235.

Gulati, R. 2007. Tent poles, tribalism, and boundary spanning: The rigor-relevance debate in management research. Academy of Management Journal, 50(4): 775–782.

Hardy, C., & Maguire, S. 2008. Institutional entrepreneurship. In R. Greenwood, C. Oliver, K. Sahlin, & R. Suddaby (Eds.), The Sage handbook of organizational institutionalism: 198–217. Thousand Oaks, CA: Sage.

Harzing, A. W. K. 2005. Australian research output in economics & business: High volume, low impact? Australian Journal of Management, 30(2): 183–200.

Harzing, A. W. K. 2008a. Journal quality list. 31st ed., 31 May 2008, downloaded from www.harzing.com 1 August 2008.

Harzing, A. W. K. 2008b. Publish or Perish, version 2.5.3171, http://www.harzing.com/pop.htm, accessed 15 October 2008.

Harzing, A. W. K., & van der Wal, R. 2008a. Google Scholar as a new source for citation analysis. Ethics in Science and Environmental Politics, 8(1): 62–71.

Harzing, A. W. K., & van der Wal, R. 2008b. Comparing the Google Scholar h-index with the ISI Journal Impact Factor. harzing.com white paper, first version, 3 July 2008, http://www.harzing.com/h_indexjournals.htm.

Inkpen, A. C., & Beamish, P. W. 1994. An analysis of twenty-five years of research in the Journal of International Business Studies. Journal of International Business Studies, 25(4): 703–713.

Johnson, J., & Podsakoff, P. 1994. Journals of influence in the field of management: An analysis using Salancik's index in a dependency network. Academy of Management Journal, 37(5): 1392–1407.

Kerr, S. 1975. On the folly of rewarding A, while hoping for B. Academy of Management Journal, 18(4): 769–783.

Kirkman, B., & Law, K. 2005. International management research in AMJ: Our past, present and future. Academy of Management Journal, 48(3): 377–386.

Khurana, R. 2007. From higher aims to hired hands: The social transformation of business schools and the unfulfilled promise of management as a profession. Princeton, NJ: Princeton University Press.

Kumar, V., & Kundu, S. K. 2004. Ranking the international business schools: Faculty publication as the measure. Management International Review, 44: 213–228.


Lanchester, J. 2008. Log on. Tune out. Review of Lee Siegel’sAgainst the Machine: Being Human in the Age of the Elec-tronic Mob. New York Times, February 3.

Lawrence, P. A. 2008. Lost in publication: How measurementharms science. Ethics in Science and Environmental Poli-tics, 8, published on-line January 31: 1–3.

Lawrence, P. A. 2003. The politics of publication: Authors, re-viewers, and editors must act to protect the quality of re-search. Nature, 422 (March 20): 259–261.

Lawrence, P. A. 2002. Rank injustice: The misallocation of creditis endemic in science. Nature, 415 (February 21): 835–836.

MacDonald, S., & Kam, J. 2007. Aardvark et al.: Quality journalsand gamesmanship in management studies. Journal ofInformation Science, 33(6): 702–717.

Macri, J., & Sinha, D. 2006. Rankings methodology for interna-tional comparisons of institutions and individuals: Anapplication to economics in Australia and New Zealand.Journal of Economic Surveys, 20(1): 111–156.

Mangematin, V., & Baden-Fuller, C. 2008. Global contests in the production of business knowledge: Regional centres and individual business schools. Long Range Planning, 41(1): 117–139.

March, J. G. 1996. Continuity and change in theories of organizational action. Administrative Science Quarterly, 41(2): 278–287.

McGahan, A. M. 2007. Academic research that matters to managers: On zebras, dogs, lemmings, hammers, and turnips. Academy of Management Journal, 50(4): 749–753.

McGrath, R. G. 2007. No longer a stepchild: How the management field can come into its own. Academy of Management Journal, 50(6): 1365–1378.

Merton, R. K. 1936. The unanticipated consequences of purposive social action. American Sociological Review, 1(6): 894–904.

Morgan, H., Harkins, P., & Goldsmith, M. (Eds.) 2005. The art and practice of leadership coaching: 131–137. NY: John Wiley & Sons.

Morgeson, F. P., & Nahrgang, J. D. 2008. Same as it ever was: Recognizing stability in the Business Week Rankings. Academy of Management Learning & Education, 7: 26–41.

Morrison, A. J., & Inkpen, A. C. 1991. An analysis of significant contributions to the international business literature. Journal of International Business Studies, 22(1): 143–153.

Murphy, P. 1996. Determining measures of the quality and impact of journals. Commissioned Report No. 49, Australian Government Publishing Service, Canberra.

Peng, M. W. 2004. Identifying the big question in international business research. Journal of International Business Studies, 35(2): 99–108.

Peng, M. W., & Zhou, J. Q. 2006. Most cited articles and authors in global strategy research. Journal of International Management, 12(4): 490–508.

Pfeffer, J. 2005. Why do bad management theories persist? A comment on Ghoshal. Academy of Management Learning and Education, 4(1): 96–100.

Pfeffer, J., & Fong, C. 2002. The end of business schools? Less success than meets the eye. Academy of Management Learning and Education, 1(1): 78–95.

Podsakoff, P. M., MacKenzie, S. M., Bachrach, D. G., & Podsakoff, N. P. 2005. The influence of management journals in the 1980s and 1990s. Strategic Management Journal, 26(5): 473–488.

Rynes, S. L. 2007a. Academy of Management Journal editor’s forum on citations: Editor’s foreword. Academy of Management Journal, 50(3): 489–490.

Rynes, S. L. 2007b. Academy of Management Journal editor’s forum on research with relevance to practice. Editor’s foreword: Carrying Sumantra Ghoshal’s torch: Creating more positive, relevant, and ecologically valid research. Academy of Management Journal, 50(4): 745–747.

Sauder, M. 2008. Interlopers and field change: The entry of U.S. News into the field of legal education. Administrative Science Quarterly, 53: 209–234.

Sauder, M., & Espeland, W. 2006. Strength in numbers? A comparison of law and business school rankings. Indiana Law Journal, 81: 205–227.

Scott, W. R. 1987. The adolescence of institutional theory. Administrative Science Quarterly, 32: 493–511.

Segalla, M. 2008a. Editorial: Publishing in the right place or publishing the right thing. European Journal of International Management, 2: 122–127.

Segalla, M. 2008b. Publishing in the right place or publishing the right thing: Journal targeting and citations strategies for promotion and tenure committees. Unpublished working paper (expanded version of Segalla 2008a). HEC School of Management, Paris, January 20.

Seligman, M. E. P. 2003. Positive psychology: Fundamental assumptions. Psychologist, 126–127.

Singh, G., Haddad, K. M., & Chow, C. W. 2007. Are articles in “top” management journals necessarily of higher quality? Journal of Management Inquiry, 16(4): 319–331.

Skapinker, M. 2008. Why business ignores business schools. Financial Times, January 7, 18:18, updated January 8, 6:35, at http://www.ft.com/cms/s/0/215022b8-bd2c-11dc-b7e6-0000779fd2ac.html.

Spreitzer, G., & Sonenshein, S. 2003. Positive deviance and extraordinary organizing. In Cameron, K. S., Dutton, J. E., & Quinn, R. E. (Eds.), Positive organizational scholarship: 207–224. San Francisco: Berrett-Koehler.

Starbuck, W. H. 2005. How much better are the most-prestigious journals? The statistics of academic publication. Organization Science, 16(2): 180–200.

Starbuck, W. H. 2007. Living in mythical spaces. Organization Studies, 28(1): 21–25.

Swanson, E. 2004. Publishing in the majors: A comparison of accounting, finance, management, and marketing. Contemporary Accounting Research, 21(1): 223–255.

Testa, J. n.d. The Thomson Scientific journal selection process. http://www.thomsonreuters.com/business_units/scientific/free/essays/journalselection/, accessed 15 October 2008.

Tsui, A. S. 2007. From homogenization to pluralism: International research in the academy and beyond. Academy of Management Journal, 50(6): 1353–1364.

Tung, R. L., & Witteloostuijn, A. van 2008. What makes a study sufficiently international? Journal of International Business Studies, 39(2): 180–183.

Tushman, M., & O’Reilly, C. III 2007. Research and relevance: Implications of Pasteur’s quadrant for doctoral programs and faculty development. Academy of Management Journal, 50(4): 769–774.


Van Fleet, D., McWilliams, A., & Siegel, D. 2000. A theoretical and empirical analysis of journal rankings: The case of formal lists. Journal of Management, 26(5): 839–861.

Vermeulen, F. 2005. On rigor and relevance: Fostering dialectic progress in management research. Academy of Management Journal, 48(6): 978–982.

Vermeulen, F. 2007. I shall not remain insignificant: Adding a second loop to matter more. Academy of Management Journal, 50(4): 754–761.

Werner, S., & Brouthers, L. E. 2002. How international is management? Journal of International Business Studies, 33(3): 583–591.

Westney, E. D. 2005. Institutional theory and the multinational corporation. In S. Ghoshal & D. E. Westney (Eds.), Organization theory and the multinational corporation: 47–67. NY: Palgrave Macmillan.

Xu, S., Yalcinkaya, G., & Seggie, S. H. 2008. Prolific authors and institutions in leading international business journals. Asia Pacific Journal of Management, 25, published on-line November 2007.

Zemsky, R. 2008. The rain man cometh—Again. Academy of Management Perspectives, 22(1): 5–14.

Zucker, L. G. 1987. Institutional theories of organization. Annual Review of Sociology, 13: 443–464.

Nancy J. Adler (PhD, UCLA) is the S. Bronfman Chair in Management at McGill University, Montreal, Canada. She conducts research and consults on global leadership and arts and leadership. Dr. Adler is a Fellow of the Academy of Management, the Academy of International Business, and the Royal Society of Canada. Nancy is also a visual artist.

Anne-Wil Harzing (PhD, University of Bradford) is professor at the University of Melbourne, Australia. Her research interests include international HRM, HQ-subsidiary relationships, cross-cultural management, and the role of language in international business. Since 1999, she has maintained an extensive website (www.harzing.com) with resources for international management and academic publishing.
