
Trends in the use of ISI citation databases for evaluation

James Pringle
Thomson Scientific

© James Pringle, 2008

Learned Publishing, 21, 85–91. doi: 10.1087/095315108X288901

ABSTRACT. This paper explores the factors shaping the current uses of the ISI citation databases in evaluation both of journals and of individual scholars and their institutions. Given the intense focus on outcomes evaluation, in a context of increasing 'democratization' of metrics in today's digital world, it is easy to lose focus on the appropriate ways to use these resources, and misuse can result.

Measuring performance in an age of democratization

Today's researchers build their careers in an intensely outcomes-driven and metrics-oriented academic world, and it has never been more important to use care and good judgment in evaluation. A growing chorus of voices from the research community decries the over-use of research metrics and appeals for greater balance in evaluative judgments. Some strike a passionate tone of rage against 'fashion, the management cult and the politics of our time, all of which favor numerical evaluation of "performance" and reward compliance'.1 Thomson Scientific, the producer of the ISI citation databases, continually joins this chorus and reminds the research community that citation metrics should support, but never substitute for, informed peer judgment in research evaluation. Yet such efforts often seem an exercise in sweeping back the tide.

The factors driving this trend are immensely powerful and global in nature. Among them are:

- Funding pressures, evidenced by increased rejection rates of grant proposals in some countries.

- Efforts by university administrators to derive objective measures for promotion and tenure decisions, free of bias and 'old boy' networks.

- Pressures on these same administrators to upgrade the reputation and quality of their programs to compete effectively in global academic markets – and to demonstrate that they have done so.

- Global competition in the sciences, leading to national policies designed to increase global competitiveness and to demonstrate national achievement.

- Pressures in the journal publishing industry, where sluggish revenue growth rates and pricing pressures heighten the competitive need to demonstrate journal reputations in order to protect subscriptions and to ensure continued participation by top authors.

The interplay of these forces is taking place in a context of increased availability of data with which to measure performance in the digital world. The first decade of the 21st century has seen what might be called the 'democratization' of citation metrics. Every researcher at virtually every research institution of significance has access to the ISI citation databases in the form of the Web of Science®. ISI citation data are complemented by a growing number of other citation databases, including some that are openly available over the Internet, such as the Astrophysics Data System (ADS) and Google Scholar. Many of these researchers are eager to test new ways of calculating citation metrics, and are quick to comment about their findings globally in interconnected Web communities.

In earlier decades, citation metrics were the preserve of a limited group of specialists in 'bibliometrics' and 'scientometrics', who shared their expertise with each other and with pockets of high-level analysts in government departments and major publishing houses. Today citation metrics are widely used and debated by a broad public of administrators, analysts, editors, librarians, and individual researchers.

The power of the trends underlying outcomes evaluation, and the ubiquity of citation data in a 'democratized' digital world, portend that the use of quantitative citation data to measure academic performance will only increase in the coming years. Appreciating these trends helps frame the issue of proper use of citation metrics today, and emphasizes that it is more critical than ever to understand what constitutes good use of citation data in research evaluation.

Citations as community-generated content

What attracts such a wide audience, and makes debates about citation metrics so lively, is the special character of citations. To describe this special character in a way that is compatible with today's 'Web 2.0' world, we might say that the citations found in the ISI citation databases represent 'community-generated' content, created by the research community as it engages in formal scholarly communication via peer-reviewed journals.

As journals migrate to the Internet, they are becoming centers where new types of user-generated content can be captured, including usage data, social tags, inbound web links, and community votes. While these types of data may at some point prove to be complementary indicators to citation data, as yet their meaning is only beginning to be understood. Citation data, as an indicator of influence in the scholarly communication process, have in contrast been explored and studied for decades. For this reason there is currently no comprehensive alternative to citation data for quantitative assessment of research outputs, and it is not surprising that all parties interested in metrics turn to these data.

The ISI citation databases are widely used for these purposes because their continuously growing corpus of journal literature based on careful selection criteria, their non-duplicative coverage, and their focus on serious scholarly journals represent a clearly defined universe within which to measure and study this formal community-generated content. Other citation resources may show similar citation patterns,2 and indeed, given the many different types of documents indexed within them (articles, books, conference abstracts, working papers, preprints, student term papers), may eventually show higher absolute numbers of citations than does the Web of Science. However, such resources are more difficult to utilize for evaluative purposes, because it is not always clear what is being counted, and their data structures may not lend themselves as effectively to extraction of comparative name, journal, and institutional address data. As a result, one cannot be sure what is actually being compared to what.

The absence of 'best practices'

ISI citation data are most conspicuously used for journal ranking. They are also used for many other evaluative purposes involving individuals, universities and university programs, national assessment programs, and inter-country comparisons. At the level of national and international policy, such organizations as the National Science Foundation (NSF), the European Union (EU), and the Observatoire des Sciences et des Techniques (OST) have used ISI citation data for many years to chart the progress of the sciences in their regions. A recent study by the NSF, available online, provides a good example of this type of use.3 Where and how citation metrics are used in national assessment programs generally follows the structure of scientific funding on a country basis. Where central funding of universities is more important, as in a number of European countries, citation metrics are widely used at the national level. In the USA, where funding patterns are more diverse, there is less emphasis on citations in national assessment and more activity at the local university level. Assessments of individuals for promotion and tenure decisions may be conducted as an extension of a national program or as an individual departmental activity, with many variations in practice.

In this period of experiment within a more democratic digital world, there are no recognized 'best practices'. Some analysts are seeking to develop such guidelines, but these are as yet not widely employed or agreed upon.4 In the absence of clear guidelines, it is no wonder that citation metrics are sometimes subject to misuse.

What constitutes misuse, however, is not always black and white. Citation data can be used effectively in many different ways, to answer many different questions. It is important to use such data in a way that contributes to a deeper understanding and better judgment about a question of interest. Because the nature of the questions asked can be so varied, the charge of 'misuse' must be leveled with an understanding of the specific use to which the data are being applied.

Several years ago, an analyst for Thomson Scientific proposed a set of rules for applying citation analysis to evaluation.5 This presentation remains unpublished, but I will extract from it two key 'rules' that particularly stand out as relevant to the present analysis, and I will call them the 'golden rules':

- First, 'consider whether the available data can address the question'. Too often a user of citation data, attracted to their ease of quantification, treats citation analysis like a hammer in search of nails, rather than determining whether the data are available and, to rephrase slightly, appropriate to the question asked.

- Second, 'compare like with like'. This fundamental rule of citation analysis pervades every evaluative exercise, but is often honored only in the breach. Whenever one hears the question 'is a high journal impact factor always better?', or when an assistant professor in economics is judged less worthy because she 'has fewer citations' than her colleague in immunology, this rule has been violated.

The journal impact factor debate

Both rules are in play in the most active area of debate in citation metrics: that concerning the journal impact factor (JIF). The JIF is the best-known example of citation metrics used for evaluation. It is a journal-level metric designed for one purpose – to compare the citation impact of one journal with other journals. Because it has become so well known, 'impact factor' has almost come to stand as shorthand for any use of citation metrics of which one disapproves. Yet most of the issues raised concerning the JIF in evaluation, when these are not simply data issues affecting a limited number of cases, result from a failure to follow the two rules listed above, and concern uses of the JIF for purposes other than that for which it was designed.

The journal impact factor

The journal impact factor (JIF), often known simply as the impact factor (IF), is a ratio between citations and recent citable items published. It is calculated as follows:

A = total cites in 2006
B = 2006 cites to articles published in 2004–5 (a subset of A)
C = number of articles ('citable items', excluding editorials, letters, news items, and meeting abstracts) published in 2004–5
D = B/C = 2006 impact factor
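
A minimal sketch of this calculation in Python; the journal counts are invented for illustration:

```python
def impact_factor(cites_to_recent_items: int, citable_items: int) -> float:
    """2006 JIF = (2006 cites to items published in 2004-5) /
    (citable items published in 2004-5)."""
    return cites_to_recent_items / citable_items

# Hypothetical journal: 1,200 citations in 2006 to its 2004-5 articles,
# which numbered 400 citable items -> JIF of 3.0.
print(impact_factor(cites_to_recent_items=1200, citable_items=400))  # 3.0
```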

There is no shortage of criticism of this particular metric – criticism largely born of its success in becoming widely used and available, and thus part of 'democratization' in the digital world. While the JIF is a good general comparative measure of journal performance, it has limitations, as does any statistical measure, which have been voluminously documented in recent years.

Initially, critics pointed to factors affecting the calculation of the JIF itself, such as a difference between items counted in numerator and denominator, and the possibility that self-citation may artificially inflate the JIF of particular journals.6,7 These limitations exist, but appear to affect only a very small percentage of journals in any year.8,9

Other critics point out structural limits, such as the perceived short time window of the calculation. It is readily demonstrable that journals in different disciplines have different citation patterns. This difficulty is easily overcome by comparing like with like – comparing journals with others in the same field, rather than with journals from another field. In this vein, refinements of the JIF have been proposed for cross-disciplinary studies that take these differing patterns into account. Calculations such as 'rank-normalized' impact factors10 or weightings based on citation time periods11 can be used to correct for this variation. The number of such 'corrective' measures is constantly growing.
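
For illustration, Pudovkin and Garfield's rank-normalized impact factor10 replaces a journal's raw JIF with its standing within its own category. A sketch, assuming rank 1 is the category's top journal by JIF and using invented figures:

```python
def rank_normalized_if(rank_in_category: int, journals_in_category: int) -> float:
    """rnIF = (K - R + 1) / K, where R is the journal's rank by JIF
    (1 = highest) within a category of K journals."""
    return (journals_in_category - rank_in_category + 1) / journals_in_category

# A journal ranked 5th of 60 scores the same whether its field is
# heavily cited or sparsely cited, enabling cross-field comparison.
print(rank_normalized_if(5, 60))  # ~0.93
```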

Another line of criticism charges that over-use of 'impact factors' has shifted behavior among authors and publishers in undesirable ways. This line of criticism has become a popular journalistic theme.12 However, given the fundamental trends already mentioned, it is likely that this criticism actually applies to any possible quantitative ranking system used to compare journals. Pressure to publish in top journals derives from an intensifying outcomes-based culture, not from a particular measure. It is highly debatable, for example, whether Science and Nature would have lower rejection rates if the JIF did not exist. Furthermore, studies of actual editorial behavior undertaken to influence the JIF for particular journals show that most editors sought to raise their journals' impact factors over time by applying sound editorial practices and cultivating top authors, not by editorial manipulation of citing behaviors.13

Variants on the JIF, and proposed alternatives to it, are appearing at an increasing rate – which a statistical study, if one were conducted, might possibly find to be running as high as 1–2 new journal metrics per month. Each such new metric tends to be hailed as the ultimate solution to the problems of journal ranking. It is questionable, however, whether the sheer multiplication of metrics will somehow lead us to a fundamental change in practice, or whether the adoption of new metrics will fundamentally change our view of the overall ranking of journals.

Some new journal metrics employ complex methodologies for uncertain results. For example, the Eigenfactor is a metric that uses the journal citation matrices presented in the Journal Citation Reports® to rank journals based on their position within the overall network of journal citation relationships.11,14 It relies on a PageRank-like algorithm that weights citations based on the citation counts of the citing journal, in a complex vector algorithm. Such a mathematical approach has been proven to achieve more relevant web searching, but whether it also generates better evaluative rankings is only beginning to be tested. Allowing a citation from a highly cited journal to 'count more' than a citation from a less-cited journal may prove to be highly controversial in principle, once it is more widely understood.
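
To make the weighting principle concrete, here is a generic PageRank-style power iteration over a toy journal citation matrix. This is a sketch of the idea, not the published Eigenfactor algorithm (which also normalizes by article counts); the matrix and its conventions are invented:

```python
import numpy as np

def pagerank_journals(cites: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Toy influence score: cites[i, j] = citations from journal j to
    journal i. A citation is weighted by the standing of the journal
    it comes from, rather than counting every citation equally."""
    C = np.array(cites, dtype=float)
    np.fill_diagonal(C, 0)             # discard self-citations, as the Eigenfactor does
    col_sums = C.sum(axis=0)
    col_sums[col_sums == 0] = 1.0      # guard against journals that cite nothing
    M = C / col_sums                   # each citing journal's 'votes' sum to 1
    n = C.shape[0]
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):             # damped power iteration
        scores = (1 - damping) / n + damping * (M @ scores)
    return scores / scores.sum()

# Invented 3-journal network: journal 0 receives most citations.
cites = np.array([[0, 8, 6],
                  [2, 0, 1],
                  [1, 1, 0]])
print(pagerank_journals(cites))        # journal 0 gets the largest share
```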

Likewise, the prospect of usage metrics seems to promise a 'counterweight' to the JIF.12,15 This idea merits consideration, but the meaning of a download currently remains much less well understood than that of a citation, and so the value of such metrics will remain unsettled until they have been similarly battle-tested. For some disciplines, in which publications are oriented toward practitioners who read but rarely cite, usage may in fact prove to be a useful complement to citation in journal ranking. Elsewhere its role is less clear.


These new approaches are oriented towards specific issues in journal ranking. However, they do not address the issue of broader use of the JIF in research evaluation. It is here that the two rules listed above apply most directly. The use of the JIF in these contexts tends to take one of two main forms:

- An indicator of success achieved in having an article accepted by a prestigious journal.

- A surrogate for a more carefully derived direct measure of citation impact.

While the first use may have some utility, the second appears difficult to justify.

The first use – as an indicator that the author has succeeded in having an article accepted in a prestigious journal – has some justification. There is a hierarchy of journals within subject areas, and this hierarchy broadly corresponds to impact rankings, so in a formal communication system based on peer review, acceptance of an article for publication in one of these journals is an important scholarly achievement. Rewards based on this achievement can be seen as encouraging scholars to aim high in their publishing goals. The data may be appropriate to the question being asked if one is comparing achievement within a given field. However, comparing like with like – our second rule – is more difficult to apply across disciplines, since JIFs vary greatly across disciplines. Rank in category is very important, and can be addressed in several ways. One way is to create a quadrant division among journals within disciplines and then compare publication in quadrants across disciplines.16
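
One plausible reading of this quadrant division is a quartile assignment by rank within category; a minimal sketch, assuming rank 1 is the category's highest-JIF journal and using invented ranks:

```python
def quartile(rank_in_category: int, journals_in_category: int) -> str:
    """Assign a journal to Q1-Q4 by its JIF rank within its own category,
    so 'publication in a Q1 journal' is comparable across disciplines."""
    fraction = rank_in_category / journals_in_category
    if fraction <= 0.25:
        return "Q1"
    if fraction <= 0.50:
        return "Q2"
    if fraction <= 0.75:
        return "Q3"
    return "Q4"

print(quartile(5, 60))   # Q1: top quarter of its own field
print(quartile(50, 60))  # Q4: bottom quarter, whatever the field's raw JIFs
```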

But the use of the JIF does not end there. It is also used as a surrogate for more direct measures, in such concepts as a 'total impact factor' or other calculations in which the JIF stands in for article performance and further calculations are performed on lists of JIFs. It is very hard to see how such data, so manipulated, are appropriate for any question related to evaluation or comparison.

In response to issues raised in the perceived misuse of the JIF in evaluation, calls for the use of 'article-level metrics' have been made, and the most popular of such metrics at the moment is the 'h-index'.17 This metric, and an increasing number of other article-level metrics, are readily calculated in the Web of Science, and even in some of the openly accessible citation resources.18

Although article-level metrics appear to be more appropriate for questions about individual performance than the JIF, they may easily violate the rule that calls for us to 'compare like with like', at least until these metrics are better understood. A simple list of h-indexes, without reference to time periods or the consistency of the underlying data, may prove to be much more dangerous and deceptive than any proposed misuse of the JIF. Examples of this unbounded and uncorrected use are easily found: see, for example, a recent list published in Retrovirology.19 Furthermore, in today's highly collaborative research environments, even the concept of deriving a metric reflecting individual performance from a set of co-authored papers, without some assignment of shares or roles, raises the issue of whether the data are in fact appropriate to the questions being asked of them.

'Horses for courses' in research evaluation

Evaluative uses of citation data are most effective when the citation data and metrics are carefully designed for the particular purpose, and rely on appropriate benchmark and baseline data for comparisons. For this reason Thomson Scientific provides many different metrics for different purposes, and continues to develop new ones as specific new evaluative needs arise. The many partners and consultants who add value and resell ISI citation data generally do likewise.

h-index

The h-index was first proposed by J.E. Hirsch in 2005. He defines it as follows: 'A scientist has index h if h of his or her Np papers have ≥h citations and the other (Np − h) papers have ≤h citations each.'17 This metric is useful because it discounts the disproportionate weight of highly cited papers or papers that have not yet been cited.
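
A minimal sketch of the calculation, with an invented citation record:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h papers have h or more citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times -> h = 4: four papers
# have at least 4 citations, but not five papers with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```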

A few examples from the Thomson portfolio will illustrate these points.

In terms of journal-level metrics, Thomson's Journal Citation Reports® annually provides many different metrics: the JIF; a journal-level Immediacy Index, to show how rapidly journals accumulate citations in their first year; cited and citing 'half-life' indicators and 10-year charts of citations, to show how citations accumulate over periods of time; a chart of self-citations, to show the percentages of citations to the same journal; and a Journal Relatedness Index,20 to show the journals most closely associated with a given journal in terms of citations; plus basic statistics and rank in category, and the full set of article, article-type, and citation counts that support these calculations. Each of these metrics helps answer specific questions about the journal, and the user is encouraged to explore the data and use them appropriately, instead of simply grabbing the JIF and using it as the universal answer to any evaluative question.

More advanced journal-level analysis is supported by more customized and flexible products, most notably the Journal Performance Indicators™, which allow calculation of JIF-like metrics for the user's chosen time periods and a more fine-grained analysis by document types and field-specific baselines. More locally focused, workflow-oriented analysis is supported by the Journal Use Reports™, which enable users to compare journal-level citation metrics with the publication, citation, and usage data for their own institutions.

For the most part, these tools do not support research evaluation, because of the limitations of journal-level metrics outlined earlier. For research evaluation, Thomson Scientific recognizes the need for custom data sets and for comparative data appropriate to the specific evaluative task, and provides these for both standard and custom data sets as needed. Such metrics are built up from the individual article level and include, among many others:

- Simple counts of cites, papers, and cites per paper, providing basic statistics and baselines over a variety of time periods, as well as standard statistical measures such as mean, median, and mode.

- Second-generation citation counts (citations from articles that cite the 'citing articles' of the original article), normalized by field, which measure the long-term impact of a paper.

- Time-series data covering both standard citing and cited periods (one-year, five-year, 25-year, etc.) and user-defined time periods.

- Percent shares of papers or citations within a broader category (such as a field, country, or institution).

- Field baselines, representing the mean impact (cites per paper) for papers in a field (usually a journal set), defined for a specific cited/citing time window.

- Expected citation counts, which predict how often a paper is expected to be cited based on its year, journal, and article type (see the sketch following this list).
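
As a sketch of how such baselines and expected counts support comparison, the usual move is to divide a paper's actual citations by the expected figure for comparable papers; function names and numbers below are invented:

```python
def field_baseline(cites_per_paper: list[int]) -> float:
    """Mean cites per paper for a field's papers in a given window."""
    return sum(cites_per_paper) / len(cites_per_paper)

def relative_impact(actual_cites: int, expected_cites: float) -> float:
    """Actual citations divided by the expected count for papers of the
    same year, journal, and article type; 1.0 means 'at par'."""
    return actual_cites / expected_cites

# Invented figures: a field whose 2004 articles average 12 cites each,
# and a 2004 article cited 30 times -> 2.5x its expected rate.
print(field_baseline([10, 14, 12, 12]))  # 12.0
print(relative_impact(30, 12.0))         # 2.5
```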

The variety of these metrics and data sets reflects the recognition of the two 'golden rules' of citation data as applied to evaluation – that data be appropriate to the questions asked, and that all comparative questions rely on comparisons of like with like. When these rules are kept in mind, and those who use citation data take the time and patience to apply them carefully, citation data can be used productively and helpfully in evaluative processes.

Cited half-life

The number of publication years from the current Journal Citation Reports year that account for 50% of the citations received by the journal. This metric is useful because it helps show the speed with which citations accrue to the articles in a journal, and the continued use and value of older articles.
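
A rough sketch of the mechanics (the published JCR figure is interpolated to one decimal place; this integer version, with invented counts, just shows the idea):

```python
def cited_half_life(cites_by_year: list[int]) -> int:
    """Number of most recent publication years needed to accumulate
    half of a journal's citations. cites_by_year[0] is the current
    JCR year, followed by one entry per preceding year."""
    total = sum(cites_by_year)
    running = 0
    for years, cites in enumerate(cites_by_year, start=1):
        running += cites
        if running >= total / 2:
            return years
    return len(cites_by_year)

# Invented journal: half of all its citations fall in the 4 newest years.
print(cited_half_life([100, 150, 160, 120, 90, 80, 70, 60, 50, 40]))  # 4
```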

As Eugene Garfield, the founder of ISI and inventor of the ISI citation databases, has remarked: 'In 1955 it did not occur to me that "impact" would become so controversial.'9 The pressures of outcomes-based funding decisions and personal rewards systems, along with the ready availability of citation data in a more democratized digital world, will continue to fuel this controversy. In view of these powerful trends, it is more important than ever to remind ourselves that citation metrics must be used carefully, and should only be used to inform, not to supplant, peer review and other forms of qualitative evaluation.

References

1. Lawrence, P. 2007. The mismeasurement of science. Current Biology, 17, R583–5. http://dx.doi.org/10.1016/j.cub.2007.06.014

2. Pauly, D. and Stergiou, K. 2005. Equivalence of results from two citation analyses: Thomson ISI's Citation Index and Google's Scholar service. Ethics in Science and Environmental Politics, 2005, 33–5. Available at http://www.int-res.com/articles/esep/2005/E65.pdf

3. Hill, D., Rapoport, A., Lehming, R. and Bell, R. Changing U.S. Output of Scientific Articles: 1988–2003. Arlington, VA: National Science Foundation, Division of Science Resources Statistics, 2007. Available at http://www.nsf.gov/statistics/nsf07320/

4. Moed, H. Citation Analysis in Research Evaluation. Amsterdam: Springer, 2005.

5. Pendlebury, D. The ISI Database and Bibliometrics: Uses & Abuses in Evaluating Research. Unpublished presentation at Thomson ISI Symposium, Tokyo, 2002.

6. Moed, H. and Van Leeuwen, T. 1995. Improving the accuracy of Institute for Scientific Information's Journal Impact Factors. Journal of the American Society for Information Science, 46, 461–7. http://dx.doi.org/10.1002/(SICI)1097-4571(199507)46:6<461::AID-ASI5>3.0.CO;2-G

7. Seglen, P. 1997. Citations and journal impact factors: questionable indicators of research quality. Allergy, 52, 1050–6. http://dx.doi.org/10.1111/j.1398-9995.1997.tb00175.x

8. McVeigh, M. Journal Self-Citation in the Journal Citation Reports – Science Edition (2002). Philadelphia, PA: Thomson Corporation, 2004. Available at http://www.thomsonscientific.com/media/presentrep/essayspdf/selfcitationsinjcr.pdf

9. Garfield, E. The Agony and the Ecstasy – The History and Meaning of the Journal Impact Factor. International Congress on Peer Review and Biomedical Publication, Chicago, 16 September 2005. Available at http://www.garfield.library.upenn.edu/papers/jifchicago2005.pdf

10. Pudovkin, A. and Garfield, E. Rank-normalized impact factor: a way to compare journal performance across subject categories. Proceedings of the 67th ASIS&T Annual Meeting, Providence, RI. American Society for Information Science & Technology, 2004. Available at http://www.garfield.library.upenn.edu/papers/asistranknormalization2004.pdf

11. Sombatsompop, N., Markpin, T. and Premkamolnetr, N. 2004. A modified method for calculating the impact factors of journals in ISI Journal Citation Reports: Polymer Science category in 1997–2001. Scientometrics, 60, 217–35. http://dx.doi.org/10.1023/B:SCIE.0000027794.98854.f6

12. Monastersky, R. 2005. The number that's devouring science. Chronicle of Higher Education, 52, A12–17. Available at http://chronicle.com/free/v52/i08/08a01201.htm

13. Chew, M., Villanueva, E. and Van Der Weyden, M. 2007. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994–2005) and their Editors' views. Journal of the Royal Society of Medicine, 100, 142–50. http://dx.doi.org/10.1258/jrsm.100.3.142

14. Bergstrom, C. http://www.eigenfactor.org/methods.htm

15. Shepherd, P. Final Report on the Investigation into the Feasibility of Developing and Implementing Journal Usage Factors. UK Serials Group, 2007. http://www.uksg.org/usagefactors/final

16. Magri, M.-H. and Solari, A. 1996. The SCI Journal Citation Reports: a potential tool for studying journals? Scientometrics, 35, 93–117. http://dx.doi.org/10.1007/BF02018235

17. Hirsch, J. 2005. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102, 16569–72. http://dx.doi.org/10.1073/pnas.0507655102

18. Harzing, A.-W. http://www.harzing.com/resources.htm#pophtm. Accessed October 2007.

19. Jeang, K. 2007. Impact factor, H index, peer comparisons, and Retrovirology: is it time to individualize citation metrics? Retrovirology, 4, 42. http://dx.doi.org/10.1186/1742-4690-4-42

20. Pudovkin, A. and Garfield, E. 2002. Algorithmic procedure for finding semantically related journals. Journal of the American Society for Information Science and Technology, 53, 1113–19. http://dx.doi.org/10.1002/asi.10153

James Pringle
Vice President, Product Development
Thomson Scientific
3501 Market Street
Philadelphia, PA 19104, USA
Email: james.pringle@thomson.com


Page 2: Trendsintheuse M ofISIcitation databasesfor evaluationtefkos.comminfo.rutgers.edu/Courses/e530/Readings... · place in a context of increased availability of data with which to measure

petitive need to demonstrate journalreputations in order to protect subscrip-tions and to ensure continued participa-tion by top authors

The interplay of these forces is takingplace in a context of increased availability ofdata with which to measure performancein the digital world The first decade of the21st century has seen what might be calledthe lsquodemocratizationrsquo of citation metricsEvery researcher at virtually every researchinstitution of significance has access to theISI citation databases in the form of the Webof Sciencereg ISI citation data is comple-mented by a growing number of othercitation databases including some that areopenly available over the Internet such asthe Astrophysics Data System (ADS) andGoogle Scholar Many of these researchers areeager to test new ways of calculating citationmetrics and are quick to comment abouttheir findings globally in interconnectedWeb communities

In earlier decades citation metrics werethe preserve of a limited group of specialistsin lsquobibliometricsrsquo and lsquoscientometricsrsquo whoshared their expertise with each other andwith pockets of high-level analysts in gov-ernment departments and major publishinghouses Today citation metrics are widelyused and debated by a broad public ofadministrators analysts editors librariansand individual researchers

The power of the trends underlying out-comes evaluation and the ubiquity ofcitation data in a lsquodemocratizedrsquo digitalworld portend that use of quantitative cita-tion data to measure academic performancewill only increase in the coming yearsAppreciating these trends helps frame theissue of proper use of citation metrics todayand emphasizes that it is more critical thanever to understand what constitutes gooduse of citation data in research evaluation

Citations as community-generated content

What attracts such a wide audience andmakes debates about citation metrics solively is the special character of citations Todescribe this special character in a way thatis compatible with todayrsquos lsquoWeb 20rsquo worldwe might say that the citations found in the

ISI citation databases represent lsquocommu-nity-generatedrsquo content created by theresearch community as it engages in formalscholarly communication via peer-reviewedjournals

As journals migrate to the Internet theyare becoming centers where new types ofuser-generated content can be capturedincluding usage data social tags inboundweb links and community votes Whilethese types of data may at some point proveto be complementary indicators to citationdata as yet their meaning is only beginningto be understood Citation data as an indi-cator of influence in the scholarly com-munication process in contrast have beenexplored and studied for decades For thisreason there is currently no comprehensivealternative to citation data for quantitativeassessment of research outputs and it is notsurprising that all parties interested in met-rics turn to these data

The ISI citation databases are widely usedfor these purposes because their continu-ously growing corpus of journal literaturebased on careful selection criteria theirnon-duplicative coverage and their focus onserious scholarly journals represent a clearlydefined universe within which to measureand study this formal community-generatedcontent Other citation resources may showsimilar citation patterns2 and indeed giventhe many different types of documentsindexed within them (articles books con-ference abstracts working papers preprintsstudent term papers) may eventually showhigher absolute numbers of citations thandoes the Web of Science However suchresources are more difficult to utilize forevaluative purposes because it is not alwaysclear what is being counted and their datastructures may not lend themselves as effec-tively to extraction of comparative namejournal and institutional address data As aresult one cannot be sure what is actuallybeing compared to what

The absence of lsquobest practicesrsquo

ISI citation data are most conspicuouslyused for journal ranking They are also usedfor many other evaluative purposes involv-ing individuals universities and university

86 James Pringle

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

the citationsfound in theISI citation

databasesrepresent

lsquocommunity-generatedrsquo

content createdby the research

community

programs national assessment programsand inter-country comparisons At the levelof national and international policy suchorganizations as the National Science Foun-dation (NSF) the European Union (EU)and the Observatoire des Sciences et desTechniques (OST) have used ISI citationdata for many years to chart the progress ofthe sciences in their regions A recent studyby the NSF available online provides a goodexample of this type of use3 Where andhow citation metrics are used in nationalassessment programs generally follows thestructure of scientific funding on a countrybasis Where central funding of universitiesis more important as in a number of Euro-pean countries citation metrics are widelyused at the national level In the USA wherefunding patterns are more diverse there isless emphasis on citations in national assess-ment and more activity at the localuniversity level Assessments of individualsfor promotion and tenure decisions may beconducted as an extension of a national pro-gram or as an individual departmentalactivity with many variations in practice

In this period of experiment within a moredemocratic digital world there are no recog-nized lsquobest practicesrsquo Some analysts areseeking to develop such guidelines but theseare as yet not widely employed or agreedupon4 In the absence of clear guidelines itis no wonder that citation metrics are some-times subject to misuse

What constitutes misuse however is notalways black and white Citation data canbe used effectively in many different waysto answer many different questions It isimportant to use such data in a way thatcontributes to a deeper understanding andbetter judgment about a question of interestBecause the nature of the questions askedcan be so varied the charge of lsquomisusersquo mustbe leveled with an understanding of the spe-cific use to which the data are being applied

Several years ago an analyst for ThomsonScientific proposed a set of rules for applyingcitation analysis to evaluation5 This presen-tation remains unpublished but I willextract from it two key lsquorulesrsquo that particu-larly stand out as relevant to the presentanalysis and I will call them the lsquogoldenrulesrsquo

First lsquoconsider whether the available datacan address the questionrsquo Too often auser of citation data attracted to theirease of quantification treats citation anal-ysis like a hammer in search of nailsrather than determining whether the dataare available and to rephrase slightly ap-propriate to the question asked

Second lsquocompare like with likersquo This fun-damental rule of citation analysis pervadesevery evaluative exercise but is oftenhonored only in the breach Wheneverone hears the question lsquois a high journalimpact factor always betterrsquo or when anassistant professor in economics is judgedless worthy because she lsquohas fewer cita-tionsrsquo than her colleague in immunologythis rule has been violated

The journal impact factor debate

Both rules are in play in the most active areaof debate in citation metrics that concern-ing the journal impact factor (JIF) The JIF isthe best-known example of citation metricsused for evaluation It is a journal-level met-ric designed for one purpose ndash to comparethe citation impact of one journal with otherjournals Because it has become so wellknown lsquoimpact factorrsquo has almost come tostand as shorthand for any use of citationmetrics of which one disapproves Yet mostof the issues raised concerning the JIF inevaluation when these are not simply dataissues affecting a limited number of casesresult from a failure to follow the two rules

Trends in the use of ISI citation databases for evaluation 87

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

The journal impact factor

The journal impact factor (JIF) oftenknown simply as the impact factor (IF) isa ratio between citations and recentcitable items published It is calculated asfollows

A = total cites in 2006B = 2006 cites to articles published in

2004ndash5 (a subset of A)C = number of articles (lsquocitable itemsrsquo

excluding editorials letters newsitems and meeting abstracts)published in 2004ndash5

D = BC = 2006 impact factor

in the absenceof clearguidelines it isno wonder thatcitation metricsare sometimessubject tomisuse

listed above and concern uses of the JIF forpurposes other than that for which it wasdesigned

There is no shortage of criticism of thisparticular metric ndash criticism largely born ofits success in becoming widely used andavailable ndash and thus part of lsquodemocratizationrsquoin the digital world While the JIF is a goodgeneral comparative measure of journal per-formance it has limitations as does anystatistical measure which have been volu-minously documented in recent years

Initially critics pointed to factors affectingthe calculation of the JIF itself such as a dif-ference between items counted in numeratorand denominator and the possibility thatself-citation may artificially inflate the JIFof particular journals67 These limitationsexist but appear to affect only a very smallpercentage of journals in any year89

Other critics point out structural limitssuch as the perceived short time window ofthe calculation It is readily demonstrablethat journals in different disciplines have dif-ferent citation patterns This difficulty iseasily overcome by comparing like with likendash comparing journals with others in thesame field rather than with journals fromanother field In this vein refinements of theJIF have been proposed for cross-disciplinarystudies that take into account these differ-ing patterns Calculations such as lsquorank-normalizedrsquo impact factors10 or includingweightings based on citation time periods11

can be used to correct for this variation Thenumber of such lsquocorrectiversquo measures is con-stantly growing

Another line of criticism charges thatover-use of lsquoimpact factorsrsquo has shiftedbehavior among authors and publishers inundesirable ways This line of criticism hasbecome a popular journalistic theme12 How-ever given the fundamental trends alreadymentioned it is likely that this criticismactually applies to any possible quantitativeranking system used to compare journalsPressure to publish in top journals derivesfrom an intensifying outcomes-based cul-ture not from a particular measure It ishighly debatable for example whether Sci-ence and Nature would have lower rejectionrates if the JIF did not exist Furthermorestudies of actual editorial behavior under-

taken to influence the JIF for particularjournals show that most editors sought toraise their journalsrsquo impact factors over timeby applying sound editorial practices andcultivating top authors not by editorialmanipulation of citing behaviors13

Variants on the JIF and proposed alter-natives to it are appearing at an increasingrate which a statistical study if one wereconducted might possibly find to be runningas high as 1ndash2 new journal metrics permonth Each such new metric tends to behailed as the ultimate solution to the prob-lems of journal ranking It is questionablehowever whether the sheer multiplication ofmetrics will somehow lead us to a fundamen-tal change in practice or whether theadoption of new metrics will fundamentallychange our view of the overall ranking ofjournals

Some new journal metrics employ com-plex methodologies for uncertain results Forexample the Eigenfactor is a metric thatuses the journal citation matrices presentedin the Journal Citation Reportsreg to rank jour-nals based on their position within theoverall network of journal citation relation-ships1114 It relies on a PageRank-likealgorithm that weights citations based onthe citation counts of the citing journal in acomplex vector algorithm Such a math-ematical approach has been proven toachieve more relevant web searching butwhether it also generates better evaluativerankings is only beginning to be testedAllowing a citation from a highly cited jour-nal to lsquocount morersquo than a citation from aless-cited journal may prove to be highlycontroversial in principle once it is morewidely understood

Likewise the prospect of usage metricsseems to promise a lsquocounterweightrsquo to theJIF1215 This idea merits consideration butthe meaning of a download currentlyremains much less well understood than thatof a citation and so the meaning of suchmetrics is still to be determined until theyhave been similarly battle-tested For somedisciplines in which publications are ori-ented toward practitioners who read butrarely cite usage may in fact prove to be auseful complement to citation in journalranking Elsewhere its role is less clear

88 James Pringle

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

some newjournal metrics

employ complexmethodologiesfor uncertain

results

These new approaches are orientedtowards specific issues in journal rankingHowever they do not address the issue ofbroader use of the JIF in research evaluationIt is here that the two rules listed aboveapply most directly The use of the JIF inthese contexts tends to take one of two mainforms

An indicator of success achieved in hav-ing an article accepted by a prestigiousjournal

A surrogate for a more carefully deriveddirect measure of citation impact

While the first use may have some utilitythe second appears difficult to justify

The first use ndash as an indicator that theauthor has succeeded in having an articleaccepted in a prestigious journal ndash has somejustification There is a hierarchy of journalswithin subject areas and this hierarchybroadly corresponds to impact rankings soin a formal communication system based onpeer review acceptance of an article forpublication in one of these journals is animportant scholarly achievement Rewardsbased on this achievement can be seen asencouraging scholars to aim high in theirpublishing goals The data may be appropri-ate to the question being asked if one iscomparing achievement within a given fieldHowever comparing like with like our sec-ond rule is more difficult to apply acrossdisciplines since JIFs vary greatly across dis-ciplines Rank in category is very importantand can be addressed in several ways Oneway is to create a quadrant division amongjournals within disciplines and then com-pare publication in quadrants across disci-plines16

But the use of the JIF does not end thereIt is also used as a surrogate for more directmeasures in such concepts as a lsquototal impactfactorrsquo or other calculations in which theJIF stands in for article performance and fur-ther calculations are performed on lists ofJIFs It is very hard to see how such data somanipulated are appropriate for any ques-tion related to evaluation or comparison

In response to issues raised in the per-ceived misuse of the JIF in evaluation callsfor the use of lsquoarticle-level metricsrsquo havebeen made and the most popular of such

metrics at the moment is the lsquoh-indexrsquo17

This metric and an increasing number ofother article-level metrics are readily calcu-lated in the Web of Science and even in someof the openly accessible citation resources18

Although article-level metrics appear tobe more appropriate for questions aboutindividual performance than the JIF theymay easily violate the rule that calls for us tolsquocompare like with likersquo at least until thesemetrics are better understood A simple listof h-indexes without reference to time peri-ods or the consistency of the underlyingdata may prove to be much more dangerousand deceptive than any proposed misuse ofthe JIF Examples of this unbounded anduncorrected use are easily found See forexample a recent list published in Retro-virology19 Furthermore in todayrsquos highlycollaborative research environments eventhe concept of deriving a metric reflectingindividual performance from a set of co-authored papers without some assignmentof shares or roles raises the issue of whetherthe data are in fact appropriate to the ques-tions being asked of them

lsquoHorses for coursesrsquo in research evaluation

Evaluative uses of citation data are mosteffective when the citation data and metricsare carefully designed for the particular pur-pose and rely on appropriate benchmark andbaseline data for comparisons For this rea-son Thomson Scientific provides manydifferent metrics for different purposes andcontinues to develop new ones as specificnew evaluative needs arise The many part-ners and consultants who add value andresell ISI citation data generally do likewise

Trends in the use of ISI citation databases for evaluation 89

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

h-index

The h-index was first proposed by JEHirsch in 2005 He defines it as followslsquoA scientist has index h if h of his or herNp papers have h citations and the other(Npndash h) papers have leh citations eachrsquo17

This metric is useful because it discountsthe disproportionate weight of highlycited papers or papers that have not yetbeen cited

comparing likewith like ismore difficultacrossdisciplinessince JIFs varygreatly acrossdisciplines

A few examples from the Thomson portfoliowill illustrate these points

In terms of journal level metrics Thom-sonrsquos Journal Citation Reportsreg annuallyprovides many different metrics the JIF ajournal-level Immediacy Index to show howrapidly journals accumulate citations in theirfirst year cited and citing lsquohalf lifelsquo indica-tors and 10-year charts of citations to showhow citations accumulate over periods oftime a chart of self-citations to show thepercentages of citations to the same journaland a Journal Relatedness Index20 to show thejournals most closely associated with a givenjournal in terms of citations plus basic sta-tistics and rank in category and the full setof article article type and citation countsthat support these calculations Each ofthese metrics helps answer specific questionsabout the journal and the user is encour-aged to explore the data and use itappropriately instead of simply grabbing theJIF and using it as the universal answer toany evaluative question

More advanced journal-level analysis issupported by more customized and flexibleproducts most notably the Journal Perfor-mance Indicatorstrade which allow calculationof JIF-like metrics for the userrsquos chosen timeperiods and a more fine-grained analysis bydocument types and field-specific baselinesMore locally focused workflow-orientedanalysis is supported by the Journal UseReportstrade which enable users to comparejournal-level citation metrics with the publi-cation citation and usage data for their owninstitutions

For the most part these tools do not sup-port research evaluation because of thelimitations of journal-level metrics outlinedearlier For research evaluation Thomson

Scientific recognizes the need for customdata sets and for comparative data appropri-ate to the specific evaluative task andprovides these for both standard and customdata sets as needed Such metrics are builtup from the individual article level andinclude among many others

Simple counts of cites papers and citesper paper providing basic statistics andbaselines over a variety of time periods aswell as standard statistical measures suchas mean median and mode

Second-generation citation counts (cita-tions from articles that cite the lsquocitingarticlesrsquo of the original article) normalizedby field which measure of the long termimpact of a paper

Time series data covering both standardciting and cited periods (one-year five-year 25-year etc) and user-defined timeperiods

Percent shares of papers or citationswithin a broader category (such as a fieldcountry or institution)

Field baselines representing the meanimpact (cites per paper) for papers in afield (usually a journal set) defined for aspecific citedciting time window

Expected citation counts which predicthow often a paper is expected to be citedbased on its year journal and article type

The variety of these metrics and data setsreflects the recognition of the two lsquogoldenrulesrsquo of citation data as applied to evalua-tion ndash that data be appropriate to thequestions asked and that all comparativequestions rely on comparisons of like withlike When these rules are kept in mind andthose who use citation data take the timeand patience to apply them carefully cita-tion data can be used productively andhelpfully in evaluative processes

As Eugene Garfield the founder of ISIand inventor of the ISI citation databaseshas remarked lsquoIn 1955 it did not occur tome that ldquoimpactrdquo would become so contro-versialrsquo9 The pressures of outcomes-basedfunding decisions and personal rewards sys-tems along with the ready availability ofcitation data in a more democratized digitalworld will continue to fuel this controversyIn view of these powerful trends it is more

90 James Pringle

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

Cited half life

The number of publication years from thecurrent Journal Citation Reports year thataccount for 50 of the citations receivedby the journal This metric is usefulbecause it helps show the speed withwhich citations accrue to the articles in ajournal and the continued use and valueof older articles

programs, national assessment programs, and inter-country comparisons. At the level of national and international policy, such organizations as the National Science Foundation (NSF), the European Union (EU), and the Observatoire des Sciences et des Techniques (OST) have used ISI citation data for many years to chart the progress of the sciences in their regions. A recent study by the NSF, available online, provides a good example of this type of use.3 Where and how citation metrics are used in national assessment programs generally follows the structure of scientific funding on a country basis. Where central funding of universities is more important, as in a number of European countries, citation metrics are widely used at the national level. In the USA, where funding patterns are more diverse, there is less emphasis on citations in national assessment and more activity at the local, university level. Assessments of individuals for promotion and tenure decisions may be conducted as an extension of a national program or as an individual departmental activity, with many variations in practice.

In this period of experiment within a more democratic digital world, there are no recognized 'best practices'. Some analysts are seeking to develop such guidelines, but these are as yet not widely employed or agreed upon.4 In the absence of clear guidelines, it is no wonder that citation metrics are sometimes subject to misuse.

What constitutes misuse, however, is not always black and white. Citation data can be used effectively in many different ways to answer many different questions. It is important to use such data in a way that contributes to a deeper understanding and better judgment about a question of interest. Because the nature of the questions asked can be so varied, the charge of 'misuse' must be leveled with an understanding of the specific use to which the data are being applied.

Several years ago, an analyst for Thomson Scientific proposed a set of rules for applying citation analysis to evaluation.5 This presentation remains unpublished, but I will extract from it two key 'rules' that particularly stand out as relevant to the present analysis, and I will call them the 'golden rules'.

First, 'consider whether the available data can address the question'. Too often a user of citation data, attracted to their ease of quantification, treats citation analysis like a hammer in search of nails, rather than determining whether the data are available and, to rephrase slightly, appropriate to the question asked.

Second, 'compare like with like'. This fundamental rule of citation analysis pervades every evaluative exercise, but is often honored only in the breach. Whenever one hears the question 'is a high journal impact factor always better?', or when an assistant professor in economics is judged less worthy because she 'has fewer citations' than her colleague in immunology, this rule has been violated.

The journal impact factor debate

Both rules are in play in the most active area of debate in citation metrics: that concerning the journal impact factor (JIF). The JIF is the best-known example of a citation metric used for evaluation. It is a journal-level metric designed for one purpose – to compare the citation impact of one journal with other journals. Because it has become so well known, 'impact factor' has almost come to stand as shorthand for any use of citation metrics of which one disapproves. Yet most of the issues raised concerning the JIF in evaluation, when these are not simply data issues affecting a limited number of cases, result from a failure to follow the two rules listed above, and concern uses of the JIF for purposes other than that for which it was designed.

The journal impact factor

The journal impact factor (JIF), often known simply as the impact factor (IF), is a ratio between citations and recent citable items published. It is calculated as follows:

A = total cites in 2006
B = 2006 cites to articles published in 2004–5 (a subset of A)
C = number of articles ('citable items', excluding editorials, letters, news items, and meeting abstracts) published in 2004–5
D = B/C = 2006 impact factor
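To make the arithmetic concrete, the calculation in the box above can be expressed in a few lines of Python. This is a minimal sketch; the journal counts are invented for illustration and do not describe any real title.

def impact_factor(cites_to_recent_items, citable_items):
    # JIF for year Y: citations received in Y to items published in
    # Y-1 and Y-2, divided by the citable items (editorials, letters,
    # news items, and meeting abstracts excluded) published in Y-1 and Y-2.
    return cites_to_recent_items / citable_items

B = 1200   # hypothetical: 2006 cites to articles published in 2004-5
C = 400    # hypothetical: citable items published in 2004-5
print(impact_factor(B, C))   # 3.0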

There is no shortage of criticism of this particular metric – criticism largely born of its success in becoming widely used and available, and thus part of 'democratization' in the digital world. While the JIF is a good general comparative measure of journal performance, it has limitations, as does any statistical measure, and these have been voluminously documented in recent years.

Initially, critics pointed to factors affecting the calculation of the JIF itself, such as a difference between items counted in numerator and denominator, and the possibility that self-citation may artificially inflate the JIF of particular journals.6,7 These limitations exist, but appear to affect only a very small percentage of journals in any year.8,9

Other critics point out structural limits, such as the perceived short time window of the calculation. It is readily demonstrable that journals in different disciplines have different citation patterns. This difficulty is easily overcome by comparing like with like – comparing journals with others in the same field, rather than with journals from another field. In this vein, refinements of the JIF have been proposed for cross-disciplinary studies that take these differing patterns into account. Calculations such as 'rank-normalized' impact factors10 or weightings based on citation time periods11 can be used to correct for this variation. The number of such 'corrective' measures is constantly growing.
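One plausible reading of the rank-normalization idea10 is a percentile-style rank of a journal's JIF within its own subject category, which makes journals comparable across fields with very different citation densities. The sketch below is illustrative only; see the cited paper for the authors' exact formulation, and note that the five-journal category and its JIFs are invented.

def rank_normalized_if(category_jifs, journal_jif):
    # (K - R + 1) / K, where K is the number of journals in the
    # category and R is the journal's rank when JIFs are sorted in
    # descending order; the top journal scores 1.0, the bottom 1/K.
    ranked = sorted(category_jifs, reverse=True)
    R = ranked.index(journal_jif) + 1
    K = len(ranked)
    return (K - R + 1) / K

print(rank_normalized_if([4.2, 3.1, 2.0, 1.2, 0.6], 3.1))  # 0.8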

Another line of criticism charges that over-use of 'impact factors' has shifted behavior among authors and publishers in undesirable ways. This line of criticism has become a popular journalistic theme.12 However, given the fundamental trends already mentioned, it is likely that this criticism actually applies to any possible quantitative ranking system used to compare journals. Pressure to publish in top journals derives from an intensifying outcomes-based culture, not from a particular measure. It is highly debatable, for example, whether Science and Nature would have lower rejection rates if the JIF did not exist. Furthermore, studies of actual editorial behavior undertaken to influence the JIF for particular journals show that most editors sought to raise their journals' impact factors over time by applying sound editorial practices and cultivating top authors, not by editorial manipulation of citing behaviors.13

Variants on the JIF, and proposed alternatives to it, are appearing at an increasing rate – perhaps, if one were to count, as many as one or two new journal metrics per month. Each new metric tends to be hailed as the ultimate solution to the problems of journal ranking. It is questionable, however, whether the sheer multiplication of metrics will somehow lead us to a fundamental change in practice, or whether the adoption of new metrics will fundamentally change our view of the overall ranking of journals.

Some new journal metrics employ complex methodologies for uncertain results. For example, the Eigenfactor is a metric that uses the journal citation matrices presented in the Journal Citation Reports® to rank journals based on their position within the overall network of journal citation relationships.11,14 It relies on a PageRank-like algorithm that weights citations based on the citation counts of the citing journal, in a complex vector algorithm. Such a mathematical approach has been proven to achieve more relevant web searching, but whether it also generates better evaluative rankings is only beginning to be tested. Allowing a citation from a highly cited journal to 'count more' than a citation from a less-cited journal may prove to be highly controversial in principle once it is more widely understood.
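The idea of weighting citations by the standing of the citing journal can be sketched as a small power iteration over a journal-to-journal citation matrix. This is a toy illustration of a PageRank-like calculation, not the published Eigenfactor algorithm (which, among other refinements, excludes journal self-citations; see reference 14); the matrix and journals are invented.

import numpy as np

def journal_influence(C, alpha=0.85, iters=100):
    # C[i, j] = citations from journal j to journal i.
    # Power iteration on the column-normalized matrix with damping
    # alpha; citations from heavily cited journals count for more.
    C = np.asarray(C, dtype=float)
    n = C.shape[0]
    col_totals = C.sum(axis=0)
    col_totals[col_totals == 0] = 1.0   # guard against empty columns
    M = C / col_totals
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        w = alpha * M.dot(w) + (1 - alpha) / n
    return w / w.sum()

# Three hypothetical journals; journal 0 is cited by the other two.
C = [[0, 5, 4],
     [1, 0, 2],
     [1, 1, 0]]
print(journal_influence(C).round(3))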

Likewise, the prospect of usage metrics seems to promise a 'counterweight' to the JIF.12,15 This idea merits consideration, but the meaning of a download currently remains much less well understood than that of a citation, and the meaning of such metrics will remain unsettled until they have been similarly battle-tested. For some disciplines in which publications are oriented toward practitioners, who read but rarely cite, usage may in fact prove to be a useful complement to citation in journal ranking. Elsewhere its role is less clear.


These new approaches are oriented towards specific issues in journal ranking. However, they do not address the broader use of the JIF in research evaluation. It is here that the two rules listed above apply most directly. The use of the JIF in these contexts tends to take one of two main forms:

an indicator of success achieved in having an article accepted by a prestigious journal;

a surrogate for a more carefully derived, direct measure of citation impact.

While the first use may have some utility, the second appears difficult to justify.

The first use – as an indicator that the author has succeeded in having an article accepted in a prestigious journal – has some justification. There is a hierarchy of journals within subject areas, and this hierarchy broadly corresponds to impact rankings, so in a formal communication system based on peer review, acceptance of an article for publication in one of these journals is an important scholarly achievement. Rewards based on this achievement can be seen as encouraging scholars to aim high in their publishing goals. The data may be appropriate to the question being asked if one is comparing achievement within a given field. However, comparing like with like, our second rule, is more difficult to apply across disciplines, since JIFs vary greatly across disciplines. Rank in category is very important and can be addressed in several ways. One way is to create a quadrant division among journals within disciplines and then compare publication in quadrants across disciplines.16

But the use of the JIF does not end there. It is also used as a surrogate for more direct measures, in such concepts as a 'total impact factor' or other calculations in which the JIF stands in for article performance and further calculations are performed on lists of JIFs. It is very hard to see how such data, so manipulated, are appropriate for any question related to evaluation or comparison.

In response to issues raised in the perceived misuse of the JIF in evaluation, calls for the use of 'article-level metrics' have been made, and the most popular of such metrics at the moment is the 'h-index'.17 This metric, and an increasing number of other article-level metrics, are readily calculated in the Web of Science and even in some of the openly accessible citation resources.18

Although article-level metrics appear to be more appropriate for questions about individual performance than the JIF, they may easily violate the rule that calls for us to 'compare like with like', at least until these metrics are better understood. A simple list of h-indexes, without reference to time periods or the consistency of the underlying data, may prove to be much more dangerous and deceptive than any proposed misuse of the JIF. Examples of this unbounded and uncorrected use are easily found: see, for example, a recent list published in Retrovirology.19 Furthermore, in today's highly collaborative research environments, even the concept of deriving a metric reflecting individual performance from a set of co-authored papers, without some assignment of shares or roles, raises the issue of whether the data are in fact appropriate to the questions being asked of them.

'Horses for courses' in research evaluation

Evaluative uses of citation data are most effective when the citation data and metrics are carefully designed for the particular purpose and rely on appropriate benchmark and baseline data for comparisons. For this reason, Thomson Scientific provides many different metrics for different purposes and continues to develop new ones as specific new evaluative needs arise. The many partners and consultants who add value to and resell ISI citation data generally do likewise.


h-index

The h-index was first proposed by J.E. Hirsch in 2005. He defines it as follows: 'A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np - h) papers have ≤h citations each.'17 This metric is useful because it discounts the disproportionate weight of highly cited papers or papers that have not yet been cited.
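Hirsch's definition translates directly into a few lines of code. A minimal sketch, with invented citation counts:

def h_index(citation_counts):
    # Largest h such that at least h papers have >= h citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# 3: the top three papers each have >= 3 cites, but the 4th has < 4.
print(h_index([25, 8, 5, 3, 3, 1, 0]))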


A few examples from the Thomson portfolio will illustrate these points.

In terms of journal-level metrics, Thomson's Journal Citation Reports® annually provides many different measures: the JIF; a journal-level Immediacy Index, to show how rapidly journals accumulate citations in their first year; cited and citing 'half-life' indicators and 10-year charts of citations, to show how citations accumulate over periods of time; a chart of self-citations, to show the percentage of citations to the same journal; and a Journal Relatedness Index,20 to show the journals most closely associated with a given journal in terms of citations – plus basic statistics, rank in category, and the full set of article, article-type, and citation counts that support these calculations. Each of these metrics helps answer specific questions about the journal, and the user is encouraged to explore the data and use them appropriately, instead of simply grabbing the JIF and using it as the universal answer to any evaluative question.

Cited half-life

The number of publication years, counting back from the current Journal Citation Reports year, that account for 50% of the citations received by the journal. This metric is useful because it helps show the speed with which citations accrue to the articles in a journal, and the continued use and value of older articles.
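Read as a cumulative calculation, the cited half-life in the box above can be sketched as follows. The counts are invented, and the published JCR figure is interpolated to tenths of a year rather than the whole years this sketch returns.

def cited_half_life(cites_by_pub_year):
    # cites_by_pub_year: citations received in the current JCR year,
    # grouped by the cited item's publication year, newest year first.
    # Returns how many of the newest years account for >= 50% of cites.
    total = sum(cites_by_pub_year)
    running = 0
    for years_back, cites in enumerate(cites_by_pub_year, start=1):
        running += cites
        if running >= total / 2:
            return years_back
    return len(cites_by_pub_year)

cites = [50, 60, 60, 40, 50, 40]   # hypothetical journal
print(cited_half_life(cites))      # 3: the three newest years hold >= 50%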

More advanced journal-level analysis is supported by more customized and flexible products, most notably the Journal Performance Indicators™, which allow calculation of JIF-like metrics for the user's chosen time periods and a more fine-grained analysis by document types and field-specific baselines. More locally focused, workflow-oriented analysis is supported by the Journal Use Reports™, which enable users to compare journal-level citation metrics with the publication, citation, and usage data for their own institutions.

For the most part, these tools do not support research evaluation, because of the limitations of journal-level metrics outlined earlier. For research evaluation, Thomson Scientific recognizes the need for custom data sets and for comparative data appropriate to the specific evaluative task, and provides these for both standard and custom data sets as needed. Such metrics are built up from the individual article level and include, among many others:

simple counts of cites, papers, and cites per paper, providing basic statistics and baselines over a variety of time periods, as well as standard statistical measures such as mean, median, and mode;

second-generation citation counts (citations from articles that cite the 'citing articles' of the original article), normalized by field, which measure the long-term impact of a paper;

time-series data covering both standard citing and cited periods (one-year, five-year, 25-year, etc.) and user-defined time periods;

percent shares of papers or citations within a broader category (such as a field, country, or institution);

field baselines representing the mean impact (cites per paper) for papers in a field (usually a journal set), defined for a specific cited/citing time window;

expected citation counts, which predict how often a paper is expected to be cited based on its year, journal, and article type (see the sketch after this list).
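As a rough sketch of how such article-level comparisons fit together, one can compute a field baseline as mean cites per paper for a matched set of papers, and express a paper's performance as the ratio of actual to expected citations. The numbers are hypothetical, and Thomson's production definitions are more refined than this.

def field_baseline(cites_per_paper):
    # Mean cites per paper for papers in a field, for one publication
    # year and one citation window.
    return sum(cites_per_paper) / len(cites_per_paper)

def relative_impact(actual_cites, expected_cites):
    # Ratio of actual citations to the expected count for the paper's
    # year, journal, and article type; 1.0 means 'at expectation'.
    return actual_cites / expected_cites

field_2004 = [0, 1, 3, 5, 8, 13]        # hypothetical field set
baseline = field_baseline(field_2004)   # 5.0 cites per paper
print(relative_impact(12, baseline))    # 2.4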

The variety of these metrics and data sets reflects the recognition of the two 'golden rules' of citation data as applied to evaluation – that data be appropriate to the questions asked, and that all comparative questions rely on comparisons of like with like. When these rules are kept in mind, and those who use citation data take the time and patience to apply them carefully, citation data can be used productively and helpfully in evaluative processes.

As Eugene Garfield, the founder of ISI and inventor of the ISI citation databases, has remarked: 'In 1955 it did not occur to me that "impact" would become so controversial.'9 The pressures of outcomes-based funding decisions and personal rewards systems, along with the ready availability of citation data in a more democratized digital world, will continue to fuel this controversy. In view of these powerful trends, it is more important than ever to remind ourselves that citation metrics must be used carefully, and should only be used to inform, not to supplant, peer review and other forms of qualitative evaluation.

References

1. Lawrence, P. 2007. The mismeasurement of science. Current Biology, 17, R583–5. http://dx.doi.org/10.1016/j.cub.2007.06.014

2. Pauly, D. and Stergiou, K. 2005. Equivalence of results from two citation analyses: Thomson ISI's Citation Index and Google's Scholar service. Ethics in Science and Environmental Politics, 2005, 33–5. Available at: http://www.int-res.com/articles/esep/2005/E65.pdf

3. Hill, D., Rapoport, A., Lehming, R. and Bell, R. Changing U.S. Output of Scientific Articles: 1988–2003. Arlington, VA: National Science Foundation, Division of Science Resources Statistics, 2007. Available at: http://www.nsf.gov/statistics/nsf07320

4. Moed, H. Citation Analysis in Research Evaluation. Amsterdam: Springer, 2005.

5. Pendlebury, D. The ISI Database and Bibliometrics: Uses & Abuses in Evaluating Research. Unpublished presentation at Thomson ISI Symposium, Tokyo, 2002.

6. Moed, H. and Van Leeuwen, T. 1995. Improving the accuracy of Institute for Scientific Information's Journal Impact Factors. Journal of the American Society for Information Science, 46, 461–7. http://dx.doi.org/10.1002/(SICI)1097-4571(199507)46:6<461::AID-ASI5>3.0.CO;2-G

7. Seglen, P. 1997. Citations and journal impact factors: questionable indicators of research quality. Allergy, 52, 1050–6. http://dx.doi.org/10.1111/j.1398-9995.1997.tb00175.x

8. McVeigh, M. Journal Self-Citation in the Journal Citation Reports – Science Edition (2002). Philadelphia, PA: Thomson Corporation, 2004. Available at: http://www.thomsonscientific.com/media/presentrep/essayspdf/selfcitationsinjcr.pdf

9. Garfield, E. The Agony and the Ecstasy – The History and Meaning of the Journal Impact Factor. International Congress on Peer Review and Biomedical Publication, Chicago, 16 September 2005. Available at: http://www.garfield.library.upenn.edu/papers/jifchicago2005.pdf

10. Pudovkin, A. and Garfield, E. Rank-normalized impact factor: a way to compare journal performance across subject categories. Proceedings of the 67th ASIS&T Annual Meeting, Providence, RI. American Society for Information Science & Technology, 2004. Available at: http://www.garfield.library.upenn.edu/papers/asistranknormalization2004.pdf

11. Sombatsompop, N., Markpin, T. and Premkamolnetr, N. 2004. A modified method for calculating the impact factors of journals in ISI Journal Citation Reports: Polymer Science Category in 1997–2001. Scientometrics, 60, 217–35. http://dx.doi.org/10.1023/B:SCIE.0000027794.98854.f6

12. Monastersky, R. 2005. The number that's devouring science. Chronicle of Higher Education, 52, A12–17. Available at: http://chronicle.com/free/v52/i08/08a01201.htm

13. Chew, M., Villanueva, E. and Van Der Weyden, M. 2007. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994–2005) and their Editors' views. Journal of the Royal Society of Medicine, 100, 142–50. http://dx.doi.org/10.1258/jrsm.100.3.142

14. Bergstrom, C. http://www.eigenfactor.org/methods.htm

15. Shepherd, P. Final Report on the Investigation into the Feasibility of Developing and Implementing Journal Usage Factors. UK Serials Group, 2007. http://www.uksg.org/usagefactors/final

16. Magri, M.-H. and Solari, A. 1996. The SCI journal citation reports: a potential tool for studying journals. Scientometrics, 35, 93–117. http://dx.doi.org/10.1007/BF02018235

17. Hirsch, J. 2005. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102, 16569–72. http://dx.doi.org/10.1073/pnas.0507655102

18. Harzing, A.-W. http://www.harzing.com/resources.htm#pop.htm. Accessed October 2007.

19. Jeang, K. 2007. Impact factor, H index, peer comparisons, and Retrovirology: is it time to individualize citation metrics? Retrovirology, 4, 42. http://dx.doi.org/10.1186/1742-4690-4-42

20. Pudovkin, A. and Garfield, E. 2002. Algorithmic procedure for finding semantically related journals. Journal of the American Society for Information Science and Technology, 53, 1113–19. http://dx.doi.org/10.1002/asi.10153

James Pringle
Vice President, Product Development
Thomson Scientific
3501 Market Street, Philadelphia, PA 19104, USA
Email: james.pringle@thomson.com

Trends in the use of ISI citation databases for evaluation 91

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

citation metricsmust be usedcarefully andshould only beused to informnot to supplantpeer review

Page 4: Trendsintheuse M ofISIcitation databasesfor evaluationtefkos.comminfo.rutgers.edu/Courses/e530/Readings... · place in a context of increased availability of data with which to measure

listed above and concern uses of the JIF forpurposes other than that for which it wasdesigned

There is no shortage of criticism of thisparticular metric ndash criticism largely born ofits success in becoming widely used andavailable ndash and thus part of lsquodemocratizationrsquoin the digital world While the JIF is a goodgeneral comparative measure of journal per-formance it has limitations as does anystatistical measure which have been volu-minously documented in recent years

Initially critics pointed to factors affectingthe calculation of the JIF itself such as a dif-ference between items counted in numeratorand denominator and the possibility thatself-citation may artificially inflate the JIFof particular journals67 These limitationsexist but appear to affect only a very smallpercentage of journals in any year89

Other critics point out structural limitssuch as the perceived short time window ofthe calculation It is readily demonstrablethat journals in different disciplines have dif-ferent citation patterns This difficulty iseasily overcome by comparing like with likendash comparing journals with others in thesame field rather than with journals fromanother field In this vein refinements of theJIF have been proposed for cross-disciplinarystudies that take into account these differ-ing patterns Calculations such as lsquorank-normalizedrsquo impact factors10 or includingweightings based on citation time periods11

can be used to correct for this variation Thenumber of such lsquocorrectiversquo measures is con-stantly growing

Another line of criticism charges thatover-use of lsquoimpact factorsrsquo has shiftedbehavior among authors and publishers inundesirable ways This line of criticism hasbecome a popular journalistic theme12 How-ever given the fundamental trends alreadymentioned it is likely that this criticismactually applies to any possible quantitativeranking system used to compare journalsPressure to publish in top journals derivesfrom an intensifying outcomes-based cul-ture not from a particular measure It ishighly debatable for example whether Sci-ence and Nature would have lower rejectionrates if the JIF did not exist Furthermorestudies of actual editorial behavior under-

taken to influence the JIF for particularjournals show that most editors sought toraise their journalsrsquo impact factors over timeby applying sound editorial practices andcultivating top authors not by editorialmanipulation of citing behaviors13

Variants on the JIF and proposed alter-natives to it are appearing at an increasingrate which a statistical study if one wereconducted might possibly find to be runningas high as 1ndash2 new journal metrics permonth Each such new metric tends to behailed as the ultimate solution to the prob-lems of journal ranking It is questionablehowever whether the sheer multiplication ofmetrics will somehow lead us to a fundamen-tal change in practice or whether theadoption of new metrics will fundamentallychange our view of the overall ranking ofjournals

Some new journal metrics employ com-plex methodologies for uncertain results Forexample the Eigenfactor is a metric thatuses the journal citation matrices presentedin the Journal Citation Reportsreg to rank jour-nals based on their position within theoverall network of journal citation relation-ships1114 It relies on a PageRank-likealgorithm that weights citations based onthe citation counts of the citing journal in acomplex vector algorithm Such a math-ematical approach has been proven toachieve more relevant web searching butwhether it also generates better evaluativerankings is only beginning to be testedAllowing a citation from a highly cited jour-nal to lsquocount morersquo than a citation from aless-cited journal may prove to be highlycontroversial in principle once it is morewidely understood

Likewise the prospect of usage metricsseems to promise a lsquocounterweightrsquo to theJIF1215 This idea merits consideration butthe meaning of a download currentlyremains much less well understood than thatof a citation and so the meaning of suchmetrics is still to be determined until theyhave been similarly battle-tested For somedisciplines in which publications are ori-ented toward practitioners who read butrarely cite usage may in fact prove to be auseful complement to citation in journalranking Elsewhere its role is less clear

88 James Pringle

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

some newjournal metrics

employ complexmethodologiesfor uncertain

results

These new approaches are orientedtowards specific issues in journal rankingHowever they do not address the issue ofbroader use of the JIF in research evaluationIt is here that the two rules listed aboveapply most directly The use of the JIF inthese contexts tends to take one of two mainforms

An indicator of success achieved in hav-ing an article accepted by a prestigiousjournal

A surrogate for a more carefully deriveddirect measure of citation impact

While the first use may have some utilitythe second appears difficult to justify

The first use ndash as an indicator that theauthor has succeeded in having an articleaccepted in a prestigious journal ndash has somejustification There is a hierarchy of journalswithin subject areas and this hierarchybroadly corresponds to impact rankings soin a formal communication system based onpeer review acceptance of an article forpublication in one of these journals is animportant scholarly achievement Rewardsbased on this achievement can be seen asencouraging scholars to aim high in theirpublishing goals The data may be appropri-ate to the question being asked if one iscomparing achievement within a given fieldHowever comparing like with like our sec-ond rule is more difficult to apply acrossdisciplines since JIFs vary greatly across dis-ciplines Rank in category is very importantand can be addressed in several ways Oneway is to create a quadrant division amongjournals within disciplines and then com-pare publication in quadrants across disci-plines16

But the use of the JIF does not end thereIt is also used as a surrogate for more directmeasures in such concepts as a lsquototal impactfactorrsquo or other calculations in which theJIF stands in for article performance and fur-ther calculations are performed on lists ofJIFs It is very hard to see how such data somanipulated are appropriate for any ques-tion related to evaluation or comparison

In response to issues raised in the per-ceived misuse of the JIF in evaluation callsfor the use of lsquoarticle-level metricsrsquo havebeen made and the most popular of such

metrics at the moment is the lsquoh-indexrsquo17

This metric and an increasing number ofother article-level metrics are readily calcu-lated in the Web of Science and even in someof the openly accessible citation resources18

Although article-level metrics appear tobe more appropriate for questions aboutindividual performance than the JIF theymay easily violate the rule that calls for us tolsquocompare like with likersquo at least until thesemetrics are better understood A simple listof h-indexes without reference to time peri-ods or the consistency of the underlyingdata may prove to be much more dangerousand deceptive than any proposed misuse ofthe JIF Examples of this unbounded anduncorrected use are easily found See forexample a recent list published in Retro-virology19 Furthermore in todayrsquos highlycollaborative research environments eventhe concept of deriving a metric reflectingindividual performance from a set of co-authored papers without some assignmentof shares or roles raises the issue of whetherthe data are in fact appropriate to the ques-tions being asked of them

lsquoHorses for coursesrsquo in research evaluation

Evaluative uses of citation data are mosteffective when the citation data and metricsare carefully designed for the particular pur-pose and rely on appropriate benchmark andbaseline data for comparisons For this rea-son Thomson Scientific provides manydifferent metrics for different purposes andcontinues to develop new ones as specificnew evaluative needs arise The many part-ners and consultants who add value andresell ISI citation data generally do likewise

Trends in the use of ISI citation databases for evaluation 89

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

h-index

The h-index was first proposed by JEHirsch in 2005 He defines it as followslsquoA scientist has index h if h of his or herNp papers have h citations and the other(Npndash h) papers have leh citations eachrsquo17

This metric is useful because it discountsthe disproportionate weight of highlycited papers or papers that have not yetbeen cited

comparing likewith like ismore difficultacrossdisciplinessince JIFs varygreatly acrossdisciplines

A few examples from the Thomson portfoliowill illustrate these points

In terms of journal level metrics Thom-sonrsquos Journal Citation Reportsreg annuallyprovides many different metrics the JIF ajournal-level Immediacy Index to show howrapidly journals accumulate citations in theirfirst year cited and citing lsquohalf lifelsquo indica-tors and 10-year charts of citations to showhow citations accumulate over periods oftime a chart of self-citations to show thepercentages of citations to the same journaland a Journal Relatedness Index20 to show thejournals most closely associated with a givenjournal in terms of citations plus basic sta-tistics and rank in category and the full setof article article type and citation countsthat support these calculations Each ofthese metrics helps answer specific questionsabout the journal and the user is encour-aged to explore the data and use itappropriately instead of simply grabbing theJIF and using it as the universal answer toany evaluative question

More advanced journal-level analysis issupported by more customized and flexibleproducts most notably the Journal Perfor-mance Indicatorstrade which allow calculationof JIF-like metrics for the userrsquos chosen timeperiods and a more fine-grained analysis bydocument types and field-specific baselinesMore locally focused workflow-orientedanalysis is supported by the Journal UseReportstrade which enable users to comparejournal-level citation metrics with the publi-cation citation and usage data for their owninstitutions

For the most part these tools do not sup-port research evaluation because of thelimitations of journal-level metrics outlinedearlier For research evaluation Thomson

Scientific recognizes the need for customdata sets and for comparative data appropri-ate to the specific evaluative task andprovides these for both standard and customdata sets as needed Such metrics are builtup from the individual article level andinclude among many others

Simple counts of cites papers and citesper paper providing basic statistics andbaselines over a variety of time periods aswell as standard statistical measures suchas mean median and mode

Second-generation citation counts (cita-tions from articles that cite the lsquocitingarticlesrsquo of the original article) normalizedby field which measure of the long termimpact of a paper

Time series data covering both standardciting and cited periods (one-year five-year 25-year etc) and user-defined timeperiods

Percent shares of papers or citationswithin a broader category (such as a fieldcountry or institution)

Field baselines representing the meanimpact (cites per paper) for papers in afield (usually a journal set) defined for aspecific citedciting time window

Expected citation counts which predicthow often a paper is expected to be citedbased on its year journal and article type

The variety of these metrics and data setsreflects the recognition of the two lsquogoldenrulesrsquo of citation data as applied to evalua-tion ndash that data be appropriate to thequestions asked and that all comparativequestions rely on comparisons of like withlike When these rules are kept in mind andthose who use citation data take the timeand patience to apply them carefully cita-tion data can be used productively andhelpfully in evaluative processes

As Eugene Garfield the founder of ISIand inventor of the ISI citation databaseshas remarked lsquoIn 1955 it did not occur tome that ldquoimpactrdquo would become so contro-versialrsquo9 The pressures of outcomes-basedfunding decisions and personal rewards sys-tems along with the ready availability ofcitation data in a more democratized digitalworld will continue to fuel this controversyIn view of these powerful trends it is more

90 James Pringle

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

Cited half life

The number of publication years from thecurrent Journal Citation Reports year thataccount for 50 of the citations receivedby the journal This metric is usefulbecause it helps show the speed withwhich citations accrue to the articles in ajournal and the continued use and valueof older articles

citation datacan be usedproductively

and helpfully inevaluativeprocesses

important than ever to remind ourselvesthat citation metrics must be used carefullyand should only be used to inform not tosupplant peer review and other forms ofqualitative evaluation

References

1 Lawrence P 2007 The mismeasurement of scienceCurrent Biology 17 R583ndash5httpdxdoiorg101016jcub200706014

2 Pauly D and Stergiou K 2005 Equivalence of resultsfrom two citation analyses Thomson ISIrsquos CitationIndex and Googlersquos Scholar service Ethics in Scienceand Environmental Politics 2005 33ndash5 Available athttpwwwint-rescomarticlesesep2005E65pdf

3 Hill D Rapoport A Lehming R and Bell RChanging US Output of Scientific Articles 1988ndash2003Arlington VA National Science Foundation Divi-sion of Science Resources Statistics 2007 Availableat httpwwwnsfgovstatisticsnsf07320

4 Moed H Citation Analysis in Research EvaluationAmsterdam Springer 2005

5 Pendlebury D The ISI Database and BibliometricsUses amp Abuses in Evaluating Research Unpublishedpresentation at Thomson ISI Symposium Tokyo2002

6 Moed H and Vanleeuwen T 1995 Improving theaccuracy of Institute for Scientific Informationrsquos Jour-nal Impact Factors Journal of the American Society forInformation Science 46 461ndash7httpdxdoiorg101002(SICI)1097-4571(199507)466lt461AID-ASI5gt30CO2-G

7 Seglen P 1997 Citations and journal impact factorsquestionable indicators of research quality Allergy 521050ndash6httpdxdoiorg101111j1398ndash99951997tb00175x

8 McVeigh M Journal Self-Citation in the Journal Cita-tion Reports ndash Science Edition (2002) Philadelphia PAThomson Corporation 2004 Available at httpwwwthomsonscientificcommediapresentrepessayspdfselfcitationsinjcrpdf

9 Garfield E The Agony and the Ecstasy ndash The Historyand Meaning of the Journal Impact Factor InternationalCongress on Peer Review And Biomedical Publica-tion Chicago 16 September 2005 Available athttpwwwgarfieldlibraryupennedupapersjifchicago2005pdf

10 Pudovkin A and Garfield E Rank-normalizedimpact factor a way to compare journal performanceacross subject categories Proceedings of the 67thASISampT Annual Meeting Providence RI American

Society for Information Systems amp Technology 2004Available at httpwwwgarfieldlibraryupennedupapersasistranknormalization2004pdf

11 Sombatsompop N Markpin T and PremkamolnetrN 2004 A modified method for calculating theimpact factors of journals in ISI Journal CitationReports Polymer Science Category in 1997ndash2001Scientometrics 60 217ndash35httpdxdoiorg101023BSCIE000002779498854f6

12 Monastersky R 2005 The number thatrsquos devouringscience Chronicle of Higher Education 52 A12ndash17Available at httpchroniclecomfreev52i0808a01201htm

13 Chew M Villanueva E and Van Der Weyden M2007 Life and times of the impact factor retrospect-ive analysis of trends for seven medical journals(1994ndash2005) and their Editorsrsquo views Journal of theRoyal Society of Medicine 100 142ndash50httpdxdoiorg101258jrsm1003142

14 Bergstrom C httpwwweigenfactororgmethodshtm

15 Shepherd P Final Report on the Investigation into theFeasibility of Developing and Implementing Journal UsageFactors UK Serials Group 2007 httpwwwuksgorgusagefactorsfinal

16 Magri M-H and Solari A 1996 The SCI journalcitation reports a potential tool for studying journalsScientometrics 35 93ndash117httpdxdoiorg101007BF02018235

17 Hirsch J 2005 An index to quantify an individualrsquosscientific research output Proceedings of the NationalAcademy of Science 102 16569ndash72httpdxdoiorg101073pnas0507655102

18 Harzing A-W httpwwwharzingcomresourceshtmpophtm Accessed October 2007

19 Jeang K 2007 Impact factor H index peer compari-sons and Retrovirology is it time to individualize cita-tion metrics Retrovirology 4 42httpdxdoiorg1011861742ndash4690ndash4ndash42

20 Pudovkin A and Garfield E 2002 Algorithmic pro-cedure for finding semantically related journals Jour-nal of the American Society for Information Science andTechnology 53 1113ndash19httpdxdoiorg101002asi10153

James PringleVice President Product DevelopmentThomson Scientific3501 Market StreetPhiladelphia PA 19104 USAEmail jamespringlethomsoncom

Trends in the use of ISI citation databases for evaluation 91

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

citation metricsmust be usedcarefully andshould only beused to informnot to supplantpeer review

Page 5: Trendsintheuse M ofISIcitation databasesfor evaluationtefkos.comminfo.rutgers.edu/Courses/e530/Readings... · place in a context of increased availability of data with which to measure

These new approaches are orientedtowards specific issues in journal rankingHowever they do not address the issue ofbroader use of the JIF in research evaluationIt is here that the two rules listed aboveapply most directly The use of the JIF inthese contexts tends to take one of two mainforms

An indicator of success achieved in hav-ing an article accepted by a prestigiousjournal

A surrogate for a more carefully deriveddirect measure of citation impact

While the first use may have some utilitythe second appears difficult to justify

The first use ndash as an indicator that theauthor has succeeded in having an articleaccepted in a prestigious journal ndash has somejustification There is a hierarchy of journalswithin subject areas and this hierarchybroadly corresponds to impact rankings soin a formal communication system based onpeer review acceptance of an article forpublication in one of these journals is animportant scholarly achievement Rewardsbased on this achievement can be seen asencouraging scholars to aim high in theirpublishing goals The data may be appropri-ate to the question being asked if one iscomparing achievement within a given fieldHowever comparing like with like our sec-ond rule is more difficult to apply acrossdisciplines since JIFs vary greatly across dis-ciplines Rank in category is very importantand can be addressed in several ways Oneway is to create a quadrant division amongjournals within disciplines and then com-pare publication in quadrants across disci-plines16

But the use of the JIF does not end thereIt is also used as a surrogate for more directmeasures in such concepts as a lsquototal impactfactorrsquo or other calculations in which theJIF stands in for article performance and fur-ther calculations are performed on lists ofJIFs It is very hard to see how such data somanipulated are appropriate for any ques-tion related to evaluation or comparison

In response to issues raised in the per-ceived misuse of the JIF in evaluation callsfor the use of lsquoarticle-level metricsrsquo havebeen made and the most popular of such

metrics at the moment is the lsquoh-indexrsquo17

This metric and an increasing number ofother article-level metrics are readily calcu-lated in the Web of Science and even in someof the openly accessible citation resources18

Although article-level metrics appear tobe more appropriate for questions aboutindividual performance than the JIF theymay easily violate the rule that calls for us tolsquocompare like with likersquo at least until thesemetrics are better understood A simple listof h-indexes without reference to time peri-ods or the consistency of the underlyingdata may prove to be much more dangerousand deceptive than any proposed misuse ofthe JIF Examples of this unbounded anduncorrected use are easily found See forexample a recent list published in Retro-virology19 Furthermore in todayrsquos highlycollaborative research environments eventhe concept of deriving a metric reflectingindividual performance from a set of co-authored papers without some assignmentof shares or roles raises the issue of whetherthe data are in fact appropriate to the ques-tions being asked of them

lsquoHorses for coursesrsquo in research evaluation

Evaluative uses of citation data are mosteffective when the citation data and metricsare carefully designed for the particular pur-pose and rely on appropriate benchmark andbaseline data for comparisons For this rea-son Thomson Scientific provides manydifferent metrics for different purposes andcontinues to develop new ones as specificnew evaluative needs arise The many part-ners and consultants who add value andresell ISI citation data generally do likewise

Trends in the use of ISI citation databases for evaluation 89

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

h-index

The h-index was first proposed by JEHirsch in 2005 He defines it as followslsquoA scientist has index h if h of his or herNp papers have h citations and the other(Npndash h) papers have leh citations eachrsquo17

This metric is useful because it discountsthe disproportionate weight of highlycited papers or papers that have not yetbeen cited

comparing likewith like ismore difficultacrossdisciplinessince JIFs varygreatly acrossdisciplines

A few examples from the Thomson portfoliowill illustrate these points

In terms of journal level metrics Thom-sonrsquos Journal Citation Reportsreg annuallyprovides many different metrics the JIF ajournal-level Immediacy Index to show howrapidly journals accumulate citations in theirfirst year cited and citing lsquohalf lifelsquo indica-tors and 10-year charts of citations to showhow citations accumulate over periods oftime a chart of self-citations to show thepercentages of citations to the same journaland a Journal Relatedness Index20 to show thejournals most closely associated with a givenjournal in terms of citations plus basic sta-tistics and rank in category and the full setof article article type and citation countsthat support these calculations Each ofthese metrics helps answer specific questionsabout the journal and the user is encour-aged to explore the data and use itappropriately instead of simply grabbing theJIF and using it as the universal answer toany evaluative question

More advanced journal-level analysis issupported by more customized and flexibleproducts most notably the Journal Perfor-mance Indicatorstrade which allow calculationof JIF-like metrics for the userrsquos chosen timeperiods and a more fine-grained analysis bydocument types and field-specific baselinesMore locally focused workflow-orientedanalysis is supported by the Journal UseReportstrade which enable users to comparejournal-level citation metrics with the publi-cation citation and usage data for their owninstitutions

For the most part these tools do not sup-port research evaluation because of thelimitations of journal-level metrics outlinedearlier For research evaluation Thomson

Scientific recognizes the need for customdata sets and for comparative data appropri-ate to the specific evaluative task andprovides these for both standard and customdata sets as needed Such metrics are builtup from the individual article level andinclude among many others

Simple counts of cites papers and citesper paper providing basic statistics andbaselines over a variety of time periods aswell as standard statistical measures suchas mean median and mode

Second-generation citation counts (cita-tions from articles that cite the lsquocitingarticlesrsquo of the original article) normalizedby field which measure of the long termimpact of a paper

Time series data covering both standardciting and cited periods (one-year five-year 25-year etc) and user-defined timeperiods

Percent shares of papers or citationswithin a broader category (such as a fieldcountry or institution)

Field baselines representing the meanimpact (cites per paper) for papers in afield (usually a journal set) defined for aspecific citedciting time window

Expected citation counts which predicthow often a paper is expected to be citedbased on its year journal and article type

The variety of these metrics and data setsreflects the recognition of the two lsquogoldenrulesrsquo of citation data as applied to evalua-tion ndash that data be appropriate to thequestions asked and that all comparativequestions rely on comparisons of like withlike When these rules are kept in mind andthose who use citation data take the timeand patience to apply them carefully cita-tion data can be used productively andhelpfully in evaluative processes

As Eugene Garfield the founder of ISIand inventor of the ISI citation databaseshas remarked lsquoIn 1955 it did not occur tome that ldquoimpactrdquo would become so contro-versialrsquo9 The pressures of outcomes-basedfunding decisions and personal rewards sys-tems along with the ready availability ofcitation data in a more democratized digitalworld will continue to fuel this controversyIn view of these powerful trends it is more

90 James Pringle

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

Cited half life

The number of publication years from thecurrent Journal Citation Reports year thataccount for 50 of the citations receivedby the journal This metric is usefulbecause it helps show the speed withwhich citations accrue to the articles in ajournal and the continued use and valueof older articles

citation datacan be usedproductively

and helpfully inevaluativeprocesses

important than ever to remind ourselvesthat citation metrics must be used carefullyand should only be used to inform not tosupplant peer review and other forms ofqualitative evaluation

References

1 Lawrence P 2007 The mismeasurement of scienceCurrent Biology 17 R583ndash5httpdxdoiorg101016jcub200706014

2 Pauly D and Stergiou K 2005 Equivalence of resultsfrom two citation analyses Thomson ISIrsquos CitationIndex and Googlersquos Scholar service Ethics in Scienceand Environmental Politics 2005 33ndash5 Available athttpwwwint-rescomarticlesesep2005E65pdf

3 Hill D Rapoport A Lehming R and Bell RChanging US Output of Scientific Articles 1988ndash2003Arlington VA National Science Foundation Divi-sion of Science Resources Statistics 2007 Availableat httpwwwnsfgovstatisticsnsf07320

4 Moed H Citation Analysis in Research EvaluationAmsterdam Springer 2005

5 Pendlebury D The ISI Database and BibliometricsUses amp Abuses in Evaluating Research Unpublishedpresentation at Thomson ISI Symposium Tokyo2002

6 Moed H and Vanleeuwen T 1995 Improving theaccuracy of Institute for Scientific Informationrsquos Jour-nal Impact Factors Journal of the American Society forInformation Science 46 461ndash7httpdxdoiorg101002(SICI)1097-4571(199507)466lt461AID-ASI5gt30CO2-G

7 Seglen P 1997 Citations and journal impact factorsquestionable indicators of research quality Allergy 521050ndash6httpdxdoiorg101111j1398ndash99951997tb00175x

8 McVeigh M Journal Self-Citation in the Journal Cita-tion Reports ndash Science Edition (2002) Philadelphia PAThomson Corporation 2004 Available at httpwwwthomsonscientificcommediapresentrepessayspdfselfcitationsinjcrpdf

9 Garfield E The Agony and the Ecstasy ndash The Historyand Meaning of the Journal Impact Factor InternationalCongress on Peer Review And Biomedical Publica-tion Chicago 16 September 2005 Available athttpwwwgarfieldlibraryupennedupapersjifchicago2005pdf

10 Pudovkin A and Garfield E Rank-normalizedimpact factor a way to compare journal performanceacross subject categories Proceedings of the 67thASISampT Annual Meeting Providence RI American

Society for Information Systems amp Technology 2004Available at httpwwwgarfieldlibraryupennedupapersasistranknormalization2004pdf

11 Sombatsompop N Markpin T and PremkamolnetrN 2004 A modified method for calculating theimpact factors of journals in ISI Journal CitationReports Polymer Science Category in 1997ndash2001Scientometrics 60 217ndash35httpdxdoiorg101023BSCIE000002779498854f6

12 Monastersky R 2005 The number thatrsquos devouringscience Chronicle of Higher Education 52 A12ndash17Available at httpchroniclecomfreev52i0808a01201htm

13 Chew M Villanueva E and Van Der Weyden M2007 Life and times of the impact factor retrospect-ive analysis of trends for seven medical journals(1994ndash2005) and their Editorsrsquo views Journal of theRoyal Society of Medicine 100 142ndash50httpdxdoiorg101258jrsm1003142

14 Bergstrom C httpwwweigenfactororgmethodshtm

15 Shepherd P Final Report on the Investigation into theFeasibility of Developing and Implementing Journal UsageFactors UK Serials Group 2007 httpwwwuksgorgusagefactorsfinal

16 Magri M-H and Solari A 1996 The SCI journalcitation reports a potential tool for studying journalsScientometrics 35 93ndash117httpdxdoiorg101007BF02018235

17 Hirsch J 2005 An index to quantify an individualrsquosscientific research output Proceedings of the NationalAcademy of Science 102 16569ndash72httpdxdoiorg101073pnas0507655102

18 Harzing A-W httpwwwharzingcomresourceshtmpophtm Accessed October 2007

19 Jeang K 2007 Impact factor H index peer compari-sons and Retrovirology is it time to individualize cita-tion metrics Retrovirology 4 42httpdxdoiorg1011861742ndash4690ndash4ndash42

20 Pudovkin A and Garfield E 2002 Algorithmic pro-cedure for finding semantically related journals Jour-nal of the American Society for Information Science andTechnology 53 1113ndash19httpdxdoiorg101002asi10153

James PringleVice President Product DevelopmentThomson Scientific3501 Market StreetPhiladelphia PA 19104 USAEmail jamespringlethomsoncom

Trends in the use of ISI citation databases for evaluation 91

L E A R N E D P U B L I S H I N G V O L 2 1 N O 2 A P R I L 2 0 0 8

citation metricsmust be usedcarefully andshould only beused to informnot to supplantpeer review

Page 6: Trendsintheuse M ofISIcitation databasesfor evaluationtefkos.comminfo.rutgers.edu/Courses/e530/Readings... · place in a context of increased availability of data with which to measure

A few examples from the Thomson portfoliowill illustrate these points

In terms of journal level metrics Thom-sonrsquos Journal Citation Reportsreg annuallyprovides many different metrics the JIF ajournal-level Immediacy Index to show howrapidly journals accumulate citations in theirfirst year cited and citing lsquohalf lifelsquo indica-tors and 10-year charts of citations to showhow citations accumulate over periods oftime a chart of self-citations to show thepercentages of citations to the same journaland a Journal Relatedness Index20 to show thejournals most closely associated with a givenjournal in terms of citations plus basic sta-tistics and rank in category and the full setof article article type and citation countsthat support these calculations Each ofthese metrics helps answer specific questionsabout the journal and the user is encour-aged to explore the data and use itappropriately instead of simply grabbing theJIF and using it as the universal answer toany evaluative question

More advanced journal-level analysis issupported by more customized and flexibleproducts most notably the Journal Perfor-mance Indicatorstrade which allow calculationof JIF-like metrics for the userrsquos chosen timeperiods and a more fine-grained analysis bydocument types and field-specific baselinesMore locally focused workflow-orientedanalysis is supported by the Journal UseReportstrade which enable users to comparejournal-level citation metrics with the publi-cation citation and usage data for their owninstitutions

For the most part these tools do not sup-port research evaluation because of thelimitations of journal-level metrics outlinedearlier For research evaluation Thomson

Scientific recognizes the need for customdata sets and for comparative data appropri-ate to the specific evaluative task andprovides these for both standard and customdata sets as needed Such metrics are builtup from the individual article level andinclude among many others

Simple counts of cites papers and citesper paper providing basic statistics andbaselines over a variety of time periods aswell as standard statistical measures suchas mean median and mode

Second-generation citation counts (cita-tions from articles that cite the lsquocitingarticlesrsquo of the original article) normalizedby field which measure of the long termimpact of a paper

Time series data covering both standardciting and cited periods (one-year five-year 25-year etc) and user-defined timeperiods

Percent shares of papers or citationswithin a broader category (such as a fieldcountry or institution)

Field baselines representing the meanimpact (cites per paper) for papers in afield (usually a journal set) defined for aspecific citedciting time window

Expected citation counts which predicthow often a paper is expected to be citedbased on its year journal and article type

The variety of these metrics and data setsreflects the recognition of the two lsquogoldenrulesrsquo of citation data as applied to evalua-tion ndash that data be appropriate to thequestions asked and that all comparativequestions rely on comparisons of like withlike When these rules are kept in mind andthose who use citation data take the timeand patience to apply them carefully cita-tion data can be used productively andhelpfully in evaluative processes

As Eugene Garfield, the founder of ISI and inventor of the ISI citation databases, has remarked, 'In 1955 it did not occur to me that "impact" would become so controversial'.9 The pressures of outcomes-based funding decisions and personal rewards systems, along with the ready availability of citation data in a more democratized digital world, will continue to fuel this controversy. In view of these powerful trends, it is more important than ever to remind ourselves that citation metrics must be used carefully, and should only be used to inform, not to supplant, peer review and other forms of qualitative evaluation.

Cited half-life

The number of publication years, counting back from the current Journal Citation Reports year, that account for 50% of the citations received by the journal. This metric is useful because it helps show the speed with which citations accrue to the articles in a journal, and the continued use and value of older articles.
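A minimal sketch of the arithmetic this definition implies, using an invented citation distribution and interpolating within the boundary year to produce the fractional values in which the metric is reported; this illustrates the definition, not the JCR's actual implementation.

```python
def cited_half_life(cites_by_age):
    """Years, counting back from the JCR year, needed to accumulate 50%
    of the journal's citations. cites_by_age[0] holds citations to items
    published in the JCR year itself, cites_by_age[1] the year before,
    and so on; the boundary year is interpolated to a fractional value."""
    half = sum(cites_by_age) / 2
    running = 0.0
    for age, cites in enumerate(cites_by_age):
        if running + cites >= half:
            return age + (half - running) / cites
        running += cites

# Invented distribution of 100 citations across publication years.
print(cited_half_life([10, 15, 20, 25, 15, 10, 5]))  # 3.2 years
```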


References

1. Lawrence, P. 2007. The mismeasurement of science. Current Biology, 17, R583–5. http://dx.doi.org/10.1016/j.cub.2007.06.014

2. Pauly, D. and Stergiou, K. 2005. Equivalence of results from two citation analyses: Thomson ISI's Citation Index and Google's Scholar service. Ethics in Science and Environmental Politics, 2005, 33–5. Available at http://www.int-res.com/articles/esep/2005/E65.pdf

3. Hill, D., Rapoport, A., Lehming, R. and Bell, R. Changing U.S. Output of Scientific Articles: 1988–2003. Arlington, VA: National Science Foundation, Division of Science Resources Statistics, 2007. Available at http://www.nsf.gov/statistics/nsf07320

4. Moed, H. Citation Analysis in Research Evaluation. Amsterdam: Springer, 2005.

5. Pendlebury, D. The ISI Database and Bibliometrics: Uses & Abuses in Evaluating Research. Unpublished presentation at Thomson ISI Symposium, Tokyo, 2002.

6. Moed, H. and Van Leeuwen, T. 1995. Improving the accuracy of Institute for Scientific Information's Journal Impact Factors. Journal of the American Society for Information Science, 46, 461–7. http://dx.doi.org/10.1002/(SICI)1097-4571(199507)46:6<461::AID-ASI5>3.0.CO;2-G

7. Seglen, P. 1997. Citations and journal impact factors: questionable indicators of research quality. Allergy, 52, 1050–6. http://dx.doi.org/10.1111/j.1398-9995.1997.tb00175.x

8. McVeigh, M. Journal Self-Citation in the Journal Citation Reports – Science Edition (2002). Philadelphia, PA: Thomson Corporation, 2004. Available at http://www.thomsonscientific.com/media/presentrep/essayspdf/selfcitationsinjcr.pdf

9. Garfield, E. The Agony and the Ecstasy – The History and Meaning of the Journal Impact Factor. International Congress on Peer Review and Biomedical Publication, Chicago, 16 September 2005. Available at http://www.garfield.library.upenn.edu/papers/jifchicago2005.pdf

10. Pudovkin, A. and Garfield, E. Rank-normalized impact factor: a way to compare journal performance across subject categories. Proceedings of the 67th ASIS&T Annual Meeting. Providence, RI: American Society for Information Science & Technology, 2004. Available at http://www.garfield.library.upenn.edu/papers/asistranknormalization2004.pdf

11. Sombatsompop, N., Markpin, T. and Premkamolnetr, N. 2004. A modified method for calculating the impact factors of journals in ISI Journal Citation Reports: Polymer Science category in 1997–2001. Scientometrics, 60, 217–35. http://dx.doi.org/10.1023/B:SCIE.0000027794.98854.f6

12. Monastersky, R. 2005. The number that's devouring science. Chronicle of Higher Education, 52, A12–17. Available at http://chronicle.com/free/v52/i08/08a01201.htm

13. Chew, M., Villanueva, E. and Van Der Weyden, M. 2007. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994–2005) and their Editors' views. Journal of the Royal Society of Medicine, 100, 142–50. http://dx.doi.org/10.1258/jrsm.100.3.142

14. Bergstrom, C. http://www.eigenfactor.org/methods.htm

15. Shepherd, P. Final Report on the Investigation into the Feasibility of Developing and Implementing Journal Usage Factors. UK Serials Group, 2007. http://www.uksg.org/usagefactors/final

16. Magri, M.-H. and Solari, A. 1996. The SCI journal citation reports: a potential tool for studying journals. Scientometrics, 35, 93–117. http://dx.doi.org/10.1007/BF02018235

17. Hirsch, J. 2005. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102, 16569–72. http://dx.doi.org/10.1073/pnas.0507655102

18. Harzing, A.-W. http://www.harzing.com/resources.htm#pop.htm Accessed October 2007.

19. Jeang, K. 2007. Impact factor, H index, peer comparisons, and Retrovirology: is it time to individualize citation metrics? Retrovirology, 4, 42. http://dx.doi.org/10.1186/1742-4690-4-42

20. Pudovkin, A. and Garfield, E. 2002. Algorithmic procedure for finding semantically related journals. Journal of the American Society for Information Science and Technology, 53, 1113–19. http://dx.doi.org/10.1002/asi.10153

James Pringle
Vice President, Product Development
Thomson Scientific
3501 Market Street
Philadelphia, PA 19104, USA
Email: james.pringle@thomson.com
