
An Overview of ACRLMetrics


METRICS · An Overview of ACRLMetrics

by Christopher Stewart

Available online 23 December 2010

For the academic library practitioner, secondary data analysis is a powerful tool for a variety of research purposes. While primary data collection is a commonly used tool by academic researchers, few library practitioners have the time to conduct primary research for large samples and populations. We leave the tasks of primary data collection to our professional associations, government agencies, and other organizations. Two widely used sources of primary data for higher education are the Carnegie Foundation for the Advancement of Teaching's institutional classifications and the National Center for Education Statistics (NCES) Academic Libraries Survey (ALS). Collected biennially, the ALS offers “descriptive statistics on about 3700 academic libraries in the 50 states and the District of Columbia.”1 For 125 of the largest research libraries in North America, the Association of Research Libraries Statistics program produces a series of publications presenting quantitative and qualitative data that “describe the collections, expenditures, staffing, and service activities for ARL member libraries.”2 For general peer analysis as well as more involved statistical work, secondary analysis of existing sources of quantitative data about academic libraries can be very useful for library practitioners. As with many sources of primary data, the researcher may be limited by the scope and, in some cases, “inappropriateness of certain databases for certain research questions.”3 Despite these limitations, most large banks of quantitative data on academic libraries have something to offer most researchers, even if the work requires combining data from disparate resources and, often, post-processing of data for statistical analysis. In this column, we will explore a new tool for research on academic libraries: ACRLMetrics.

Announced at the 2010 American Library Association Annual Conference and released shortly thereafter, ACRLMetrics is a “new online service providing access to ACRL and NCES academic library statistics.”4 Coverage includes NCES and ACRL survey data from 2000 and 1998, respectively, to the latest survey periods. ACRL has outsourced this new service to the company Counting Opinions. Counting Opinions' current CEO, Carl Thompson, designed the LibPAS platform on which ACRLMetrics is deployed.5 According to company information, the LibPAS platform is used by Counting Opinions clients, including Cornell and Rutgers Universities, to “emphasize statistics that demonstrate the impact the library has on the university community.”6 If one is to judge from language used on the homepage of the ACRLMetrics site, the service is clearly aimed at academic library practitioners supporting strategic planning and advocacy, budget presentations, benchmarking, and trend analysis.7

Christopher Stewart is Dean of Libraries, Illinois Institute of Technology, 35 West 33rd Street, Chicago, IL 60616-3793, USA <[email protected]>.

The Journal of Academic Librarianship, Volume 37, Number 1, pages 73–76.

While the ACRLMetrics service offers access to NCES Academic Library Survey data through its reporting interfaces, it is beyond the scope of one column to explore how the service deals with NCES data. For the purposes of the current discussion, only ACRLMetrics' treatment of data from the annual ACRL Academic Library Trends and Statistics Survey will be explored. This survey, sent annually to thousands of institutions in the U.S. and Canada, consists of introductory/informational questions, followed by 58 survey questions across seven sections.8 Between 1998 and 2009, the average number of institutions reporting to the ACRL survey is approximately 1300. ACRLMetrics is a fee-based subscription service that offers a discounted rate for libraries that have responded to the ACRL survey (as well as, as of this writing, other introductory pricing discounts).

Before exploring the various reporting functionalities of ACRLMetrics, it is useful to examine the representativeness of the dataset as a whole. The most recent year of ACRL survey data available through ACRLMetrics is 2009, making this an appropriate year to use as an example. Because the ACRLMetrics service derives its data from the ACRL survey, selecting variables to search is relatively intuitive, especially for those who have participated in the ACRL survey. A simple search of “reporting institutions” shows that there were 1652 respondents to the latest ACRL survey, far more than the multi-year average. To determine how representative the sample set (survey respondents) is versus the entire population of institutions, however, it is necessary to exclude the small number (24) of Canadian institutions that responded to the survey. This step requires the user to extract the list of respondents into an Excel spreadsheet, an option available through the reporting interface. Before that step is taken, however, the “Carnegie Classification” variable must be selected as an additional variable. The Carnegie Classification scheme used by ACRLMetrics is very general and does not take advantage of the more detailed institutional categories offered by the current Carnegie Classifications. ACRLMetrics delineates institutions into four basic Carnegie categories: Associates, Bachelors, Masters/Professional, and Doctorate. While this does not allow for the kind of detailed reporting and peer analysis using Carnegie Classifications as a framework that some researchers may require, it suffices for a general analysis of expected values. For all three variables (Reporting Institution, Carnegie Classification, and Country), ACRLMetrics delivers on-screen reports that can be sorted by column. However, the sorting is not carried through to the Excel spreadsheet output, requiring the searcher to organize and sort data again after exporting to Excel.
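Because the export loses the on-screen sort, this kind of cleanup is easiest to script once the exported spreadsheet has been read into rows (with any CSV or Excel reader). The sketch below is a minimal, stdlib-only illustration; the column names ("Country," "Carnegie Classification," "Reporting Institution") are assumptions based on the variables named above, not necessarily the exact export headers:

```python
def clean_acrl_export(rows):
    """Drop Canadian respondents and rows lacking a Carnegie Classification,
    then re-apply the sort that the on-screen report had but the export lost.

    rows: list of dicts, one per reporting institution, keyed by the
    (assumed) export column headers.
    """
    us_only = [
        r for r in rows
        if r.get("Country") != "Canada" and r.get("Carnegie Classification")
    ]
    # Sort by Carnegie group, then institution name, as one might on screen.
    return sorted(
        us_only,
        key=lambda r: (r["Carnegie Classification"], r["Reporting Institution"]),
    )
```

Loading the spreadsheet itself is left to whatever reader is at hand; the point is that the filtering and re-sorting live in one reusable step rather than being redone by hand for every report.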
After removing Canadian institutions and a sizable number of institutions missing Carnegie Classification identifiers, the remaining sample consists of 1448 institutions. Table 1 shows the representativeness of the sample (institutions responding to the survey) side by side with expected percentage values based on the entire population of U.S. higher education. The researcher should be aware that only two of the four Carnegie groups are evenly represented in the sample: Bachelors and Masters/Professional. Therefore, ACRLMetrics may be better suited for comparing

January 2011 73


Table 1. Representativeness of institutions responding to the 2009 ACRL survey versus the overall population of U.S. higher education

Carnegie                Number of survey    Percentage of institutions     Percentage of institutions in population
classification          respondents         in response sample (n=1448)    of U.S. higher education (N=3456)
Associates              348                 24%                            50%
Bachelors               328                 23%                            22%
Masters/Professional    477                 33%                            19%
Doctorate               295                 20%                            8%
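The sample percentages in Table 1 follow directly from the respondent counts; a quick check, with the counts transcribed from the table:

```python
# Respondent counts by Carnegie group, from Table 1 (n = 1448 after cleanup).
respondents = {
    "Associates": 348,
    "Bachelors": 328,
    "Masters/Professional": 477,
    "Doctorate": 295,
}
n = sum(respondents.values())  # 1448

# Each group's share of the response sample, rounded as in the table.
shares = {group: round(100 * count / n) for group, count in respondents.items()}
print(n, shares)  # 1448 {'Associates': 24, 'Bachelors': 23, 'Masters/Professional': 33, 'Doctorate': 20}
```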

Table 2. Sample library trend report for circulation and ILL, 2007–2009

PI name                                  2009-01        Diff       2008-01        Diff       2007-01
                                         (12 months)    2008-09    (12 months)    2007-08    (12 months)
33. Initial Circulation Transactions     18,975         9.10%      17,391         18%        14,736
34. Total Circulation Transactions       27,619         9.10%      25,310         18.40%     21,370
35. Total Items Loaned (ILL)             7,495          66.60%     4,499          27.30%     3,535
36. Total Items Borrowed (ILL)           26,500         25.50%     21,122         36%        15,526

Table 3. Table report for 2009 library expenditures and ILL borrowing for selected AITU institutions

Location                                 Total library resources       ILL borrowed per FT
                                         expenditures per              undergraduate student
                                         enrolled student
California Institute of Technology       $1,065                        3.46901
Massachusetts Institute of Technology    $947                          1.98793
Case Western Reserve University          $722                          4.19683
Worcester Polytechnic Institute          $659                          1.02589
Rensselaer Polytechnic Institute         $507                          0.67173
Carnegie Mellon University               $497                          2.1938
Drexel University                        $247                          2.32273
Rochester Institute of Technology        $197                          0.50933
Stevens Institute of Technology          $126                          0.72373

specific groups (large and small) of peer institutions and garnering comparative, trend, and ranking data for one's own institution. I will focus the rest of my analysis of ACRLMetrics on these areas.

ACRLMetrics offers three primary methods for generating reports of data from the ACRL Survey: the Library Trend Report, the Ranking Report, and user-generated reports created through the Add Report feature. Library Trend Reports “highlight the trend of results for the selected Collection, Period(s) and Data Elements”9 and allow the user to select a range of ACRL survey years by selecting a variety of “groups” as variables for the report. These groups include data for collections, expenditures, networked resources, services, and other areas of inquiry from the ACRL survey. The Library Trend Report shows a summary trend of the data for the selected periods and the percentage difference (gap) between results. The Library Trend Report is highly useful for deriving data from one's own institutional responses to the ACRL survey going back to 1998–1999. Results generated are for the subscriber/researcher's institution only, which is always the default institution in the ACRLMetrics subscription for a given institution. Table 2 shows selected data for circulation from a Library Trend Report. Percentage increases are given between years. For example, in this sample output, total circulation transactions increased 9.1% between FY08 and FY09, while inter-library loan lending increased by a dramatic 66%. Since this is my library, I can speak directly to the increase in inter-library loan lending as the result of efforts to improve access to our collections by institutions in the Consortium of Academic and Research Libraries in Illinois (CARLI), and to improve our borrowing-to-lending ratio. The Library Trend Report enables ACRL survey data to be shown over an extended time period, which can be an effective tool for collection development and for forming circulation policy. Conversely, selecting only the latest year is a good way for the user to get a quick summary


of an institution's most current ACRL survey data in an easy-to-read report in tabular format.
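The year-over-year “gap” figures in Table 2 can be reproduced directly from the raw counts; a short sketch, with the values transcribed from that sample report:

```python
# FY2007, FY2008, FY2009 values transcribed from the Table 2 trend report.
trend = {
    "Initial Circulation Transactions": (14_736, 17_391, 18_975),
    "Total Circulation Transactions":   (21_370, 25_310, 27_619),
    "Total Items Loaned (ILL)":         (3_535, 4_499, 7_495),
    "Total Items Borrowed (ILL)":       (15_526, 21_122, 26_500),
}

def gap(earlier, later):
    """Percentage difference between two periods, as shown in the report."""
    return round((later - earlier) / earlier * 100, 1)

for name, (fy07, fy08, fy09) in trend.items():
    print(f"{name}: {gap(fy07, fy08)}% (FY07-08), {gap(fy08, fy09)}% (FY08-09)")
```

For example, `gap(25_310, 27_619)` gives the 9.1% increase in total circulation cited above, and `gap(4_499, 7_495)` the 66.6% jump in ILL lending.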

The Ranking Report includes results for one time period (going back to 1998–99) and is meant to depict the relative ranking of an institution among all survey respondents according to selected measures from the ACRL Survey. As with the Library Trend Report, measures are organized into group categories of data from the ACRL survey. Groups include expenditures, collections, networked resources, various public services metrics, and several others. Minimum, average, and maximum values are presented for quantitative data, with the subscriber institution's data shown in red. For example, for the variable “Presentations to Groups,” a subset of “Personnel and Public Services Personnel,” there was a minimum value of three presentations among institutions responding to this question, an average of 233 presentations, and a maximum of 1503 presentations. Table 3 shows a Ranking Report query for “Personnel and Public Services Personnel” with the querying institution's results in relation to the average for ACRL survey respondents completing this question. Ranking Reports also offer a graph report option that plots the default institution's position along a line of ordinal data points. While the Ranking Report is an excellent way of determining the subscriber institution's position among all ACRL survey respondents, additional sorting options, such as the ability to layer Carnegie Classifications, would enable the user to view aggregate data for more discrete subsets of similar institutions, which would make the rankings more meaningful. In order to analyze data for a peer group, one must first create the report(s) using the Add Report tab and create a list of institutions by selecting the “Reporting Institutions” variable.

There are several types of reports providing “multiple different views of the results”10 that the user can generate using the Add Report feature. Basic categories available for querying include the 58 questions and their subsets from the ACRL survey, including collections, expenditures, electronic resources, and public services. Researchers who are familiar with the ACRL survey will easily recognize these categories. Most of these tabular report outputs also allow for exporting directly to Excel by clicking an icon. As with Ranking Reports, there are output options for graphs as well. The Table Report allows for reporting for a given period. Most of these reports allow for searching cohorts of selected institutions created by selecting “Reporting Institutions” as a variable. For example, the Association of Independent Technological Universities (AITU) is an organization of “leading private American technological universities and colleges.”11 While there are some major differences among types of institutions (not all are doctoral-granting research universities, for example), the common focus on science and engineering education, as well as private governance, makes for logical opportunities for peer analysis of libraries across this group. Table 3 shows a sample report output for selected AITU institutions using two basic 2009 ACRL survey metrics: Total Library Resources Expenditures Per Enrolled Student and ILL Borrowed per FT Undergraduate Student. These data were extracted from the Excel output report offered in the ACRLMetrics Table Report. Interestingly, the data shown seem to indicate a possible positive relationship between library expenditures per student and the amount of material borrowed via inter-library loan per student. For the existing data, the correlation coefficient between the two variables is 0.595, indicating moderate correlation. Perhaps with a larger sample this correlation could be shown to be significant.
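The 0.595 figure is easy to reproduce from the two Table 3 columns; a minimal sketch using the transcribed values:

```python
from math import sqrt

# Table 3 values: library resources expenditures per enrolled student (x)
# and ILL borrowed per FT undergraduate student (y), nine AITU libraries.
x = [1065, 947, 722, 659, 507, 497, 247, 197, 126]
y = [3.46901, 1.98793, 4.19683, 1.02589, 0.67173,
     2.19380, 2.32273, 0.50933, 0.72373]

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var_x = sum((a - mx) ** 2 for a in xs)
    var_y = sum((b - my) ** 2 for b in ys)
    return cov / sqrt(var_x * var_y)

print(round(pearson_r(x, y), 3))  # 0.595, the moderate correlation noted above
```

With only nine data points, an r of 0.595 falls short of conventional statistical significance, which is consistent with the caveat above about sample size.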
These are the kinds of cohort data that can be generated and exported to Excel and SPSS for statistical analysis. Another type of report, the Summary Report, shows basic descriptive statistics for these two variables in the cohort for a given year. Other reports allow

Table 4. ACRLMetrics selected report types

Type of report       Functions                                  Allows for creating     Allows for comparing    Output formats           Other
                                                                cohorts/peer groups?    time periods?
Table                Summary data for reporting location        Yes                     No                      Tabular, graph, Excel
                     (single time period)
PI report            Summary of selected variables for          Yes                     No                      Tabular, graph           Displays basic descriptive statistics
                     single time period                                                                                                  (variance, mean, median, etc.)
Trend/PI             Same as PI report, but for selected        Yes                     Yes                     Tabular, graph
                     range of years
Graph/PI             PIs sorted by PI value                     Yes                     No                      Graph
                     (single time period)
Summary              Shows summary data for all selected        Yes                     No                      Tabular                  Displays basic descriptive statistics
                     variables/PIs                                                                              (variance, mean, median, etc.)
Period Comparisons   Shows data values for each time period     Yes                     No                      Tabular, graph, Excel    Allows for searching only one
                     and difference (gap)                                                                                                period at a time

for multi-year comparisons of performance indicators. Table 4 lists some basic report types and their features.

In addition to the variables derived directly from the ACRL survey, ACRLMetrics offers a range of additional data elements for inclusion in basic reports. These are listed after the ACRL survey data elements and comprise many of the performance indicators offered by ACRLMetrics. Most data elements (which include calculated values, ratios, and other measures) can be extracted for survey participants that provided the data necessary for ACRLMetrics to derive these elements. Data can also be extracted for user-generated cohorts and peer groups, as well as, of course, the subscriber's institution. Data elements are organized as subsets of general categories and appendices (shown here in quotes). The first of these, “Supplemental Trends,” allows searching across variables such as library salary scales and library tenure policy. Unfortunately, data were not available for these elements at the time of this writing, and report queries gave no results. “Cost Per Outputs” includes variables such as expenditures per enrolled student and total staff expenditures per circulation. “Efficiency Measures” includes variables such as circulation per FTE staff and reference transactions per support staff. “Service Level Measures” includes a range of circulation and public service data, including circulations per week and reference transactions per week. “Input Measures” includes such variables as volumes per student and e-books per student. “Key Ratios and Percentages” includes elements such as percent of staff expenditures on professional staff and percent of operating expenditures on collection materials. “Output Measures” includes elements such as circulation per enrolled student and number of database searches per enrolled student. There is also a specific category for library holdings data.
“Inputs: Library Perspective” includes various data elements for collection expenditures per student, as well as professional staff salaries per student, among other information. “Output: Library Perspective” also contains basic statistical data on collection use, such as holdings and circulation per specific categories of students (e.g., FT/PT, undergraduate/graduate), as well as a variable for cost of hours open. The “Examples of Trend Metrics” appendix offers another reporting option for operating expenditures across a variety of institutional and library-specific variables.
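The derived elements described in these categories are, at bottom, simple ratios computed from raw survey responses. A sketch of that computation, using illustrative field names and made-up figures rather than actual ACRL survey element names:

```python
# Hypothetical raw survey record; field names and values are illustrative,
# not the actual ACRL survey data elements.
record = {
    "total_library_expenditures": 2_500_000,
    "enrolled_students": 5_000,
    "circulation": 27_619,
    "fte_staff": 40,
}

# Derived elements of the kind ACRLMetrics computes from survey responses:
derived = {
    "expenditures_per_enrolled_student":
        record["total_library_expenditures"] / record["enrolled_students"],
    "circulation_per_fte_staff":
        record["circulation"] / record["fte_staff"],
}
print(derived)
```

A record missing any input (as with incomplete survey responses) simply cannot yield the corresponding derived element, which is why ACRLMetrics can compute these only for participants that supplied the underlying data.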



“Selected Metrics used by Libraries for Benchmarking and Best Practices” lists variables that are, in general, internally focused, such as % of professional staff to total staff and % of student assistants to total staff. Finally, “Selected Metrics from Library Reports” offers yet another way of looking at general circulation and inter-library loan data by types of students.

Most of the data elements in these additional categories are numerical, which offers the researcher rich opportunities for statistical analysis beyond basic descriptive work. With these types of data, correlative as well as potentially causal relationships can be explored across an array of interesting and potentially very meaningful measures for identifying key, quantifiable trends in academic libraries over the past decade. A wide range of research questions can be explored using these as well as the basic ACRL survey response data.

Finally, ACRLMetrics offers seven report templates from the ACRL collection for deriving peer data within certain quantitative parameters, based on percentages. Researchers can specify that results be delivered for variables that fall within a percentage range across institutions, thereby providing a different, more focused way of deriving peers based on information other than general institutional characteristics. For example, the report template “Key Output Metrics Doctorate” allows the user to specify results only for institutions whose level of service delivery (e.g., reference transactions per week) falls within a specific range of the subscriber/researcher's institution. Using this feature, the researcher can limit results to a list of doctoral universities that are similar in these respects. This is a particularly useful tool for the practitioner who wants to show that his or her library is delivering services at a level similar to aspirant institutions or, perhaps more usefully, institutions against which university leadership does not wish to be compared. ACRLMetrics Report Templates are “inspired by the book ‘Viewing Library Metrics from Different Perspectives—Inputs, Outputs, and Outcomes,' by Robert E. Dugan, Peter Hernon, and Danuta A. Nitecki.”12
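The percentage-range selection behind these templates can also be imitated by hand when building peer lists from exported data. A sketch, with hypothetical institution records and field names:

```python
def peers_within_range(institutions, own_value, metric, pct=10):
    """Keep institutions whose value for `metric` falls within +/- pct%
    of the subscriber's own value (record structure is hypothetical)."""
    lo, hi = own_value * (1 - pct / 100), own_value * (1 + pct / 100)
    return [inst for inst in institutions if lo <= inst[metric] <= hi]

# E.g., peers within 10% of one's own weekly reference transaction load:
cohort = [
    {"name": "Univ A", "reference_per_week": 185},
    {"name": "Univ B", "reference_per_week": 240},
    {"name": "Univ C", "reference_per_week": 205},
]
print(peers_within_range(cohort, own_value=200, metric="reference_per_week"))
```

Here Univ A and Univ C fall inside the 10% band around 200 while Univ B does not, mirroring the way the templates narrow a ranking to institutions delivering services at a comparable level.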

ACRLMetrics is a potentially powerful tool for academic library management and leadership. At its most basic level, the service provides an excellent way for individual institutions to keep track of the data they have provided to the ACRL Academic Library Trends and Statistics Survey over the past decade. For many organizations, these data services, combined with the ability to look at overall responses across U.S. (and, minimally, Canadian) higher education, are value enough. The researcher, however, should consider the representativeness of the ACRL survey vis-à-vis the entire population of U.S. higher education. For the researcher who endeavors to compare his or her institution to like institutions, the ACRLMetrics service requires some amount of work on the researcher's part to build peer groups and perform all the necessary work of data extraction and creating new tables from Excel output. One must also consider that institutions often do not provide complete responses to ACRL survey questions, so one will need to manually remove institutions from result sets despite the option for filtering by null entries offered in the report options. Finally, for broader cohorts based on Carnegie Classifications, the researcher should also consider the limited scope of these categories as offered in the ACRLMetrics service. For true “apples to apples” comparisons, the researcher will have to manually add


more specific and descriptive Carnegie information, such as institutional governance; sub-categories of general classifications (such as size of master's institution and type of baccalaureate institution); and level of research for doctoral universities, to name a few.

ACRLMetrics provides an extraordinary amount of information for the researcher. It is a valuable service with a great deal of potential. While it can be somewhat complicated to use for more layered inquiries, the researcher who has specific questions in mind will find the work of extracting data from the service's reports to be highly worthwhile. For the researcher seeking general results data from current and past ACRL surveys, conducting inquiries about his or her own institution, or performing basic peer analysis and benchmarking, ACRLMetrics offers immediate benefit and relative ease of use. ACRLMetrics provides the researcher with a myriad of data access points and reporting features. There is a lot here and, to be sure, a learning curve for most users. The good news is that ACRLMetrics actively seeks (and receives) subscriber feedback. This will be critical in the coming years as the service seeks to refine its interface and add functionality. Without a doubt, ACRLMetrics makes ACRL survey data more relevant and accessible for the practitioner and researcher alike. The next question is whether the same can be said for ACRLMetrics' coverage of NCES survey data, which I hope to address in a later column.

NOTES AND REFERENCES

1. “Library Statistics Program,” accessed October 22, 2010, http://nces.ed.gov/surveys/libraries/aca_survdesign.asp.

2. Association of Research Libraries, Statistics and Assessment, accessed October 24, 2010, http://www.arl.org/stats/annualsurveys/arlstats/index.shtml.

3. Deborah Faye Carter, “Secondary Analysis of Data,” in Research in the College Context: Approaches and Methods, eds. Frances K. Stage and Kathleen Manning (New York: Brunner-Routledge, 2003), 154.

4. “ACRLMetrics,” accessed November 2, 2010, http://www.acrlmetrics.com.

5. Jason LeDuc, e-mail message to author, November 5, 2010.

6. Counting Opinions (Squire) Ltd., “Evidence Based Management for Academic Libraries, LibPAS (Library Performance Assessment System): Case Studies: Cornell University, Rutgers University” (handout to presentation at ARL Library Assessment Conference, Baltimore, MD, 2010), 2.

7. “ACRLMetrics,” accessed November 2, 2010, http://www.acrlmetrics.com.

8. “Welcome to the 2008-9 FY Trends and Statistics Survey,” accessed February 22, 2009, https://cirssreport.lis.illinois.edu/ACRL/rdlogon2.aspx.

9. “Manage Reports,” accessed November 6, 2010, http://www.acrlmetrics.com/pireports/pireport.php.

10. Ibid.

11. “Association of Independent Technological Universities,” accessed November 7, 2010, http://www.theaitu.org/about.html.

12. Lindsay Thompson, e-mail message to author, December 9, 2010.