Note de recherche / Research Note

Rankings of Canadian Universities, 1997: Statistical Contrivance Versus Help to Students

Stewart Page, University of Windsor

Maclean's is a major Canadian mass-circulation magazine, with emphasis on Canadian and, secondarily, U.S. and North American content. In its issue of 24 November 1997, Maclean's (MM) published its seventh annual rankings of Canadian universities. Perhaps because of the increased popularity of notions such as cost effectiveness, efficiency, and value for one's "educational dollar," the rating or ranking of universities has become an increasingly popular exercise in higher education. In this spirit, the expressed intention of the Maclean's venture was to "take the measure" of Canadian universities by creating a "ranking road map" for purposes of assisting students. For 1997, MM again converted preliminary raw data and information, across several indices, to ranks (first, second, etc.), derived from these an overall rank for each university, and from these constructed a final linear rank ordering of all universities. This research note presents an analysis of several statistical and related aspects of the 1997 rankings.

METHOD: MEASURES USED BY MM

As in previous years, for 1997, MM again classified each university as medical/doctoral (N = 15), comprehensive (N = 13), or primarily undergraduate (N = 23). Also as before, the MM data were compiled according to six main measures, each measure itself composed of several indices.

The following measures were used: Student Body (defined by indices of student ability, such as the grade average of incoming students); Classes (indices of class size and "quality," such as percentage of classes taught by tenured faculty); Faculty (indices of faculty members' academic level, rank, and grant record); Finances (indices of budget, student services, and awards); Library (indices assessing holdings and collections); and Reputation (indices based on frequency of alumni support and on a reputational survey sent to senior university officials, high school guidance counsellors, and chief executive officers of Canadian corporations). Summed over the six measures, a total of 22 indices were used for medical/doctoral universities, 21 for comprehensive universities, and 20 for primarily undergraduate universities. MM treated the Faculty, Reputation, Classes, and Student Body measures as approximately equally important (each contributing about 20% toward the final score), and treated the Finances and Library measures as somewhat less important (each contributing 12% to the final score).
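To make the weighting concrete, the sketch below shows one way a weighted combination of measure-level ranks could produce a single score; it is an illustration of the general approach only, not MM's actual algorithm. The weights follow the approximate percentages reported above, and the example ranks are hypothetical.

```python
# A minimal sketch (not MM's actual algorithm) of combining a university's
# measure-level ranks into one weighted score, using the approximate weights
# reported above. All names and example ranks are hypothetical.
weights = {
    "Student Body": 0.20, "Classes": 0.20, "Faculty": 0.20,
    "Reputation": 0.20, "Finances": 0.12, "Library": 0.12,
}

def weighted_score(measure_ranks: dict[str, float]) -> float:
    """Weighted sum of measure ranks; lower is better, since rank 1 is top."""
    return sum(weights[m] * r for m, r in measure_ranks.items())

# A hypothetical medical/doctoral university's ranks on the six measures
example = {"Student Body": 3, "Classes": 7, "Faculty": 2,
           "Reputation": 4, "Finances": 10, "Library": 6}
print(weighted_score(example))  # smaller values would sort toward the top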

RESULTS

As with previous years' rankings (see Page, 1995, 1996), the 1997 MM data had several pitfalls.

In general terms, the major pitfall is that differences in ordinal ranks (rank ordering) are not amenable to comparative or mathematical interpretation (Siegel, 1959), even when the ranked variable is noncontentious and linear (such as height or weight). Although this problem may be familiar to some academics and statisticians, it is likely to be unfamiliar to most students and readers of MM. The problem is worsened by MM's pop metaphor of competition, in which universities of higher rank are termed "winners" and those of lower rank "losers."
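A small worked example illustrates the point; the height values below are invented purely for illustration.

```python
# Equal gaps in rank can conceal very unequal gaps in the underlying variable.
from scipy.stats import rankdata

heights_cm = [190, 189, 160]                 # a noncontentious, linear variable
ranks = rankdata([-h for h in heights_cm])   # negate so that rank 1 = tallest
print(list(ranks))                           # [1.0, 2.0, 3.0]
# The rank gaps 1->2 and 2->3 are both "1", yet the raw gaps are 1 cm and
# 29 cm, so arithmetic on the ranks says little about the raw differences.
```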

The second pitfall is that many indices comprising the six main measures (a rank being available for each index) are unrelated to each other. For the 1997 data, I computed Spearman rank-order (rho) correlations, which indicate the degree of (linear) association between ranks for two variables, for each possible pair of indices MM used, in this instance pooling the indices over all measures. For medical/doctoral universities (N = 15), 22 indices, summed over all measures, were used. The mean number of significant rho correlations between any single index and any other, for the moment disregarding sign, was 4.81 (21%), using a significance (alpha) criterion of p < .05 (I use this criterion for all correlational results reported in this research note). Of the total of 231 correlations between all possible pairs of indices, 106 (45%) were significant. Using this p level, approximately 5% (12) of such correlations could be expected to be significant by chance. For comprehensive universities (N = 13) and primarily undergraduate universities (N = 23), the patterns of these correlations and percentages, both between and within the main measures, were highly similar to those for medical/doctoral universities.
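A rough sketch of this pairwise analysis is given below. The rank data are simulated placeholders (random permutations standing in for 15 medical/doctoral universities ranked on 22 indices), not the actual MM figures; only the counting procedure follows the description above.

```python
# Count significant Spearman correlations among all pairs of index ranks.
# Data are simulated placeholders, not the MM figures.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ranks = pd.DataFrame(
    {f"index_{i}": rng.permutation(np.arange(1, 16)) for i in range(22)}
)

alpha = 0.05
pairs = list(combinations(ranks.columns, 2))   # 231 pairs for 22 indices
n_sig = sum(spearmanr(ranks[a], ranks[b]).pvalue < alpha for a, b in pairs)
print(f"{n_sig} of {len(pairs)} pairwise rho values significant at p < {alpha}")
print(f"expected by chance alone: about {alpha * len(pairs):.0f}")
```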

Many indices used in the six main measures were also unrelated to MM's final rankings. I computed Spearman rho correlations between each university's final rank, as assigned by MM, and its rank on each of the indices comprising the six main measures. For medical/doctoral universities, considering 22 indices, 15 such correlations (68%) were significant at p < .05. For comprehensive universities, considering 21 indices, only 9 correlations (42%) were significant. For primarily undergraduate universities, considering 20 indices, 10 correlations (50%) were significant.
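The corresponding check against the final ranking can be sketched the same way; the ranks below are again simulated stand-ins rather than MM's published data.

```python
# Correlate each index's ranks with the (simulated) final rank.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
ranks = pd.DataFrame(
    {f"index_{i}": rng.permutation(np.arange(1, 16)) for i in range(22)}
)
final_rank = rng.permutation(np.arange(1, 16))

sig = [c for c in ranks.columns if spearmanr(ranks[c], final_rank).pvalue < 0.05]
print(f"{len(sig)} of {ranks.shape[1]} indices correlate significantly "
      f"with the final rank at p < .05")
```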

The third pitfall is that the universities' mean (average) ranks on the six main measures were not uniformly or strongly related to their final ranking.

I computed Spearman rho correlations between final MM ranks and mean ranks on each of the six MM measures; in these analyses, a university's score on each measure was the mean of the ranks for that measure's component indices. For medical/doctoral universities, the mean ranks for Student Body, Finances, Library, Faculty, and Reputation, but not for Classes, correlated significantly with final ranking at p < .05. For comprehensive universities, however, mean ranks for only two measures correlated significantly with final ranking; the correlation involving the Classes measure was significant but negative in sign. For primarily undergraduate universities, five of the six correlations with final ranking were significant.

The fourth pitfall is that mean ranks on the six measures were not strongly related to each other or to the measures' various component indices. I computed the matrix of Spearman rho intercorrelations for mean ranks on each of the six measures, as defined in the preceding section. I also computed Spearman rhos between mean ranks on each measure and ranks for all indices used by MM.

For example, for medical/doctoral universities, of a total of 147 possible correlations, composed of both types just described, only 53 (36%) were significant. For both comprehensive and primarily undergraduate universities, only 33% were significant. For medical/doctoral universities, the mean number of significant correlations between these universities' mean rank on one of the six measures and another measure's mean rank, or rank on one of the 22 MM indices, was 3.50. Of 15 possible correlations between mean rank on a given measure and mean rank on another measure, 7 (46%) were significant. The mean number of significant rhos between a university's mean rank on a measure and that on another measure was 1.11.
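A sketch of this measure-level analysis follows. The grouping of indices into measures and the rank data are simulated placeholders; only the general procedure (mean of component-index ranks per measure, then Spearman intercorrelations) follows the description above.

```python
# Measure-level mean ranks and their Spearman intercorrelations.
# The grouping of the 22 indices into six measures is hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
ranks = pd.DataFrame(
    {f"index_{i}": rng.permutation(np.arange(1, 16)) for i in range(22)}
)
groups = {
    "Student Body": ranks.columns[0:5],  "Classes": ranks.columns[5:9],
    "Faculty": ranks.columns[9:13],      "Finances": ranks.columns[13:16],
    "Library": ranks.columns[16:19],     "Reputation": ranks.columns[19:22],
}
mean_ranks = pd.DataFrame({m: ranks[c].mean(axis=1) for m, c in groups.items()})

res = spearmanr(mean_ranks)              # 6 x 6 correlation and p-value matrices
upper = np.triu_indices(6, k=1)          # the 15 distinct measure pairs
n_sig = int((res.pvalue[upper] < 0.05).sum())
print(f"{n_sig} of {len(upper[0])} measure-to-measure correlations significant at p < .05")
```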

Finally, I ascertained to what extent lower-ranking universities differed from higher-ranking ones, for example in terms of their mean ranks on the six measures and on the component indices upon which the measures are based. For the 1997 data, I therefore examined the top and bottom subgroups (halves) of the universities using the Wilcoxon Rank Sum test, which examines the significance of differences in ranked data on a specified parameter, taken from two independent samples of subjects (universities). I thus assessed whether the "top" half's rank scores on each index, and its mean ranks on the component indices of each of the six main measures, differed significantly from those of the "bottom" half.

For medical/doctoral universities, the Wilcoxon tests showed that the top and bottom groups differed significantly (at p < .05) on only 50% (11 of 22) of the individual indices. For comprehensive universities, this figure was 36%, and for primarily undergraduate universities, it was 40%. For all three types of universities, the top and bottom halves differed significantly on mean ranks of component indices for only two measures.
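The top-versus-bottom comparison can be sketched as below, with scipy's ranksums standing in for the Wilcoxon Rank Sum test; the data and the half-split are simulated, whereas the real analysis would split universities by MM's published final ranking.

```python
# Compare the "top" and "bottom" halves on each index with the Wilcoxon
# rank sum test. Data are simulated placeholders.
import numpy as np
import pandas as pd
from scipy.stats import ranksums

rng = np.random.default_rng(3)
n_univ, n_indices = 15, 22
ranks = pd.DataFrame(
    {f"index_{i}": rng.permutation(np.arange(1, n_univ + 1)) for i in range(n_indices)}
)
final_rank = pd.Series(rng.permutation(np.arange(1, n_univ + 1)))

top = ranks[final_rank <= n_univ // 2]      # "top" half by final rank
bottom = ranks[final_rank > n_univ // 2]    # "bottom" half

n_sig = sum(ranksums(top[c], bottom[c]).pvalue < 0.05 for c in ranks.columns)
print(f"top and bottom halves differ significantly on {n_sig} of {n_indices} indices")
```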

DISCUSSION AND IMPLICATIONS

For the 1997 rankings, MM offers students its usual "worksheet," which lists all of the indices used. The worksheet invites students, after "reading the charts," to derive a "shortlist" of universities. Yet, according to my analysis, the data allow no means by which students may reliably prioritize, discriminate between, or generalize from the measures or from their component indices ("indicators"). Moreover, the measures and component indices are not clearly related, conceptually or empirically, to each other or to the final rankings MM assigned. In addition, I have found in separate analyses that these basic findings characterize the 1996 as well as the 1997 MM ranking data.

In contrast to the foregoing observations, students face other issues that cannot be addressed by statistics or "hard data." For example, as with MM's previous ranking exercises, the "worksheet" metaphor again omits factors such as geographical location and students' financial situation, as well as personal factors and other types of information typically involved in choice of university. Although MM notes that universities' programs and missions may be unique, it does not describe or include any method for using such factors, nor are students typically likely to be familiar with MM's distinction between its three "types" of universities. Since many inconsistencies and anomalies occur in the data (as they did in previous annual ranking exercises), one wonders how students might synthesize or reconcile many of the relationships between the MM measures and their component indices. Several universities do "well" in terms of achieving relatively high rankings, for example, yet do relatively poorly on many specific indices, and vice versa. Moreover, as I reported above, several rho correlations were negative in direction. Thus, in these cases, doing "well" on one index, measure, or rank was associated with doing "poorly" on another.

MM reaffirms the idea that there can be a "best" school, that is, that supposed variance among universities has greater significance than variance among students or among personal, financial, or other factors. As a general hypothesis (which I mentioned also in Page, 1996), it is conceivable that, with repeated exposure, university "ratings" may come to affect negatively the feelings, attitudes, expectations, and perhaps even academic performance of students who attend ostensibly "lower-ranking" universities. Such universities, to use MM's explicit terminology, are those that students have been informed are less innovative, worse overall, of lower overall quality, and less likely to generate "leaders of tomorrow" (pp. 40-43). MM describes one school as having become known as "last chance university." MM has so far not considered the effects of such "information" upon students' sense of academic and existential well-being, and perhaps upon how they are perceived by interviewers (or readers of personal resumes) in the course of later job-seeking activities. It is unlikely that significant change in comparison or perception of universities could occur over time in many of MM's criteria (e.g., in its "Reputational" index). Moreover, in future years, it is difficult to conceive that lower-ranking universities could bring about significant upward movement in their comparative ranks on the MM criteria, or that higher-ranking universities would move significantly downward. The strong possibility exists, therefore, that the foregoing effects could together constitute yet another manifestation of the educational self-fulfilling prophecy (Page & Rosenthal, 1991; Rosenthal & Jacobson, 1968), this time one that will affect some students negatively and others positively at the postsecondary level.

Data such as those MM emphasizes continue to support spurious comparisons by encouraging the view that university choice is analogous to the use of Consumer Reports, in which personal or individual differences among consumers play little role in the interpretation or objective identification of "best" goods or services. The issues raised in this process are therefore important in counselling students who are considering the place of the university in their lives. Much of this counselling is increasingly affected by the use of empirically based, "data-driven" information such as the "indicators" supplied and publicized by MM.

Lastly, my analyses were exploratory and atheoretical. That is, I carried them out only to gather statistical information as one (perhaps one of many) means of assessing the variables MM chose for assisting students; presumably, each of MM's measures, and each measure's component indices, are meant to represent variables for which students should be attracted to "better" scores or ranks. Yet, one may ask why MM continues to emphasize these parameters, when their interpretation and interrelationships may be shown by simple statistics to be unreliable, if not totally fallacious. As one answer, many colleagues inform me that MM engages in the ranking exercise as a form of advertising, to "sell magazines." This presumably means that consumers are willing to pay current newsstand prices to obtain what is portrayed as necessary information, consistent with the popular goals of cost-effectiveness and rational decision making. Yet, major publications, like other components of the media, argue that they also constitute a service, providing the public with what it "wants." One may nevertheless distinguish between wants and needs. Consumers are frequently presented with supposedly "needed" information that is not necessarily wanted nor, in fact, subject to rational synthesis or use. They may also receive information they neither want nor need, want but do not need, or, occasionally, both want and need. In any event, through advertising and repeated exposure, the media have become skilled at training customers to perceive themselves as "wanting" certain products, whether these are actually needed or useful. It seems unfortunate, in this light, that student readers of MM are increasingly being trained to want information that they do not in fact need, and whose utility as a rational aid to decision making turns out to be frequently specious.

ACKNOWLEDGMENT

I thank an anonymous reviewer for the Canadian Journal of Education for pointing out that MM emphasizes "input" at the expense of "output" measures. The former represent quantifiable measures of university resources or structure, and the latter represent other aspects of university experience, such as level of student satisfaction or post-university achievement.

REFERENCES

Measuring excellence. (1997, November 24). Maclean's, 110, 28-82.

Page, S. (1995). Rankings of Canadian universities: Pitfalls in interpretation. Canadian Journal of Higher Education, 25, 18-30.

Page, S. (1996). Rankings of Canadian universities, 1995: More problems in interpretation. Canadian Journal of Higher Education, 26, 47-58.

Page, S., & Rosenthal, R. (1991). Sex and expectations of teachers and sex and race of students as determinants of teaching behavior and student performance. Journal of School Psychology, 28, 119-131.

Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. New York: Holt, Rinehart, & Winston.

Siegel, S. (1959). Nonparametric statistics. New York: McGraw-Hill.

Stewart Page is a professor in the Department of Psychology, University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4.
