Politics, Bureaucrats, and Schools
Author(s): Kevin B. Smith and Kenneth J. Meier
Source: Public Administration Review, Vol. 54, No. 6 (Nov.-Dec., 1994), pp. 551-558
Published by: Wiley on behalf of the American Society for Public Administration
Stable URL: http://www.jstor.org/stable/976675

Politics, Bureaucrats, and Schools


Kevin B. Smith, University of Nebraska-Lincoln
Kenneth J. Meier, University of Wisconsin-Milwaukee

Will reducing bureaucracy improve education? According to Kevin B. Smith and Kenneth J. Meier, few ideas have swept the nation with such fervor as the proposal that if school systems would adopt public choice reforms, student performance would improve. The authors reflect on Chubb and Moe's thesis calling for an elimination of bureaucracy and its harmful restraints. However, Smith and Meier use a precise measure of bureaucracy and find that bureaucracy is not related to three different measures of student performance. Their position is that bureaucracies develop because school systems need administrators and administrative capacity to function effectively. Reducing bureaucracy will impose administrative tasks on street-level personnel (teachers) and will likely impair their performance rather than improve it.

In a provocative analysis, Chubb and Moe (1990, 1988) argue that private schools perform better than public schools. They reason that efforts by democratic institutions and the complex environments of public schools generate greater levels of bureaucracy in public schools than in private schools. Rules, regulations, and controls restrict the autonomy of teachers and prevent them from doing what they do best-teach. Only by permitting teachers to adopt techniques free from the influence of bureaucratic meddling can schools successfully educate children. As evidence to support their argument, Chubb and Moe use the High School and Beyond data set. They present data that show private school students perform better on standardized tests and link these results to teachers' perceptions of how bureaucratic the school system is. While their primary argument focuses on the difference between public and private schools, they also contend that less bureaucratic public schools are more successful than other public schools and provide some empirical evidence for their claims in an appendix. The work of Chubb and Moe has greatly influenced administrative practices in education. Thirteen states have adopted choice programs, and many school districts are experimenting with charter schools and site-based management to reduce the size of central bureaucracies (Carnegie Foundation, 1992).

This research reexamines the role of bureaucracy in the educational performance of public school students. Three reasons suggest that probing the bureaucracy/school performance nexus is worthwhile.1 First, bureaucracy is a measurable concept. In much of the organization theory literature, specific, objective measures of bureaucracy or the level of formal bureaucracy are presented-levels of hierarchy, supervisory personnel, existence of rules, ratio of administrative to production personnel (Price, 1972, 19; Galambos, 1964; Rushing, 1967; Meyer, Scott, and Strang, 1987, 196). Chubb and Moe (1990), however, rely on subjective assessments using teachers' perceptions of how bureaucratic their school system is. Perceptions are influenced by a wide range of phenomena including, perhaps, even feelings that student performance is lacking (Lan and Rainey, 1992; 16).2


Second, bureaucracy is the instrument democracy uses to assert control over the school system. Chubb and Moe (1988, 1069) contend that democratic institutions such as the school board or other political actors establish goals for the public schools and seek compliance with those goals by creating bureaucracies to monitor the schools. Schools are no different from other organizations in this regard (Bozeman, Reed, and Scott, 1992). In the context of complex, industrial democracies, bureaucracy is a prerequisite for democratic governance (Etzioni-Halevy, 1983).3 Bureaucracies, however, have multiple goals, and surely not all such goals have negative connotations for the school system.4 Bureaucracies are necessary to maintain the infrastructure of the school system, to translate academic research into techniques usable in the classroom, to provide access for all students, to provide schools a way of assessing their performance, etc. Reducing bureaucracy in such cases may actually harm organizational performance.

Third, bureaucracy, especially government bureaucracy with its normative connotations, has become the universal scapegoat in American politics. Be it a slowdown in economic productivity, an inefficiency in pollution control, an explosion in the national debt, or simply the growth of government, bureaucracy has received much of the blame (Litan and Nordhaus, 1983; Downing, 1984; Goodsell, 1984; Niskanen, 1971; Tiebout, 1956). The empirical support for this blame, however, has rarely matched the rhetoric; comparisons between private and public sector bureaucracies (and private schools are bureaucracies too) show that public sector bureaucracies perform about as well as private sector bureaucracies when valid comparisons are made (Goodsell, 1994). Before major administrative reforms are undertaken, we should first verify that a problem exists.

This research examines the relationship between bureaucracy and performance in public school systems using the 50 U.S. states as units of analysis. First, the unit of analysis will be defended. Second, three measures of student performance will be presented. Third, precise measures of bureaucracy as well as several control variables will be identified. Fourth, the relationship between bureaucracy and educational performance will be assessed in a multiple regression model. Finally, the implications of this research for educational policy and administration will be discussed.

Unit of Analysis

Individual versus System

Despite increasing calls for changes in the public school system, works such as Chubb and Moe (1990) that advocate this change expend little empirical effort at the systemic level. Instead, individual data have been the primary unit of analysis. Good reasons support this decision. Survey data on individuals often offer a richness of detail not available in aggregate studies, and the individual is usually the common focus of educational performance studies.

While individuals are the units of analysis in such studies, the broader system of education bears the brunt of criticism and calls for change. Surely using individual data to infer the failure of the aggregate system risks the ecological fallacy. An extensive literature dating back to Coleman et al. (1966) demonstrates that many important influences on individual performance (e.g., social class, family stability) are beyond the control of the broader public educational system. Given the well-established distance between system characteristics and individual performance, using individual data to infer systemic flaws seems problematic.

A more sensible approach is to concentrate study on the system level, that is, focusing on system characteristics rather than individual performance. This accomplishes two goals. First, using system-level variables for inputs and outputs, the study avoids the ecological fallacy. System inputs are compared directly to system outputs. Second, using aggregate, objective measures provides a comparative basis for educational systems, a goal difficult to accomplish with individual-level data.

Accordingly, the units of analysis for this study were states, and all data were measured and collected at this system level. States are a logical choice for examining system-level relationships in education. They have the largest and most encompassing educational systems in the United States. They vary considerably in both inputs and outputs. Proposed reforms are often concentrated at the state level (National Governors' Association, 1983), and research that addresses itself to this level of analysis can thus provide practical information to policy makers. Finally, if Chubb and Moe's (1988, 1990) argument of organizational determinism is correct, then the states are a logical unit of analysis because they mandate many organizational characteristics of the school system.

Such aggregation obviously ignores the variation and impact of schools and school districts within states. Choice advocates argue, however, that there is little variation to mask. Viewing public education as a monopoly, Peterson (1990) contends there are few differences among the public education systems at any level. Although we do not agree with this perspective of public schools, the objective of this research is to examine systemic relationships. Because of this, the system-level unit of analysis makes sense, even if such aggregation does mask lower-level variation.

Measurement and Design

System Performance

Measuring educational performance on an aggregate level is an area of some controversy and varied estimation. No single measure of student performance is considered adequate. High school graduation rates are a measure of the minimum level of student performance within states. Graduation-rate data are often severely dated5 and, more importantly, may not be strictly comparable. Because the quality of schools varies greatly, a high school diploma from one school might mean something quite different from a diploma from another school. Because graduation from high school is considered the minimum level of output from a school system, however, this study will use the percentage of adults aged 25 or older in a state with a high school diploma as one measure of system performance.6

Since the publication of the Coleman (1966) report, the favored yardstick of educational achievement has been performance on a standardized test, usually the SAT or ACT. Aggregate averages of these tests have commonly been used for comparative purposes among state education systems. Measures based on standardized tests have also been widely and justifiably criticized. The alternative used by Chubb and Moe is to measure student performance for the same cohort of students at two different time periods and to take the improvement in scores, or some derivation thereof, as the dependent variable (variously called value-added or gain indicators). Proponents argue this measure is superior because it better captures the impact of schools while controlling for background characteristics such as raw cognitive ability and family socioeconomic status (Hanushek and Taylor, 1990; Meyer, 1992).7 The alternative is to use standardized test data and attempt to control for socioeconomic status. Comparable cross-state data for standardized tests are rare, but one such data set does exist. The second measure will be the U.S. Department of Education's 1990 eighth grade math exam that is available for 37 of the 50 states (Mullis et al., 1991).8

The third measure of student performance will be the average ACT or SAT score of high school seniors in 1988. Although this measure is widely used in various aggregations as a measure of both school and system quality, three main criticisms can be targeted at the measure. First, comparisons among state averages may be misleading if such scores do not take into account the number of students taking the exam. The smaller the percentage of the student population taking the exam, the higher the mean score tends to be. Powell and Steelman (1984) reported that almost three-fourths of the variation in state SAT scores could be attributed to the percentage of students taking the test.

We solve this problem by adopting the U.S. Department of Education (DOE) solution, using scores only if a minimum of 31.5 percent of those eligible actually take the test. The bivariate correlation between the complete sample of 1982 SAT scores (used by Powell and Steelman) and the percentage of students taking the test is .857. Using the 31.5 percent minimum, the linear relationship disappears, and the resulting correlation is .001.
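The effect of the DOE truncation rule can be illustrated in a few lines. The sketch below uses made-up state values (not the authors' 1982 data); it shows how restricting the sample to states where at least 31.5 percent of eligible students take the test removes the participation-rate artifact from the correlation:

```python
import numpy as np

# Hypothetical state-level data: percent of eligible students taking
# the test, and the state's mean score (illustrative values only).
pct_taking = np.array([4.0, 10.0, 55.0, 62.0, 35.0, 8.0, 70.0, 40.0])
mean_score = np.array([1050, 1010, 905, 915, 910, 1030, 900, 908])

# Full sample: low-participation states look "better", so the
# participation rate and the mean score are strongly correlated.
r_full = np.corrcoef(pct_taking, mean_score)[0, 1]

# DOE-style truncation: keep only states with >= 31.5 percent taking.
keep = pct_taking >= 31.5
r_trunc = np.corrcoef(pct_taking[keep], mean_score[keep])[0, 1]
```

With these illustrative numbers the correlation shrinks sharply after truncation, which is the qualitative pattern the article reports (.857 falling to .001 in the actual data).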

The DOE "solution," however, creates its own problem and raises a second criticism of using SAT and ACT scores. The results reported by the DOE consist of two truncated samples-SAT-dominant states (minimum of 31.5 percent students taking SAT) and ACT-dominant states (minimum of 31.5 percent students taking ACT). To get a measure applicable to all 50 states, one must standardize the SAT and ACT scores to permit the two DOE state samples to be merged.

Scholars have generally assumed the SAT and ACT are compatible enough to transform the scores into a single scale (Astin, 1971; Wainer, 1986a, 1986b) that can be used to make comparative study possible. The method of accomplishing this, however, has been far from uniform (for a good review of this literature and the various methods proposed, see Lehnen [1992]). The transformation used here, called the standardized education index (SEI), is a state's mean SAT or ACT score expressed as a percentage of the highest score possible. On a national level, and where reasonable dual state scores exist, these percentages have fluctuated in unison during the past two decades, with ACT means running consistently about 5 percentage points below SAT means. To account for this difference, five points are added to the ACT percentages, and the two state samples are merged.9
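The index transformation just described can be sketched directly. The maxima below (1600 for the SAT composite, 36 for the ACT composite) are the era's standard score ceilings but are assumptions; the article does not state the denominators it used, and the state values are hypothetical:

```python
# Standardized education index (SEI) sketch: a state's mean score as a
# percentage of the maximum possible score, with ACT percentages
# shifted up 5 points so the two truncated samples can be merged.
SAT_MAX, ACT_MAX = 1600, 36   # assumed composite maxima

def sei(mean_score, test):
    """Return the SEI for a state's mean SAT or ACT score."""
    if test == "SAT":
        return 100 * mean_score / SAT_MAX
    if test == "ACT":
        return 100 * mean_score / ACT_MAX + 5   # the 5-point adjustment
    raise ValueError(test)

# Merging one hypothetical SAT-dominant and one ACT-dominant state
# onto the common scale:
merged = {"State A (SAT)": sei(904, "SAT"), "State B (ACT)": sei(18.8, "ACT")}
```

The merged dictionary places both states on one comparable scale, which is all the regression analysis below requires of the index.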

Independent Variables

Economic Constraints. Virtually every study of student performance finds that family background and socioeconomic characteristics account for much of the variance (Coleman et al., 1966; Jencks et al., 1972; Hanushek, 1986). Any assessment of student performance, therefore, must control for these factors. Three socioeconomic variables are included in the analysis-the teenage birth rate, the percentage of adult females who participate in the labor force, and per capita income.10

The teenage birth rate, measured as the number of births per 1,000 women between the ages of 13 and 19, is our indicator of poverty. Teenage birth rates, in addition to creating a burden on students who want to remain in school, also reflect a wide range of other social problems such as poverty, unstable families, a lack of social services, and a generally poor climate for encouraging education.11 Teen birth rates should be negatively associated with student performance.

A second measure of socioeconomic status is the percentage of a state's female population in the labor force. Although households headed by females are known to be negatively related to student performance, a large female work force creates positive role models for girls (Andersen, 1975) and generates additional demand for education so that women can compete for the same jobs as men. Female labor force participation should be positively related to student performance.

The final socioeconomic measure was the state's per capita income. Wealthy communities can afford to spend more for education and often do. Although the expenditures may or may not purchase higher quality education, families with higher incomes frequently provide other family advantages (access to computers, greater home instruction, preschool education, etc.) that are reflected in later student performance. We expect that income will be positively associated with student performance.12

School Policies. The decade of the 1980s saw a massive effort on the part of state governments to reform their educational systems. Although most efforts such as decreasing student-teacher ratios or increasing teacher salaries had little impact on student performance, three variables that are under the control of state education agencies are related to performance-compulsory education laws, school size, and long-term educational funding.

Compulsory education laws clearly affect student performance. The adoption of these laws has been historically tied to major shifts of youth from the labor market to schooling (Tyack, 1974). Although most of the benefits of compulsory education laws have already been attained, some state-to-state variation in the length of time that students are required to stay in school remains. The measure is the number of years of compulsory school attendance that is required by the state government.


Three multiple regressions were estimated-one for each dependent variable. Given the large number of independent variables, collinearity and misspecification can be a problem. As a result, models were estimated with all variables and then reestimated to remove variables without predictive ability. The goal was to present a parsimonious model that tested the bureaucracy relationship in the presence of rigorous controls.

For each regression, regression diagnostics were examined. Regressions with only 50 cases are subject to distortion resulting from extreme cases and non-Gaussian data. Of particular concern were cases with large studentized residuals and also large influence or leverage measures (Fox, 1991).

In such cases where the regression equation was being distorted, dummy variables for individual states were added to eliminate the disproportionate and distorting impact of that state. Fortunately there were few such cases: a total of five different states in the three regressions. No regression equation contained more than four states with significant studentized residuals and significant DFFITS. Rather than present all three versions-the model with all variables, the more parsimonious model without specification problems, and the final model without the influence of distorting cases-only the final model will be presented here.
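The screening described above (externally studentized residuals plus DFFITS, with flagged states then given their own dummy variables) can be sketched from first principles. This is not the authors' code, and the flagging thresholds |t| > 2 and |DFFITS| > 2·sqrt(p/n) are common rules of thumb that the article does not specify:

```python
import numpy as np

def ols_influence(X, y):
    """OLS fit returning leverages, externally studentized residuals,
    and DFFITS for each observation."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
    h = np.diag(H)                              # leverage of each case
    e = y - H @ y                               # raw residuals
    sse = e @ e
    # Leave-one-out error variance: SSE_(i) = SSE - e_i^2 / (1 - h_i)
    s2 = (sse - e**2 / (1 - h)) / (n - p - 1)
    t = e / np.sqrt(s2 * (1 - h))               # externally studentized
    dffits = t * np.sqrt(h / (1 - h))
    return h, t, dffits

# Hypothetical data with one gross outlier at index 5 (illustrative only).
x = np.arange(10.0)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 * x + 0.1 * np.sin(x)
y[5] += 30.0

h, t, dffits = ols_influence(X, y)
p = X.shape[1]
flagged = np.where((np.abs(t) > 2) &
                   (np.abs(dffits) > 2 * np.sqrt(p / len(y))))[0]
# Each flagged case would then receive its own dummy variable in the
# reestimated model, as the authors describe.
```

On this toy data only the planted outlier is flagged; in the authors' analysis the analogous step surfaced the handful of states controlled for in the table notes.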

The last several decades have also witnessed a massive school district consolidation effort as small and rural school districts were merged to form larger districts (Meyer, Scott, and Strang, 1987; 189). As part of consolidation, schools often become larger. Although there probably are economies of scale in education, small schools have some distinct advantages over larger schools. As Chubb and Moe (1990) would argue, a smaller school is likely to serve a smaller, more homogeneous area. It should be more able to adjust teaching methods to the specific education problems that the school faces. Systems with smaller schools, therefore, should be positively related to student performance.13

The third school policy measure is the long-term funding of education. Perhaps the most frequent argument heard in education policy debates is that the primary obstacle preventing better student performance is a lack of resources. This faith in the performance-purchasing power of the dollar has remained unshakable even in the face of overwhelming evidence to the contrary (Hanushek, 1981, 1986). Increased education expenditures have been a cornerstone in many state reform efforts. Expecting increased expenditures to result immediately in improved performance, however, misinterprets the operation of the educational system. Expenditures should be considered a long-term investment in the educational system. It takes some time to turn money into better facilities and new curricula; it takes longer to attract talented new teachers; and it takes even longer to develop parental demand for high-quality education by exposing children to better-funded education.

Our dependent variables measure long-run performance (graduation, ACT/SAT scores, eighth grade math exams). Long-term expenditures, therefore, should be better predictors than short-term, one-time expenditures. To measure long-term expenditures, a time lag was introduced into the model. Instead of 1988 per pupil expenditures, 1960 per pupil expenditures (measured in 1988 dollars) were used.14 Because such expenditures have only increased during the past 28 years, districts that began with high levels of fiscal commitment should be reaping a higher reward than those whose commitment started low and began spending money only after problems were apparent.15 Performance should be positively related to 1960s funding.

Bureaucracy. The central variables of concern are the measures of educational bureaucracy. School teachers face two different bureaucracies on a continuing basis-the state educational bureaucracy and the school district bureaucracy. The state bureaucracy was measured as the number of state bureaucrats per 1,000 students (Meyer, Scott, and Strang, 1987; 190). The school district bureaucracy was measured as the number of central district administrative staff per 1,000 students. While such measures capture only one aspect-size-of what is a multidimensional concept, they are objective and directly comparable across systems. As yardsticks of bureaucracy, they still remain open to some justifiable criticism, but they also offer to bring some degree of clarity to what is often a fuzzy component of the choice debate. Empirically testing Chubb and Moe's arguments on bureaucracy is difficult because they never really define bureaucracy, and they measure it through subjective assessments. Other public choice advocates simply equate bureaucracy with the number of bureaucrats in any given organization and view centralized bureaucracies as the most harmful (Fliegel and MacGuire, 1993; 25-27). In short, from the choice perspective, bureaucracy is seen in fuzzy and normative terms. Even if the concept is unclear, however, it is clear that choice proponents see large bureaucracies as a detriment to educational performance. These measures can test this hypothesis and its alternative.

State education bureaucracies have been hailed and damned by various reform-minded sectors of the political spectrum. In their favor, large, professional bureaucracies should be able to enforce and maintain standards that might be ignored at the local level. Large state agencies can also interpret professional research and translate it into practical applications for individual teachers. Arguing against large bureaucracies are the resources they consume that could otherwise be transferred directly to schools, and the inflexibility they impose on local school districts (Chubb and Moe, 1990). Such inflexibility is seen as an obstacle to school-based management and the ability to meet local educational needs through creative educational policies.16

Teacher Influence. The effective-schools literature, in support of the work of Chubb and Moe (1990), finds that student performance is linked to teacher autonomy in the classroom. Students learn best when teachers teach rather than spend their time on bureaucratic matters. A 50-state survey by the Carnegie Foundation (1988) polled teachers and asked them how involved they were in a variety of tasks. Four questions dealt with traditional teaching functions-selecting textbooks, shaping the curriculum, tracking students, and setting promotion and retention policies. Teacher responses on these four questions are strongly interrelated. Rather than use all four measures, the percentage of teachers who said that they were involved in making decisions on tracking (that is, assigning students to classes based on need and curriculum) will be our measure of teacher influence in teaching.17

The flip side of the teacher question is teacher involvement in administrative functions. Although some teachers and many teachers' unions have sought to influence school administration, such efforts are likely to interfere with classroom activities. Two measures of teacher involvement in administration were used from this survey-the percentage of teachers who said that they were involved in evaluating teacher performance and the percentage of teachers who said that they were involved in selecting new administrators. We feel that both these functions are administrative and likely to take teachers away from their classroom duties.18 The relationship between these variables and student performance should be negative.

Findings

The results presented in Tables 1, 2, and 3 show little evidence to support Chubb and Moe's argument that more bureaucratic systems suppress school performance. Indeed, in the analyses using graduation rates and eighth grade math tests as dependent variables, bureaucracy had no predictive ability at all and was subsequently eliminated from the models (Tables 1 and 2). In the absence of the bureaucracy measures, these models retain impressive explanatory power, with R-squares of .84 and .93, respectively.

Both district- and state-level bureaucracy were negatively related to the SAT/ACT-based student education index (Table 3). Both measures, however, were only marginally significant-neither met the p < .05 criterion. The negligible performance of these variables argues against drawing any firm conclusions about their effect on student performance. The overall weak to nonexistent impact of bureaucracy in these three models indicates that, whether the relationship is positive or negative, bureaucracy is at best a minor influence on system outputs.
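The "Slope" and "Beta" columns reported in Tables 1, 2, and 3 are related by a simple rescaling: the beta weight is the raw slope multiplied by the ratio of the predictor's standard deviation to the dependent variable's. A minimal sketch on made-up data (not the authors' data):

```python
import numpy as np

# Hypothetical predictor and outcome (illustrative values only).
x = np.array([3.0, 7.0, 1.0, 9.0, 5.0, 2.0, 8.0, 4.0])
y = 1.5 * x + np.array([0.2, -0.4, 0.1, 0.3, -0.2, 0.0, 0.4, -0.1])

# Raw (unstandardized) slope, as in the tables' "Slope" column.
slope, intercept = np.polyfit(x, y, 1)

# Standardized coefficient, as in the tables' "Beta" column.
beta = slope * x.std() / y.std()

# Sanity check: with a single predictor, beta equals Pearson's r.
r = np.corrcoef(x, y)[0, 1]
```

This is why the tables can report a tiny raw slope (e.g., funding, measured in dollars) alongside a large beta: the beta column puts all predictors on a common standardized footing.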

Economic constraints, on the other hand, are powerful predictors of public school performance. In all models, these variables were strongly associated with the dependent variables, and all relationships were in the expected direction. These findings are consistent with much of the literature that examines the impact of socioeconomic characteristics on education.

Of more substantive interest are the impacts of the policy and teacher influence variables, which indicate education politics are more than mere posturing and that the roles of actors within the educational system have important consequences for performance. School size was a significant variable in all three models, although it produced a positive coefficient in the graduation-rate analysis and negative coefficients in the analyses using standardized test scores. Although this finding could be interpreted as contradictory and should, therefore, be dismissed, it can also be seen as evidence to support an argument made earlier-that graduation rates and system performance are not identical concepts. Larger schools may crank out more graduates, but it appears they do a poorer job of educating them.

Long-term funding (Table 1) and years of compulsory education (Table 3) were also policy variables that made an impact. Long-term funding is associated with higher high school graduation rates, and compulsory education laws are associated with higher SAT/ACT scores. Although each was significant in only a single model, the fact that these variables surface at all stands in contrast to findings dating back to Coleman et al. (1966), who argued school policies have little connection to performance.

Table 2 and Table 3 also show that teacher influence has a significant impact on student performance on standardized tests. As expected, the influence variable measuring teacher-related activities was positive, and those measuring administrative activities were negative. These findings can be viewed as further evidence against Chubb and Moe's bureaucracy argument. Following the public choice prescription to make schools less bureaucratic-that is, eliminating bureaucrats-will push more administrative responsibilities onto teachers. The negative coefficients of the evaluation and administration variables indicate that such bureaucracy-slashing efforts will suppress overall performance, not improve it.

Bureaucracy: An Alternative Explanation

In contrast to the argument that bureaucracy is an unwieldy and ultimately harmful by-product of control by democratic institutions is the view that bureaucracy is an instrument used to address societal problems (Goodsell, 1994). This position holds that bureaucracy is a function not of external authorities seeking to constrain choice but of need and governmental responsibility. If this alternative view of bureaucracy is correct, we should be able to predict the size of school district bureaucracies as a function of need, money, and responsibility.19 If bureaucracy grows inevitably as the result of democratic control, the level of prediction using need and responsibility will

Table 1  Bureaucracy and High School Graduation Rates
(Dependent Variable = High School Graduation Percentage)

Independent Variable        Slope     Beta   t score
Economic Constraints
  Teen birth rate          -.2236     -.46   4.96*
  Females in labor force    .5407      .39   5.09*
  Per capita income        -.6604     -.29   3.21*
Policies
  Funding 1960              .0114      .62   7.14*
  School size               .0093      .19   2.26*

Constant       48.7579
R2             .84**
Adjusted R2    .82
F              32.59
N of cases     49
*p < .05.  **Model includes controls for Rhode Island and New York.

Table 2: Bureaucracy and Student Math Performance
(Dependent Variable = Eighth Grade Math Test)

Independent Variable         Slope     Beta   t score
Economic Constraints
  Teen birth rate           -.4220     -.60    7.60*
  Females in labor force     .5877      .30    4.36*
Policies
  School size               -.0150     -.21    2.82*
Teacher Influence
  Tracking                   .2699      .20    2.42*
  Evaluation                -.5233     -.21    3.14*
  Administration            -.3410     -.16    2.26*

Constant       254.9039
R²             .93**
Adjusted R²    .91
F              36.73
N of cases     37
*p < .05. **Model includes controls for Arizona, Hawaii, Maryland, and Rhode Island.

Politics, Bureaucrats, and Schools 555

This content downloaded from 194.29.185.145 on Sun, 15 Jun 2014 13:24:13 PM. All use subject to JSTOR Terms and Conditions.


Table 3: Bureaucracy and College Entrance Exams
(Dependent Variable = Standardized ACT or SAT Scores)

Independent Variable         Slope     Beta   t score
Economic Constraints
  Teen birth rate           -.0853     -.46    3.63*
  Females in labor force     .1373      .26    2.44*
Policies
  Years required             .3186      .15    1.58
  School size               -.0037     -.20    1.66
Bureaucracy
  State bureaucrats        -1.1097     -.17    1.73
  District bureaucrats      -.1197     -.16    1.49
Teacher Influence
  Tracking                   .0785      .22    1.71
  Evaluation of teachers    -.1967     -.32    2.86*

Constant       50.9526
R²             .69**
Adjusted R²    .62
F              9.87
N of cases     49
*p < .05. **Model includes a control for Arizona.

be minimal. The results of this effort appear in Table 4.

Three variables were used as indicators of need: the size of remedial reading programs, the proportion of teachers involved in student tracking, and population density. Programs designed to address specific problems, such as remedial reading, should produce greater administrative burdens. Tracking is used as a measure of teacher involvement in advancing the educational needs of students; as this involvement increases, teachers should have less time for bureaucratic responsibilities. More bureaucrats are also expected where more education problems, and more programs aimed at tackling them, are found. Given the well-publicized problems of inner-city schools, more bureaucrats should be found in more densely populated areas (Chubb and Moe, 1990, 65). All three need variables were positively related to the size of school district bureaucracy, adding support to the argument that bureaucracy is in part a function of need.

State and local levels of bureaucracy are negatively related, lending support to Meyer, Scott, and Strang's (1987, 190) contention that large professional state bureaucracies reduce bureaucracy at the district level (and that small ones push administrative requirements onto local districts). This may be interpreted as additional support for the position that educational bureaucracies grow as responsibilities increase. Adding to this view is the positive coefficient for state funding. As states send more dollars to local districts, they apparently also transfer responsibility and create demands for more bureaucrats at the school district level. The positive coefficient for per pupil expenditures indicates available resources also play a role in determining the size of school district bureaucracy.

School size was negatively related to district bureaucracy, suggesting a trade-off between educational performance and bureaucracy size. As reported above, education performance is higher in smaller schools. Choosing many small schools over a handful of large ones, however,

Table 4: Determinants of School District Bureaucracy
(Dependent Variable = Bureaucrats per 1,000 Students)

Independent Variable              Slope     Beta   t score
State bureaucrats               -2.9102     -.33    2.97*
State funding percent             .0494      .24    2.23*
Per pupil expenditures            .0013      .50    3.70*
School size                      -.0114     -.46    3.88*
Population density                .0023      .25    1.64
Remedial reading students       38.8669      .26    2.38*
Teacher influence (tracking)      .1045      .22    1.82

Constant       -6.8337
R²             .67**
Adjusted R²    .60
F              8.85
N of cases     49
*p < .05. **Model includes controls for Arizona and Kentucky.

will increase the number of central office bureaucrats. Large centralized schools can reduce the number of bureaucrats through administrative economies of scale, but this reduction in bureaucracy may come at the price of depressing educational performance. Overall, the model explains 67 percent of the variation in school district bureaucracy, lending credence to the argument that school district bureaucracies grow as the result of need and responsibilities rather than some inherent dynamic of democracy.
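As a quick sanity check on Table 4, the adjusted R² can be recomputed from the reported R² of .67 and 49 cases. Because the table note adds state controls, the exact predictor count is ambiguous, so the sketch below brackets it rather than assuming a single value.

```python
def adjusted_r2(r2, n, k):
    """Standard adjustment: 1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Table 4: R^2 = .67, N = 49, seven listed predictors plus up to two
# state-control dummies (per the table note).
for k in (7, 8, 9):
    print(k, round(adjusted_r2(0.67, 49, k), 2))
```

Depending on how the state controls are counted, the result lands between roughly .59 and .61, consistent with the reported .60 given rounding in the published R².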

Conclusion

This research examined the role of bureaucracy in the performance of public school systems. Building on the work of Chubb and Moe, we empirically investigated the impact of large state- and district-level bureaucracies on educational performance. Using objective indicators of bureaucracy, the results of this analysis argue against the claims of Chubb and Moe and other public choice advocates that large bureaucracies act as a drag on educational performance. When appropriate controls were introduced, bureaucracy had predictive power for only one of three performance measures, and even then it managed only marginal levels of statistical significance.

Further supporting this conclusion are the findings indicating that when teachers assume bureaucratic responsibilities, student performance suffers. Because assuming such responsibilities is a probable consequence of shrinking bureaucracies, this argues against the public choice prescription of eliminating bureaucrats to improve public school performance. Far from restricting the autonomy of teachers, bureaucrats can free them from administrative responsibilities and allow them to concentrate on what they do best: teach.20

These findings suggest that policy makers and administrators should carefully evaluate proposals to reduce bureaucracy. If reducing bureaucracy has no impact on organizational performance, such reforms will impose costs that greatly exceed any benefits. Several current policy options are directly linked to organizational performance; reducing the size of bureaucracy is not one of them.

556 Public Administration Review * November/December 1994, Vol. 54, No. 6


Kevin B. Smith is an assistant professor of political science at the University of Nebraska-Lincoln. He has published articles in the Journal of Politics, American Politics Quarterly, and the Journal of Public Administration Research and Theory. He is the author of The Iron Man (a biography of Congressman Glenn Davis) and of the forthcoming School Choice: Politics, Markets and Fools.

Kenneth J. Meier is a professor of political science at the University of Wisconsin-Milwaukee and the editor of the American Journal of Political Science. His research interests concern the role of bureaucracy in the public policy process. His most recent publications include The Politics of Sin: Drugs, Alcohol and Public Policy and Politics and the Bureaucracy: Policy Making in the Fourth Branch of Government (3d edition).

Notes

1. We do not intend this manuscript to be a critique of Chubb and Moe as much as an extension of their work. We do not address their main argument, the difference between public and private schools, but only examine the relationship between bureaucracy and student performance in public schools. Other more elaborate commentaries on their work already exist (Clune and Witte, 1990).

2. A teacher faced with a school system that is not producing high quality students (as indicated by standardized tests) might well perceive that teachers could educate students better if only they had greater autonomy. Bureaucracy, in this manner, becomes a scapegoat for poor performance. Since objective indicators of bureaucracy are available, a more reasonable measurement strategy would be to compare objective indicators of bureaucracy with objective indicators of student performance. The High School and Beyond data set does not have objective measures of bureaucracy; Chubb and Moe (1990), like many other researchers, were forced to make do with the data set that they had.

3. The common view in public administration is that bureaucracy can undermine democracy (Redford, 1969); while bureaucracy and democracy do generate tensions for each other, the demands of a 20th century democracy clearly require bureaucratic organizations.

4. Chubb and Moe contend that it is precisely this multitude of goals that creates a bureaucratic drag on the school system. Public schools have to be all things to all people; therefore, with conflicting goals and regulations, technique could well replace purpose. Schools become more concerned with complying with regulations than with educating students.

5. High school graduation rates are calculated only for the official census every ten years. In addition, the rates are for all adults age 25 or older, so the rates reflect graduation patterns over several decades; they also do not adjust for migration patterns.

6. These data for 1989 can be found in the Statistical Abstract of the United States, 1991 (Bureau of the Census, 1991, 140).

7. One problem with this measure, especially at the individual level, is that subtracting any two measures that have some measurement error (and standardized tests clearly contain some measurement error) compounds the total error in relation to the variation that taps the underlying concept (Harris, 1963). In many cases, the change scores are so small (Chubb and Moe's scores average two test questions) that measurement error dwarfs any real variation.
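The point in this note can be illustrated with a short simulation: when each test carries measurement error, the error variance of a difference score is the sum of the two tests' error variances, so small true gains are swamped. All numbers below are invented for illustration.

```python
import numpy as np

# Simulated difference-score problem: small real gains, noisy tests.
rng = np.random.default_rng(1)
n = 10_000
ability = rng.normal(50, 10, n)        # stable student ability
true_gain = rng.normal(2, 1, n)        # small real gain (~two questions)
err_sd = 4                             # measurement error of each test

test1 = ability + rng.normal(0, err_sd, n)
test2 = ability + true_gain + rng.normal(0, err_sd, n)
observed_gain = test2 - test1          # the "change score"

error_var = 2 * err_sd**2              # error variance in the difference
signal_var = true_gain.var()           # variance of real gains (~1)
print(error_var / signal_var)          # error variance dominates
```

With these invented values, the error component of the change score's variance is roughly thirty times larger than the variance of the true gains, which is the compounding Harris (1963) describes.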

8. This exam was part of the National Assessment of Educational Progress. States were given the option of participating in the 8th grade math exams. Owing to the costs of participation, only 37 of the states did so. The state scores are from Mullis et al. (1991).

9. This results in merging an ACT sample with a mean of 57.2 and a standard deviation of 2.7 and an SAT sample with a mean of 55.7 and a standard deviation of 1.6. The three measures of student performance are interrelated. The correlation between high school graduates and the SEI is .68. The correlation between SEI scores and the math exam is .77, and the correlation between high school graduates and the math exam is .69. These positive correlations suggest that the three indicators are measuring different aspects of the same concept.
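A minimal sketch of how two college-entrance measures can be put on a single standardized scale before merging. The target mean and deviation here are arbitrary illustrations, not Lehnen's (1992) exact procedure, and the state values are invented.

```python
import numpy as np

def to_common_scale(scores, target_mean=50.0, target_sd=10.0):
    """Rescale one test's state scores onto a shared standardized metric
    (illustrative target mean/sd, not the authors' exact method)."""
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return target_mean + target_sd * z

# Invented state means for ACT states and SAT states.
act_states = np.array([20.1, 21.3, 19.8, 22.0])
sat_states = np.array([890.0, 1010.0, 950.0])
merged = np.concatenate([to_common_scale(act_states),
                         to_common_scale(sat_states)])
```

After rescaling, each group has the same mean and spread, so states tested on different exams can be compared in one analysis; the trade-off is that the merged scale reflects within-group relative standing rather than absolute performance.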

10. The aggregation at the state level again masks a great deal of variation at the district and at the school level on these variables.

11. The teen birth rate is the best of the poverty predictors in this area. Other measures tried include AFDC rates and urbanization.

12. A variable that arguably should have been included here was a measure of minority population, and originally such a measure was included. Although negatively associated with student performance, minority population measures tended to be highly correlated with other variables in the model. Such correlations support an argument that race has little to do with student performance, although the poor socioeconomic conditions many minority populations find themselves trapped in almost certainly do.

All socioeconomic data are for 1988. The sources are as follows: females in labor force, U.S. Bureau of Labor Statistics, Geographic Profile of Employment and Unemployment, 1988; teenage birth rate, National Center for Health Statistics, Vital Statistics of the United States, 1988; per capita income, The Statistical Abstract of the United States.

As reflected in Table 1, the predicted positive relationship involving income did not materialize. This is probably an artifact of our use of raw income data and not controlling for cost-of-living differences across states. We assume that such control would have produced the predicted outcome.

13. Size is not an unfettered policy option; it is influenced by population density, transportation availability, and other factors. Nonetheless, some states have been far more aggressive in consolidating schools than others.

14. The school size variable is the number of students per school. That variable and the number of years of compulsory education were taken from the Digest of Education Statistics, 1991. School expenditures per student for 1960 were taken from the 1963 Statistical Abstract of the United States.

15. Several time lags were tried, ranging in five-year increments from 1985 to 1960. The longer the lag, the stronger the relationship, meaning the 1960 variable worked best. The impact of funding on education may be through attitudes such as support for education or perhaps by attracting and retaining quality teachers by establishing a long-term stable school system. Reforms are also likely to affect younger students more than older students; thus a large portion of this lag is accounted for by the 13-year public school curricula. Policy makers generally fail to realize that all reforms have an incubation period.
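The lag-selection exercise described in this note can be sketched as a simple search over candidate years. Everything below (years, weights, and data) is fabricated to mimic the pattern the authors report, where longer lags correlate more strongly with performance.

```python
import numpy as np

# Try funding measured at each candidate year and keep the lag that
# correlates most strongly with performance. All data are invented.
rng = np.random.default_rng(2)
performance = rng.normal(75, 5, 49)  # e.g., state graduation rates

# Fake funding series whose link to performance strengthens with lag.
weights = (0.1, 0.2, 0.3, 0.4, 0.6, 2.0)
funding_by_year = {
    year: performance * w + rng.normal(0, 5, 49)
    for year, w in zip(range(1985, 1955, -5), weights)
}

corrs = {y: np.corrcoef(f, performance)[0, 1]
         for y, f in funding_by_year.items()}
best_year = max(corrs, key=lambda y: abs(corrs[y]))
print(best_year)
```

In the paper's actual analysis the 1960 funding variable played this winning role; the sketch simply shows the mechanical form of the search.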

16. All data, except for state bureaucrats, are for 1988 and from the Digest of Education Statistics. Measuring state bureaucracy is difficult because state education agencies often have multiple functions. Our measure is the number of full-time professional staff in the headquarters of the state education department for 1982 from Educational Government in the States: A Status Report on State Boards of Education, Chief State School Officers, and State Education Agencies (Washington: U.S. Department of Education, 1985).

17. These teaching-involvement measures are strongly correlated with the smaller schools measure. The correlations with school size are -.52 for textbooks, -.60 for curriculum, -.46 for tracking, and -.51 for promotions. Much of the impact of teacher influence, therefore, probably works through the school size measure.

18. Our definition of administrative functions for teachers is at odds with the effort to empower teachers. That movement attempts to create greater teacher autonomy by vesting oversight functions in the teachers themselves.

19. In addition to variables defined in other tables, this analysis included measures of population density, remedial reading programs, and state funding. Population density is the number of persons per square mile using the 1980 Census of Population. Remedial reading programs is the percentage of students in such programs reported in the National Center for Educational Statistics, Comparisons of Public and Private Schools, 1987-88 (Washington: author). State funding is the percentage of the school districts' budgets provided by state government; it is from the Digest of Education Statistics.

20. One possibility that we have not examined is that poor management rather than bureaucracy per se is a cause of low student performance. Bureaucracy and the quality of management are distinct concepts, as recent work on red tape and bureaucracy has demonstrated (see Bozeman, Reed, and Scott, 1992).


References

Ancarrow, Janice S. and Elizabeth Gerald, 1990. Comparisons of Public and Private Schools, 1987-88. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.

Andersen, Kristi, 1975. "Working Women and Political Participation, 1952-1972." American Journal of Political Science, vol. 19 (August), 439-454.

Astin, A., 1971. Predicting Academic Performance in College. New York: Free Press.
Bozeman, Barry, Pamela N. Reed, and Patrick Scott, 1992. "Red Tape and Task Delays in Public and Private Organizations." Administration and Society, vol. 24 (November), 290-322.

Carnegie Foundation for the Advancement of Teaching, 1988. Teacher Involvement in Decisionmaking: A State-by-State Profile. New York: Carnegie Foundation for the Advancement of Teaching.

———, 1992. School Choice. Princeton, NJ: The Carnegie Foundation.
Chubb, John E. and Terry M. Moe, 1988. "Politics, Markets, and the Organization of Schools." American Political Science Review, vol. 82 (December), 1065-1089.
———, 1990. Politics, Markets and America's Schools. Washington, DC: The Brookings Institution.
Clune, William H. and John F. Witte, eds., 1990. Choice and Control in American Education, vols. 1 & 2. Bristol, PA: The Falmer Press.

Coleman, James S. et al., 1966. Equality of Educational Opportunity. Washington, DC: U.S. Government Printing Office.

Downing, Paul B., 1984. Environmental Economics and Policy. Boston: Little, Brown.
Etzioni-Halevy, Eva, 1983. Bureaucracy and Democracy: A Political Dilemma. Boston: Routledge and Kegan Paul.
Fliegel, Seymour and James MacGuire, 1993. Miracle in East Harlem. New York: Basic Books.
Fox, John, 1991. Regression Diagnostics. Newbury Park, CA: Sage Publications Inc.
Galambos, P., 1964. "On the Growth of the Employment of Non-Manual Workers in the British Manufacturing Industries, 1948-1962." Bulletin of the Oxford University Institute of Economics and Statistics, vol. 26 (November), 369-383.

Goodsell, Charles T., 1984. "The Grace Commission: Seeking Efficiency for the Whole People?" Public Administration Review, vol. 44 (May/June), 196-204.
———, 1994. The Case for Bureaucracy: A Public Administration Polemic. 3d ed. Chatham, NJ: Chatham House.

Hanushek, Erik A., 1981. "Throwing Money at Schools." Journal of Policy Analysis and Management, vol. 1, no. 1, 19-41.
———, 1986. "The Economics of Schooling: Production and Efficiency in Public Schools." Journal of Economic Literature, vol. 24, no. 3, 1141-1177.
Hanushek, Erik A. and Lori L. Taylor, 1990. "Alternative Assessments of the Performance of Schools." Journal of Human Resources, vol. 25, no. 2, 179-201.

Harris, Chester W., 1963. Problems in Measuring Change. Madison: University of Wisconsin Press.

Jencks, C., M. Smith, H. Acland, M.J. Bane, D. Cohen, H. Gintis, B. Heyns, and S. Michelson, 1972. Inequality: A Reassessment of the Effect of Family and Schooling in America. New York: Basic Books.

Jones, Calvin, National Opinion Research Center, and U.S. Office of Educational Research and Improvement, 1980-1986. High School and Beyond. Washington, DC: Office of Educational Research and Improvement.

Lan, Zhiyong and Hal G. Rainey, 1992. "Goals, Rules, and Effectiveness in Public, Private, and Hybrid Organizations." Journal of Public Administration Research and Theory, vol. 2 (January), 3-29.

Lehnen, Robert G., 1992. "Constructing State Education Performance Indicators from ACT and SAT Scores." Policy Studies Journal, vol. 20, no. 1, 22-40.
Litan, Robert E. and William D. Nordhaus, 1983. Reforming Regulation. New Haven: Yale University Press.
Meyer, John W., W. Richard Scott, and David Strang, 1987. "Centralization, Fragmentation, and School District Complexity." Administrative Science Quarterly, vol. 32 (March), 186-201.

Meyer, Robert H., 1992. "Education Reform: What Constitutes Valid Indicators of Educational Performance?" The LaFollette Policy Report, vol. 4, no. 2.

Mullis, Ina V. S., John A. Dossey, Eugene H. Owen, and Gary W. Phillips, 1991. The State of Mathematics Achievement: NAEP's 1990 Assessment of the Nation and the Trial Assessment of the States. Washington, DC: National Center for Education Statistics.

National Center for Education Statistics, 1991. Digest of Education Statistics. Washington, DC: U.S. Government Printing Office.

Niskanen, William, 1971. Bureaucracy and Representative Government. Chicago: Aldine.

Peterson, Paul E., 1990. "Monopoly and Competition in American Education." In William H. Clune and John F. Witte, eds., Choice and Control in American Education. New York: The Falmer Press.
Powell, Brian and Lala Carr Steelman, 1984. "Variation in State SAT Performance: Meaningful or Misleading?" Harvard Educational Review, vol. 54, no. 4, 389-412.

Price, James L., 1972. Handbook of Organizational Measurement. Lexington, MA: D.C. Heath and Company.

Redford, Emmette S., 1969. Democracy in the Administrative State. New York: Oxford University Press.

Rushing, William A., 1967. "The Effects of Industry Size and Division of Labor on Administration." Administrative Science Quarterly, vol. 12 (September), 273-295.

Tiebout, Charles M., 1956. "A Pure Theory of Local Expenditures." Journal of Political Economy, vol. 64, 416-424.

Tyack, David B., 1974. The One Best System: A History of American Urban Education. Cambridge, MA: Harvard University Press.

Wainer, H., 1986a. "Five Pitfalls Encountered While Trying to Compare States on their SAT Scores." Journal of Educational Measurement, vol. 23, no. 1, 69-81.

———, 1986b. "The SAT as a Social Indicator: A Pretty Bad Idea." In H. Wainer, ed., Drawing Inferences from Self-Selected Samples. New York: Springer.

U.S. Bureau of the Census, 1963. Statistical Abstract of the United States. Washington, DC: Government Printing Office.

———, 1991. Statistical Abstract of the United States, 1991. Washington, DC: U.S. Government Printing Office, 140.

———, The Statistical Abstract of the United States (for per capita income data). Washington, DC: U.S. Government Printing Office.

U.S. Bureau of Labor Statistics, 1988. Geographic Profile of Employment and Unemployment, 1988. Washington, DC: U.S. Government Printing Office.

U.S. National Center for Health Statistics, 1988. Vital Statistics of the United States, 1988. Washington, DC: U.S. Government Printing Office.

U.S. Department of Education, 1985. Educational Government in the States: A Status Report on State Boards of Education, Chief State School Officers, and State Education Agencies. Washington, DC: U.S. Department of Education.
