Borrowing wise words from those truly market-based, Private Independent schools…

Lately it seems that public policy, and the reformy rhetoric that drives it, are hardly influenced by the vast body of empirical work and insights from leading academic scholars suggesting that practices such as using value-added metrics to rate teacher quality, dramatically increasing test-based accountability, or pushing for common core standards and tests to go with them are unlikely to lead to substantial improvements in education quality or equity.

Rather than review relevant empirical evidence or provide new empirical illustrations in this post, I’ll do as I’ve done before on this blog and refer to the wisdom and practices of private independent schools – perhaps the most market-driven and most elite segment of elementary and secondary schooling in the United States.

Really… if running a school like a ‘business’ (or more precisely running a school as we like to pretend that ‘businesses’ are run… even though ‘most’ businesses aren’t really run the way we pretend they are) was such an awesome idea for elementary and secondary schools, wouldn’t we expect to see that our most elite, market oriented schools would be the ones pushing the envelope on such strategies?

If rating teachers based on standardized test scores was such a brilliant revelation for improving the quality of the teacher workforce, if getting rid of tenure and firing more teachers was clearly the road to excellence, and if standardizing our curriculum and designing tests for each and every component of it were really the way forward, we’d expect to see these strategies all over the home pages of web sites of leading private independent schools, and we’d certainly expect to see these issues addressed throughout the pages of journals geared toward innovative school leaders, like Independent School Magazine.  In fact, they must have been talking about this kind of stuff for at least a decade. You know, how and why merit pay for teachers is the obvious answer for enhancing teacher productivity, and why we need more standardization… more tests… in order to improve curricular rigor? 

So, I went back and did a little browsing through recent, and less recent issues of Independent School Magazine and collected the following few words of wisdom:

From Winter 2003, when the school where I used to teach decided to drop Advanced Placement courses:

A little philosophy, first. Independent schools are privileged. We do not have to respond to the whims of the state, nor to every or any educational trend. We can maximize our time attuned to students and how they learn, and to the development of curriculum that enriches them and encourages the skills and attitudes of independent thinkers. Our founding charters and missions established independence for a range of reasons, but they now give all of us relative curricular autonomy, the ability to bring together a faculty of scholars and thinkers who are equipped to develop rich, developmentally sound programs of study. As Fred Calder, the executive director of New York State Association of Independent Schools, wrote in a letter to member schools a few years ago: “If we cannot design our programs according to our best lights and the needs of our communities, then let the monolith prevail and give up the enterprise. Standardized testing in subject areas essentially smothers original thought, more fatally, because of the irresistible pressure on teachers to teach to the tests.”

http://www.nais.org/publications/ismagazinearticle.cfm?ItemNumber=144300

Blasphemy? Or simply good education!

And from way, way back in 2000, in a particularly thoughtful piece on “business” strategies applied to schools:

Educators do not respond to the same incentives as businesspeople and school heads have much less clout than their corporate counterparts to foster improvement. Most teachers want higher salaries but react badly to offers of money for performance. Merit pay, so routine in the corporate world, has a miserable track record in education. It almost never improves outcomes and almost always damages morale, sowing dissension and distrust, for three excellent reasons, among others: (1) teachers are driven to help their own students, not to outperform other teachers, which violates the ethic of service and the norms of collegiality; (2) as artisans engaged in idiosyncratic work with students whose performance can vary due to factors beyond school control, teachers often feel that there is no rational, fair basis for comparison; and (3) in schools where all faculty feel underpaid, offering a special sum to a few sparks intense resentment. At the same time, school leaders have limited leverage over poor performers. Although few independent schools have unionized staff and formal tenure, all are increasingly vulnerable to legal action for wrongful dismissal; it can take a long time and a large expense to dismiss a teacher. Moreover, the cost of firing is often prohibitive in terms of its damage to morale. Given teachers’ desire for security, the personal nature of their work, and their comparative lack of worldliness, the dismissal of a colleague sends shock waves through a faculty, raising anxiety even among the most talented.

http://nais.org/publications/ismagazinearticle.cfm?ItemNumber=144267

Unheard of! Isn’t firing the bad teacher supposed to make all of those (statistically) great teachers feel better about themselves? Improve the profession? [that said, we have little evidence one way or the other]

How can we allow our leading private, independent, market-based schools to promote such gobbledygook? Why do they do it? Are they a threat to our national security or our global economic competitiveness because they were not then, nor are they now (see recent issues: http://www.nais.org/) fast-tracking the latest reformy fads? Testing out the latest and greatest educational improvement strategies on their own students, before those strategies get tested on low income children in overcrowded urban classrooms? Why aren’t the boards of directors of these schools – many of whom are leaders in “business” – demanding that they change their outmoded ways? Why? Why? Why? Because what they are doing works! At least in terms of their success in continuing to attract students and produce successful graduates.

Now, that’s not to say that these schools are completely stagnant, never adopting new strategies or reforms. They do new stuff all the time (technology integration, etc.) – just not the absurd reformy stuff being dumped upon public schools by policymakers who in many cases choose to send their own children to private independent schools.

In my repeated pleas to private school leaders to provide insights into current movements in teacher evaluation and compensation, I’ve actually found little change from these core principles of nearly a decade ago. Private independent schools don’t just fire at will and fire often, and teacher compensation remains very predictable and traditionally structured. I’d love to know, from my private school readers, how many of their schools have adopted state-mandated tests?

Private independent schools pride themselves on offering small class sizes (see also here) and a diverse array of curricular opportunities, as well as arts, sports and other enrichment – the full package. And, as I’ve shown in my previous research, private independent schools charge tuition and spend on a per-pupil basis at levels much higher than traditional public school districts operating in the same labor market. They also pay their headmasters well! More blasphemy indeed.

In fact, aside from “no excuses” charter schools whose innovative programs consist primarily of rigid discipline coupled with longer hours and small group tutoring (not rocket science), and higher teacher salaries (here, here & here) to compensate for the additional work, private independent schools may just be among the least reformy elementary and secondary education options out there.

That’s not to say they are anything like “no excuses” charter schools; in many ways they are not. But they are equally non-reformy. In fact, the average school year in private independent schools is shorter, not longer, than in traditional public schools – about 165 days. And the average student load of teachers (course sections x class size) is much lower in the typical private independent school than in traditional public schools. But that ain’t reformy stuff at all, any more than trying to improve outcomes of low-income kids by adding hours and providing tutoring.

Nonetheless, for some reason, well educated people with the available resources keep choosing these non-reformy and expensive schools. Some of these schools have been around for a while too! Maybe, just maybe, it’s because they are doing the right things – providing good, well rounded educational opportunities as many of them have for centuries, adapting along the way (see: http://www.exeter.edu/admissions/109_1220_11688.aspx). Perhaps they’ve not gone down the road of substantially increased testing, curriculum standardization, and test-based teacher evaluation – firing their way to Finland – because they understand that these policy initiatives offer little to improve school quality and much potential damage.

Perhaps there are some lessons to be learned from market based systems. But perhaps we should be looking to those market based systems that have successfully provided high quality schooling for centuries to our nation’s most demanding, affluent and well educated leaders, rather than basing our policy proposals on some make-believe highly productive private sector industry where new technologies reduce production costs to near $0 and where complex statistical models are used to annually deselect non-productive employees.

Just pondering the possibilities, and still waiting for Zuck (an Exeter alum) to invest in Harkness Tables for Newark Public Schools and class sizes of 12 across the board!

Productivity continued…updated…

Update

Mark Dynarski has added some additional useful recommendations regarding productivity research. Dynarski’s comments come in response to our suggestions for improving the rigor of productivity research, which centered on the rigorous application of the relevant methods we would expect to see in such work.

We agree with Mark Dynarski that using relevant methods alone doesn’t guarantee that they are used well. We were starting from the position that the work of Roza and Hill doesn’t apply relevant methods at all, let alone well. With that in mind, we concur with Dynarski’s argument that it is not only important to use the right methods, but to use them well, and that reasonable standards may be applied. Here are Mark Dynarski’s suggestions:

Here are some examples of what I had in mind for research standards: the analysis has been replicated by another researcher working independently (replication being a lynchpin of scientific method). Predictions from the analysis have explanatory power outside the sample. The modeling framework is mathematically consistent. The research team has no conflicts of interest.

Applying these standards might result in excluding a lot of current research (even peer-reviewed research), but I think that would be the point Welner and Baker are making.

Readers interested in assessing research might take a look at the National Academy of Sciences’ Reference Manual on Scientific Evidence, now in its third edition, especially the chapter by Kaye and Freedman on statistics. It’s highly readable and available for free download from the academy’s website.

Below is my original reply to Mark Dynarski’s comment:

Over at Sara Mead’s Ed Week blog, Mark Dynarski checks in with a few relevant questions and observations. Actually, as it turns out, we agree ALMOST entirely with Dynarski when he says:

And focusing on peer-reviewed research as a form of quality assurance, as Baker and Welner suggest, seems problematic. Peer-reviewed research journals have highly variable degrees of editorial control, and peer review itself can vary from cursory reading to exhaustive and detailed comments. My own observation is that focusing on research with rigorous designs probably is a superior contributor to quality on average. There never seem to be enough of these when difficult debates on education policy issues arise, though.

Our only disagreement here is with his characterization of what we said. We did not uphold peer review as the gold standard, though we probably used the phrase too often in the brief itself. Rather, we believe, just as Dynarski stated, that research with rigorous designs is a superior contributor to quality, on average! Hell yes. Absolutely. That’s our point. At the very least, the issues and questions at hand should be framed, or frame-able, in relevant terms for rigorous evaluation.

That is precisely our concern with the Roza/Hill and Roza and other colleagues materials we address in our report (see pages 9 to 14). Further, a large section of our report summarizes the relevant methods – those rigorous and appropriate designs that should be applied to the questions at hand, but are noticeably absent even at the most cursory level in Roza and Hill’s materials.

To save you all the trouble of actually reading our entire brief, I’ve copied and pasted below the section of our brief where we address relevant methods:

Summary of Available Methods

Discussions of educational productivity can and should be grounded in the research knowledge base. Therefore, prior to discussing the Department of Education’s improving productivity project website and recommended resources, we think it important to explain the different approaches that researchers use to examine productivity and efficiency questions. Two general bodies of research methods have been widely used for addressing questions of improving educational efficiency. One broad area includes “cost effectiveness analysis” and “cost-benefit analysis.” The other includes two efficiency approaches: “production efficiency” and “cost efficiency.” Each of these is explained below.

 Cost-Effectiveness Analysis and Cost-Benefit Analysis

In the early 1980s Hank Levin produced the seminal resource on applying cost effectiveness analysis in education (with a second edition in 2001, co-written with Patrick McEwan),[i] helpfully titled “Cost-Effectiveness Analysis: Methods and Applications.” The main value of this resource is as a methodological guide for determining which, among a set of options, are more and less cost effective, which produce greater cost-benefit, or which have greater cost-utility.

The two main types of analyses laid out in Levin and McEwan’s book are cost-effectiveness analysis and cost-benefit analysis, the latter of which can focus on either short-term cost savings or longer-term economic benefits. All these approaches require an initial determination of the policy alternatives to be compared. Typically, the baseline alternative is the status quo. The status quo is not necessarily a bad choice. One embarks on cost-effectiveness or cost-benefit analysis to determine whether one might be able to do better than the status quo, but it is not simply a given that anything one might do is better than what is currently being done. It is indeed almost always possible to spend more and get less with new strategies than with maintaining the current course.

Cost-effectiveness analysis compares policy options on the basis of total costs. More specifically, this approach compares the spending required under specific circumstances to fully implement and maintain each option, while also considering the effects of each option on a common set of measures. In short:

Cost of implementation and maintenance of option A
÷ Estimated outcome effect of implementing and maintaining option A

Compared to

Cost of implementation and maintenance of option B
÷ Estimated outcome effect of implementing and maintaining option B

Multiple options may (and arguably should) be compared, but there must be at least two. Ultimately, the goal is to arrive at a cost-effectiveness index or ratio for each alternative in order to determine which provides the greatest effect for a constant level of spending.
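To make the ratio concrete, here is a minimal sketch in Python. All costs and effect sizes are invented for illustration; nothing here comes from an actual study.

```python
# Hypothetical per-pupil costs and measured effect sizes (in standard
# deviations) for two program options. All figures are invented.
options = {
    "A": {"cost": 1200.0, "effect": 0.15},
    "B": {"cost": 800.0, "effect": 0.12},
}

# Cost-effectiveness ratio: dollars spent per unit of measured effect.
# The lower the ratio, the more effect per dollar.
ratios = {name: o["cost"] / o["effect"] for name, o in options.items()}
most_cost_effective = min(ratios, key=ratios.get)

for name, ratio in ratios.items():
    print(f"Option {name}: ${ratio:,.0f} per 1.0 SD of effect")
```

Note that under these made-up numbers option A produces the larger effect even though option B wins on the ratio; a district with the budget to fully fund A might still prefer it, which is why the ratio informs rather than dictates the choice.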

The accuracy of cost-effectiveness analyses is contingent, in part, upon carefully considering all direct and indirect expenditures required for the implementation and maintenance of each option. Imagine, for example, program A, where the school incurs the expenses on all materials and supplies. Parents in program B, in contrast, are expected to incur those expenses. It would be inappropriate to compare the two programs without counting those materials and supplies as expenses for Program B. Yes, it is “cheaper” for the district to implement program A, but the effects of program B are contingent upon the parent expenditure.

Similarly, consider an attempt to examine the cost effectiveness of vouchers set at half the amount allotted to public schools per pupil. Assume, as is generally the case, that the measured outcomes are not significantly different for those students who are given the voucher. Finally, assume that the private school expenditures are the same as those for the comparison public schools, with the difference between the voucher amount and those expenditures being picked up through donations and through supplemental tuition charged to the voucher parents. One cannot claim greater “cost effectiveness” for voucher subsidies in this case, since another party is picking up the difference. One can still argue that this voucher policy is wise, but the argument cannot be one of cost effectiveness.

Note also that the expenditure required to implement program alternatives may vary widely depending on setting or location. Labor costs may vary widely, and availability of appropriately trained staff may also vary, as would the cost of building space and materials. If space requirements are much greater for one alternative, while personnel requirements are greater for the second, it is conceivable that the relative cost effectiveness of the two alternatives could flip when evaluated in urban versus rural settings. There are few one-size-fits-all answers.

Cost-effectiveness analysis also requires having common outcome measures across alternative programs. This is relatively straightforward when comparing educational programs geared toward specific reading or math skills. But policy alternatives rarely focus on precisely the same outcomes. As such, cost-effectiveness analysis may require additional consideration of which outcomes have greater value, which are more preferred than others. Levin and McEwan (2001) discuss these issues in terms of “cost-utility” analyses. For example, assume a cost-effectiveness analysis of two math programs, each of which focuses on two goals: conceptual understanding and more basic skills. Assume also that both require comparable levels of expenditure to implement and maintain and that both yield the same average combined scores of conceptual and basic-skills assessments. Program A, however, produces higher conceptual-understanding scores, while program B produces higher basic-skills scores. If school officials or state policy makers believe conceptual understanding to be more important, a weight might be assigned that favors the program that led to greater conceptual understanding.
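A cost-utility weighting of the kind just described can be sketched as follows. The scores and the 70/30 weighting are invented; the weights encode a policy preference for conceptual understanding, not an empirical finding.

```python
# Hypothetical scores for two math programs on two goals. Both programs
# have the same unweighted average (75), so an unweighted comparison ties.
scores = {
    "A": {"conceptual": 80.0, "basic": 70.0},
    "B": {"conceptual": 70.0, "basic": 80.0},
}

def utility(s, w_conceptual=0.7, w_basic=0.3):
    """Weighted composite favoring conceptual understanding."""
    return w_conceptual * s["conceptual"] + w_basic * s["basic"]

utilities = {name: utility(s) for name, s in scores.items()}
# Program A scores higher once conceptual understanding is weighted up;
# with equal weights (0.5, 0.5) the two programs tie.
```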

In contrast to cost-effectiveness analysis, cost-benefit analysis involves dollar-to-dollar comparisons, both short-term and long-term. That is, instead of examining the estimated educational outcome effect of implementing and maintaining a given option, cost-benefit analysis examines the economic effects. But like cost-effectiveness analysis, cost-benefit analysis requires comparing alternatives:

Cost of implementation and maintenance of option A
÷ Estimated economic benefit (or dollar savings) of option A

Compared to

Cost of implementation and maintenance of option B
÷ Estimated economic benefit (or dollar savings) of option B

Again, the baseline option is generally the status quo, which is not assumed automatically to be the worst possible alternative. Cost-benefit analysis can be used to search for immediate, or short-term, cost savings. A school in need of computers might, for example, use this approach in deciding whether to buy or lease them or it may use the approach to decide whether to purchase buses or contract out busing services. For a legitimate comparison, one must assume that the quality of service remains constant. Using these examples, the assumption would be that the quality of busing or computers is equal if purchased, leased or contracted, including service, maintenance and all related issues. All else being equal, if the expenses incurred under one option are lower than under another, that option produces cost savings. As we will demonstrate later, this sort of example applies to a handful of recommendations presented on the Department of Education’s website.
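The buy-versus-lease decision reduces to simple arithmetic once quality is held constant. A sketch with purely hypothetical dollar figures:

```python
# Buy vs. lease for 100 student computers over a 4-year horizon, assuming
# the quality of machines, service, and maintenance is identical under
# both options. All dollar figures are hypothetical.
n_computers = 100
years = 4

purchase_price = 900      # per machine, one-time
annual_maintenance = 50   # per machine, per year (buyer's cost)
annual_lease = 300        # per machine, per year, maintenance included

buy_total = n_computers * (purchase_price + annual_maintenance * years)
lease_total = n_computers * annual_lease * years

savings_from_buying = lease_total - buy_total  # positive => buying is cheaper
```

Under these invented numbers buying saves $10,000 over the horizon; with a shorter horizon or a cheaper lease the answer flips, which is the point of running the comparison rather than assuming it.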

Cost-benefit analysis can also be applied to big-picture education policy questions, such as comparing the costs of implementing major reform strategies such as class-size reduction or early childhood programs versus raising existing teachers’ salaries or measuring the long-term economic benefits of those different programmatic options. This is also referred to as return-on-investment analysis.

While cost-effectiveness and cost-benefit analyses are arguably under-used in education policy research, there are a handful of particularly useful examples:

  1. Determining whether certain comprehensive school reform models are more cost-effective than others.[ii]
  2. Determining whether computer-assisted instruction is more cost-effective than alternatives such as peer tutoring.[iii]
  3. Comparing National Board Certification for teachers to alternatives in terms of estimated effects and costs.[iv]
  4. Evaluating, via cost-benefit analysis, the long-term benefits and associated costs of participation in certain early-childhood programs.[v]

Another useful example is provided by a recent policy brief prepared by economists Brian Jacob and Jonah Rockoff, which provides insights regarding the potential costs and benefits of seemingly mundane organizational changes to the delivery of public education, including (a) changes to school start times for older students, based on research on learning outcomes by time of day; (b) changes in school-grade configurations, based on an increased body of evidence relating grade configurations, location transitions and student outcomes; and (c) more effective management of teacher assignments.[vi] While the authors do not conduct full-blown cost effectiveness or cost-benefit analyses, they do provide guidance on how pilot studies might be conducted.

Efficiency Framework

As explained above, cost-benefit and cost-effectiveness analyses require analysts to isolate specific reform strategies in order to correspondingly isolate and cost the strategies’ components and estimate their effects. In contrast, relative-efficiency analyses focus on the production efficiency or cost efficiency of organizational units (such as schools or districts) as a whole. In the U.S. public education system, there are approximately 100,000 traditional public schools in roughly 15,000 traditional public school districts, plus 5,000 or so charter schools. Accordingly, there is significant and important variation in the ways these schools get things done. The educational status quo thus entails considerable variation in approaches and in quality, as well as in the level and distribution of funding and the population served.

Each organizational unit, be it a public school district, a neighborhood school, a charter school, a private school, or a virtual school, organizes its human resources, material resources, capital resources, programs, and services at least marginally differently from all others. The basic premise of using relative efficiency analyses to evaluate education reform alternatives is that we can learn from these variations. This premise may seem obvious, but it has been largely ignored in recent policymaking. Too often, it seems that policymakers gravitate toward a policy idea without any empirical basis, assuming that it offers a better approach despite having never been tested. It is far more reasonable, however, to assume that we can learn how to do better by (a) identifying those schools or districts that do excel, and (b) evaluating how they do it. Put another way, not all schools in their current forms are woefully inefficient, and any new reform strategy will not necessarily be more efficient. It is sensible for researchers and policymakers to make use of the variation in those 100,000 schools by studying them to see what works and what does not. These are empirical questions, and they can and should be investigated.

Efficiency analysis can be viewed from either of two perspectives: production efficiency or cost efficiency. Production efficiency (also known as “technical efficiency of production”) measures the outcomes of organizational units such as schools or districts given their inputs and given the circumstances under which production occurs. That is, which schools or districts get the most bang for the buck? Cost efficiency is essentially the flip side of production efficiency. In cost efficiency analyses, the goal is to determine the minimum “cost” at which a given level of outcomes can be produced under given circumstances. That is, what’s the minimum amount of bucks we need to spend to get the bang we desire?

In either case, three moving parts are involved. First, there are measured outcomes, such as student assessment outcomes. Second, there are existing expenditures by those organizational units. Third, there are the conditions, such as the varied student populations and the size and location of the school or district, including differences in competitive wages for teachers, health care costs, heating and cooling costs, and transportation costs.

It is important to understand that all efficiency analyses, whether cost efficiency or production efficiency, are relative. Efficiency analysis is about evaluating how some organizational units achieve better or worse outcomes than others (given comparable spending), or how or why the “cost” of achieving specific outcomes using certain approaches and under some circumstances is more or less in some cases than others. Comparisons can be made to the efficiency of average districts or schools, or to those that appear to maximize output at given expense or minimize the cost of a given output. Efficiency analysis in education is useful because there are significant variations in key aspects of schools: what they spend, who they serve and under what conditions, and what they accomplish.

Efficiency analyses involve fitting statistical models to data on large numbers of schools or districts, typically over multiple years. While debate persists on the best statistical approaches for estimating cost efficiency or technical efficiency of production, the common goal across the available approaches is to determine which organizational units are more and less efficient producers of educational outcomes. Or, more precisely, the goal is to determine which units achieve specific educational outcomes at a lower cost.
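One crude way to operationalize relative efficiency is to regress outcomes on spending and rank units by their residuals. The sketch below uses invented district data and omits the controls for student needs and cost conditions that real analyses require:

```python
# Invented district data: per-pupil spending and an assessment-outcome
# index. Real analyses would also control for student needs, regional
# wage variation, and other cost factors.
spending = [9000, 10000, 11000, 12000, 13000]
outcomes = [62.0, 70.0, 66.0, 78.0, 74.0]

# Ordinary least squares by hand: outcome ~ intercept + slope * spending.
n = len(spending)
mean_x = sum(spending) / n
mean_y = sum(outcomes) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(spending, outcomes))
         / sum((x - mean_x) ** 2 for x in spending))
intercept = mean_y - slope * mean_x

# Residual > 0: outcomes above what spending alone predicts, i.e. the
# district looks relatively efficient in this crude sense.
residuals = [y - (intercept + slope * x) for x, y in zip(spending, outcomes)]
```

In this toy data the district spending $12,000 sits furthest above its spending-predicted outcome; figuring out *why* such a district outperforms is where the real analytical work begins.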

Once schools or districts are identified as more (or less) efficient, the next step is to figure out why. Accordingly, researchers explore what variables across these institutions might make some more efficient than others, or what changes have been implemented that might have led to improvements in efficiency. Questions typically take one of two forms:

  1. Do districts or schools that do X tend to be more cost efficient than those doing Y?
  2. Did the schools or districts that changed their practices from X to Y improve in their relative efficiency compared to districts that did not make similar changes?

That is, the researchers identify and evaluate variations across institutions, looking for insights in those estimated to be more efficient, or alternatively, evaluating changes to efficiency in districts that have altered practices or resource allocation in some way. The latter approach is generally considered more relevant, since it speaks directly to changing practices and resulting changes in efficiency.[vii]

While statistically complex, efficiency analyses have been used to address a variety of practical issues, with implications for state policy, regarding the management and organization of local public school districts:

  1. Investigating whether school district consolidation can cut costs and identifying the most cost-efficient school district size.[viii]
  2. Investigating whether allocating state aid to subsidize property tax exemptions to affluent suburban school districts compromises relative efficiency.[ix]
  3. Investigating whether the allocation of larger shares of school district spending to instructional categories is a more efficient way to produce better educational outcomes.[x]
  4. Investigating whether decentralized governance of high schools improves efficiency.[xi]

These analyses have not always produced the results that policymakers would like to hear. Further, like many studies using rigorous scholarly methods, these analyses have limitations. They are necessarily constrained by the availability of data, they are sensitive to the quality of data, and they can produce different results when applied in different settings.[xii] But the results ultimately produced are based on rigorous and relevant analyses, and the U.S. Department of Education should be more concerned with rigor and relevance than convenience or popularity.

 


[i] Levin, H. M. (1983). Cost-Effectiveness. Thousand Oaks, CA: Sage.

Levin, H. M., & McEwan, P. J. (2001). Cost effectiveness analysis: Methods and applications. 2nd ed. Thousand Oaks, CA: Sage.

[ii] Borman, G., & Hewes, G. (2002). The long-term effects and cost-effectiveness of Success for All. Educational Evaluation and Policy Analysis, 24, 243-266.

[iii] Levin, H. M., Glass, G., & Meister, G. (1987). A cost-effectiveness analysis of computer assisted instruction. Evaluation Review, 11, 50-72.

[iv] Rice, J. K., & Hall, L. J. (2008). National Board Certification for teachers: What does it cost and how does it compare? Education Finance and Policy, 3, 339-373.

[v] Barnett, W. S., & Masse, L. N. (2007). Comparative Benefit Cost Analysis of the Abecedarian Program and its Policy Implications. Economics of Education Review, 26, 113-125.

[vi] See Jacob, B., & Rockoff, J. (2011). Organizing Schools to Improve Student Achievement: Start Times, Grade Configurations and Teacher Assignments. The Hamilton Project. Retrieved November 6, 2011 from http://www.hamiltonproject.org/files/downloads_and_links/092011_organize_jacob_rockoff_paper.pdf

See also Patrick McEwan’s review of this report:

McEwan, P. (2011). Review of Organizing Schools to Improve Student Achievement. Boulder, CO: National Education Policy Center. Retrieved December 2, 2011 from http://nepc.colorado.edu/thinktank/review-organizing-schools

[vii] Numerous authors have addressed the conceptual basis and empirical methods for evaluating technical efficiency of production and cost efficiency in education or government services more generally. See, for example:

Bessent, A. M., & Bessent, E. W. (1980). Determining the Comparative Efficiency of Schools through Data Envelopment Analysis, Education Administration Quarterly, 16(2), 57-75.

Duncombe, W., Miner, J., & Ruggiero, J. (1997). Empirical Evaluation of Bureaucratic Models of Inefficiency, Public Choice, 93(1), 1-18.

Duncombe, W., & Bifulco, R. (2002). Evaluating School Performance: Are we ready for prime time? In William J. Fowler, Jr. (Ed.), Developments in School Finance, 1999–2000, NCES 2002–316.Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Grosskopf, S., Hayes, K. J., Taylor, L. L., & Weber, W. (2001). On the Determinants of School District Efficiency: Competition and Monitoring. Journal of Urban Economics, 49, 453-478.

[viii] Duncombe, W. & Yinger, J. (2007). Does School District Consolidation Cut Costs? Education Finance and Policy, 2(4), 341-375.

[ix] Eom, T. H., & Rubenstein, R. (2006). Do State-Funded Property Tax Exemptions Increase Local Government Inefficiency? An Analysis of New York State’s STAR Program. Public Budgeting and Finance, Spring, 66-87.

[x] Taylor, L. L., Grosskopf, S., & Hayes, K. J. (2007). Is a Low Instructional Share an Indicator of School Inefficiency? Exploring the 65-Percent Solution. Working Paper.

[xi] Grosskopf, S., & Moutray, C. (2001). Evaluating Performance in Chicago Public High Schools in the Wake of Decentralization. Economics of Education Review, 20, 1-14.

[xii] See, for example, Duncombe, W., & Bifulco, R. (2002). “Evaluating School Performance: Are we ready for prime time?” In William J. Fowler, Jr. (Ed.), Developments in School Finance, 1999–2000, NCES 2002–316. Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Closing schools: Good Reasons and Bad Reasons

Current reformy rhetoric dictates that we MUST CLOSE FAILING SCHOOLS! That we must close those schools that are dropout factories or have persistently low achievement levels on state assessments. And, that we must, in the process, fire all of the staff in those schools that have caused these dismal conditions year after year, by thinking only of themselves, their tenure, their pensions and their wages – which are clearly too high for workers of their meager cognitive ability.

Take these simple bold steps and things will get better! Surely they will.

But, the bottom line is that you can’t just close down the poorest schools in any city school system and simply replace them with less poor ones – problem solved! That is, unless the larger strategy is actually about closing down entire neighborhoods, allowing them to become blighted, then seeking investors to step in and gentrify the area, replacing the old population with a new, less poor one! Problem solved. Or alternatively, if one relies on the off chance of a large scale natural disaster disproportionately displacing the poorest families to a large urban district in a neighboring state. But I digress.

A major unintended consequence of this ill-conceived reform movement is that it distracts local school administrators and boards of education from closing and/or reorganizing schools for the right reasons, by focusing all of the attention on closing schools for the wrong ones. In fact, even when school officials might wish to consider closing schools for logical reasons, they now seem compelled to say instead that they are proposing specific actions because the schools are “failing!” Not because the schools are too small to operate at efficient scale, because local demographic shifts warrant reconsidering attendance boundaries, or because a facility is simply unsafe or an unhealthy environment.

In really blunt terms, the current reformy rhetoric is forcing leaders to make stupid arguments for school closures where otherwise legitimate ones might actually exist!

There are legitimate reasons, cost-saving reasons and others, to close schools and reorganize the delivery of educational services across organizational units and geographic locations within a district. Often, when I’m pushed to suggest the types of steps districts might take to achieve cost savings, the first issue I turn to is school organization/optimization. Closing schools is not necessarily a bad thing. Closing schools for the wrong reasons and under the wrong pretexts is a bad thing. Reorganizing schools may lead to staffing reductions. These are cost cutting realities in a labor intensive industry. The fact is that you can’t really cut much from costs without cutting labor costs. When enrollments decline significantly over time, fewer teachers are needed to get the job done and the staff may need to be reorganized.

But closing schools based on test scores, and pretending that we are somehow appropriately dismissing the staff that “caused” those low test scores is – well – just dumb.

Now let’s talk about some of the more legitimate reasons that a district might choose to close/reorganize schools.

First, let’s define “cost” and “cost cutting.” Cost is the minimum amount that needs to be spent to achieve any given level of outcomes. It’s certainly possible to spend more than the minimum hypothetical – perfect world – cost of achieving any given level of outcomes. In fact, it’s pretty much a given that spending on outcomes occurs in less than perfect conditions, including unevenly growing and declining enrollments and unevenly distributed facilities capacity, quality and efficiency. Ultimately, the goal is to figure out how to reduce those barriers – those less than perfect conditions – in order to get closer to that hypothetical minimum cost of achieving a given level of outcomes. In other words, the goal in times of budget cuts is to figure out how to spend less, but not compromise outcomes.

Here’s a short list of legitimate reasons a district might choose to close schools.

Economies of Scale

Operating unnecessarily small schools within a district creates inappropriate inequities. Providing more resources per pupil in one school necessarily means less in others. If those differences are based on legitimate differences in costs and student needs, that’s fine. It’s a difference that advances rather than erodes equity. But sustaining inefficiently small schools at the expense of others within a large, population-dense school district doesn’t meet those criteria. So it’s in the best interest of the district as a whole to find ways to optimize the distribution of enrollments across schools within districts. To make sure, for example, that there aren’t elementary schools in one part of town with only 100 or so students, and in another part of town with 1,200 students. That there aren’t high schools with 300 to 400 students drawing resources from high schools with 1,500 students. This can be really tricky to accomplish. But even moving toward optimal size, without fully reaching it, is better than nothing. The literature on economies of scale suggests that elementary schools of 300 to 500 students and high schools of 600 to 900 students seem to produce optimal outcomes, and these sizes are consistent with literature suggesting that districts of 2,000 to 4,000 pupils seem to minimize the costs of producing outcomes.
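The economies-of-scale point can be illustrated with a toy calculation. All figures below are invented for illustration, not drawn from any district’s actual budget: fixed school-level costs (building operations, core administrative staff) get spread over however many students a school enrolls, so per-pupil spending climbs steeply as enrollment shrinks.

```python
# Stylized illustration of economies of scale (all figures assumed,
# not actual district data): fixed school-level costs are spread over
# fewer pupils in small schools, driving up per-pupil spending.
FIXED_COSTS = 600_000            # assumed annual fixed costs per school
VARIABLE_COST_PER_PUPIL = 7_000  # assumed per-pupil instructional cost

def per_pupil_spending(enrollment):
    """Total per-pupil spending at a given enrollment level."""
    return VARIABLE_COST_PER_PUPIL + FIXED_COSTS / enrollment

for enrollment in (100, 300, 500, 1_200):
    print(f"{enrollment:>5} students -> ${per_pupil_spending(enrollment):,.0f} per pupil")
```

Under these assumed numbers, the 100-student school spends nearly twice as much per pupil as the 1,200-student school to deliver the same program; that gap is precisely the inequity described above.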

Facility efficiency

Some school facilities are simply more efficient to operate than others. They have more efficient mechanical/HVAC systems, are better insulated, have fewer deferred maintenance issues, and have potentially longer projected useful lives. Some facilities simply have more efficient space for accommodating the kinds of programs and services that need to be delivered. Evaluating the costs and benefits of maintaining and upgrading the current stock of facilities – and whether children can be more efficiently distributed across “better” spaces with lower operations and maintenance costs – is something any/all school districts should be engaged in on an ongoing basis.

Transportation efficiency

As population distribution shifts across spaces within a district, and while considering other reasons for reorganizing and redistributing students across schools – usually via changes to school attendance zones, but potentially with choice programs as well – evaluation of transportation efficiency should also be on the table.  In a district with dramatically declining enrollment or geographically shifting enrollment, school closings may be inevitable. In fact, a district may find itself closing some schools and selling off land, while opening others in different locations (less likely in more densely populated urban centers, but common in sprawling exurbia).

Health & Safety Concerns

This one is (or at least should be) a no brainer. Kids shouldn’t be housed in unsafe or unhealthy facilities. With that in mind, districts should engage in cost-benefit analyses to evaluate and compare the costs of improving the problem facilities/spaces versus other reorganization options. Closing unsafe, unhealthy schools and appropriately distributing students among “better” spaces is obviously a legitimate reason for school closing.

Socioeconomic integration/balancing

A final reason why a district might close and/or reorganize schools to improve performance while maintaining (or cutting) spending is to achieve better peer-group balance across schools. Of course, this only works when the district is a) heterogeneous enough to be able to create better-balanced peer groups and b) geographically small enough not to incur substantial transportation costs when implementing such a policy. A substantial body of research indicates that concentrated poverty and, for that matter, racial composition (racial isolation) in schools can affect the costs of achieving a given outcome target. Optimizing peer group composition across schools, while considering interactions with other cost drivers (transportation), makes sense. Of course, the U.S. Supreme Court has placed some constraints on the role of race in re-assignment policies (http://www.oyez.org/cases/2000-2009/2006/2006_05_908). But options remain available.

Improving peer group balance, optimizing school sizes, optimizing bus routes, making best use of most operationally efficient and educationally efficient learning spaces all can help districts both reduce costs and improve outcomes.

AND ABSOLUTELY NONE OF THIS HAS ANYTHING TO DO WITH CLOSING FAILING SCHOOLS. Why? Because there’s little or no evidence that closing “failing” schools improves either productivity or efficiency.

It’s not that sexy. It’s not reformy. It’s just good management decision making to get the most bang-for-the-buck. And it’s all stuff that districts can and should be working on constantly.

Closing schools is never easy. Someone will always be irked, no matter what the reason for the closure. A neighborhood will feel that it has lost its identity. Alums will feel that a piece of their childhood has been taken away.  So if we’re going to go down this road, and fight the difficult political fights that school closing plans create, then we ought to be closing the schools for the right reasons, and not the wrong ones!

 

Productivity Agenda Yes! But based on real research & rigorous analysis!

Paul Hill and Marguerite Roza’s response to my recent report – with Kevin Welner – and series of blog posts seems to offer as its central argument that we’re simply curmudgeons, offering lots of complaints about the rigor of their arguments and their suggestions for improving schooling productivity and efficiency, but providing no creative or immediately useful ideas or solutions for school districts or states in these tough economic times.

My first response would be that bad ideas are bad ideas, even in the absence of alternatives. The fact that budgets are tight and many schools are underperforming is not an argument for implementing unproven, ill-considered policy solutions.

That said, my second response is that Kevin Welner and I did in fact offer our own solutions, both in our policy brief and elsewhere.

On the first response, allow me to point out that many of Hill and Roza’s own suggestions for how states or school districts should deal with these tough economic times are not cost-cutting measures at all. They provide little or no potential short-run cost savings – where cost savings means spending reduction while not harming (or perhaps even improving) student outcomes. For example, among the “cost-cutting” solutions offered in Petrilli and Roza’s Stretching the School Dollar brief[i], one of the resources recommended by the U.S. Department of Education, is designing and implementing new teacher evaluations. I may disagree with the shape that most of these new systems are taking, but I certainly agree that improving evaluations is a good idea.

It is not, however, a cost-cutting measure. In fact, it would require significant up-front investments, and whether or not these new systems will improve student outcomes (to say nothing of doing so cost-effectively) is a completely open question.

In addition, Kevin Welner and I address numerous concerns regarding other proposals in Roza’s work on pages 9 to 15 of our policy brief, including proposals for states to simply cut off aid for limited English proficient children after two years, or to cap the distribution of special education aid. These are simply spending cuts, not cost savings, and there is little or no evidence that such cuts would cause no harm, much less generate improvements.

Hill and Roza also suggest that their brief on curing “Baumol’s disease” provides useful insights into how private sector industries have found cures that might be translated to public education. Kevin Welner and I thoroughly rebut the basic premise of the Baumol’s disease claim and Hill and Roza’s proposed solutions on pages 10-11 of our report. In short, regarding the Baumol’s disease hypothesis (and its proposed solutions), Kevin Welner and I explain:

In sum, the report begins with two highly contestable claims. It then draws an unsupported causal connection between the two claims. Further, it assumes that the problem is universal—that the system as a whole is diseased. In making this assumption, the authors ignore any possibility that lessons may be derived from within the public education system.

My second general response is that, in contrast with Roza and Hill’s characterization of my work, Kevin Welner and I offer up lists of issues that have been studied regarding productivity and efficiency in public schooling, as well as frameworks for studying those issues and for guiding decision making.  That is, our policy brief on this topic is not merely a list of curmudgeonly complaints and critiques of the work of Roza and others, though we do have serious concerns (elaborated through curmudgeonly complaints) about the quality of much of that work and claims made. In fact, I personally have even more serious concerns about related work from Roza and colleagues that has come to light since the writing of that brief. Yes, Kevin and I can really crank out those curmudgeonly complaints. But we don’t stop there… this time.

Among other things, Kevin and I point to research that offers potentially mundane solutions – the non-reformy stuff – like organizing schools within districts to optimize distributions of enrollments (school size) to achieve economies of scale – and we point to a handful of examples provided in a paper by Brian Jacob & Jonah Rockoff[ii], including (a) changes to school start times for older students, based on research on learning outcomes by time of day; (b) changes in school-grade configurations, based on a growing body of evidence relating grade configurations, location transitions and student outcomes; and (c) more effective management of teacher assignments. Perhaps more importantly, we address methods and resources for guiding decision making on the basis of cost-effectiveness, and point out how the cost-effectiveness of particular options may, in fact, vary across settings.

It also bears mentioning that we don’t necessarily reject all of the ideas that Hill and Roza present. Rather, we explain that some might be explored, and might even be evaluated in pilot settings, before we start pitching them as large-scale reforms. For instance, regarding large-scale changes to teacher compensation systems, we explain:

That is not to say that such ideas cannot or should not be piloted and tested. They are, to some extent, researchable questions, most appropriately studied through relative efficiency analyses across districts and schools that are applying varied approaches, including the proposed new approaches.

In a forthcoming piece we go further to explain that some of our concerns regarding Roza and Hill’s current proposals are that they encourage large scale policy experimentation on the most vulnerable children, and we find that unacceptable given the extent of unknowns involved.  We explain (forthcoming):

Further, the political rhetoric around the immediacy of reform focuses on so-called failing schools, and failure is identified through performance metrics heavily influenced by student demographics. Simply put, we have ethical concerns with imposing unproven and sometimes unstudied policies on schools in low-income communities of color. And this is what we see happening.

There is time to figure some of this stuff out, and there are things we can look to do more immediately to achieve short run cost savings where necessary. As I often point out, when advocates use language like “And they need these ideas NOW,” it is most often a ploy to compel people into expediting ill-conceived policies with as much potential to do harm as to do good.

Many of the things that can and should be done in the short run, to reorganize local school district budgets, and to cut spending with minimal negative impact on outcomes, are really mundane things. They’re just not sexy enough to make for a good political platform or to generate public outcry. They don’t even lend themselves to good acronyms or catch phrases, like LIFO (last in, first out). They don’t have really cool, catchy names like Parent Trigger (note – I’m not blaming or even trying to connect Roza and Hill to Parent Trigger). And they can’t be likened to a disease to be cured.

They are instead ideas with a strong, empirically-based track record.

In the longer term, yes – there are policies that should be considered and tested, including those pertaining to teacher quality. But they are substantive education policies, not cost-cutting measures, and we should not be using a budget crisis to justify unwarranted haste and recklessness. Good policy making must rely on good policy analysis, and this relationship should not be severed simply because money is tight. If anything, it should be strengthened.

===============

Supplemental Note:

Hill and Roza offer as evidence that their proposed strategies have been evaluated in terms of productive efficiency and cost effectiveness three links to related “studies.”

One of those links points to a paper by Dan Goldhaber and Roddy Theobald regarding the potential costs/benefits of seniority-based versus “quality-based” layoffs. On the one hand, the paper does not yield any decisive guidance for short-term budget planning; on the other, it suffers from the circular logic I’ve discussed on numerous occasions on my blog – measuring the effectiveness of the policy by the same measure used to implement the policy (e.g., did firing teachers with low value-added scores leave us with more teachers with high value-added scores?). The central conclusion of the paper is: “Finally, simulations suggest that a very different group of teachers would be targeted for layoffs under an effectiveness-based layoff scenario than under the seniority-driven system that exists today.” This is hardly surprising, and of limited usefulness for informing state or local leaders on how to handle personnel decisions in tough budgetary times, or on the expected benefits or downsides of such policies; it also fails to address such basic issues as the cost of putting into place a system that might be used for making such decisions. I provide a hypothetical discussion of this topic in an earlier blog post.

The second study cited is Eric Hanushek’s paper suggesting that teachers whose effectiveness ratings are one standard deviation above the mean can yield a $400,000 benefit in present value of student future earnings with a class size of 20. This study also provides no guidance for how district administrators might cut costs, or even hold the line, while attracting or retaining teachers whose value-added scores are a standard deviation higher than their current average. Rather, this study speaks to the kind of large-scale deselection I’ve discussed numerous times on this blog.

The third “study” is not a study at all, but rather an opinion brief by Roza with relatively meaningless national ballpark estimates of job loss under alternative dismissal scenarios.


[i] Petrilli, M., & Roza, M. (2011). Stretching the School Dollar: A brief for State Policymakers. Thomas B. Fordham Institute. Retrieved November 6, 2011, from http://www.edexcellencemedia.net/publications/2011/20110106_STSD_PolicyBrief/20110106_STSD_PolicyBrief.pdf.

[ii] See Jacob, B., & Rockoff, J. (2011). Organizing Schools to Improve Student Achievement: Start Times, Grade Configurations and Teacher Assignments. The Hamilton Project. Retrieved November 6, 2011, from http://www.hamiltonproject.org/files/downloads_and_links/092011_organize_jacob_rockoff_paper.pdf.

Friday Thoughts: In my own words (recent media commentary)

Interview for In These Times:

[I]t’s much easier to point blame at those working within the system–like teachers–than to actually raise the revenues to provide the resources necessary to really improve the system–to pay sufficient wages to attract and retain top college graduates and to provide the working conditions that would make teaching more appealing–including smaller total student loads… and higher quality infrastructure, materials, supplies, equipment and other supports.

http://www.inthesetimes.com/working/entry/12618/teachers_and_communities_overshadowed_by_corporate_fixes_for_schools/

In my interview with Geoff Mulvihill of AP:

In response to a question about what reforms are needed most in New Jersey:

From a research angle, if you looked at the high-performing and the low-performing schools and you asked yourself what’s different about them, well, our highest-performing schools also have step-structured pay scales, collectively bargained agreements, tenure and union contracts, as do our low-performing schools. That’s not a differentiating factor.

These things that we’re talking about like merit pay, disrupting union contracts and collective bargaining don’t tend to be the things that the high-performing schools are doing.

http://www.courierpostonline.com/article/20120103/NEWS02/301030016/Educating-New-Jersey-s-urban-kids-costs-more-scholar-says?odyssey=nav|head

Follow-up to a similar question:

If you look at the biggest differences between the schools that are doing well and the schools that are doing poorly, there may be differences in teaching quality. There may be differences in skill-set of the teachers who are sorting themselves among the more and less desirable schools.

It may be that we’ve got some inequities in teaching quality. But to suggest that those inequities are a function of not having merit pay or they’re a function of having collective bargaining and a union presence doesn’t seem to fit when those structures also exist in the highly successful and affluent districts.

http://www.courierpostonline.com/article/20120103/NEWS02/301030016/Educating-New-Jersey-s-urban-kids-costs-more-scholar-says?odyssey=nav|head

On where to go from here:

I think we’ve got to keep up the effort of targeting resources toward the high-need districts, and the key is that equitable and adequate funding — and this is my big punchline — is the necessary condition for everything. If you want to run a good charter school, if you want to run a good public school, you’ve got to have enough money to do a good job.

NJ Charter Data Round-up

Note: I will be making updates to this post in the coming days/weeks.

As we once again begin discussing and debating the appropriate role for charter schools in New Jersey’s education reform “mix,” here’s a round-up of the New Jersey charter school numbers, in terms of demographic comparisons to all other public and charter schools in the same ‘city’ and proficiency rates (across all grades) compared to all others in the same ‘city.’

Key Findings:

Many NJ charter schools, especially those most often touted in the media as great success stories, continue to serve student populations that differ dramatically from populations of surrounding schools in the same city (see note *). These charters differ in terms of percentages of children who qualify for free lunch, percent classified as having disabilities, or percent with limited English language proficiency.

On average, given their demographics, NJ charter schools continue to have proficiency rates around where one would expect. Demographically advantaged charter schools have higher average proficiency than other schools around them. Demographically disadvantaged charter schools have lower average proficiency rates than others around them. Not tricky/heavy statistics here. Just a comparison of relative proficiency and relative demography.

When one estimates what I would call a “descriptive regression” model characterizing the differences in proficiency rates across district and charter schools in the same cities, one finds that, compared against schools of similar demography and on the same grade-level and subject-area tests, charter proficiency rates, on average, are no different from those of their traditional public school counterparts. In this particular regression model, charters did have higher proficiency in science (a charter-by-science interaction). More descriptive stuff to come when I get a chance. Not sure when that will be.

Note: The model includes a fixed effect for CITY location for each traditional public and charter school, such that each charter is compared against other schools in the same CITY.
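A regression of this general shape can be sketched as follows. This is a minimal illustration on synthetic data, not the actual model or data: the column names (e.g., `pct_free_lunch`) and all values are invented, and the real analysis includes additional demographic controls and grade/subject indicators.

```python
# Hypothetical sketch of a "descriptive regression": proficiency rates
# regressed on demographics, a charter indicator, a charter-by-science
# interaction, and CITY fixed effects (so each charter is compared
# against other schools in the same city). Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "city": rng.choice(["Newark", "Camden", "Trenton", "Paterson"], n),
    "charter": rng.integers(0, 2, n),
    "science": rng.integers(0, 2, n),  # 1 = science test, 0 = other subject
    "pct_free_lunch": rng.uniform(0.2, 0.95, n),
})
# Synthetic proficiency driven mainly by demography, not charter status
df["proficiency"] = (
    0.9 - 0.5 * df["pct_free_lunch"]
    + 0.05 * df["charter"] * df["science"]
    + rng.normal(0, 0.05, n)
).clip(0, 1)

model = smf.ols(
    "proficiency ~ pct_free_lunch + charter + charter:science + C(city)",
    data=df,
).fit()
print(model.params.round(3))
```

The `C(city)` term is the city fixed effect described in the note; the `charter:science` term is what would pick up a charter advantage specific to science tests.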

But to be absolutely clear, this particular analysis misses the point entirely in two ways. First, it is merely descriptive of the average proficiency rates of charter and non-charter schools across tests, subjects and grades. It is not, by any means, a test of the comparative effectiveness of schools. Second, as I explain below, comparisons of charterness vs. non-charterness are not particularly helpful for informing policy.

Policy Perspectives:

Issue 1: The relevant policy question is not whether charters on average perform better than traditional public schools on average and therefore whether we should simply replace more traditional public schools with more charter schools. The relevant questions are “what works? For whom? And under what circumstances?” Charter schools, traditional public schools and private schools all vary widely in quality and in their ability to serve different populations well. Some schools of each organizational type do well (at least for some kids) while others, quite bluntly, suck, no matter who they try to serve. Further, I’ve written previously about these arguments that charters or private schools “do more (than traditional public schools) with less money.” However, rarely are those money comparisons rigorously or accurately conducted. Oftentimes the assertion of “more with less” isn’t backed by any analysis at all of the “with less” part of that equation (and sometimes not the “more” part either). But these are the types of issues we need to be exploring, including, specifically, the resource implications of the models being offered by those “successful” schools, be they charters, traditional public schools, or other alternatives.

Issue 2: It may not be that the only appropriate role for charters in the mix is for them to all try to serve the most representative population – a population mirroring that of the city as a whole or their zip codes. But, for those that don’t – for those that serve a niche – we need to recognize them as such, and need to monitor the extent that their demographic selection may have adverse effects on the system as a whole. We also need to recognize that their demographic difference may play a significant role in explaining either their apparent success, or apparent failure. We should recognize, for example, that schools like Robert Treat or North Star Academy may be showing high outcomes but are doing so largely as a function of serving very different populations than others around them. Further, there may be nothing wrong with that if they are truly doing well by the kids they serve. That may just be their appropriate niche. We just can’t pretend that this model of success can be spread city wide or statewide. And, it may be inappropriate to encourage these schools to serve more representative populations. Perhaps they should stick with what they are good at. As a result, it may be more reasonable for charters like North Star or Robert Treat to establish similar niche schools in other New Jersey cities rather than pretending they can expand dramatically in the same cities and still maintain their current level of achievements.

Issue 3: We also need to remember that NJ’s large urban districts themselves operate a wide variety of schools and segment their own student populations at the secondary level through such options as magnet schools. Charters aren’t the only segmenting force. Charters including those that are demographically representative and those that aren’t have simply become a part of that mix. And we need to recognize where each fits into that mix and consider very seriously the implications for the system as a whole.

Issue 4: Finally, as I so often point out, policy perspectives and parental interests may differ sharply when it comes to “elite” charter schools. From a policy perspective, elite charter schools provide limited implications for scalability (and for charters as a broad-based policy “solution”) because their benefits are derived from concentrating motivated, often less poor (non-disabled & fluent English speaking), self-selected students with the staying power to endure “no excuses” charter models. From a parental perspective, this public policy limitation often provides the strongest personal incentive to pursue a specific school for one’s own children. Again, it comes down to that “other” strongest in-school factor driving student success – peer effect. Peer effect is a limitation (confounding factor) in public policy (unless we can find clever strategies to optimize peer distribution). But peer effect may be a legitimate quality indicator for parental choices.

Data notes:

As I’ve noted numerous times on this blog, my goal here is to access and report on publicly available data from widely recognized and/or official government sources. These are the most recent data of that type available. And here are the sources:

District and Charter School Location Information: http://nces.ed.gov/ccd/bat (2009-2010)

District special education classification rates: http://www.nj.gov/education/specialed/data/ADR/2010/classification/distclassification.xls

School level % LEP/ELL & % Free Lunch: http://www.nj.gov/education/data/enr/enr11/enr.zip

Combined Demographic Data: Charter Demographics 2011

*Note: City and zip code averages were constructed by summing all students, all free lunch students, and all LEP/ELL students for all schools in each “city” and each “zip code,” as identified by school location based on the NCES Common Core of Data, and then dividing the city-wide (or zip-code-wide) LEP/ELL and free lunch counts by city-wide (or zip-wide) total enrollment, for both traditional public schools and charters (that is, charters are part of the city-wide, or zip-wide, average). For special education, to estimate the citywide (and zip) average for schools, the district overall rate was applied to district schools. This would not be an appropriate way to compare individual city schools to charter schools, since special education populations are not evenly distributed across city schools (or throughout a zip code), but it is a more reasonable approach for generating the citywide aggregates. Again, charters are included in citywide and zip-code-level averages.
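The aggregation described in this note can be sketched as follows. The school counts below are invented stand-ins, not the actual NJ data; the point is the method: sum counts across all schools in a city (charters included), then divide by total enrollment.

```python
# Hypothetical sketch of the citywide aggregation: sum student counts
# across all schools in each city (charters included), then divide by
# total enrollment. All counts are invented for illustration.
import pandas as pd

schools = pd.DataFrame({
    "city":       ["Newark", "Newark", "Newark", "Camden", "Camden"],
    "enrollment": [450, 300, 250, 600, 400],
    "free_lunch": [400, 120, 200, 540, 300],
    "lep":        [60, 10, 30, 90, 40],
})

# Sum raw counts first, THEN compute shares -- averaging school-level
# percentages would weight a 100-student school the same as a
# 1,200-student one.
citywide = schools.groupby("city")[["enrollment", "free_lunch", "lep"]].sum()
citywide["pct_free_lunch"] = citywide["free_lunch"] / citywide["enrollment"]
citywide["pct_lep"] = citywide["lep"] / citywide["enrollment"]
print(citywide[["pct_free_lunch", "pct_lep"]].round(3))
```

Each individual school (charter or district) is then compared against its city’s aggregate rate.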

Differentiating “cost savings” from “expenditure reduction”

Today, it’s time for a little School Finance 101, clarifying the difference between what is a “cost savings” versus what is an “expenditure reduction.”

Cost savings means finding ways to reduce expenditure while still addressing the same range of objectives (goals, intended outcomes) and while still achieving the same level or quality of outcomes with respect to each objective.

Expenditure reduction typically means choosing not to address some objectives, goals or intended outcomes. That is, cutting back the scope of production (dropping a product line, eliminating product features, cutting curricular offerings, addressing fewer objectives/goals). Further, expenditure reduction might also mean simply choosing not to aim as high on specific objectives.

Note that an expenditure reduction can also be achieved by realizing actual cost savings. But not every expenditure reduction can be classified as a cost savings.

Here’s an example of potential “cost savings.” If you are running a small, remote rural district and questioning whether you can continue to maintain advanced placement calculus as an in-house, district-staffed course for 3 to 5 students per year, you might consider the alternative of having those students take the course online. It may just be that student outcomes, for your particular students and that particular course, are not measurably adversely affected by the change, and that the change results in a substantial reduction in expenditure per child on AP calculus. That is, expenditure was reduced and the outcome held constant. Cost savings were achieved.
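The reasoning in the example above reduces to a simple two-part test (all figures hypothetical): a change counts as a cost savings only if expenditure falls while the outcome is not reduced.

```python
# Hypothetical per-pupil figures for the AP calculus example above:
# moving the course online cuts spending while the measured outcome
# holds constant -- a genuine cost savings, not a mere cut.

in_house = {"spending_per_pupil": 4000, "outcome": 85}  # district-staffed course
online = {"spending_per_pupil": 1500, "outcome": 85}    # same course, online

def is_cost_savings(before, after):
    """True only when expenditure falls AND outcomes are not reduced."""
    spends_less = after["spending_per_pupil"] < before["spending_per_pupil"]
    holds_outcome = after["outcome"] >= before["outcome"]
    return spends_less and holds_outcome

print(is_cost_savings(in_house, online))  # -> True
```

By the same test, dropping the course entirely would reduce expenditure but fail the outcome condition: an expenditure reduction, not a cost savings.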

The cost savings example above is considerably different from comparing the total per pupil expense on brick and mortar schooling and all of the programs and services included in that, with the per pupil expense needed to offer an online curriculum of core required academic courses (or any subset that differs in scope of goals/objectives).  This is the critical flaw in the interpretations and presentations of (some though not all of) the findings in the recent Fordham Institute study on the supposed “cost” of online versus blended, versus brick and mortar schooling.

One might clean up this aggregate brick and mortar to online schooling comparison by attempting to isolate the per pupil costs of offering those same courses/programs/services within the brick and mortar structure and measuring and comparing the outcomes of those same courses offered each way (in house versus online, as above with AP Calculus).[1]

In these times of tight local school district budgets and rhetoric about “stretching the school dollar” and the “new normal,” paying close attention to the distinctions above is critically important.

The recent Fordham Report on online and blended learning provides some interesting new data, but provides no insights (yet) regarding “cost savings.” Again, there’s some potentially useful stuff in there, but comparisons like those made in Figure 1, p. 4  (comparing total brick and mortar per pupil spending to the other two options) are very deceptive and do much to undermine the rest of the report.

Similarly, the “stretching the dollar” brief released last year by the Fordham Institute provides little or no valuable information regarding “cost savings” but does provide a laundry list of ideas for cutting services (with no evidence or measure of the results of such cuts), such as cutting off services to limited English speaking children after two years or cutting total funding to special education (by capping and redistributing those funds uniformly across districts). Kevin Welner and I address in greater detail the various expenditure reduction strategies cast as “cost savings” by Petrilli and Roza in the “stretching the dollar” brief in a recent NEPC report.

Further, it’s important to understand that it’s not necessarily even an expenditure reduction when a school district cuts from its budget something that it then expects someone else, such as a parent, to pay for (like cutting district funding for athletic travel and either replacing it with fees or expecting local sports booster clubs to raise the money). It may be a school budget reduction, and a reduction to the school’s expenditure, but the expenditure is still there.

I’m not saying that schools or districts should never simply cut expenditures by reducing the scope of their services, or aiming lower on some goals. Some goals/objectives may no longer be (as) important, or may need to be traded off to direct scarce resources toward other, more “important” goals/objectives (importance being measured in any number of ways).

Rather, I’m saying that if it’s an expenditure cut, it’s an expenditure cut.

If it’s really just a transfer of responsibility for the expenditure, acknowledge that.

And, if it really is an attempt at “cost savings” then it’s legit to call it that.

So, when presented with these quick and easy, off the shelf school finance solutions for supposed cost savings, please ask yourself whether the authors/presenters really have evaluated cost savings or merely expenditure reductions.

And to those authors/presenters – who I’m not always sure understand the difference themselves – please make at least some effort to differentiate between real “cost savings” and simple “expenditure reduction.”


[1] Alternatively, one might argue that the singular goal of any of the three options is a high school diploma, and that the different estimates are of the “costs” under each model of achieving that singular goal. However, in this case, it becomes important to evaluate the “quality” of the outcome – high school diploma – when obtained these very different ways (perhaps by evaluating preparedness for higher education, access, persistence, 6-year graduation, for otherwise similar students).

Misunderstanding & Misrepresenting the “Costs” & “Economics” of Online Learning

The Fordham Institute has just released its report titled “The Costs of Online Learning” in which they argue that it is incrementally cheaper to move from a) brick and mortar schooling to b) blended learning and then c) fully online learning.

http://www.edexcellencemedia.net/publications/2012/20120110-the-costs-of-online-learning/20120110-the-costs-of-online-learning.pdf

Accompanying this report is a blog post titled “Understanding the Economics of Online Learning” from the Quick and the Ed. http://www.quickanded.com/2012/01/understanding-the-economics-of-online-learning.html/comment-page-1#comment-78690

At first glance, both the report itself and especially the blog post from Quick & Ed display basic misunderstandings of the concept of “cost,” a very basic economic concept. I find this particularly disturbing in a blog post titled “understanding the economics of online learning.”

“Cost” refers to the cost of providing a service of a specific quality, which in education might be measured in terms of student outcomes. That includes all costs of achieving those outcomes, whether covered by public subsidy or passed along to other participants in the system. A really good guide for understanding “costs” in this type of analysis is Hank Levin’s book on Cost Effectiveness Analysis.

By contrast, an “expense” is that which is expended toward providing some given level or portion of service. You can spend less and get less. You can spend more and get more. But getting Y quality service will cost you X, and no less than X (where X represents the minimum amount you would need to spend, given the most efficient production technology for achieving Y quality of service). You can conceivably spend more than X for Y quality of service, but that would be, shall we say, inefficient.
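The distinction can be sketched with hypothetical numbers: the “cost” X of quality Y is the minimum expenditure across the available production approaches, and any observed spending level relates to that minimum in one of three ways.

```python
# Hypothetical spending levels at which three production approaches
# each achieve the same quality of service, Y.
options_for_quality_y = {
    "approach_a": 11000,
    "approach_b": 9500,   # the most efficient known technology
    "approach_c": 12500,
}

# The "cost" X of quality Y is the minimum across approaches.
cost_of_y = min(options_for_quality_y.values())

def classify(expenditure, cost):
    """Relate an observed expenditure to the minimum cost of an outcome."""
    if expenditure < cost:
        return "quality Y not attainable at this spending level"
    if expenditure == cost:
        return "efficient: spending equals cost"
    return "inefficient: spending exceeds cost"

print(cost_of_y)                   # -> 9500
print(classify(11000, cost_of_y))  # -> inefficient: spending exceeds cost
```

Spending less than X doesn’t lower the cost of Y; it simply means Y is no longer what you’re buying.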

Often to cover the full cost of any particular service, like public schooling for example, several parties incur expenses. It is assumed that the majority of the cost of brick and mortar schooling is covered at government expense. But, we all know that there are also fees for many things in some states (and districts), such as participation fees for sports, personal expense on school lunch, or transportation fees. Assuming attendance is compulsory, transportation fees are necessarily part of the cost of the education system whether covered by parents through fees (a tax by another name) or covered by the local public school district.

The “cost” of brick and mortar schools doesn’t change if we simply decide to cut transportation services while maintaining compulsory attendance laws. Rather, we pass along that expense to someone else – the parents. That expense is still there, and it may have even increased if we add in the cumulative parental expense on transportation (in effect, a tax for school participation).

What’s being compared in the online learning report is not “cost” but expenses on varied levels of service provision.

We might be generous here and set aside the thorniest issue by assuming that the measured academic outcomes addressed by each option are the same regardless of model type or student served (likely a huge, unsupported assumption). But the outcomes of brick and mortar schooling include not only the measured academic outcomes, but any and all outcomes derived from the total expenses on brick and mortar schooling (those used in the study), including outcomes of athletic and arts participation, physical education, etc. If the range of outcomes covered by brick and mortar schooling is broader, that should be taken into account in this type of analysis. That is, if brick and mortar schooling is providing more than just the core academic programs – including sports, clubs, arts, phys ed – and online services are not, the analysis should either add these costs to the online service costs (what these things would cost if privately supplemented) or subtract them from the brick and mortar cost. Otherwise this is a rather pointless apples-to-five-course-meal comparison (unless we also throw in a utility analysis and assume all of that other stuff to have zero utility… a suspect assumption).

One might argue: so what’s the big deal? The kid goes to school in the kitchen of their house, and the parent is simply in the next room working from home, as opposed to the child being in a brick and mortar school for the day. Well, even that’s not a $0 expense endeavor. To nitpick, it’s likely that the increased monitoring role of the parent in this case would reduce the parent’s work productivity to some extent – an opportunity cost. The opportunity costs become potentially much larger if the parent’s productivity depends on not being at home, but the parent can no longer be away from home. Then there’s the marginal increase in utilities associated with having the child at home and online, potential increased food expense (a little hard to judge), additional computer hardware, etc. This kind of “little” stuff adds up across large numbers of kids.

I do not see anywhere in this study (on quick glance) or in the post above any discussion of the varied amount of expense (the portion of cost) that would be passed along to someone else (parents) under each model in order to achieve the same outcomes. This has to be accounted for in order to have a thoughtful conversation about public policy implications. Without it, the present study does little to advance that conversation. But with some additional work, perhaps it might.

It may not be feasible to construct a full tally of all of the “costs” passed along to someone else under each model, but it’s at least worth listing out what some/many of those things might be and the likely range of costs being passed along.

It may still be reasonable to make the argument that government expense can be reduced, but it’s not necessarily a reduction in the cost of the service, but rather a transfer of responsibility for covering that cost. It may be… though I’m not entirely sure… that the total cost is also reduced. But taking that next step in the analysis also involves evaluating the full costs of inputs and full range of outcomes achieved.

Spending less to get less doesn’t reduce costs. It reduces only expenditures and that distinction is important.

6 Things I’m Still Waiting for in 2012 (and likely will be for some time!)

I start this new year with reflections on some unfinished business from 2011 – Here are a few bits of information I anxiously await for 2012. Some are likely within reach. Others, well, not so much.

  1. A thoroughly documented (rigorously vetted) study by Harvard economist Roland Fryer, which actually identifies and breaks out in sufficient detail (& with appropriate rigor & thorough documentation) the costs of delivering in whole and in part (and costs of bringing to scale), no excuses curriculum/models/strategies and comprehensive wrap-around services.
  2. The long since promised rigorous New Jersey charter school evaluation – or even better – improved student level data in New Jersey such that researchers can actually conduct reasonable analyses of charter schooling and reforms/strategies more generally across New Jersey public & charter schools.
  3. That long list of all of those other average to below average paying professions – professions other than teaching – where compensation is entirely merit based and based substantially on (noisy) multiple regression estimates of employee effectiveness determined by the behavior of children as young as 8 years old [generously assuming 3rd grade test scores to represent the lower end of the value-added grade range],  AND where the top college graduates just can’t wait to sign up!
  4. That long list of highly successful market-based charter and/or independent private schools – schools not bound by the shackles of union negotiated agreements – where teacher compensation is not strongly predicted by (or directly a function of) experience and/or academic credentials,  AND where the top college graduates just can’t wait to sign up (or stick around)! (see also: https://schoolfinance101.wordpress.com/2010/10/09/the-research-question-that-wasn%E2%80%99t-asked/)
  5. Evidence that there really is enough money tied up in (wasted on) cheerleading and ceramics to be reallocated to provide sufficient class size reduction in core content areas and increased classroom teacher wages (toward improving teacher quality) to make substantive improvements to the quality of high poverty schools!
  6. Evidence that  the differences in student outcomes between high performing affluent suburban public school districts and lower performing poor urban and inner urban fringe school districts are somehow explained by substantial differences in personnel policies, merit-based teacher compensation, teacher benefits and negotiated agreements as opposed to substantive differences in family backgrounds and available resources.

For elaboration on a few of these issues, see my recent AP interview with Geoff Mulvihill: http://www.mycentraljersey.com/article/20120101/NJNEWS10/301010003

And so the new year of education policy research and blogging begins. A year in which I, myself, will be engaged in additional, more extensive analyses of the finances of charter schools, including revenue raising and expenditure patterns by location and by network affiliation. A year in which I also expect to be digging deeper into the distribution and effects of cuts in state aid and funding constraints on school and district resource allocation, and exploring across multiple states (and districts and schools within states) the causes and consequences of inequities and inadequacies in public education funding.

The Comparability Distraction & the Real Funding Equity Issue

Yesterday, the US Department of Education released a new report addressing how districts qualified for Title I funds (higher poverty districts) often allocate resources across their schools inequitably, arguing that requirements for receiving Title I funds should be strengthened.

The report is here: http://www2.ed.gov/rschstat/eval/title-i/school-level-expenditures/school-level-expenditures.pdf

Related resources here: http://www2.ed.gov/about/offices/list/opepd/ppss/reports.html#comparability-state-local-expenditures

It is certainly problematic that many public school districts have far from predictable, far from logical and far from equitable formulas for distributing resources across their schools. This is a problem which should be addressed. And improving comparability provisions for receipt of Title I funding is an appropriate step to take in this regard.

However, it is critically important to understand that improving within district comparability of resources across schools is only a very small piece of a much larger equity puzzle. It’s a drop in the bucket. Perhaps an important drop, but not one that will even come close to resolving the major equity issues that plague public education systems today.

I have written on this topic previously both on this blog and in peer reviewed publications:

  • Baker, B. D., & Welner, K. G. (2010). Premature celebrations: The persistence of interdistrict funding disparities. Education Policy Analysis Archives, 18(9). Retrieved [date] from http://epaa.asu.edu/ojs/article/view/718
  • Baker, B. D. (2009). Within-district resource allocation and the marginal costs of providing equal educational opportunity: Evidence from Texas and Ohio. Education Policy Analysis Archives, 17(3). Retrieved [date] from http://epaa.asu.edu/epaa/v17n3/
  • Baker, B. D. (forthcoming). Re-arranging deck chairs in Dallas: Contextual constraints on within-district resource allocation in large urban Texas school districts. Journal of Education Finance.

Among other things, I have pointed out on this blog that one reason why focusing on within-district disparities between “rich and poor” schools is misguided is that most of the disparities in wealth among families and children occur across district lines rather than within district boundaries. (2nd major point in post)

The new U.S. Dept. of Ed. report reinforces this overemphasis on within district disparity, ignoring entirely between district disparity. In part, it is perhaps a more politically convenient argument to point blame at local school district officials, rather than states, for not doing their part to improve equity across schools. Local school officials make good targets, but it’s harder to pick on states & state legislatures.

Here’s one way in which the USDOE report casts the disparities:

The report compares the number of Title I (higher poverty) schools that have lower per pupil spending than non-Title I schools in the same district.  This becomes fodder for the news headlines. And I would argue, fuels public distraction from the bigger inequities.

Now, there are a multitude of methodological quibbles I have with this analysis. Chief among them: it compares only the average spending of Title I and non-Title I schools within districts, without consideration for other factors which frequently serve as strong predictors of different school site spending across schools within districts (primarily, concentrations of children with disabilities, and district choices to locate specific programs in specific schools). Poverty is one factor – and a very important one at that – but it’s also important to look at the full range of poverty concentration across schools in a district, rather than just splitting schools into Title I and non-Title I. The Deck Chairs in Dallas article above provides examples of the steps one should take to evaluate equity in spending across schools within districts. So too does this article: http://epaa.asu.edu/ojs/article/view/5

But, let’s take a look at the more important issue that is missed entirely in the myopic focus on within district disparities and “blame the local districts” approach to school funding equity.

First stop, Philadelphia. This first graph shows a box plot of elementary school spending per pupil from the data set used in the USDOE report (nice new data to play with!). Philadelphia city elementary schools simply have far less funding than elementary schools in surrounding districts (in Pennsylvania). THIS IS THE MAJOR EQUITY CONCERN! Here’s how these funding differences play out along a continuum of all schools in the metro (within PA) with respect to students qualified for free or reduced price lunch:

Philadelphia schools are in red. Indeed, the pattern of spending per pupil with respect to % free or reduced price lunch is not what I would want/expect to see across schools within Philadelphia. It actually appears somewhat regressive. That is, higher poverty schools within Philadelphia have marginally lower spending per pupil than lower poverty ones. There may, however, be other factors at play (such as the distribution of special education populations) which complicate the interpretation of this relationship. But we also see that:

  1. the majority of Philadelphia elementary schools have near or over 80% free or reduced price lunch
  2. the majority of schools in this picture that are over 80% free or reduced price lunch are Philadelphia schools
  3. Philadelphia schools have systematically fewer per pupil resources than those of surrounding districts
  4. the majority of other schools in the metro area have fewer than 40% free or reduced price lunch
  5. these much lower poverty schools IN OTHER DISTRICTS have higher average spending.

These are the districts with which Philadelphia must compete to recruit and retain a sufficient quantity of high quality teachers. And it’s clearly a losing battle.

Focusing only on the disparities inside Philadelphia and bringing the comparability hammer down on the district does little to resolve the bigger funding equity issues – issues that are a function of neglect by the Commonwealth of Pennsylvania, not by the city of Philadelphia.

Not all metro areas look this bad. In many cases, central cities are on average or slightly above average for their metro areas. But arguably, not “enough” above average that they have wide latitude to reshuffle their resources aggressively to their higher poverty schools. Note that if Philadelphia did strive to create a strong progressive distribution of resources toward higher poverty schools, all other schools in the district would be left with next to nothing – at least relative to their surroundings. This is the very “deck chairs” issue I discuss in my paper on Dallas (well, actually on Texas as a whole).

It also turns out that many smaller cities, and very poor inner urban fringe areas (with particularly weak tax base) are often as disadvantaged or much more disadvantaged than the urban core. Places we don’t always hear about. Here’s one of my favorite small city examples, Utica, NY:

Utica City elementary schools (1 in the box plot) have much lower average per pupil spending than elementary schools in surrounding districts. Here’s the scatterplot with respect to % free or reduced price lunch:

Like Philadelphia, there appear to be inequities in resources across Utica City elementary schools. But again, most Utica City elementary schools have over 80% free or reduced price lunch and spend less per pupil than most elementary schools in surrounding districts, many of which are not wealthy districts by any stretch of the imagination. They’re just not as poor as Utica itself. Here’s a little more backdrop on the position of Utica among NY State school districts.

While it is important and relevant to consider ways to tighten regulations on Title I districts – requiring that they allocate resources equitably across schools within their boundaries – we cannot and should not let the emphasis on Title I and comparability distract us from the bigger equity issues, the harder equity issues to resolve. While it’s politically convenient to blame local bureaucrats (those overpaid fat cats in large city school district central offices), we must also maintain pressure on states to do the right thing, and ensure that these districts have the resources they need in order to distribute them equitably.

see also: http://www.schoolfundingfairness.org/