Blog

Friday Finance 101: School Finance Formula & Money Matters Basics

Modern state school finance formulas – aid distribution formulas – typically strive (but fail) to achieve two simultaneous objectives: 1) accounting for differences in the costs of achieving equal educational opportunity across schools and districts, and 2) accounting for differences in the ability of local public school districts to cover those costs. Local district ability to raise revenues might be a function of local taxable property wealth, the incomes of local property owners (and thus their ability to pay taxes on their properties), or both.

Figure 1 presents a hypothetical example of the distribution of state and local revenue per pupil across school districts, sorted by poverty concentration. The hypothetical relies on the simplified assumption that districts with weaker local revenue raising capacity also tend to be higher in poverty concentration. While that’s not uniformly true, there is often at least some correlation between the two (and the assumption makes this hypothetical a bit more straightforward). Accepting this oversimplified characterization, Figure 1 shows that the typical low poverty, high local fiscal capacity district would likely raise the vast majority of the cost of providing its children with equal educational opportunity through local tax dollars. There may be some small share of state general aid, assuming that the total cost of providing equal educational opportunity exceeds the local resources raised with a fair tax rate.

Figure 1


This pattern is usually arrived at (if it is arrived at) through some overly complicated formula requiring multiple inefficiently and illogically laid out spreadsheets of calculations and based on measures for which each state chooses its own, completely distinct and unrecognizable nomenclature. A short version might go as follows:

Step 1 – determine target funding level (need & cost adjusted foundation level) per pupil for each district

Target Funding per Pupil = Foundation Level x Student Need Adjustments x Geographic Cost Adjustments

Here, the foundation level is some specified per-pupil dollar amount. Student need adjustments include adjustments for individual student educational needs (as for children with limited English language proficiency and children with one or more disabilities) and for collective characteristics of the student population such as poverty, homelessness, and/or mobility/transiency rates. Geographic cost adjustments capture geographic variation in competitive wages and factors such as economies of scale and population sparsity.

Step 2 – determine the share of target funding to be raised by local communities

State Aid per Pupil = Target Funding per Pupil – Local Fair Share

Yep. That’s it. Student needs and costs are accommodated in Step 1, and differences in local wealth and/or capacity to pay are accommodated in Step 2! Now convert that into about 2,000+ separate calculations and create incomprehensible names for each measure (like calling a weight on “low income students” a “student success factor”) and you’ve got a state school finance formula.
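The two-step logic above really is that simple, and can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any state’s actual formula; the foundation level, need and cost weights, and the “fair” tax rate below are invented for the example.

```python
# Minimal sketch of a two-step foundation aid formula.
# All dollar figures, weights, and the "fair" tax rate are hypothetical.

FOUNDATION_LEVEL = 10_000   # base per-pupil dollar amount
FAIR_TAX_RATE = 0.01        # uniform local effort rate applied to taxable property wealth

def target_funding_per_pupil(need_weight: float, geo_cost_index: float) -> float:
    """Step 1: need- and cost-adjusted foundation level per pupil."""
    return FOUNDATION_LEVEL * need_weight * geo_cost_index

def state_aid_per_pupil(need_weight: float, geo_cost_index: float,
                        property_wealth_per_pupil: float) -> float:
    """Step 2: target funding minus the local fair share (never below zero)."""
    local_fair_share = FAIR_TAX_RATE * property_wealth_per_pupil
    target = target_funding_per_pupil(need_weight, geo_cost_index)
    return max(target - local_fair_share, 0.0)

# A high-need, low-wealth district vs. a low-need, high-wealth district:
hi_need_aid = state_aid_per_pupil(1.4, 1.1, 300_000)    # most of target covered by aid
wealthy_aid = state_aid_per_pupil(1.0, 1.0, 1_200_000)  # local fair share exceeds target
print(round(hi_need_aid), round(wealthy_aid))
```

Real formulas layer on dozens of additional adjustments, caps, and hold-harmless provisions (spread across those multiple spreadsheets), but they reduce to roughly this arithmetic.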

But I digress.

Implicit in the design of state school finance systems is the assumption that money may be leveraged to improve both the measured and unmeasured outcomes of children. That is, money matters to the quality of schooling that can be provided in general, and money matters for the provision of special services for children with greater educational needs. In short, money can be an equalizer of educational opportunity.

In a typical foundation aid formula, it is implied that a foundation level of “X” should be sufficient for producing a given level of student outcomes in an average school district. It is then assumed that if one wishes to produce a higher level of outcomes, the foundation level should be increased. In short, it costs more to achieve higher outcomes[1] and the foundation level in a state school finance formula is the tool used for determining the overall level of support to be provided.

Further, it is assumed that resource levels may be adjusted in order to permit districts in different parts of the state to recruit and retain teachers of comparable quality. That is, the wages paid to teachers affect who will be willing to work in any given school. In other words, teacher wages affect teacher quality and in turn they affect school quality and student outcomes. This is plain common sense, and this teacher wage effect operates at two levels. First, in general, teacher wages must be sufficiently competitive with other career opportunities for similarly educated individuals. The overall competitiveness of teacher wages affects the overall academic quality of those who choose to enter teaching.[2] Second, the relative wages for teachers across local public school districts determine the distribution of teaching quality.[3] Districts with more favorable working conditions (more desirable facilities, fewer low income and minority students) can pay a lower wage and attract the same teacher. Wages matter, therefore, money matters.

Finally, those student need adjustments in state school finance formulas assume that the additional resources can be leveraged to improve outcomes for low income students, or students with limited English language proficiency. First, note that some share of the additional resources is needed in higher poverty settings simply to provide for “real resource” equity – or to pay the wage premium for doing the more complicated job. Second, resource intensive strategies such as reduced class sizes in the early grades, high quality (using qualified teaching staff)[4] early childhood programs, intensive tutoring and extended learning time programs may significantly improve outcomes of low income students. And these strategies all come with significant additional costs (even when adopted under the veil of “no excuses charterdom“).

But providing more money to support public schools often means raising more tax dollars, and providing supplemental resources to children whose own communities may lack local revenue raising capacity often means more aggressive redistribution of state tax revenues. As a result, whether and how money matters in education is often hotly politically contested.

School finance is a political minefield, which is arguably why so many pundits have tried to distract from school finance issues by advancing ludicrous arguments that education equity and overall quality can be improved by altering teacher labor markets via statistical deselection without ever addressing funding deficiencies and wage disparities or by expanding charter schooling and ignoring the role of philanthropic contributions (while counting on them).  Unfortunately for those political pundits, school finance is a minefield they must eventually walk through if they ever expect to make real progress in resolving quality or equity concerns.

In a recent report titled Revisiting the Age Old Question: Does Money Matter in Education?[5] I review the controversy over whether, how and why money matters in education, evaluating the current political rhetoric in light of decades of empirical research.  I ask three questions, and summarize the response to those questions as follows:

Does money matter? Yes. On average, aggregate measures of per pupil spending are positively associated with improved or higher student outcomes. The size of this effect is larger in some studies than in others and, in some cases, additional funding appears to matter more for some students than others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent – in other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.

Do schooling resources that cost money matter? Yes. Schooling resources that cost money, including class size reduction and higher teacher salaries, are positively associated with student outcomes. Again, those effects are larger in some cases than in others, and there is also variation by student population and other contextual variables. On the whole, however, the things that cost money benefit students, and there is scarce evidence of more cost-effective alternatives.

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

While there may in fact be better and more efficient ways to leverage the education dollar toward improved student outcomes, we do know the following:

  • Many of the ways in which schools currently spend money do improve student outcomes.
  • When schools have more money, they have greater opportunity to spend productively. When they don’t, they can’t.
  • Arguments that across-the-board budget cuts will not hurt outcomes are completely unfounded.

In short, money matters, resources that cost money matter and more equitable distribution of school funding can improve outcomes. Policymakers would be well-advised to rely on high-quality research to guide the critical choices they make regarding school finance.

Regarding the politicized rhetoric around money and schools, which has become only more bombastic and less accurate in recent years, I explain the following:

Given the preponderance of evidence that resources do matter and that state school finance reforms can effect changes in student outcomes, it seems somewhat surprising that not only has doubt persisted, but the rhetoric of doubt seems to have escalated. In many cases, there is no longer just doubt, but rather direct assertions that schools can do more than they are currently doing with less than they presently spend; that money is not a necessary underlying condition for school improvement; and, in the most extreme cases, that cuts to funding might actually stimulate improvements that past funding increases have failed to accomplish.

To be blunt, money does matter. Schools and districts with more money clearly have greater ability to provide higher-quality, broader, and deeper educational opportunities to the children they serve. Furthermore, in the absence of money, or in the aftermath of deep cuts to existing funding, schools are unable to do many of the things they need to do in order to maintain quality educational opportunities. Without funding, the efficiency tradeoffs and innovations now being broadly endorsed are suspect. One cannot trade off spending money on class size reductions against increasing teacher salaries to improve teacher quality if funding is not there for either – if class sizes are already large and teacher salaries non-competitive. While these are not the conditions faced by all districts, they are faced by many.

It is certainly reasonable to acknowledge that money, by itself, is not a comprehensive solution for improving school quality. Clearly, money can be spent poorly and have limited influence on school quality. Or, money can be spent well and have substantive positive influence. But money that’s not there can’t do either. The available evidence leaves little doubt: Sufficient financial resources are a necessary underlying condition for providing quality education.

There certainly exists no evidence that equitable and adequate outcomes are more easily attainable where funding is neither equitable nor adequate. There exists no evidence that more adequate outcomes will be attained with less adequate funding. Both of these contentions are unfounded and quite honestly, completely absurd.

 


[1] Duncombe, W. and Yinger, J.M. (1999). Performance Standards and Education Cost Indexes: You Can’t Have One Without the Other. In H.F. Ladd, R. Chalk, and J.S. Hansen (Eds.), Equity and Adequacy in Education Finance: Issues and Perspectives (pp.260-97). Washington, DC: National Academy Press.

[2] Allegretto, S.A., Corcoran, S.P., & Mishel, L.R. (2008). The Teaching Penalty: Teacher Pay Losing Ground. Washington, DC: Economic Policy Institute. Murnane, R.J., & Olsen, R. (1989). The Effects of Salaries and Opportunity Costs on Length of Stay in Teaching: Evidence from Michigan. Review of Economics and Statistics, 71(2), 347-352. Figlio, D.N. (2002). Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review, 55, 686-699. Figlio, D.N. (1997). Teacher Salaries and Teacher Quality. Economics Letters, 55, 267-271. Ferguson, R. (1991). Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation, 28(2), 465-498. Loeb, S., & Page, M. (2000). Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics, 82(3), 393-408. Figlio, D.N., & Rueben, K. (2001). Tax Limits and the Qualifications of New Teachers. Journal of Public Economics, April, 49-71.

[3] Ondrich, J., Pas, E., & Yinger, J. (2008). The Determinants of Teacher Attrition in Upstate New York. Public Finance Review, 36(1), 112-144. Lankford, H., Loeb, S., & Wyckoff, J. (2002). Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis, 24(1), 37-62. Clotfelter, C., Ladd, H.F., & Vigdor, J. (2011). Teacher Mobility, School Segregation and Pay-Based Policies to Level the Playing Field. Education Finance and Policy, 6(3), 399-438. Clotfelter, C.T., Glennie, E., Ladd, H.F., & Vigdor, J.L. (2008). Would Higher Salaries Keep Teachers in High-Poverty Schools? Evidence from a Policy Intervention in North Carolina. Journal of Public Economics, 92, 1352-1370.

[5] Baker, B.D. (2012). Revisiting the Age Old Question: Does Money Matter in Education? Shanker Institute. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

More thoughts on Charter Punditry & Declarations of Certainty

I’m a little late in pouncing on this one. JerseyJazzMan beat me to the punch with some relevant points. A short while back, the Wall Street Journal posted an op-ed by Deborah Kenny, CEO of the New York-based charter chain Harlem Village Academies. Kenny’s op-ed purported to explain why charter schools are successful. Of course, we could spend all day on that contention alone, since it is relatively well understood that charter results have been mixed at best. Indeed, I have explained in my published work and in blog posts that the track record for certain charter chains and in certain settings seems stronger than in others.

Here is how Deborah Kenny explained why charters succeed (implicitly where traditional public schools do not):

Critics claim that charter schools are successful only because they cherry-pick students, because they have smaller class sizes, or because motivated parents apply for charter lotteries and non-motivated parents do not. And even if charters are successful, they argue, there is no way to scale that success to reform a large district.

None of that is true. Charters succeed because of their two defining characteristics—accountability and freedom. In exchange for being held accountable for student achievement results, charter schools are generally free from bureaucratic and union rules that prevent principals from hiring, firing or evaluating their own teams.

http://online.wsj.com/article_email/SB10001424052702303703004577472422188140892-lMyQjAxMTAyMDIwNDEyNDQyWj.html?mod

As is par for the course of late in such arguments, Kenny’s chartery punditry is completely devoid of any data or contextual information that might provide insight as to why, or even whether, charter schools “succeed.” Yet, while bafflingly devoid of substantiation, Kenny’s punditry is disturbingly decisive & hyper-confident.

It is yet another case of declaring to know absolutely what we absolutely don’t know!

For the moment, let’s accept Kenny’s proposition that at least in New York City, many charter schools affiliated with high profile management organizations have posted solid test scores (not entirely the case… but let’s accept that proposition…).

So then, let’s compare New York City charter schools from these CMO chains to traditional public schools in the city on a handful of key parameters – a) how much they spend and b) which kids they serve – each relative to the schools which they supposedly far outshine. These are things that actually matter. Now… if they do spend the same as NYC traditional public schools and serve similar student populations, we might be able to make the case that their “success” is a function of something different that they are doing with the same dollar – more bang for the buck. A relevant question… but a hard one to distill. But, if they serve very different student populations, then it’s even harder to distill what the heck is really going on.[1]

Further, if they are outspending NYC public schools that do serve similar populations, their access to resources may be what allows them to do different stuff… which may then explain their supposed “success.”  It would certainly be hard to make the above claims without looking at any of this, wouldn’t it?

So, here’s the stat sheet:

For each of these comparisons I have used a three-year panel of data on NYC charter schools and all NYC traditional public schools, from 2008 to 2010. To compare spending, I have used the estimates generated in our recent report on charter school spending:

  • Baker, B.D., Libby, K., & Wiley, K. (2012). Spending by the Major Charter Management Organizations: Comparing charter school and local public district financial resources in New York, Ohio, and Texas. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/spending-major-charter.

Further discussion of the spending comparisons for NYC can be found here: https://schoolfinance101.wordpress.com/2012/05/07/no-excuses-really-another-look-at-our-nepc-charter-spending-figures/

In short, each of these charter chains spends more per pupil than NYC public schools that serve similar student populations. Some, like KIPP and Uncommon Schools, spend a lot more!

Further, when compared against same grade level schools citywide, each of these charter chains serves fewer children with disabilities (and I lack data on the type of disabilities, which may also matter).

Finally, when compared against same grade level schools in the same zip code, each of these charter chains serves far fewer low income children and FAR fewer children with limited English language proficiency.

These substantive differences in resources and student populations make it difficult if not impossible to assert that these charter school chains operating in New York City have somehow identified a magic formula for success that is neither resource dependent nor dependent on serving very different student populations than city district schools.

There is certainly no basis whatsoever for asserting that accountability and freedom – specifically freedom from bureaucratic and union rules – are necessarily the determinants of charter success. In fact, these broad principles apply similarly to all independent charters, but while some are good, others suck – and many are allowed to persistently suck despite supposed heightened accountability. Indeed, the upper half is better than average! And the lower half… is not!

It’s hard to suggest that either of these factors – accountability or freedom – is the determinant of charter success when success varies so widely across charters. What does tend to vary across charters is a) access to philanthropic resources and b) student populations served. AND… it may also be the case that some charters have adopted unique strategies… some of which may actually come with additional costs!

There may be some cool stuff going on in some of these schools, just as there may be some cool stuff going on in NYC district schools.  It may well be that freedom from bureaucratic rules permits schools to do cool stuff.  It would certainly seem advantageous in the context of New York State moving forward to be able to skip out on complying with new, ill-conceived teacher evaluation legislation.

We need to figure out what works and for whom, whether those ideas come from traditional public schools, charter schools or private schools.

We need to figure out the costs of doing these things. Ken Libby, Kathryn Wiley and I discuss these issues in our recent policy brief (read it! It’s not some anti-charter propaganda. It’s an actual study of spending data… with detailed documentation & extensive lit review).

Unfortunately, the tendency among charter “defenders” is to simply deny, deny, deny… ignore costs (make bizarre, unfounded excuses, present half-assed, back of the napkin estimates, or sidestep them)… ignore substantive contextual issues, etc., etc., etc. (certainly, the tendency among the attackers is to declare all charter operators/supporters to be union-busting privatizing profiteers – also an unhelpful characterization for a diverse array of institutions).

It’s time to start digging deeper into what makes schools tick and for whom and how to provide the mix of schooling that best serves the largest share of children.


[1] As I explained in a recent post, even in a lottery study – of students lotteried in/lotteried out – those lotteried out likely attend schools with substantively different classroom peers than those lotteried in, and it remains difficult if not impossible to distill school/teacher effect from peer effect since both operate at the classroom level.


How much does Federal Title I Funding Affect Fairness in State School Finance Systems?

About this much!

These funding profiles are based on the methodology used in our reports on school funding fairness. The reports can be found here: http://schoolfundingfairness.org/ and the technical appendix can be found here: http://schoolfundingfairness.org/

This graph is based on an updated model which includes data from 2007-08, 2008-09 and 2009-10 (these are linear projections of otherwise messy distributions… hence the fact that some of the lines cross at/around 0% poverty).

The bottom line is that while Federal Title I programs certainly provide much needed funds to many high poverty districts, in the big picture, they are a drop in the bucket. They are now, and have been for some time.

The states in this figure are among the least equitable in the nation. And Title I aid simply isn’t sufficient to fix that. Equitable and adequate financing of local public school districts remains the responsibility of the states, and these states have some work to do!

Friday Finance 101: What Can we Learn about Education Costs & Efficiency by Studying Existing Public Schools?

One pervasive reformy argument is that our entire education system may be instantly transformed to be more productive and efficient by instantly adopting untested reformy policies and/or untested solutions from sectors other than education. Further, that we must take these bold leaps of faith because the public education system itself is too corrupt, too bloated, too inefficient to provide any useful lessons! Perhaps the whole system can be replaced with YouTube videos. Or perhaps we can just fire all of the teachers with more than 10 years’ experience and pay the rest based on the test scores they produce! Or perhaps some other lessons of industry can cure the (unsubstantiated) ills of American public schooling!

Kevin Welner and I addressed this issue in our critique of materials provided on the U.S. Department of Education’s website on improving educational productivity.  Specifically, Marguerite Roza and Paul Hill in one working paper titled Curing Baumol’s Disease argue that the entire public schooling system suffers from a disease of inefficiency and thus any lessons for improving educational productivity must be sought outside of the current system.

Similar arguments have been used by those who claim that state legislatures and state courts should never rely on cost analyses based on current practices of existing educational systems in order to either guide the design of state school finance systems through reform legislation, or to evaluate whether state school finance systems are equitable or adequate.

Researchers and policy analysts tend to use one of two general approaches to study education costs – that is, to identify spending levels that should generally be sufficient for achieving desired outcomes, and to identify how education costs vary across districts within a state and by the needs of varied student populations. One approach involves gathering focus groups of informed constituents to specify the inputs to schooling they believe are needed to get the job done. These professional judgment panels essentially propose a hypothesis about the programs and services needed, under varied conditions and for varied student populations, to achieve desired outcomes. The alternative is to construct statistical models that estimate the relationship between current district spending levels and current student outcomes, with consideration for various factors that affect the cost of achieving desired outcomes (student characteristics, district characteristics, labor market pressures) and for factors that influence whether districts are more or less likely to spend inefficiently.

This approach, called education cost function modeling has been used extensively in peer-reviewed studies of education costs and cost variation.[1]  As Tom Downes, an economist from Tufts University explained back in 2004: “Given the econometric advances of the last decade, the cost-function approach is the most likely to give accurate estimates of the within-state variation in the spending needed to attain the state’s chosen standard, if the data are available and of a high quality” (p. 9).[2]
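A stylized version of such a model can be written in a few lines: regress (log) per-pupil spending on outcomes, student need measures, and labor cost factors, then use the fitted model to predict the spending associated with a chosen outcome target under different district conditions. To be clear, the data, coefficients, and variable names below are simulated purely for illustration; actual cost-function studies use district panel data and also model inefficiency directly.

```python
# Stylized education cost function: log per-pupil spending as a function of
# outcomes, student needs, and labor costs. All data below are simulated;
# real cost-function studies also account for district inefficiency.
import numpy as np

rng = np.random.default_rng(0)
n = 200
outcomes = rng.uniform(0.4, 0.9, n)     # share of students meeting a standard
poverty = rng.uniform(0.05, 0.6, n)     # district poverty rate
wage_index = rng.uniform(0.9, 1.2, n)   # regional competitive wage index

# Simulate a world where higher outcomes, poverty, and wages all raise spending.
log_spend = (8.8 + 0.6 * outcomes + 0.5 * poverty
             + 0.8 * np.log(wage_index) + rng.normal(0, 0.05, n))

# Fit the cost model by ordinary least squares.
X = np.column_stack([np.ones(n), outcomes, poverty, np.log(wage_index)])
beta, *_ = np.linalg.lstsq(X, log_spend, rcond=None)

# Predicted per-pupil cost of an 80% outcome target in a high-poverty,
# high-wage district vs. a low-poverty, average-wage district:
hi_need = np.exp(beta @ np.array([1, 0.8, 0.5, np.log(1.15)]))
lo_need = np.exp(beta @ np.array([1, 0.8, 0.1, 0.0]))
print(f"{hi_need:.0f} {lo_need:.0f}")
```

The point of the exercise is the comparison: for the same outcome target, the predicted cost is higher in the high-need, high-wage district, which is precisely the kind of within-state variation the cost-function approach is designed to estimate.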

But, because these methods are sometimes used beyond academic journals, and in the highly political context of estimating not only how much money is possibly needed to achieve certain outcomes but also how that money should be distributed across districts and children, they are not without controversy. These methods become the subject of more heated debate when they are introduced as evidence to assist judges in their evaluation of the constitutionality of state school finance systems. Heck, a few authors have gone to great lengths to argue that such information should never be used to either guide policy development or evaluate the rationality of current policies. Those assertions are completely unjustified.

The goal of education cost modeling – or any form of cost analysis – whether applied for evaluating equal educational opportunity or for producing adequacy cost estimates, is to establish “reasonable marks” to provide guidance in developing more rational state school finance systems. Only with reasonable marks in hand can one make informed judgments as to whether existing policies are wide of those reasonable marks.

Historically, funding levels for state school finance systems have largely been determined by taking the total revenue generated for schooling as a function of statewide tastes for taxation and dividing that funding by the number of students in the system. That is, the budget constraint – or total available revenue – and total student enrollment have been the key determinants of the foundation level, or basic allotment. To some degree, this will always be true. But reasonable estimates of the “cost” of producing desired outcomes, given current technologies of production (the range of practices actually used/tested), may influence the taste for additional taxes by revealing that preferences regarding taxation and preferences regarding the desired quality of public education are misaligned, meaning that one or the other should be adjusted. That is, if we find out that higher outcomes are going to cost us more, we can then have a more reasonable discussion of whether we are willing to pay that much more for the expected gain in quality, or whether to lower our expectations. Alternatively, we can simply fly blind!

It’s rather like the individual who wishes to buy a Cadillac Escalade but expects to spend only about $25,000. After a little research, he finds that he can either buy a Ford F-150 for $25,000 or an Escalade for $65,000. That’s where that little bit of research comes in handy – identifying the gap between uninformed assumptions and reasonably informed ones, albeit with greater precision (actual prices) in this example than in cost estimation in education. Heck, if one wants to get really crazy with this, one could fit a statistical model relating prices to various features of existing makes and models of “comparable” vehicles.

Reasonable estimates of cost may also assist courts in determining whether current funding levels and distributions are wide of a reasonable mark, or substantially misaligned with constitutional standards. Cost model estimates are not meant to be exact predictions of what student outcomes will necessarily occur next year if we suddenly adopt a state school finance system based on the cost model estimates. Cost models provide guidance regarding the general levels (predictions with error ranges) of funding increases that would be required to produce measured outcomes at a certain level, assuming that districts are able to absorb the additional resources without efficiency loss.

Studies of state school finance reform also suggest that the key to successful school finance reforms is that they are both substantive and sustained. If additional dollars to high need districts are best leveraged toward high quality preschool programs and/or early grades class size reduction, we are unlikely to see changes to college readiness outcomes the following year (or following five years). If the additional dollars are best leveraged toward increasing teacher salaries for teachers in their optimal years of experience, allowing districts to recruit and retain “better” teachers over time, we are also unlikely to see immediate returns in student test scores.

Importantly, cost model estimates are estimates based on the actual production technologies of schooling. They are based on the outcomes schools and/or districts produce under different circumstances, for different children – the actual children they serve, based on the actual assessments given, and based on the real conditions under which children attend school.

Some critics of education cost analysis in general, and cost function modeling in particular assert that all local public school districts are simply inefficient, mainly because they pay their personnel based on parameters not associated with improved student outcomes.[3] Therefore, they assert that it is useless to consider the spending practices of current districts when trying to determine how much needs to be spent to achieve desired outcomes. A common version of this argument goes that if schools/districts paid teachers based on test scores they produce and if schools/districts systematically excessed ineffective teachers, productivity would increase dramatically and spending would decline. Thus, educational adequacy could be achieved at much lower cost, and therefore, estimating costs based on current conditions/practices is a meaningless endeavor.[4]

The most significant problem with this logic is that there exists absolutely no empirical evidence to support it. It is entirely speculative, frequently based on the assertion that teacher workforce quality can be improved with no increase in average wages, simply by firing the bottom 5% each year and paying the rest based on the student test scores they produce. To return to the car purchasing analogy above, this is like assuming that somewhere out there is a car/truck with all the features of the Escalade but the price of the F-150 – specifically, a version of the Escalade itself produced by a new, yet-to-be-discovered technology with materials not yet invented that allow the vehicle to be sold at less than 1/3 its original price.

In fact, the logical way to test these assertions would be to permit or encourage some schools/districts to experiment with alternative compensation strategies and other “reforms,” to include those schools and districts among those employing other strategies (production technologies) in a cost model, and to see where they land along the curve. That is, do schools/districts that adopt these strategies land in a different location along the curve? Some schools and districts do experiment with different strategies, and those schools carry their relevant share of weight in any statewide cost model. Thus far, what we seem to be seeing is that the more productive experimental approaches a) aren’t that bold and b) cost quite a bit!

Pure speculation that some alternative educational delivery system would produce better outcomes at much lower expense is certainly no basis for making a judicial determination regarding the constitutionality of existing funding, and is an unlikely (though not unheard of) basis for informing statewide mandates or legislation. Cost model estimates, as well as the recommendations of professional judgment and expert panels, can provide useful, meaningful information to guide the formulation of more rational, more equitable and more adequate state school finance systems.


[1] Duncombe, W., Yinger, J. (2008) Measurement of Cost Differentials. In H.F. Ladd & E. Fiske (eds.), Handbook of Research in Education Finance and Policy (pp. 203-221). New York: Routledge. Duncombe, W., Yinger, J. (2005) How Much More Does a Disadvantaged Student Cost? Economics of Education Review 24 (5) 513-532. Duncombe, W., Yinger, J. (2000) Financing Higher Performance Standards: The Case of New York State. Economics of Education Review 19 (3) 363-86. Duncombe, W., Yinger, J. (1999) Performance Standards and Education Cost Indexes: You Can’t Have One Without the Other. In H.F. Ladd, R. Chalk, and J.S. Hansen (eds.), Equity and Adequacy in Education Finance: Issues and Perspectives (pp. 260-97). Washington, DC: National Academy Press. Duncombe, W., Yinger, J. (1998) School Finance Reforms: Aid Formulas and Equity Objectives. National Tax Journal 51 (2) 239-63. Duncombe, W., Yinger, J. (1997) Why Is It So Hard to Help Central City Schools? Journal of Policy Analysis and Management 16 (1) 85-113. Imazeki, J., Reschovsky, A. (2004b) Is No Child Left Behind an Un (or Under)funded Federal Mandate? Evidence from Texas. National Tax Journal 57 (3) 571-588.

[2] Downes (2004) What is Adequate? Operationalizing the Concept of Adequacy for New York State. http://www.albany.edu/edfin/Downes%20EFRC%20Symp%2004%20Single.pdf

[3] Hanushek, E. (2005, October). The alchemy of ‘costing out’ an adequate education. Paper presented at the Adequacy Lawsuits: Their Growing Impact on American Education conference, Cambridge, MA. Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223.

[4] For elaboration on this argument, see: Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223

Friday Finance 101: Equitable and Adequate Funding and Teacher Quality is Not an Either-Or choice!

In recent years, the casual observer of debates over public education policy might be led to believe that improving teacher quality and ensuring that low income and minority school children have access to high quality teachers has little or nothing to do with the equity or adequacy of school financing. The casual observer might be led to believe that there actually exists a sizable body of empirical research confirming a) that high quality teachers matter, b) that money doesn’t matter and c) that, by extension, money has nothing to do with recruiting, retaining or redistributing teacher quality. These arguments, while politically convenient for those hoping to avoid thorny questions of tax policy and state aid formulas, are not actually grounded in any body of decisive empirical research. Rather, to the contrary, it is reasonably well understood that while teacher quality does indeed matter, teacher wages and working conditions also matter, both for the overall quality of the teacher workforce and for the distribution of quality teachers.

The modern debate over the role of teachers and teaching quality in improving student outcomes dates back to findings in the Coleman report of the 1960s. The Coleman report looked at a variety of specific schooling resource measures, most notably teacher characteristics, finding positive relationships between these traits and student outcomes. A multitude of studies on the relationship between teacher characteristics and student outcomes have followed, producing mixed messages as to which matter most and by how much.[1] Inconsistent findings on the relationship between teacher “effectiveness” and how teachers get paid – by experience and education – added fuel to the “money doesn’t matter” fire. Since a large proportion of school spending necessarily goes to teacher compensation, and (according to this argument) since we’re not paying teachers in a manner that reflects or incentivizes their productivity, then spending more money won’t help.[2] In other words, the assertion is that money spent on the current system doesn’t matter, but it could if the system were to change.

Of course, in a sense, this is an argument that money does matter. But it also misses the important point about the role of experience and education in determining teachers’ salaries, and what that means for student outcomes.

While teacher salary schedules may determine pay differentials across teachers within districts, the simple fact is that where one teaches is also very important in determining how much he or she makes.[3] Arguing over attributes that drive the raises in salary schedules also ignores the bigger question of whether paying teachers more in general might improve the quality of the workforce and, ultimately, student outcomes. Teacher pay is increasingly uncompetitive with that offered by other professions, and the “penalty” teachers pay increases the longer they stay on the job.[4]

A substantial body of literature has accumulated to validate the conclusion that both teachers’ overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olson (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[5] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[6] In addition, more recent studies have tackled the specific issues of relative pay noted above. Loeb and Page showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[7]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[8]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[9]

With regard to teacher quality and school racial composition, Hanushek, Kain, and Rivkin (2004) note: “A school with 10 percent more black students would require about 10 percent higher salaries in order to neutralize the increased probability of leaving.”[10] Others, however, point to the limited capacity of salary differentials to counteract attrition by compensating for working conditions.[11]

Finally, it bears noting that those who criticize the use of experience and education in determining teachers’ salaries must of course produce a better alternative, and there is even less evidence behind increasingly popular ways to do so than there is to support the policies they intend to replace. In a perfect world, we could tie teacher pay directly to productivity, but contemporary efforts to do so, including performance bonuses based on student test results,[12] have thus far failed to produce concrete results in the U.S. More promising efforts to measure productivity, such as new teacher evaluations that incorporate heavily-weighted teacher productivity measures based on their students’ test scores, are still a work in progress, and there is not yet evidence that they will be any more effective (or cost-effective) in attracting, developing or retaining high-quality teachers.

To summarize, despite all the uproar about paying teachers based on experience and education, and its misinterpretations in the context of the “Does money matter?” debate, this line of argument misses the point. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.


[1] Hanushek, E.A. (1971) Teacher Characteristics and Gains in Student Achievement: Estimation Using Micro Data. American Economic Review 61 (2) 280-288, Clotfelter, C.T., Ladd, H.F., Vigdor, J.L. (2007) Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review 26 (6) 673–682, Goldhaber, D., Brewer, D. (1997) Why Don’t Schools and Teachers Seem to Matter? Assessing the Impact of Unobservables on Educational Productivity. The Journal of Human Resources, 32 (3) 505-523, Ehrenberg, R. G., & Brewer, D. J. (1994). Do school and teacher characteristics matter? Evidence from High School and Beyond. Economics of Education Review, 13(1), 1-17, Ehrenberg, R. G., & Brewer, D. J. (1995). Did teachers’ verbal ability and race matter in the 1960s? Economics of Education Review, 14(1), 1-21, Jepsen, C. (2005). Teacher characteristics and student achievement: Evidence from teacher surveys. Journal of Urban Economics, 57(2), 302-319, Jacob, B. A., & Lefgren, L. (2004). The impact of teacher training on student achievement: Quasi-experimental evidence from school reform. Journal of Human Resources, 39(1), 50-79, Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417-458, Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains. Review of Educational Research, 73(1), 89-122. For a recent review of studies on the returns to teacher experience, see: Rice, J.K. (2010) The Impact of Teacher Experience: Examining the Evidence and Policy Implications. National Center for Analysis of Longitudinal Data in Educational Research.

[2] Some go so far as to argue that half or more of teacher pay is allocated to “non-productive” teacher attributes, and so it follows that that entire amount of funding could be reallocated toward making schools more productive. See, for example, a recent presentation to the NY State Board of Regents from September 13, 2011 (page 32), slides by Stephen Frank of Education Resource Strategies: http://www.p12.nysed.gov/mgtserv/docs/SchoolFinanceForHighAchievement.pdf

[3] Lankford, H., Loeb., S., Wyckoff, J. (2002) Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis 24 (1) 37-62

[4] Allegretto, S.A., Corcoran, S.P., Mishel, L.R. (2008) The Teaching Penalty: Teacher Pay Losing Ground. Washington, DC: Economic Policy Institute.

[5] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of stay in teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[6] David N. Figlio (2002) Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review 55, 686-699. David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271. Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation 28 (2) 465-498.

[7] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[8] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics, April, 49-71. See also: Downes, T.A., Figlio, D.N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[9] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

[10] Hanushek, Kain, Rivkin, “Why Public Schools Lose Teachers,” Journal of Human Resources 39 (2) p. 350

[11] Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy , Vol.6, No.3, Pages 399–438, Clotfelter, Charles T., Elizabeth Glennie, Helen F. Ladd, and Jacob L. Vigdor. 2008. Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92: 1352–70.

[12] For recent studies specifically on the topic of “merit pay,” each of which generally finds no positive effects of merit pay on student outcomes, see: Glazerman, S., Seifullah, A. (2010) An Evaluation of the Teacher Advancement Program in Chicago: Year Two Impact Report. Mathematica Policy Research Institute. 6319-520, Springer, M.G., Ballou, D., Hamilton, L., Le, V., Lockwood, J.R., McCaffrey, D., Pepper, M., and Stecher, B. (2010). Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching. Nashville, TN: National Center on Performance Incentives at Vanderbilt University, Marsh, J. A., Springer, M. G., McCaffrey, D. F., Yuan, K., Epstein, S., Koppich, J., Kalra, N., DiMartino, C., & Peng, A. (2011). A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses. Final Evaluation Report. RAND Corporation & Vanderbilt University.

Which states screw the largest share of low income children? Another look at funding fairness

Here’s a little Friday afternoon fun with the updated Census Fiscal Survey data through 2009-2010. I’ve written recently about the national school funding fairness report card, which I work on with colleagues from the Education Law Center. The report card can be found here:

http://www.schoolfundingfairness.org/

I also recently wrote a blog post about America’s Most Screwed City School Districts. It was clear to some readers that the most screwed city school districts happen to be concentrated in certain states like Illinois and Pennsylvania, and also in Connecticut, which is often perceived as a reasonably well funded and fairer state (than the other two).

Par for the course, as soon as we release the School Funding Fairness report card using data from 06-07 to 08-09 (most recent available at the time we put it together), the Census Bureau releases their 2009-10 district level finance figures… leading to the usual flurry of misinterpretations of data (which I’ll get to another day). Not being able to resist the temptation, despite a heavy backlog of other work to do, I decided I had to play with the updated fiscal data. I also decided for fun to take an alternative look at the data, bridging the idea I presented on my blog about screwed city schools with the general idea of state school funding systems. I decided to ask which states screw the most low income kids.

Here’s my operational definition of screwed for this post. A district is identified as screwed (new technical term in school finance… as of a few posts ago) if a) the district has more than 50% higher census poverty than other districts in the same labor market and b) lower per pupil state and local revenues than other districts in the same labor market. As I’ve explained on numerous previous occasions, it is well understood that districts with higher poverty rates (among other factors) have higher costs of providing equal educational opportunity to their students.
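That two-part rule is simple enough to sketch in code. A minimal sketch, with entirely hypothetical labor markets, poverty rates, revenues and enrollments:

```python
from statistics import mean

# Hypothetical district records: (labor_market, census_poverty_rate,
# state_local_rev_per_pupil, enrollment). All names and figures illustrative.
districts = [
    ("Metro A", 0.32, 11_000, 40_000),  # high poverty, low relative revenue
    ("Metro A", 0.08, 16_000, 5_000),
    ("Metro A", 0.10, 15_000, 6_000),
    ("Metro B", 0.05, 12_000, 3_000),
    ("Metro B", 0.06, 12_500, 4_000),
]

def is_screwed(i):
    """Screwed = a) poverty more than 50% above the other districts in the
    same labor market AND b) lower state+local revenue per pupil than them."""
    market, poverty, rev, _ = districts[i]
    peers = [d for j, d in enumerate(districts) if j != i and d[0] == market]
    if not peers:
        return False
    return (poverty > 1.5 * mean(d[1] for d in peers)
            and rev < mean(d[2] for d in peers))

screwed = [i for i in range(len(districts)) if is_screwed(i)]
# Share of statewide enrollment attending screwed districts
share = sum(districts[i][3] for i in screwed) / sum(d[3] for d in districts)
print(f"Share of students in screwed districts: {share:.1%}")
```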

I then tally the percent of statewide enrollments that are concentrated in these screwed districts to determine the share of kids screwed by their state. And here are the rankings… or at least the short list of states that screw the largest share of low income students:

Not much new here. The same culprits make up the list. Nebraska is elevated to its position of disgrace by its systematic underfunding of Omaha Public Schools, which seemed to improve for a fleeting few years, though recent data don’t look so good. Woonsocket and Pawtucket bring Rhode Island into the mix… and raise additional fun questions regarding the placement of blame (another post, another day… but should city managers/local officials have the authority to deprive children in their jurisdiction of state constitutional rights? Under what circumstances and by what mechanism should the state step in? Can it?).

Here are a few graphs showing the distributions of individual districts in Illinois, Pennsylvania and Connecticut. On the horizontal axis is the relative poverty rate of districts compared to all other districts in the same core based statistical area. On the vertical axis is the state and local revenue per pupil relative to the average for all other districts in the core based statistical area.

Again, Allentown, Reading and Philadelphia are massively screwed (yep… a new school finance classification). Meanwhile… Lower Merion… in the Philly ‘burbs is not screwed at all. An intriguing contrast in Pennsylvania school finance is that Pittsburgh has long had far more adequate funding than Philadelphia for a variety of reasons. It is important to understand here that the highest poverty districts – those with 3x the average for their labor market – likely need FAR MORE revenue per pupil than their neighbors to get by – not just the same. So, while York and Harrisburg are decidedly less screwed than Allentown or Reading, they too are not in particularly good shape. They have about the same revenue per pupil as surrounding districts, and 3x the poverty rate.

Here’s Illinois:

Waukegan and Aurora East, along with Round Lake hold the coveted spots of “most screwed” but Chicago Public Schools isn’t far behind (with over 400k students). A multitude of smaller high poverty districts in the Chicago metro not shown here also have very low relative revenue per pupil.

Finally, here’s Connecticut once again:

Again, Bridgeport and New Britain, along with Waterbury (among others) remain substantially screwed. Recall from my previous post that Hartford and New Haven funding is somewhat distorted by magnet school aid.

So why does any of this matter anyway? Well, at face value it’s patently unfair to systematically deprive these districts of resources comparable to their less needy neighbors. If money doesn’t matter to New Britain or Bridgeport, then why does it matter to Greenwich or Westport? Really, if money is so damn trivial for improving schooling quality, then why don’t all those districts in the upper left hand corner of these graphs just give all that useless money to those in the lower right hand corner? Oh, wait… perhaps money does matter after all…!

One thing about school finance that’s really important to understand is that the relative position of districts matters a great deal. It matters because education is a labor intensive industry: it is about getting a sufficient quantity of sufficiently qualified teachers in front of the kids who need them. The spending behavior, negotiated agreements and working conditions in districts like Westport and Greenwich matter for the teacher recruitment potential of Bridgeport. The distribution of quality teachers across districts in a labor market depends on numerous factors, many of which tie back to available resources. And in these states, large numbers of children attend high need districts that simply lack the resources to compete.

Notably, those districts sitting pretty in the upper left hand corner of these figures have also had traditional teacher contracts, tenure, seniority preferences and likely other policies that would make “reformers” cringe – for years. But most are doin’ just fine. So too are the even higher spending and lower poverty elite private schools in the same labor markets! Most don’t use test scores as the basis for providing merit pay, and I’m quite sure that few if any of them use test scores as the basis for firing the bottom 5% of their teachers every year. They haven’t been and aren’t being subjected to manipulative, heavy handed takeovers, school closures and massive charter school expansion.

None of that reformy junk would likely do much good for the Westports, Greenwiches or Lower Merions of the US school system.  And none of that reformy junk is likely to be much good for the Bridgeports, New Britains, Allentowns, Readings, Philadelphias or Chicagos!

I find it particularly infuriating when I hear news of these “most screwed” districts being blamed for their own failure by the state officials who have deprived them systematically of resources for decades.

What these districts need as a baseline – a fair starting point – is equitable & adequate funding. Once that has been accomplished, then, and only then can we start having a reasonable conversation about how to best leverage that funding to improve student outcomes. But without the funding, there are no options for leveraging it.

Deconstructing Funding Fairness: Comments on the release of our latest report

Today I, along with colleagues at the Education Law Center, released the second round report on school funding fairness, which can be found here:

http://www.schoolfundingfairness.org

We cover much ground in this report and develop what we believe is a useful set of indicators for comparing state school finance systems. In this new version of the report, we also include interactive tables and graphs thanks to the efforts and expertise of Danielle Farrie.

http://www.schoolfundingfairness.org/ia_reports.htm

But there’s always more to the story. There’s always more to be discussed/addressed that can’t be fully captured in a short policy report. Specifically, I would like to address a handful of potential misconceptions regarding funding fairness.

First, it is important to understand that unfair conditions may occur even in states that would appear in our updated report to be generally fair.

Second, it is really important to understand that the percent of money that comes from the state – from state tax revenue sources – does not seem to predict or influence the overall level of fairness. Fairness is not achieved by pushing all funding away from property tax revenues and onto state source revenues. In fact, such a move might do little to improve fairness while substantially increasing revenue volatility (income tax revenues, which fuel state general funds, are typically far more volatile – elastic to economic conditions – than property tax revenues). The real key to a good school finance formula is to figure out how to integrate the revenue sources into a system that is, overall, fair and stable.

Third, federal revenues make things only marginally fairer. Their effect is minor. Yes, they are targeted to higher poverty districts generally. And yes, for those districts the resources are needed and may seem substantial. But, in the big picture of funding fairness, it comes down to providing that right mix of state and local funds to achieve a system that is overall fair.

Let’s take a closer look at each of these issues.

There are unfair conditions even in states that appear fairer!

Let’s begin with a look at Connecticut, a state that appears to a) spend a fair amount on its schools and b) spend marginally more on higher poverty districts. Or at least that’s what the federal data on state and local revenues, which we use in the funding fairness report, indicate.

Connecticut is a particularly interesting case. As it turns out the fairness we find in our report is selective in two ways. First, the progressive tilt to the formula overall is significantly influenced by special aid provided primarily to Hartford and New Haven. Other high poverty districts lack this benefit. It is selectively applied. Second, the aid to which I refer is aid targeted for magnet schools which partly serve children from other districts in an effort to integrate minority and non-minority, low income and non-low income students. That this aid shows up in the expenditures of Hartford and New Haven also creates some distortion to the calculation of per pupil spending.

Here is an arguably more accurate portrayal of the selective fairness of funding in Connecticut. To clarify – selective fairness is… well… unfair.

This graph relates current spending per pupil (Net current expenditures per ADM 2011) after removing magnet aid from district expenditures. Overall, Hartford and New Haven remain better funded than other high poverty districts, but lower than with magnet aid included. Further, several very high poverty Connecticut districts have very low funding compared to their surroundings, precisely what landed them on my previous list of most screwed school districts.

Figure 1

Allocating more state aid doesn’t make it fairer if aid is allocated unfairly!

There exists a common assertion that disparities in school funding across districts are largely caused by disparities in property tax base – local property wealth – and the failure of states to allocate enough aid to offset those disparities. At times, I even hear advocates suggesting that if we could just do away with property tax funding of schools, and move all of the funding to state taxes and make the system completely state controlled, all of these equity concerns would be resolved. Wrong. Wrong… and double Wrong.

First, as a tangent I mentioned above, allow me to point out that property tax revenues actually play a really important role in stabilizing school revenues over time, acting as a counterbalance to state aid fluctuations. State school finance systems require a balanced portfolio of revenue sources! State income tax revenues are much more sensitive to economic cycles.
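A toy illustration of that volatility point, using made-up revenue series (the coefficient of variation stands in for “volatility”):

```python
from statistics import mean, pstdev

# Hypothetical indexed revenues over a business cycle (100 = baseline year).
income_tax   = [100, 108, 115, 96, 90, 104]   # swings with the economy
property_tax = [100, 102, 104, 103, 102, 104] # lags and smooths the cycle

def volatility(series):
    """Coefficient of variation: dispersion relative to the mean."""
    return pstdev(series) / mean(series)

# A balanced portfolio: half of revenue from each source
blend = [0.5 * i + 0.5 * p for i, p in zip(income_tax, property_tax)]

print(f"income tax CV:   {volatility(income_tax):.3f}")
print(f"property tax CV: {volatility(property_tax):.3f}")
print(f"50/50 blend CV:  {volatility(blend):.3f}")
```

The blended stream lands between the two, which is the point: the property tax leg damps the income tax leg’s swings.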

That aside, the figure below shows, as we did in the first edition of the report, that states where districts on average receive a higher share of funding from the state (either as actual state disbursements or, in some cases, as cleverly reclassified local property tax revenues raised by state mandated minimum tax rates, perhaps with revenue sharing) do not necessarily have fairer – more progressive – distributions of state aid.

Figure 2

So, how can this be? The implication is that state aid itself is being allocated unfairly. Is that possible? How might a state allocate aid in ways that fail to improve the fairness of the overall distribution of state and local revenue?

Well, let’s start with a hypothetical of what should be, or the distribution of aid as it might appear in a progressively funded state like New Jersey or Ohio. The figure below shows that state aid must counter two forces of local economics. First, state aid must be allocated in higher amounts to districts with less local capacity to raise that aid on their own. Second, to achieve progressiveness, aid must be allocated in higher supplemental amounts – or weighted amounts – to districts with greater student needs. If we totally oversimplify these issues and assume that low capacity districts also tend to have higher needs, it might look something like this:

Figure 3
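The two offsets described above combine into the familiar weighted foundation aid calculation. A minimal sketch, with hypothetical foundation level, weight, tax rate and district figures:

```python
def foundation_aid(foundation, need_weight, need_share,
                   tax_rate, wealth_per_pupil):
    """State aid per pupil = need/cost-adjusted foundation target minus the
    expected local contribution at a required fair tax rate, floored at zero."""
    target = foundation * (1 + need_weight * need_share)
    expected_local = tax_rate * wealth_per_pupil
    return max(0.0, target - expected_local)

# Hypothetical districts: (label, poverty share, taxable wealth per pupil)
examples = [("low poverty / high wealth", 0.05, 900_000),
            ("average",                   0.20, 500_000),
            ("high poverty / low wealth", 0.60, 200_000)]

for label, need, wealth in examples:
    aid = foundation_aid(10_000, 1.0, need, 0.01, wealth)
    print(f"{label}: ${aid:,.0f} per pupil in state aid")
```

With these made-up numbers, aid climbs from left to right exactly as in the hypothetical: the low wealth, high need district draws far more state aid than the wealthy, low need one.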

And, as it turns out, states like New Jersey actually do look something like that:

Figure 4

But even New Jersey isn’t “perfect” in this regard. Note that middle wealth districts actually drop below the highest wealth districts. The pattern “dips” when it should perhaps climb more consistently from left to right.

So then, what the heck is going on in other states? Well, here are a few examples of the state aid distributions in states that scrape the bottom of the fairness barrel in our updated report. I will have a new report out this fall supported by the Center for American Progress in which I dissect how states actually use their aid formula to make things worse! Unbelievable, but true. Some states actually allocate state aid so inequitably as to make funding gaps bigger! (see this post for an explanation of the pig!)

Figure 5. North Carolina

Figure 6. Texas

In this final figure, I show how New York State “tweaks” its aid formula from initial to final calculations in ways that actually increase the funding gap from lower to higher poverty districts. The first-cut calculations of state aid (left hand side of the figure) would have many districts getting little or no state general foundation aid. But the formula then tweaks that amount by guaranteeing minimum aid of $500 per pupil and adjusting the aid share upward for districts that are middle to upper-middle wealth. Then, as I’ve discussed in previous posts, the state allocates disproportionate property tax relief aid to the wealthiest districts. Overall, these adjustments have the effect of increasing the low poverty to high poverty funding gap from $1,100 per pupil to $2,300 per pupil. Yep… using state aid to double the funding gap! The politics of state school finance systems at work!

Figure 7. New York

Federal aid is no substitute for a sound, well designed, progressive state school finance system!

Finally, what about that federal aid, and specifically the biggest chunk of federal aid, allocated to local districts primarily on the basis of poverty? Doesn’t that do the trick? Doesn’t the federal aid create the necessary upward tilt? Well… uh… no… it doesn’t. It helps, indeed. But federal Title I aid creates only marginal improvements.

Consider that, according to the most rigorous empirical research on the topic, it generally costs double to achieve comparable outcomes in a district that is 100% low income versus one that is 0% low income. That is, each low income child would warrant a “weight” of about 1.0 if counting low income as qualifying for free/reduced lunch (185% income level). When using the more stringent 100% poverty threshold, the required weight is about 1.5.

The following figure and table show that, on average nationally, federal Title I funding adjusts the tilt of revenues per pupil upward by about 5% for a district that is 30% in poverty (100% poverty level). This would be comparable to about a 5% adjustment for a district that is 70% or more “low income” (qualifying for free or reduced lunch, see page 31). That’s a relatively modest and far from sufficient adjustment!
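Putting the two preceding paragraphs together makes the shortfall concrete. The base revenue figure below is hypothetical; the 1.0 weight and the roughly 5% Title I tilt are taken from the text.

```python
# Compare the supplement research says a high-poverty district warrants
# with the tilt Title I actually delivers (base revenue is hypothetical).

base = 10000  # hypothetical base revenue per pupil

def warranted(base, pct_low_income, weight=1.0):
    """Revenue warranted with a weight of 1.0 per free/reduced-lunch pupil."""
    return base * (1 + weight * pct_low_income)

# A district that is 70% low income warrants roughly a 70% supplement...
warranted_level = warranted(base, 0.70)

# ...but Title I, per the text, lifts revenues only about 5%.
title_i_level = base * 1.05

shortfall = warranted_level - title_i_level
print(f"warranted ${warranted_level:,.0f} vs delivered ${title_i_level:,.0f} "
      f"(shortfall ${shortfall:,.0f} per pupil)")
```

A 5% lift against a warranted 70% supplement is why Title I, however helpful, cannot substitute for a progressive state formula.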

Figure 8

Figure 9

So, while Federal Title I Aid is not entirely irrelevant, it is far from sufficient for achieving the extent of need-based targeting required for high poverty settings.

A sound, well-designed, progressive state school finance system is required.

Sadly, far too few such systems presently exist.

Equitable and adequate financing of public school systems in the U.S. remains largely a state responsibility, and some states continue to either throw their entire education systems under the bus (Arizona, Tennessee), or selectively disregard children living in high poverty settings. Put simply, money matters. School funding equity and school finance reforms matter.

It’s not sexy and it’s not reformy. In fact, it’s quite possibly anti-reformy, but the reality is that equitable and adequate financing of state education systems remains the necessary underlying condition for providing quality schooling and achieving equal educational opportunity for all children.

Five Ridiculously Reformy “Copy & Paste” Policies & Why They’re Misguided

65 Cent Solution (now defunct?)

What is it? It was (thankfully this one is pretty much dead!) a policy proposal being pitched in the mid-2000s which would require, through state mandate/legislation or regulation, that local public school districts show on paper that they spend 65% of their total budgets on “instruction.”

The argument was that the average district nationally allocates somewhat less than 65% to instruction. Instruction is good. Private sector businesses use benchmarks, therefore education should use benchmarks. 65% is a benchmark. Therefore it should be used! Voila… freakin’ brilliant?

Backers of this proposal argued that the policy allowed state legislators to claim they were increasing classroom spending without actually allocating more money.

But, the backers were caught with their pants down in a memo leaked to the Austin American-Statesman newspaper in Texas. In an article in Educational Policy (full citation below), Doug Elmer & I summarize the whole memo debacle:

In addition to these criticisms of what qualified as instructional spending, many opponents of the bill questioned the motives behind FCE (First Class Education) and the 65% solution proposal. These suspicions were in part confirmed by a memo written by Mooney to Republican legislators and obtained by the Austin American-Statesman in 2005 (Embry, 2005). In the memo, Mooney (2003) listed several political benefits of the 65% Solution, including the following:

  • Splitting of the Education Union. The 1st Class Education proposal pits administrators and teachers at odds with one another. . . .
  • Direct Fix for Public Education. While voucher and charter school proposals have great merit, large segments of the voting public—especially suburban, affluent women voters—view these ideas as an abandonment of public education . . . targeted segments of voters may be more greatly predisposed to supporting voucher and charter school proposals, as Republicans address the voting public with greater credibility on public education issues. . . .
  • Allows the Use of Unlimited Non-Personal Money for Political Position Advantages. The aforementioned benefits can be achieved with funding in any amount and from any source.
  • It Wins! As with initiatives proposing tax limits, term limits, and the definition of marriage, ballot successes for the 1st class is exceedingly likely.

Of course, one thing that never seemed to get discussed in this process was that empirical research on instructional spending shares of budgets, student outcomes, and other school quality measures suggests little if any relationship among them – especially with respect to the 65% threshold.

Thankfully, this particular bit of copy and paste education policy foolishness seems to have come and gone!

Research

Taylor, L., Grosskopf, S. (2007) Is a Low Instructional Share an Indicator of School Inefficiency? Exploring the 65-Percent Solution. http://bush.tamu.edu/research/workingpapers/ltaylor/The_65_Percent_Solution.pdf

Baker, B.D., Elmer, D.R. (2009) The Politics of Off-the-Shelf School Finance Reform. Educational Policy 23 (1) 66-105

Parent Trigger

The Parent Trigger is perhaps even more obnoxious and deceptive than the 65 cent debacle. What is it? Well, the Parent Trigger is a policy that allows the parents of students in any failing (generally meaning high poverty/minority concentration) school to vote, by simple majority, to have the school taken over by a private company or charter operator, or to simply fire all of the teachers and the principal and start fresh (options may vary). The assertion is that this mechanism gives low income and minority parents “rights” that they are simply unable to assert through bloated and non-responsive urban district bureaucracies. While it may be true that some urban district bureaucracies are less than responsive, the parent trigger sure as hell isn’t the solution.

The parent trigger basically permits a simple majority of parents of children who happen to attend a given school for a period of time to stage a takeover of that school, and this could be done for a variety of motives in a variety of ways with a plethora of possible distorted, negative consequences. A group of middle school parents (during their 3 to 4 year window) might, for example, take a year to take over their school and turn it over to a private charter management company. The parent majority might, for example, have a gripe against LGBT students, or students of a particular race, culture or religion. Charter takeover would allow the simple majority to make over the school into a themed school – like a school for traditional family values, or an English-only academy. The simple majority could easily use this tool to oppress any minority population (and don’t give me that crap about this being better than the tyranny of the district oppressing everyone).

Further, if the simple majority of parents do forcibly convert the school to a privately managed charter, it may turn out that all parents and children lose important statutory and constitutional rights, as I have discussed in previous posts regarding parental/student/teacher rights in privately managed charter schools.

Notably, this hostile takeover could have occurred under the majority rule of parents on one cohort of students and have lasting adverse effects on subsequent cohorts of children whose parents had little input.

Further, this mechanism removes from the process any/all other residents of the community surrounding the school (who contribute tax dollars to the school), placing all control in the hands of the simple majority of parents with children attending the school at any one point in time.

It’s a ridiculous approach granting disproportionate, ill-defined power to an ill-defined majority constituency seemingly intended to do little more than stimulate infighting among low income and minority populations as a distraction from the larger policy issues.

Blog Posts

Potential abuses of the Parent Trigger

https://schoolfinance101.wordpress.com/2010/12/07/potential-abuses-of-the-parent-trigger/

Public/Private Status of Charter Schools

https://schoolfinance101.wordpress.com/2012/05/02/charter-schools-are-public-private-neither-both/

Why Public/Private Status Matters: Legal Issues

https://schoolfinance101.wordpress.com/2012/05/04/follow-up-on-why-publicnessprivateness-of-charter-schools-matters/

Weighted Student Funding Reformy edition

This one is an example of a totally reasonable policy concept that has been dreadfully abused and over-emphasized as a panacea for urban district budgeting and management, and conflated with many other management strategies.

Weighted student funding itself is simply an approach to calculating the need and cost based funding to be delivered to schools or districts. Several states use weighted student formulas to drive money to districts. Several districts use weighted student formulas to allocate budgets out to schools.
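As a calculation, the idea is straightforward. A minimal sketch (the foundation level and the weights below are invented; real formulas vary by state and district):

```python
# Minimal sketch of a weighted student funding calculation
# (foundation level and weights are hypothetical).

FOUNDATION = 10000  # base per-pupil amount
WEIGHTS = {         # supplemental weights, as fractions of the base
    "low_income": 0.25,
    "ell": 0.5,
    "special_ed": 1.0,
}

def wsf_allocation(enrollment, counts):
    """Budget = base x enrollment, plus base x weight x count per need category."""
    total = FOUNDATION * enrollment
    for category, count in counts.items():
        total += FOUNDATION * WEIGHTS[category] * count
    return total

# A 500-pupil school with 300 low-income, 100 ELL, and 60 special ed pupils:
budget = wsf_allocation(500, {"low_income": 300, "ell": 100, "special_ed": 60})
print(budget)  # 6850000.0 = 5,000,000 base + 750,000 + 500,000 + 600,000
```

The contested question, as discussed below, is not this arithmetic but everything that gets bundled with it.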

But, in the reformy world, weighted student funding has taken on the meaning of a Money Follows the Child coupled with Decentralized School Site Control model. Put simply, there really isn’t much evidence that decentralized governance across schools within districts is particularly effective policy for improving productivity and efficiency, or equity for that matter! (see Baker & Elmer article below).

But, the most frustrating part of the WSF discussion for me has been that it has encouraged many to argue that the big problem with school funding today is the disparities in budgets across schools within large city districts. Yeah… there are some significant problems there. But those are not the biggest problems. Between-district disparities and state school finance systems continue to severely constrain districts’ ability to target funds to their highest need schools.

Sadly, despite there being some virtues of WSF as a funding approach, the reformy takeover and complete misrepresentation of the issue has led to some truly baffling think-tanky reporting on WSF. Take for example the 2010 Bunkum Award winner, the Reason Foundation:

http://nepc.colorado.edu/bunkum/2010/time-machine-award

This Reason Foundation report has multiple features that make it an award winner. It engages in definitional acrobatics, pouring a kitchen sink’s worth of assorted reforms into a vessel it calls Weighted Student Formula (WSF) reforms. And, in a truly breathtaking innovation, the report enters its time machine and attributes positive reform outcomes to policy changes that had not yet been implemented. In broad terms, WSF reforms involve linking funding to each student, with that funding calculated as the student’s base allocation and any additional funds for special needs, economic deprivation or other reasons. The Reason report somehow manages to squeeze into this WSF concept three additional reforms: (a) site-based management; (b) site-based budgeting; and (c) school choice. The expert third party reviewer said this about the Reason “umbrella labeled as WSF:” “[it] deceptively suggests that all related policies are necessarily good—even going so far as to credit those policies for improvements that took place before the policies were implemented.”

“The report then irresponsibly recommends untested, cherry picked policy elements, some of which may substantially undermine equity for children in the highest-need schools within major urban districts.” For example, the plan suggests that extra funds for economically deprived students be eliminated but that added money should be given to gifted and talented students. The report also ignores a large body of relevant literature on within-district equity and school site management in its uncritical effort to find support for the foundation’s ideological policy preferences.

Look… a good weighted student formula is not a bad idea at all. Pretending that district weighted student formulas and decentralized governance will solve the most pressing equity issues in education today, however, is totally ridiculous!

Research on WSF

Baker, B. D., & Welner, K. G. (2010). “Premature celebrations: The persistence of interdistrict funding disparities” Educational Policy Analysis Archives, 18(9). Retrieved [date] from http://epaa.asu.edu/ojs/article/view/718

Baker, B. (2009). Review of “Weighted Student Formula Yearbook 2009.” Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit. Retrieved [date] from http://epicpolicy.org/thinktank/review-Weighted-Student-Formula-Yearbook

Baker, B.D., Elmer, D.R. (2009) The Politics of Off-the-Shelf School Finance Reform. Educational Policy 23 (1) 66-105

Baker, B.D. (2009) Evaluating Marginal Costs with School Level Data: Implications for the Design of Weighted Student Allocation Formulas. Education Policy Analysis Archives 17 (3)

Baker, B.D. (2012) Re-arranging deck chairs in Dallas: Contextual constraints on within district resource allocation in large urban Texas school districts. Journal of Education Finance 37 (3) 287-315

Toxic Trifecta Teacher Evaluation Policies

Another type of cut-and-paste policy that’s been driving me up the wall lately is what I refer to as the Toxic Trifecta Teacher Evaluation Framework. I have explained in previous posts the issues associated with Value Added Models for determining teacher effects on student outcomes. I have also explained how Student Growth Percentiles are not appropriate for the task at all. But, I have also explained how this information might be responsibly used, for example, for exploring patterns across teachers within a school or district, while retaining the option to decide that the data were simply wrong.

Toxic trifecta policies, in very simple terms, MANDATE THE MISUSE OF STATISTICAL INFORMATION FOR MAKING TENURE AND DISMISSAL DECISIONS.

They negate responsible human judgment altogether and replace it with rigid, ill-conceived frameworks reflecting a baffling degree of statistical ignorance (and educational and management ignorance).  

Here are the elements to look out for in Toxic Trifecta Teacher Evaluation Policies:

  1. Mandating potentially invalid VAM or necessarily invalid SGP scores to be used as a fixed share in determining personnel decisions. This share necessarily becomes an overriding factor!
  2. Forcing precise cut-point determinations through data with absurdly wide error ranges (creating categories of performance with defined cut points for VAM or SGP estimates).
  3. Forcing that personnel decisions be made on the basis of this information, on strict timelines, without consideration of any other contextual factors (or the possibility that the estimates are simply WRONG)

Really, any one of these elements alone is bad enough. But in combination, they are a complete disaster (except for the legal profession)!

As I’ve explained on many occasions, simply saying that the VAM or SGP measure of teacher effect on student test score change is “only 20%” or “only 40%” of the evaluation is unhelpful. It is still assumed to be valid and important and it may be neither.

Further, the element which varies most in the overall scheme is likely to tip the scales on most decisions. And the variance in VAM or SGP estimates is a mix of a) real effect, b) noise and c) bias (likely heavy bias in SGPs). Further, noise and bias are quite likely to dominate any “real effect” (and the real effect may not be an important effect). And we simply can’t know what share is real effect, bias or noise.

On the second element, it is utterly foolish to try to set up cut scores for defined performance categories (with a point or two difference changing the category) given the extent of noise and bias in the measures. How can one say that a 25 is unacceptable and a 26 is okay, when both have error ranges of 50 points on each end? It then stands to reason that it is even more foolish to tie high stakes decisions to falling just above or below these cut scores from year to year.
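To see just how little a one-point difference means against error ranges that wide, consider this sketch. The 25/26 cut score and the ±50 error range echo the example in the paragraph above; the scale is illustrative.

```python
# Why rigid cut scores fail with noisy estimates: two teachers a point
# apart on either side of a cut score have almost entirely overlapping
# error ranges (numbers illustrative, per the text).

CUT = 26      # score at or above 26 is "acceptable"
ERROR = 50    # +/- error range on each estimate

def interval(score, error=ERROR):
    """The error range around an estimated score."""
    return (score - error, score + error)

def categories_distinguishable(score_a, score_b, error=ERROR):
    """True only if the two error ranges do not overlap at all."""
    lo_a, hi_a = interval(score_a, error)
    lo_b, hi_b = interval(score_b, error)
    return hi_a < lo_b or hi_b < lo_a

# Teacher A scores 25 ("unacceptable"), teacher B scores 26 ("okay"):
print(interval(25))                        # (-25, 75)
print(interval(26))                        # (-24, 76)
print(categories_distinguishable(25, 26))  # False: the data can't tell them apart
```

The data cannot distinguish the two teachers, yet the policy mandates opposite personnel consequences for them.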

Now,  I don’t know what the current TEACHNJ (Ruiz) bill includes from the toxic trifecta, but on my last read, it included all three components, the worst of which was the absolute requirement that teachers lose tenure after 2 bad evaluations, stated rigidly as follows:

Notwithstanding any provision of law to the contrary, the principal, in consultation with the panel, shall revoke the tenure granted to an employee in the position of teacher, assistant principal, or vice-principal if the employee is evaluated as ineffective in two consecutive annual evaluations. (p. 10)

Further, in an effort to rub salt in the wound following this mandated misuse of statistical information, the versions of the bill which I had reviewed indicated that teachers could only appeal these decisions on procedural grounds. Several other states have already adopted trifecta elements in part or entirely.

As I’ve mentioned in a few recent blog posts, there might (though I’m increasingly pessimistic) exist some reasonable uses of VAM estimates or SGPs for informing management decision making in schools. Those reasonable uses invariably acknowledge that these measures are not only noisy but may also simply be wrong, and permit human judgment to make that call. Toxic trifecta policies prohibit those reasonable uses, and will ultimately mandate that bad decisions be made based on inadequate information.

Related Articles

Green, P.C., Baker, B.D., Oluwole, J. (2012) Legal implications of dismissing teachers on the basis of value-added measures based on student test scores. BYU Education and Law Journal 2012 (1)

Blog Posts

Toxic Trifecta: https://schoolfinance101.wordpress.com/2012/04/19/the-toxic-trifecta-bad-measurement-evolving-teacher-evaluation-policies/

If it’s not Valid, Reliability Doesn’t Matter: https://schoolfinance101.wordpress.com/2012/04/28/if-its-not-valid-reliability-doesnt-matter-so-much-more-on-vam-ing-sgp-ing-teacher-dismissal/

Video Post: https://schoolfinance101.wordpress.com/2012/05/23/video-thoughts-on-test-scores-vam-sgp-teacher-evaluation/

Mutual Consent Hiring/Assignment/Dismissal

This is one of those policies that had seemed relatively pointless and innocuous. Originally it was mostly about district human resource management policies, not about state requirements. But, as a state mandate and in conjunction with other teacher evaluation policies (the toxic trifecta), mutual consent policies take on new meaning.

Mutual consent policies – when adopted as state legislation or regulation – require that principals have the “last  word” on which teachers are assigned to or hired to work within their buildings. These policies have been driven by two ideas/purposes. On the one hand, there were the outrage-invoking news stories of principals being forced to draw from pools of excess teachers (implied [without validation] to be awful teachers and completely unqualified) in large city districts, when they supposedly knew they could get someone better from the outside. Second… and originally… these policies were intended to improve the distribution of teacher qualifications across more and less advantaged schools within districts.  Both are virtuous to the extent that a) the problem is real and b) the solution works.

But, there are many problems of both basic logic and of operational reality when it comes to mutual consent policies. Here’s a short list:

  1. Mutual consent assumes only good decisions are made at building level and bad ones at district level;
  2. Mutual consent ignores that district officials hire/fire and assign principals;
  3. Mutual consent sets up a scenario where the central office may wish to assign ‘good teachers’ to a weak school but the principal could reject them (the district might even be trying to groom new leaders for the school);
  4. Research suggests that it doesn’t actually accomplish much if anything!

In really simple terms, mutual consent causes administrative chaos, by mandating that the subordinate has final word, when the subordinate never really has the final word. What kind of silly crap is that? At least as a state policy mechanism?

It’s one thing if a district decides to have a collaborative process, or even a policy of collaboration regarding personnel decisions between building leaders and central office. But having the state mandate that building leaders have authority over central office, when central office ultimately has authority over building leaders, is ludicrous. Suggesting that this is based on how big business in the private sector works is even more ludicrous!

Further… what’s particularly warped is when a mutual consent policy is proposed in the same legislation as the toxic trifecta elements above. The toxic trifecta mandates who the principal must fire or at least de-tenure and under what specific circumstances and based on measures over which the principal has no control and may have limited statistical understanding????  And then the mutual consent policy “empowers” the principal? Are you kidding me?

Look, districts should design personnel policies such that school leaders can build good teams. I’m all for that, and have conducted and published research on that very topic. I favor building level involvement in personnel policy toward the goal of building effective teams. State mandated “mutual consent” does little or nothing to advance this goal.

As for the research on mutual consent, the one study done on a large district that used the policy found that it did not achieve its goal of improving the distribution of teacher quality:

We conduct an interrupted time-series analysis of data from 1998-2005 and find that the shift from a seniority-based hiring system to a “mutual consent” hiring system leads to an initial increase in both teacher turnover and share of inexperienced teachers, especially in the district’s most disadvantaged schools. For the most part, however, these initial shocks are corrected within four years leaving little change in the distribution of inexperienced teachers or levels of turnover across schools of different advantage. http://www.nctq.org/docs/Mutual_Concent_8049.pdf

Blog Posts

Regarding research on mutual consent: https://schoolfinance101.wordpress.com/2010/10/08/nctq-were-sure-it-will-work-even-if-research-says-it-doesnt/

New Jersey Charter Data Roundup: A look at the 2010-11 Report Cards

Here’s a quick run-down on the 2010-11 New Jersey School Report Card data on charter schools. No one else is putting out decent summaries of this stuff, so I feel obligated to revisit these data periodically. They don’t change much over time, but those older blog posts get buried. So, here we go.

Let’s take a specific look at Newark, because that’s where most of the attention regarding high-flying charter performance has been paid.

Data sources:

1. NJDOE Report Card

2. NJDOE Enrollment File

3. NJDOE Directory File (for City location)

Percent Free Lunch

Percent ELL

Percent Female

Regression Model of Charter Performance

More explanation is provided below. But this regression model (raw output on link below) is simply intended to compare the average proficiency rates across all tests and grades of charter schools to other schools in the same city and with similar characteristics. The bottom line is that as in previous similar regressions, there remains a small statistically non-significant margin of difference in average overall proficiency. But, the graphs that follow are perhaps more fun/interesting to explore.

CharterRegression

Now, for the following figures, the overall charter effect variable is removed, so that we can see how individual charter schools lie with respect to expected proficiency levels. The following figures compare schools to their predicted performance given each of the characteristics in the regression model. On the vertical axis is the standardized residual or the standard deviations above or below predicted performance. Along the horizontal axis is the percent free lunch of the schools, just so that we can see how they sort out by poverty concentration. Note that poverty concentration is already controlled for in the models. I begin with a few figures for select tests in Newark, and then present some statewide figures.
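For readers who want the mechanics, here is a simplified single-predictor version of that residual calculation. The actual model controls for several characteristics at once; the schools and rates below are invented for illustration.

```python
# Simplified sketch of the residual analysis: regress school proficiency
# on percent free lunch, then standardize each school's residual.
# (The real model uses several predictors; data here are invented.)

from statistics import mean, stdev

schools = {  # school: (percent free lunch, average proficiency rate)
    "Charter A": (0.45, 0.78),
    "Charter B": (0.85, 0.52),
    "District A": (0.80, 0.58),
    "District B": (0.30, 0.83),
    "District C": (0.60, 0.66),
}

x = [flunch for flunch, prof in schools.values()]
y = [prof for flunch, prof in schools.values()]

# Ordinary least squares fit of proficiency on percent free lunch: y = a + b*x
b = sum((xi - mean(x)) * (yi - mean(y)) for xi, yi in zip(x, y)) / \
    sum((xi - mean(x)) ** 2 for xi in x)
a = mean(y) - b * mean(x)

# Residual = actual minus predicted; standardize by the residual SD
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
standardized = {name: r / stdev(residuals) for name, r in zip(schools, residuals)}

for name, z in standardized.items():
    print(f"{name}: {z:+.2f} SD {'above' if z > 0 else 'below'} predicted")
```

A school above zero beats its poverty-adjusted prediction; a school below zero falls short of it, which is exactly what the vertical axis in the figures shows.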

Newark Schools over and under predicted performance

Statewide schools over and under predicted performance

On average, this statewide picture is actually pretty ugly. It would certainly be very hard to argue that charter school expansion across New Jersey has led to any substantive overall improvement of educational opportunities. Numerous charter schools are substantial underperformers. And overall, as the regression model indicates, the net performance is break-even.

Take home points

This analysis merely compares the average proficiency rates of schools with similar characteristics in the same city. It does not measure whether charters “add value” per se.  This isn’t really ideal from a research perspective, because it doesn’t attempt to sort out whether these schools are actually doing something that leads to higher performance.

To address this question we might try either of two strategies: estimating achievement gains across matched schools (or hypothetically matched schools/children), or a lottery based analysis comparing kids lotteried in to those lotteried out and remaining in neighborhood schools.

But, I would argue that we still might not learn much of policy relevance for Newark from either of these approaches. Why?

Well, let’s consider the first approach – a matched school analysis (or virtual match based on individual students). Let’s say we specifically wanted to determine the effectiveness of schools like North Star, Robert Treat or Gray charter. The problem is that there really aren’t any “matched” schools or match-able kids – except perhaps those in magnet schools. A note on matching-based analyses: given that nearly all kids in a city like Newark qualify for Free OR REDUCED lunch, matching would have to be done on the basis of Free Lunch! If not, substantial precision/accuracy is lost and the comparisons are invalid.

We might look outside of Newark for matched schools or students, but then other contextual factors might compromise the analysis quite substantially, and this might cut either for or against the charters.

Further, it appears that gender balance matters – not just a little – but a lot. Gordon McInnis tipped me off to this.  I hadn’t realized how big a deal it was in these schools.

Note that I’ve also left out attrition here, so that even if the schools were matched by poverty rates, gender and ELL concentration, there might be substantive differences in which students leave over time, altering the peer group composition (as weaker students leave). Again, it may be most relevant to compare Newark charters to Newark magnets and/or the children who attend them, since the magnets are most similar to these charters.

We could try to construct hypothetical or virtual matches based on similar individual children (to those in the charters) across the district who may or may not actually attend school together. But therein lies the problem: most other similar kids left in district schools would be attending school in substantively different peer groups than those in charters like North Star, Gray or Treat.

AND if we did find an “effect” on student achievement growth, what the heck would it mean? And how would it inform our policy decisions?

Well, if we did, we would still have significant difficulty sorting out whether that effect has anything to do with school quality, or with student peer group – quite possibly the largest in-school factor affecting achievement.

Alternatively, one could attempt a lottery based analysis in which we look at the gains of kids lotteried in versus those lotteried out of the charters and left in their neighborhood schools. But in this case we would certainly have kids whose peer groups differ dramatically. Again, we could try to “correct” for that uneven distribution, but the fact is that we simply can’t fully correct for the substantial contextual differences across these schools. Too many Newark charters (and those in Jersey City and Hoboken) simply don’t even come close to resembling the student composition of traditional public schools in the same area.

So who cares? Well, it matters a great deal for policy implications whether the effect is created by concentrating less poor, English speaking females in a given school or by actually providing substantively better curriculum/instruction.  The latter might be scalable but the FORMER IS NOT! There just aren’t enough non-poor girls in Newark to create (or expand) a whole bunch of these schools!

The Commonwealth Triple-Screw: Special Education Funding & Charter School Payments in Pennsylvania

This post is the second in a series (of unknown number) focusing on how states harm local public school districts through illogical, ill-conceived state school finance systems and components of those systems. One goal of this post is to illustrate the types of problems/manipulations that exist in state school finance systems, how they work, and the severity of the problems they can cause. I have written previously, for example, how states find ways to actually use state aid to make their finance systems less equitable (school finance pork). I have also written about policies like census based financing of special education and its adverse effects on high need districts. The Commonwealth Triple-Screw takes it to another level.

The Commonwealth of Pennsylvania has among the least equitable state school finance systems in the country. Pennsylvania operates a school funding system that on average provides systematically less state and local revenue per pupil to the state’s highest need large and mid-size city districts. Among the nation’s most “screwed” city districts are Philadelphia, Reading and Allentown.

But amazingly, in Pennsylvania, the pain doesn’t end there. Pennsylvania also has one of the least fair, least logical approaches to special education funding, both in terms of the way in which special education aid is distributed to local public school districts and in the calculations for determining how much should be paid by local public school districts to charter schools for serving special education students.

Apparently, this issue is of current interest in PA: http://www.mcall.com/news/local/parkland/mc-lehigh-valley-cyber-charter-schools-20120604,0,2970776.story

The hit comes in three parts and I call it the Commonwealth Triple-Screw. Here’s a run-down.

Screw 1: Census based financing of special education, by assuming a uniform share of students in need, on its face provides less support to districts with greater shares of students in need.

First, Pennsylvania is among a handful of states which continue to use an approach called Census Based financing of special education. In brief, PA provides to each school district – in general, regardless of local wealth and regardless of the actual number of special education students – a flat base allocation of special education funding for an assumed 16% of total enrollment (a uniform special education share imposed on all districts).

The argument is that funding special ed in flat amounts avoids the incentive to over-classify students. This argument ignores the possibility – the simple reality – that populations of all types vary in their geographic distribution for a variety of reasons, and that includes families of children with disabilities. Funding on this basis necessarily deprives districts that, through no fault of their own, have far more than 16% of students in special education. Further, and more illogically, this approach arbitrarily over-funds districts having only about 7% special education (at least relative to higher need districts). I have written extensively about the research and realities of Census Based funding in this recent article:

  • Baker, B.D., & Ramsey, M.J. (2010). What we don’t know can’t hurt us? Evaluating the equity consequences of the assumption of uniform distribution of needs in census-based special education funding. Journal of Education Finance, 35(3), 245-275.
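The mechanics of census-based funding can be sketched in a few lines. All dollar figures and enrollments below are hypothetical, chosen only to show how aid per actual special education pupil falls as true need rises:

```python
# Census-based special education aid: every district is funded as if
# exactly 16% of its enrollment were in special education, regardless
# of its actual classification rate. Dollar figures are hypothetical.

AID_PER_ASSUMED_SE_PUPIL = 4000  # illustrative flat rate, not PA's actual figure
ASSUMED_SE_SHARE = 0.16          # the uniform share the formula assumes

def census_based_aid(enrollment):
    """Total aid depends only on enrollment, not on actual need."""
    return AID_PER_ASSUMED_SE_PUPIL * ASSUMED_SE_SHARE * enrollment

def aid_per_actual_se_pupil(enrollment, actual_se_share):
    """Aid per real special education student shrinks as need grows."""
    return census_based_aid(enrollment) / (actual_se_share * enrollment)

# A low-need district (7% SE) vs. a high-need district (22% SE),
# both with 5,000 pupils and identical total aid:
low_need = aid_per_actual_se_pupil(5000, 0.07)    # ≈ $9,143 per SE pupil
high_need = aid_per_actual_se_pupil(5000, 0.22)   # ≈ $2,909 per SE pupil
```

Both districts receive the same total allocation, but the high-need district must stretch it across three times as many students.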

Here are a few quick snapshots of how this works out globally (statewide) and then locally (for Chester Upland School District). The following figure is drawn from my Summer 2011 final update to my testimony in C.G. v. Commonwealth (a federal court challenge to PA special education funding). The figure shows that districts with higher special education population shares and higher Market Value/Personal Income (MVPI) aid ratios (hence lower wealth and income) generally receive less special education aid per special education pupil under the census-based formula.

Figure 1. Special Education State Aid per Special Education Pupil

In the rest of this post, I will show in particular how this formula along with other calculations dramatically undercut the financial viability of Chester Upland School District, a high need district facing dire financial circumstances in recent years.

Table 1 provides a walk-through of Chester Upland’s position with respect to state special education funding (SEF). CUSD was reported to have been allocated just over $5 million in SEF for 2010-11, with that funding frozen for the past several years. CUSD’s actual share of enrollment in special education programs has typically been about 22% over time (based on several sources, including the NCES Common Core of Data). That amounts to over 1,500 special education students, consistent with counts reported later. This yields special education funding per actual special education pupil of about $3,200 (placing CUSD among the red squares in Figure 1, which lie at approximately 22% special education and have low wealth [high MVPI]).

Table 1 walks through the hypothetical difference in funding that would occur if CUSD were allocated SEF per actual special education pupil rather than per 16% of ADM. CUSD’s SEF allocation per 16% of ADM is $4,429, about $1,200 per pupil higher than its allocation per actual pupil in need. If CUSD received $4,429 per actual child in need, it would receive nearly $2 million more in special education funding, a 38% increase. For a district with minimal local capacity to offset this loss, that’s a significant hit. But it’s also the smallest hit of the triple-screw!

Table 1. Special Education Funding in Chester Upland

[a] SEF data from: http://www.portal.state.pa.us/portal/server.pt?open=514&objID=509062&mode=2

[b] MVPI Aid Ratios from: http://www.portal.state.pa.us/portal/server.pt/community/financial_data_elements/7672

[c] IEP % Data from: http://www.nces.ed.gov/ccd/bat
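The Table 1 arithmetic can be reproduced roughly as follows. The ADM figure below is an approximation backed out from the reported per-pupil rates, so the results match the post’s numbers only approximately:

```python
# Reproducing the Table 1 logic with approximate figures from the post:
# CUSD's ~$5.0M SEF allocation, ~22% actual special education share,
# and an ADM of roughly 7,100 (an assumption, backed out from the rates).

SEF_TOTAL = 5.0e6     # reported state special education funding
ADM = 7100            # approximate average daily membership (assumed)
ACTUAL_SHARE = 0.22   # CUSD's actual special education rate

assumed_pupils = 0.16 * ADM           # pupils the census formula funds
actual_pupils = ACTUAL_SHARE * ADM    # pupils CUSD actually serves

per_assumed = SEF_TOTAL / assumed_pupils  # ≈ $4,400 (post reports $4,429)
per_actual = SEF_TOTAL / actual_pupils    # ≈ $3,200

# What CUSD would receive if paid the per-16%-ADM rate for every
# actual special education pupil:
hypothetical = per_assumed * actual_pupils
shortfall = hypothetical - SEF_TOTAL      # ≈ $1.9M, a ~38% increase
```

Note that the hypothetical allocation is simply the actual allocation scaled by 22/16, so the ~38% increase holds regardless of the exact ADM assumed.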

Screws 2 & 3: The charter school funding formula for special education students exacerbates these problems by requiring high-need, under-resourced districts to pay special education tuition to charter schools at the rate of the sending district’s average special education expenditure per special education child.

Pennsylvania’s formula for determining the amount of money that must be transferred from sending districts to charter schools for serving students with disabilities is poorly conceived, creates perverse incentives for charter school operators, and inappropriately drains disproportionate resources from sending districts. CUSD is perhaps more harmed by this ill-conceived mechanism than any other district in the state both because of the characteristics of CUSD and because of the enrollment practices of Chester Community Charter School.

Note that the following analyses present the hypothetical adverse effects of the Pennsylvania charter school funding formula on CUSD, using enrollments of Chester Community Charter School to illustrate those effects. CUSD may also be sending children with disabilities to other charter schools, where the effects would play out similarly to the extent that other charters also siphon off children having lower cost disabilities. Further, the hypotheticals are based on enrollment-by-classification data from 2008-09 and conditions may have become even more severe in recent years.

The funding hit comes in two parts. First, Table 2 shows how the special education tuition rate is set. CUSD spent about $17.3 million in 2011-12 on “selected” special education spending. Even though this spending served approximately 22% of the district’s enrollment, the sending tuition rate is calculated over only 16%, incorrectly inflating the special education spending per special education pupil. 16% of CUSD’s ADM is just under 1,200 students, so the special education expenditure divided by that figure is $14,670. This is the “additional expenditure” per special education child. Each special education child also carries a “base” (regular education) expenditure, which for CUSD is $9,858. The total, BASE + SE, is therefore $24,528. Multiplying $24,528 by the total number of special education students sent to Chester Community Charter School yields over $15 million.

Okay… so let’s stop for a minute. The district receives in SEF funding about $3,200 per special education student, and must send out over $24,000. That’s difficult enough… but… the $24,000 is miscalculated substantially in two ways!

If we consider that CUSD actually served about 22% special education (or about 1,620 students), the special education spending per pupil, with the base added in, would be $20,527. If we use this figure instead to determine the payment to the charter school, CUSD would send only $12.7 million to the charter.

This arbitrary use of the 16% figure to determine sending tuition rates costs CUSD nearly $2.5 million (or about 50% of its state special education funding)!

Table 2: Over-expenditure to Charter Special Education Part I
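The Table 2 arithmetic can be put in one place as follows. The charter’s special education count of roughly 620 is an approximation backed out from the reported ~$15 million payment:

```python
# Reproducing the Table 2 tuition-rate arithmetic with the post's figures.
# The charter SE pupil count (~620) is backed out from the reported
# ~$15M total payment and is an approximation.

SE_SPENDING = 17.3e6     # CUSD "selected" special education spending
BASE_PER_PUPIL = 9858    # base (regular education) expenditure per pupil
CHARTER_SE_PUPILS = 620  # approximate SE pupils at the charter (assumed)

def tuition_rate(se_pupil_divisor):
    """Per-pupil charter SE tuition: base plus SE spending per pupil."""
    return BASE_PER_PUPIL + SE_SPENDING / se_pupil_divisor

# As actually calculated: divide by 16% of ADM (~1,180 pupils), even
# though the spending really covered ~22% of ADM (~1,620 pupils).
rate_16 = tuition_rate(1180)   # ≈ $24,519 (post reports $24,528)
rate_22 = tuition_rate(1620)   # ≈ $20,537 (post reports $20,527)

overcharge = (rate_16 - rate_22) * CHARTER_SE_PUPILS  # ≈ $2.5M
```

The overcharge arises entirely from dividing the same spending total by a smaller, formula-assumed pupil count.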

But this is only the first and smaller portion of the miscalculation of sending tuition rates for special education students. Table 3 shows the distribution of student counts by disability type for CUSD and for Chester Community Charter School. Also in Table 3 are the average additional expenditures per special education student based on analyses from the Special Education Expenditure Project (SEEP). I’ve chosen to use these average expenditure margins so that I can simulate the effects on sending districts of charter schools choosing to serve only the least needy (and least costly) special education students. Fully 92% of the children with disabilities in the charter school have the lowest-cost disabilities, compared with only 66% in CUSD. Yet CUSD must pay the charter on the basis of an already inflated average special education cost per special education pupil.

Table 3: Distribution of District & Charter Special Needs Students & Related Cost Margins

Table 4 provides a walkthrough of the estimated impact of this financial hit. First, if we take the actual special education spending per actual special education pupil in CUSD ($10,699) and express it relative to spending on the average non-special-education child in the district, we get a ratio consistent with the research literature over time. On average, the CUSD special education child is allocated just over 2.0 times the average non-special-education child (where the base non-SE expenditure is just under $10k and the special education spending margin is just over $10k; see Table 2). I performed this check simply to verify that average special education spending margins in CUSD are in line with prior research findings.

In Table 4, I estimate the expenditure for an SLD child by taking the research-based expenditure ratio of 1.6 and multiplying it by the average base expenditure per non-special-education child in CUSD (1.6 × $9,858). That gives an estimated expenditure per SLD child of $15,774. I do the same for children with speech impairment, yielding an estimated expenditure of $16,759. Indeed, the charter even serves some children with disabilities estimated to have higher-than-average expenditures, but very few of them.

Calculating the charter payment based on the inflated district average (based on the error mentioned in Table 3 above), CUSD must allocate over $15 million for children with disabilities in Chester Community Charter School.

But, if we re-calculate the charter allocation using the additional expenditure ratios from research, where some children will have higher than average cost and some lower than average cost, we find that CUSD would need to allocate just over $10 million.
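The Table 4 recalculation can be sketched roughly as follows. The disability-type counts below are hypothetical stand-ins (the actual counts are in Table 3); the 1.6 and 1.7 ratios are the ones cited above, while the 2.5 ratio for higher-cost disabilities is purely illustrative:

```python
# Sketch of the Table 4 recalculation: charge the sending district per
# disability type at research-based (SEEP-style) expenditure ratios
# instead of the inflated district-average rate. Counts are hypothetical.

BASE = 9858  # CUSD base (regular education) expenditure per pupil

# (disability type, expenditure ratio vs. a non-SE pupil, hypothetical count)
charter_enrollment = [
    ("specific learning disability", 1.6, 420),
    ("speech impairment",            1.7, 150),
    ("other, higher-cost",           2.5,  50),  # illustrative ratio
]

INFLATED_RATE = 24528  # district-average rate actually charged (Table 2)

# Ratio-based payment: each child costed at ratio * base expenditure.
ratio_based = sum(ratio * BASE * n for _, ratio, n in charter_enrollment)

# Payment under the current formula: flat inflated average for everyone.
total_pupils = sum(n for *_, n in charter_enrollment)
inflated = INFLATED_RATE * total_pupils

savings = inflated - ratio_based  # ≈ $4.8M with these assumed counts
```

With these assumed counts the ratio-based total lands just over $10 million versus over $15 million under the flat rate, in line with the figures above.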

In short, the district is overcharged by nearly 50%, an amount equal to nearly all of the district’s state special education funding, even though the district is left with more than half of the total special education children to serve, including nearly all of those with more severe disabilities.

Table 4: Over-expenditure to Charter Special Education Part II

Arguably, this mechanism actually provides an incentive for Pennsylvania charter schools to seek out, recruit, and serve children with mild disabilities, creating similar budget pressures for other districts across the state.

Cumulative effects of the Commonwealth Triple Screw

Chester Upland School District’s expenditure budget for its own students is now approximately $54 million (after transfers to charters).  Note that if Chester Upland had received special education revenue from the state based on actual percent special education, the district would have received about $2 million more in revenue to spend, some of which would have been transferred to charters for serving special education kids. But let’s assume that about half should have stayed with the district. So, there’s a $1 million hit to start. Then there’s the big double hit, which amounts to $4.8 million!

So, we’re talking about a cumulative hit of, oh… hypothetically… about $5.8 million, or over 10% of the district’s budgeted expenditures (in other words, they should have received and kept roughly an additional million in state special education funding and should pay out a simulated/estimated $4.8 million less to charters for special education students).
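The cumulative tally works out as follows:

```python
# Tallying the cumulative hit described above (figures from this post).

BUDGET = 54e6      # CUSD expenditure budget after charter transfers
sef_hit = 1.0e6    # census-based SEF shortfall assumed kept by district
charter_hit = 4.8e6  # simulated over-payment to charters (Screws 2 & 3)

total_hit = sef_hit + charter_hit   # $5.8M
share_of_budget = total_hit / BUDGET  # ≈ 0.107, over 10% of the budget
```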

And that, my friends, colleagues, co-bloggers, tweeters and avid readers is the Commonwealth Triple-Screw!