Blog

Reformy Disconnect: “Quality Based” RIF?

I addressed this point previously in my post on the cost-effectiveness of quality-based layoffs, but it was buried deep in the post.

Reformers are increasingly calling for quality-based layoffs versus seniority-based layoffs, as if it were a simple dichotomy. It sounds like a no-brainer when framed in those distorted terms.

I pointed out in the previous post that if the proposal on the table is really about using value-added teacher effect estimates versus years of service, we’re really talking about a choice between significantly biased and error-prone – largely random – layoffs and layoffs based on years of service. It doesn’t sound as much like a no-brainer when put in those terms, does it? While reformers might argue that seniority-based layoffs are still more “error prone” than effectiveness-rating layoffs, it is actually quite difficult to determine which is more error prone. The existing simulation studies identifying value-added estimates as the less-bad option use value-added estimates themselves to determine which option is better. Circular logic (as I previously wrote)?

We’re having this policy conversation about layoffs now because states are choosing (yes, choosing – not forced, not by necessity) to slash aid to high need school districts that are highly dependent on state aid, and those districts will likely be implementing reduction in force (RIF) policies. That is, laying off teachers. So, reformy pundits argue that they should be laying off those deadwood teachers – those with bad effectiveness ratings – instead of those young, energetic, highly qualified ones.

So, here are the basic parameters for quality-based RIF:

1. We must mandate test-score based teacher effectiveness ratings as a basis for teacher layoffs.

2. But, we acknowledge that those effectiveness ratings can at best be applied to less than 20% of teachers in our districts – specifically teachers of record (classroom teachers) responsible for teaching math and reading in grades 3 to 8 (grades 4 to 8 if only annual assessment data are available).

3. Districts are going to be faced with significant budget cuts which may require laying off around 5% or somewhat more of their total staff, including teaching staff.

4. But, districts should make efforts to lay off staff (teachers) not responsible for teaching the core subject areas.

Is anyone else seeing the disconnect here? Yeah, there are many levels of it, some more obvious than others. Let’s take this from the district administrator’s/local board of education perspective:

“Okay, so I’m supposed to use effectiveness measures to decide which teachers to lay off. But I only have effectiveness measures for those teachers who are supposed to be last on my list for layoffs? Those in core areas. The tested areas. How is that supposed to work?”

Indeed, the point of the various “quality based layoff” simulations that have been presented (the logic of which is problematic) is to lay off teachers in core content areas and rely on improved average quality of core content teachers over time to drive system-wide improvements. These simulations rely on heroic assumptions of a long waiting list of higher quality teacher applicants just frothing at the mouth to take those jobs from which they, too, might be fired within a few years due to random statistical error (or biased estimates) alone.

That aside, reduction in force isn’t about choosing which teachers to dismiss so that you can replace them with better ones. It’s about budgetary crisis mode and reduction of total staffing costs. And reduction in force is not implemented in a synthetic scenario where the only choice is to lay off either core classroom teachers based on seniority or core classroom teachers based on effectiveness ratings (the constructed reality of the layoff simulations). Reduction in force is implemented with consideration for the full array of teaching positions that exist in any school or district. “Last in, first out,” or LIFO as reformy types call it, does not mean ranking all teachers systemwide by experience and RIF-ing the newest teachers regardless of what they teach or the program they are in. Specific programs and positions can be cut, and typically are.

And it is unlikely that local district administrators in high need districts would, or even should, look first to cut deeply into core content area teachers. So, a 5% staffing cut might be accomplished before ever cutting a single teacher for whom an effectiveness rating even exists – or at most very few. So, in the context of RIF, layoffs actually based on effectiveness ratings are a drop in the bucket.

So now I’m confused. Why is this such a pressing policy issue here and now? Does chipping away at seniority based provisions really have much to do with improving the implementation of RIF policies? Perhaps some are using the current economic environment and reformy momentum to achieve other long-run objectives?

Pork Hunting 101: Shredding the Pork

I love a good pulled pork sandwich. In fact, it’s one of my favorite foods. And the best damn pork sandwich I can think of is the Hog Heaven at Oklahoma Joe’s in the side of a freakin’ gas station in Kansas City, Kansas. I expect that this will perhaps be the most controversial statement I’ve made on this blog thus far. Really… it’s awesome… and one of the few things I really miss about being in Kansas City.

That aside, I’ve been blogging lately about a different kind of PORK – school finance pork. I started this pork campaign with a posting on state aid in New York after the announcement of Governor Cuomo’s proposed budget and education cuts. Despite large amounts of aid still being allocated to some of the wealthiest school districts in the nation, the good Democratic Governor of New York decided it was somehow most appropriate to largely protect that aid, and instead slam the highest need, large urban, mid-sized city and poor rural districts in the state.

This post provides a primer on finding pork in state school finance formulas. It’s a two part process that begins with screening for pork, and then involves more intensive investigative research.

What is School Finance Pork? School finance pork is state aid that is currently being allocated to districts that otherwise don’t really need that aid. In this case, need is defined in terms of the needs of the students to be served AND in terms of the ability of the local public school district, its residents and its property owners to pay the cost of those services. In overly simple terms, some local public school districts can easily pay the full cost of their needed educational programs and services on their own, and with much less (tax) effort than others. Allocating state aid to these districts while depriving others with greater student needs and less ability to meet those needs is inexcusable. Cutting aid to needier communities that are unable to replace those lost revenues, while retaining aid to the wealthy, is inexcusable. That’s PORK. And like other political pork-barrel spending, it exists because state legislators negotiate for state aid formulas that bring something home to their own districts.

How do legislators generate PORK? Pork can be generated by at least two different approaches. In the first approach, legislators actually try to manipulate the general state aid formula to find ways to argue that it’s more expensive to educate kids in wealthier communities, or alternatively that it’s cheaper to educate kids in poor communities. If legislators can raise the foundation funding targets of wealthy communities, they can increase the likelihood of driving state support to those districts. Seems a bit of a stretch, but it’s been done by many. A second “within the formula” approach to generating pork is to adopt “minimum state aid” and/or hold harmless provisions that guarantee that no matter how wealthy a district is, the state will still pick up X% of its formula funding. That is, the state will cover X% even if the district can raise double what it needs with very little tax effort. Another type of pork is outside-the-formula pork. It can be tricky (though not impossible) for legislators to sufficiently manipulate a state aid formula to provide more aid to wealthier districts. It’s relatively easy for state legislators to adopt outside-the-formula aid programs that allocate aid in very different ways – like flat allocations per pupil across all districts regardless of local wealth, or even like New York’s property tax relief targeted to wealthier communities.

How do we screen for Pork? Here, I provide a few examples of screening for pork, using a national data set – the U.S. Census Bureau’s Fiscal Survey of Local Governments. As we stated in our Is School Funding Fair? report, a bare minimum goal of a state school finance formula would be to achieve a FLAT relationship between poverty and state and local revenue per pupil. Ideally, state school finance formulas would result in additional resources targeted to higher poverty districts – that is, systematically higher state and local revenue per pupil in higher poverty districts. One can take any state’s school finance data – here from 2007-08 – and make a graph of a) local revenue per pupil, b) state aid per pupil and c) total revenue per pupil across districts by poverty. Here’s Texas:

I’ve dropped very small districts because they tend to scatter the pattern for a variety of legitimate cost-related reasons. Blue circles represent the state and local revenue per pupil, which on average in Texas has a slight downward slope. Red triangles are the local revenue. Clearly, the lower poverty districts are raising quite a bit in local revenue but the higher poverty districts aren’t raising much. On average the state aid in green squares is being allocated in inverse proportion to the local revenue… but not that aggressively. For example, if we look way to the upper left, we’ve got some red triangles – local revenue per pupil – above $10,000 per pupil. That is, some of these very low poverty districts are able to raise, on their own, over $10k per pupil. But look at the trajectory of the state aid allotments – those districts are still getting a significant amount of state aid – enough to raise their totals even higher, as pointed out by yellow arrows. Would it not make more sense for this aid to be allocated to the districts at the far right hand side of the picture? Amazingly, the state aid distribution slope in Texas is quite gradual – to be kind. It would appear to include significant “flat” distributions of either or both minimum aid and/or outside-the-formula aid, which goes to lower poverty and/or higher wealth districts at the expense of higher need, lower wealth districts. THAT’s PORK!
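For those who want to try this screen on their own state’s data, here is a minimal sketch of the idea in Python. The district names, poverty rates and revenue figures below are entirely hypothetical (not actual Census figures), and the 10% poverty and 90%-of-median thresholds are arbitrary choices for illustration:

```python
from statistics import mean, median

# Hypothetical district-level data: (name, poverty_rate,
# local revenue per pupil, state aid per pupil). All figures invented.
districts = [
    ("Affluent A", 0.03, 12000, 2500),
    ("Affluent B", 0.05, 10500, 2200),
    ("Middle C",   0.15,  7000, 4000),
    ("Middle D",   0.20,  6000, 4500),
    ("Poor E",     0.35,  3000, 6500),
    ("Poor F",     0.45,  2500, 6000),
]

poverty = [d[1] for d in districts]
total = [d[2] + d[3] for d in districts]  # state + local per pupil

# Least-squares slope of total revenue against poverty.
# A fair formula should produce a flat or positive slope.
mp, mt = mean(poverty), mean(total)
slope = (sum((p - mp) * (t - mt) for p, t in zip(poverty, total))
         / sum((p - mp) ** 2 for p in poverty))

# Screen: low-poverty districts raising near or above the median total
# on local revenue alone, yet still drawing substantial state aid.
med = median(total)
flagged = [name for name, pov, local, aid in districts
           if pov < 0.10 and local > 0.9 * med and aid > 1000]

print(f"slope of total revenue vs. poverty: {slope:,.0f}")
print("possible pork:", flagged)
```

The same two-line screen – fit the slope, then flag wealthy districts still drawing aid – carries over directly to a real Census extract once the columns are mapped.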

Here’s New York, when using the same data set:

And here’s Pennsylvania:

In Pennsylvania, minimum aid provisions and flat distribution of Special Education aid are partly to blame.

And here’s Illinois:

Illinois also has minimum aid provisions in the form of alternative formulas for districts otherwise too wealthy to receive aid through the primary foundation formula. And Illinois allocates numerous categorical aids outside the formula in ways that reinforce the aggregate disparities.

So, here are four states and in each one, total state and local revenue per pupil is slightly to significantly lower per pupil in higher poverty than in lower poverty districts. And in each state, the state aid is distributed in ways that guarantee the provision of at least some – sometimes significant – aid per pupil to districts that on their own are raising and spending significantly more money per pupil than their much higher need counterparts.

How do we identify the source of the Pork? This part is a bit harder, and requires much more detailed investigation of the state school finance formulas and of the “runs” of state aid programs. I hope to get a chance to provide more information on pork finding at a later point. But for those interested in exploring these issues on their own, there are two sources of information that are critical to the search: a) the documentation that explains the calculations behind the allocation of state aid and b) much more importantly, what are called the “runs” of state aid allocations to local public school districts, which are used by legislators to negotiate changes to the aid formula and/or the distribution of cuts. It’s that time of year – legislative session season. And legislators are interested to see which pieces of the aid formula will bring home pork, or how cuts will affect their districts. If you’re as warped as I am about this stuff… and actually really like digging through these numbers… contact your state legislators and find out whom to get in touch with in order to get an electronic spreadsheet run of the various district-by-district state aid allocations. And be sure to get other basic district data, including wealth measures (property wealth and income) and enrollment characteristics (total enrollments and student needs). Play around with graphs of aid (divided by pupils – per pupil aid) with respect to wealth and need measures, and look for those aid programs in particular that appear to allocate systematically more aid to districts that are otherwise less needy. Feel free to e-mail me spreadsheets of those runs. If I get a chance, I’ll see what pork I can find!
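Once you have a run in spreadsheet form, the program-by-program screen is simple. Here is one hedged sketch: compute, for each aid program, the correlation between per-pupil aid and district property wealth, and flag programs where aid rises with wealth. The program names and every number below are invented for illustration:

```python
from statistics import mean

# Hypothetical "run": per-pupil figures for several aid programs,
# districts ordered from wealthiest to poorest. All numbers invented.
runs = {
    "wealth_pp":   [900_000, 650_000, 400_000, 250_000, 150_000],
    "foundation":  [0, 500, 2500, 4500, 6000],
    "minimum_aid": [800, 800, 300, 0, 0],
    "tax_relief":  [1200, 900, 400, 200, 100],
}

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

wealth = runs["wealth_pp"]
# Flag any aid program whose allocation rises with district wealth.
suspects = [prog for prog, aid in runs.items()
            if prog != "wealth_pp" and pearson(wealth, aid) > 0]

print("aid programs rising with district wealth:", suspects)
```

In this toy run, the foundation program behaves as it should (more aid to poorer districts), while the minimum aid and tax relief lines are the ones that pay out more as wealth rises – exactly the pattern the post attributes to minimum-aid and outside-the-formula provisions.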

Cheers.

And here’s to Pork!

Disparate thinking: The administration’s blind eye & racially disparate impact

Apparently the U.S. Department of Education has decided to take on public education policies that not only are intentionally racially discriminatory, but also state and local policies that happen to have a racially disparate impact on certain populations. Now, these are departmental regulations, not statutes and not a constitutional protection, so this doesn’t mean that advocates can start filing lawsuits on behalf of groups disparately affected by education policies (except where other state laws prohibiting disparate impact exist, like Illinois). But, it does mean that the U.S. Department of Education can use its biggest available threat – denial of funding – to pressure state and local education agencies to change policies that result in racially disparate impact.

Statistically, what disparate impact means is that the policy in question results in a disproportionate effect on one group of individuals versus another, where “groups” are defined by race, ethnicity or national origin. There are many places in public education where racially disparate impact rears its ugly head – for example, racially disparate classification rates of children with disabilities, or racially disparate rates of disciplinary action. Identifying appropriate policy changes to reduce racially disparate impact in any of these areas, while not compromising other interests, is indeed important – such as making sure that kids with legitimate special education needs still get identified and served, or making sure appropriate discipline is handed out where necessary.
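As a toy illustration of the statistics, a disparate impact screen often boils down to comparing rates across groups and taking the ratio. The counts below are hypothetical, chosen only to roughly mirror the 3-to-1 middle school suspension disparity cited later in the post:

```python
# Hypothetical enrollment and suspension counts for two groups.
def suspension_rate(suspended, enrolled):
    return suspended / enrolled

black_rate = suspension_rate(280, 1000)  # 28% suspended
white_rate = suspension_rate(95, 1000)   # 9.5% suspended

# The rate ratio is a common first-pass measure of disparate impact.
ratio = black_rate / white_rate
print(f"rate ratio: {ratio:.2f}")  # ~2.95, i.e., nearly 3x
```

A ratio near 1.0 means no disparity on this crude measure; whether a given ratio rises to legally actionable disparate impact is, of course, a policy and legal question, not a purely statistical one.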

The administration’s renewed interest in racially disparate impact was announced almost a year ago, and has apparently crept back into the conversation in the past few weeks.

Here’s the Education Week synopsis:

At the Feb. 11 briefing, Ricardo Soto, the deputy assistant secretary for the Education Department’s office for civil rights, elaborated on the office’s new policy, saying that the Obama administration is “using all the tools at our disposal,” including “disparate-impact theory,” to ensure that schools are fairly meting out discipline to students. Some research shows, for instance, that suspension rates for African-American males in middle school can be nearly three times as high as those for their white, male peers.

So, the interest here is specifically the racially disparate distribution of disciplinary actions. That’s all well and good. We should explore these issues and resolve them appropriately if we can. Once again, the usual target is local public school districts because when it comes to any of today’s education policy issues – in the eyes of the current administration and their closest advisers – only local public districts and local administrators can be to blame (even when it comes to funding disparities?).

But, as my readers know, this is a school finance blog, and that means that eventually this topic is coming back around to school finance (okay, not always). Could it possibly be that some states actually continue to operate state school finance formulas that produce “racially disparate” effects? That is, that districts serving larger shares of minority children have systematically (statistically) less state and local revenue (the revenue under control of state policy) than districts with fewer minority children? And if so, which states might those be?

Clearly, substantial disparities in the quality of education received by minority children (funding, class sizes, teacher credentials, etc.), as a function of state policies that provide – or fail to provide – sufficient financial support to their schools, are at least as relevant as rates of discipline referrals of minority versus non-minority children attending the same school or district.

In our report Is School Funding Fair?, we evaluated states in terms of whether they provided systematically more or less state and local revenue per pupil in higher versus lower poverty districts. Now, which states did the worst on this measure? Among the big ones, the most poverty-disparate states in funding were Illinois, Pennsylvania and New York! New York makes this list largely because of its pork barrel finance policies. And believe me, Pennsylvania and Illinois have similar pork to shred. But poverty-related disparities, while important, don’t fall under this racially disparate impact umbrella. The real question here is: which states have the largest racial disparities in school funding?

A few years back, Robert Bifulco, now at the Maxwell School at Syracuse, did a nice quick number crunch on black-white funding disparities nationally, in the Journal of Education Finance. On average, Bob found that without any corrections for costs related to student needs, or other costs, districts with higher concentrations of black enrollments had marginally higher per pupil spending. But after correcting for costs and needs, districts with higher black enrollments had lower per pupil spending. But, these findings, while interesting and useful, look at the nation as a whole, and, if I recall, do some breakouts by region.

Just like the poverty related variation we show in the fairness report, racial disparities in funding also vary widely across states. And those disparities occur for a variety of reasons. Yes, to some extent those disparities exist because of differences in local property wealth in blacker versus whiter communities and the state’s failure to allocate sufficient aid to offset those disparities. And in many places, those disparities in property values are largely a function of carefully planned racial segregation of housing stock and distribution of other property types (Brilliant article on real estate development in the Kansas City metro: http://www.tulane.edu/~kgotham/RestrCovenants.pdf)

BUT CAN A STATE LIKE NEW YORK REALLY CLAIM THAT THERE’S  JUST NOT ENOUGH AID AVAILABLE TO FIX THE RACIAL DISPARITIES WHEN IT IS DUMPING TAX RELIEF AID AND MINIMUM FOUNDATION AID INTO THE WEALTHIEST DISTRICTS IN THE COUNTRY? Redistributing the pork might not erase the disparities entirely, but it’s a start.

Other disparities actually exist by the design of the aid formulas, and various Tricks of the Trade that create racial disparities in funding – deceptively and arguably quite intentionally. Here’s an outstanding sarcastic, critical analysis of how Kansas legislators gamed their finance system to embed racially disparate effects over time: http://www.pitch.com/2005-04-14/news/funny-math/ (after reading this, consider the role of real estate development addressed in the Kansas City article above!)

Quick side-bar – This brilliant piece of school finance journalism was written by Tony Ortega when he was in Kansas City (written from the perspective of a KC Strip Steak… it’s a Kansas City thing). Now, Tony is editor in chief of the Village Voice in NY. Hey Tony, how ‘bout that New York finance stuff? Subsidies for Scarsdale, while cutting Utica and Middletown, or Mt. Vernon? (even if from the perspective of a NY Strip Steak?)

There are lots of Tricks of the Trade used to reduce aid to high minority districts. A favorite choice of state legislators is to allocate aid based on average daily attendance rather than enrollment. Higher poverty, higher minority concentration districts tend to have lower attendance rates… thus reducing their aid compared to what they would receive if the aid were allocated based on the number of enrolled pupils.
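To see the arithmetic of that trick, here is a hypothetical sketch. The $6,000 foundation amount, the district sizes and the attendance rates are all made up for illustration:

```python
# Illustrative only: aid funded on average daily attendance (ADA)
# versus enrollment. The foundation amount is a hypothetical $6,000.
FOUNDATION_PER_PUPIL = 6000

def aid(enrollment, attendance_rate, basis):
    """Total aid under an 'enrollment' basis or an 'ada' basis."""
    pupils = enrollment if basis == "enrollment" else enrollment * attendance_rate
    return pupils * FOUNDATION_PER_PUPIL

# Two districts of equal enrollment; the higher-poverty district has
# 89% attendance, the affluent district 96%.
for name, rate in [("high-poverty", 0.89), ("affluent", 0.96)]:
    loss = aid(10000, rate, "enrollment") - aid(10000, rate, "ada")
    print(f"{name}: loses ${loss:,.0f} under ADA funding")
```

On these made-up numbers, the high-poverty district gives up $6.6 million while the affluent district gives up $2.4 million – the same statutory rule, a very different bite.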

There was a brief period in the late 1990s when three separate federal court challenges were filed against racially disparate state school finance systems – in Pennsylvania (Powell v. Ridge), in New York (AALDF v. State) and in Kansas (Robinson v. Kansas). These cases were cut off by a series of related decisions in the early 2000s (which basically said that an individual does not have a right to sue over racially disparate effects, because the language of “racially disparate” effects appears only in regulations and not in the Civil Rights statutes themselves).

Interestingly, around that time, the Illinois legislature countered with state legislation that prohibits policies having racially disparate effect (Illinois Civil Rights Act of 2003). And as it turns out, Illinois school districts are applying this law to challenge the Illinois school funding formula – a formula that is among the most racially disparate in the nation.

http://www.jenner.com/news/news_item.asp?id=15009924

So, what’s my point here? I’m doing a bit of rambling. Well, here’s my best shot at a synopsis.

1. Yes, it is important that the current administration explore the reasons for, and possible resolutions of, racially disparate effects in such areas as discipline referrals or special education placement rates (although they appear focused on the former, not the latter).

2. Yes, there exists the likelihood that local public school districts are engaged in practices that harm minority populations, ranging from the previously mentioned issues, to others such as highly tracked curricular offerings which result in significant within school and within district segregation, as well as attempts by some local boards of education to undo long running diversity plans.

3. But, I would argue, that while these issues are important, there are other at least equally important issues to be addressed – substantial racial disparities in funding and resulting programs and services – in some states far more than others. AND ILLINOIS IS ONE OF THOSE STATES!

4. And those racial disparities in funding are not entirely a result of not enough money to offset differences in local wealth, or neglect of the state aid formula (underfunding the formula) – WHICH IS BAD ENOUGH – but are largely a function of maintaining PORK in affluent communities at the expense of poor minority districts, and of Tricks of the Trade – or specific provisions in state school funding formulas that drive money to whiter districts and create or reinforce racial disparities.

READINGS

Green, P.C., Oluwole, J., & Baker, B.D. (2010). Getting their hands dirty: How Alabama’s public officials may have maintained separate and unequal education. West’s Education Law Reporter, 253(2), 503–520.

Green, P.C., Baker, B.D., & Oluwole, J. (2008). Obtaining racial equal educational opportunity through school finance litigation. Stanford Journal of Civil Rights and Civil Liberties, 4(2), 283–338.

Baker, B.D., & Green, P.C. (2005). Tricks of the Trade: Legislative actions in school finance that disadvantage minorities in the post-Brown era. American Journal of Education, 111(May), 372–413.

Baker, B.D., & Green, P.C. (2003). Commentary: The application of Section 1983 to school finance litigation. West’s Education Law Reporter, 173(3), 679–696.

Green, P.C., & Baker, B.D. (2002). Circumventing Rodriguez: Can plaintiffs use the Equal Protection Clause to challenge school finance disparities caused by inequitable state distribution policies? Texas Forum on Civil Liberties and Civil Rights, 7(2), 141–165.

Where’s the Pork? Mitigating the Damage of State Aid Cuts

This is a very long and complicated post, so I’ll give you a few take home points up front…

Equity Center Radio on School Finance Pork: http://216.246.105.5/Audio_Recordings/2011-0053_ECRadio_Bruce_Baker_02-18-2011.mp3

Take home points

  1. Crude assumptions promoted by the “new normal” pundit crowd – that across-the-board state aid cuts are a form of shared sacrifice – are misguided and dreadfully oversimplified.
  2. State aid cuts hurt some districts and the children they serve more than others, even when those aid cuts are “flat” across districts. Districts with greater capacity will readily recover their losses (and then some) with other revenue sources. Those who can’t are out of luck.
  3. State aid cuts made as a proportion of existing state aid are particularly bad because they take the most from the districts that rely most on that aid – those that need the most.
  4. Some might find it surprising that many state school finance formulas contain special provisions that allocate relatively large sums of aid to very affluent school districts – districts that could easily pay for the difference on their own and serve relatively low need student populations. One can readily identify hundreds of millions of dollars in New York State being allocated as aid to some of the nation’s wealthiest school districts.
  5. State legislators and Governors often protect this aid – which I refer to as school finance PORK – even while slashing away disproportionately at aid for the neediest districts.
  6. That’s just wrong!

Now for the lengthy post…

It’s budget proposal and state-of-the-state time of year right now. And Governors from both parties are laying out their state budget cuts, many refusing to consider any type of “revenue enhancements” (uh… tax increases). These include New York’s Governor Cuomo suggesting a 7% cut to state aid.

There exists a baffling degree of ignorance being spouted by pundits about school budget cuts. Stuff like – NY’s Governor cut school funding by 7%… Now, school districts need to figure out how to cut their spending by 7%! Everyone, 7% less, across the board! Shared sacrifice! It will make everyone better and more efficient in the long run (especially those high poverty districts that we know are least efficient of all)! Pundits argue – Cuts have to be made. Cuomo’s cuts are a perfect example of this reality (from a Dem. Governor). Everyone will be cut. Just suck it up and learn to deal with the New Normal.

Wrong – at so many levels it’s hard to even begin explaining reality to those pundits who’ve clearly never even taken the most basic course on public finance or public school finance and have absolutely no understanding of the interplay between “local” property tax revenues and state aid, or the process of school budget planning and adoption. More on this later.

Thankfully, most readers of my blog and many in the general public actually seem to understand this stuff better than the blowhards (bloghards and tweethards) leading the “new normal” campaign from their DC think tanks. Particularly astute are those families with children who have interest in the quality of their local public schools and live in states where local school district budget setting remains at least partially an open public budgeting process.

And there are a few good education writers out there who have developed a solid grip on the interplay between state and local revenues and the resulting effects of state aid cuts. Meghan Murphy, in the Hudson Valley in New York State, has done some particularly nice writing on the topic: http://www.recordonline.com/apps/pbcs.dll/article?AID=/20100426/NEWS/100429738

My reason for this post is to expand on a point made by Meghan Murphy in her writing on Hudson Valley Districts and by David Sciarra in this Education Week Article:

But David G. Sciarra, the executive director of the Education Law Center, argued that if states cut funding to school districts during this difficult financial period, the pain will be felt most by disadvantaged students. Impoverished districts have little local property-tax wealth to draw from, and so state aid is a lifeline, said Mr. Sciarra, whose Newark, N.J.-based group advocates for poor students and schools. He urged state officials to work cooperatively with districts in the years ahead to set budget priorities so that current inequities aren’t made worse.

State officials “have an obligation to look for better ways to spend money,” Mr. Sciarra said, but “they’ve got to be very careful in how they do this. Across-the-board cuts and freezes have a negative impact on schools in need.” Ideas about how to cut spending are often proposed at “30,000 feet,” he said, but officeholders need to “take a serious look at how [cuts] would play out in their state.”

Yes, that’s the point. Cuts don’t mean “cuts” or “across the board” uniform distribution, “shared sacrifice,” at least not as they are typically implemented. In general, cuts to state aid lead to increased inequity. They hurt some more than others. Some districts, in fact, have little problem overcoming cuts – rebalancing their budgets, while others, well, to put it simply – are screwed. In many states, those most screwed by aid cuts are those who’ve been most screwed all along.

The model underlying local school budgets is that local voters/citizens/parents/homeowners (not entirely overlapping groups) desire a certain level or quality of schooling, usually in tangible terms like class sizes or specific programs they wish to see in their schools. That is, the local voter may not be able to “guess” the per pupil expenditure of their district (nor is it particularly relevant if they can) and evaluate whether it’s enough, too much or too little, but the local voter can evaluate whether those per pupil dollars buy the programs, services and class sizes that voter wants to see in his/her local school district, and whether they are willing to add another dollar to that mix.

When the state cuts aid to local school districts the usual first local response is to figure out how to raise at least an equal sum of funding – plus additional funding to accommodate increased costs – so as to maintain the desired schooling (class sizes, programs, etc.). Few local voters seem to really want to cut back service quality. Depending on the state (or type of district within the state), district officials put together a budget requiring a specific amount of revenue – which in turn dictates the property tax rate required to raise that revenue – given the state aid allotted – and then the budget is approved by referendum – or other adoption (or back-up mediation) process.

Clearly some communities have much greater capacity than others to offset their state aid losses with additional local revenues!

For example, when New Jersey handed down state aid cuts to 2010-2011 school budgets and when – for the first time in a long time – the majority of local district budgets statewide failed to achieve approval from local voters, it was still the case that the vast majority (72%) of local budgets passed in affluent communities – in most cases raising sufficient local property tax resources to cover the state aid cuts. In another case, local residents in an affluent suburban community raised $420,000 privately to save full day kindergarten programs. Meghan Murphy’s analysis of Hudson Valley school districts shows that New York State districts also have attempted to counterbalance state aid cuts with property tax increases, but that the districts have widely varied capacity to pull this off. Parents in a Kansas district are suing in federal court requesting injunctive relief to allow them to raise their taxes for their schools (they use faulty logic and legal arguments, but their desire for better schools should be acknowledged!)

Distribution of State Sharing

There’s a bit of important background to cover here. State aid formulas drive state funding – usually from income and sales tax revenues collected to the state general fund – out to local public school districts based on a number of different factors. Typically these days, state funding formulas start with a calculation of the amount of money – state and local – that should be available at a minimum in order to provide adequate public schools. That is, each district is assigned a target amount of state and local funding. That target amount usually varies by the types of students served in a district and by other factors such as regional labor costs and remote, rural locations. In any case, each district ends up with a different estimate of total funding needs.

Next, the aid formula includes a calculation of the amount of that funding target that should be paid for with local property taxes – a local fair share, per se, or local contribution. One approach is to determine how much each district would raise if each district adopted a uniform property tax rate. For those districts that raise more than their target funding with that tax rate alone, the state would kick in nothing. For those districts that implement the local fair share tax rate and still come up short of their target funding, the state would apply aid to cover the difference.

So, for example, you might have three districts in a state, where:

  • The first district has a very low need student population (almost all from affluent, educated families), and has significant taxable property wealth. That district might have a target funding per pupil of $10,000, and might be able to raise all of it with a tax rate even lower than the local fair share rate. In fact, if they levied the local fair share tax rate, they might raise $20,000 per pupil. And the school finance system might allow them to do that.
  • A second, “middle class” district has a modest share of children in poverty, leading to an estimated target funding of $12,000 per pupil. The district adopts the local fair share tax rate and raises $8,000. The state allocates the additional $4,000.
  • And the third district, a high need district with weak property tax base, might end up with an estimated target funding of $15,000 per pupil and after adopting the local fair share tax rate only raises $2,000 per pupil, so the state allocates $13,000.

THAT’S THE BASIC STRUCTURE OF A ‘FOUNDATION AID’ FORMULA!!!!
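
The three-district arithmetic above can be sketched in a few lines of code (a toy model using the hypothetical per-pupil figures from the bullets, not real district data):

```python
def foundation_aid(target_per_pupil, local_fair_share_revenue):
    """State foundation aid = funding target minus local fair-share revenue,
    floored at zero (the state kicks in nothing when local revenue covers it)."""
    return max(target_per_pupil - local_fair_share_revenue, 0)

# Hypothetical districts from the example above (per-pupil dollars)
districts = {
    "affluent":     {"target": 10_000, "local": 20_000},
    "middle class": {"target": 12_000, "local": 8_000},
    "high need":    {"target": 15_000, "local": 2_000},
}

for name, d in districts.items():
    aid = foundation_aid(d["target"], d["local"])
    print(f"{name}: state aid = ${aid:,} per pupil")
# affluent: $0, middle class: $4,000, high need: $13,000
```

The key design feature is the floor at zero: the formula equalizes up to the target, but (absent the pork discussed later) it never claws back excess local capacity.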

Now, I should be very clear here that the relationship between student needs and tax base is not really that simple. Both parts of this puzzle are critical to the formula and don’t always move in concert. There exist districts with high value tax base and high need student population and vice versa, and for many reasons.

Here’s an example of the distribution of the basic “sharing ratio” for New York State school districts, with respect to the district income/wealth index (IWI). For the lowest income/wealth districts, the state share is about 90%. For wealthier districts, that ratio – in theory – drops to 0.

Figure 1: NY State Sharing Ratio by District Income/Wealth


Types of Cuts

When it comes to state aid cuts, there’s really nothing for the state to cut from the first district above – that is – unless for some reason the state is giving other money to that district that can raise double its need target with the same local tax rate and no state support. But that would be silly, right? More later on that. Here are the two most common approaches to handing out cuts:

Option 1: Cut state aid proportionately across the board

This is actually usually the worst option and most regressive. Let’s take our districts above. A 5% cut to state aid for our first district is, of course, nothing. A 5% cut for our middle class district is $200 per pupil. A 5% cut to state aid for our high need district is $650 per pupil. Yes – the biggest cut comes down on the highest need district.

This distribution of cuts is problematic for two reasons. First, the biggest cut falls on the neediest kids. Second, the biggest cut falls on the district with the least capacity to offset that cut. I’m assuming here that these districts have all adopted the local fair share property tax rate. That rate only raises $2,000 per pupil in revenue for the high need district whereas the same rate raises 4X as much in the middle class district. Clearly for this reason alone, even if the cuts were in equal amounts per pupil, the middle class district would have a much easier time replacing the aid cut with local resources. Further, there is a plethora of additional factors that increase the likelihood that the middle class district can offset the cut more easily than the high need, low-income district.

Option 2: Cut funding targets proportionately across the board

A better, though still problematic approach is for the state to recalculate the funding targets – reducing that level by just enough to result in the same state aid savings. Taking this approach leads to a constant per pupil cut in state aid across districts, for those districts receiving at least as much state aid per pupil as is being cut.

This too is of no consequence for our district that needs no foundation aid. Let’s say this approach leads to a $400 per pupil across the board reduction in target funding. If we still assume that districts are to raise the same amount of local revenue (local fair share of the fully funded target amount, as opposed to raising the local share of the lowered amount), this would result in a 10% aid cut to the middle class district ($400/$4000) and a 3.1% cut to the high need district. The per pupil target funding change would be the same. But the middle class district would certainly complain that their cut as percent of aid is much larger. But again, that district has 4X the local revenue raising capacity on a dollar per pupil basis, and in this case, they still got less than 4X the per pupil cut.

Even this approach is likely to lead to larger average budget reductions in higher need districts.
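
The two cut approaches can be compared side by side for the same hypothetical districts (a sketch using the 5% and $400 figures from the text, not real budget data):

```python
def foundation_aid(target, local):
    """State aid = funding target minus local fair-share revenue, floored at zero."""
    return max(target - local, 0)

districts = {
    "affluent":     {"target": 10_000, "local": 20_000},
    "middle class": {"target": 12_000, "local": 8_000},
    "high need":    {"target": 15_000, "local": 2_000},
}

# Option 1: cut each district's state aid by the same percentage (5%)
for name, d in districts.items():
    aid = foundation_aid(d["target"], d["local"])
    print(f"Option 1, {name}: cut = ${0.05 * aid:,.0f} per pupil")
# affluent $0, middle class $200, high need $650 -- biggest cut hits highest need

# Option 2: lower every funding target by a flat $400 per pupil,
# holding the local fair-share contribution constant
for name, d in districts.items():
    old_aid = foundation_aid(d["target"], d["local"])
    new_aid = foundation_aid(d["target"] - 400, d["local"])
    cut = old_aid - new_aid
    pct = cut / old_aid if old_aid else 0.0
    print(f"Option 2, {name}: cut = ${cut:,} per pupil ({pct:.1%} of aid)")
# middle class: $400 (10.0%), high need: $400 (3.1%)
```

Note how Option 2 flattens the dollar cut per pupil while the percentage cut falls hardest on the district with the most local capacity to absorb it.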

Again, there are tons of additional factors involved, and various ways that states might cut aid differently to yield different distributional effects across districts.

State Aid Formulas have Lots of Parts! Some better than others!

The state aid cut scenarios presented above assume a logical and oversimplified world of state aid and local district budgets in many ways. The cut scenarios presented above adopt one really big assumption about state aid to local schools – an assumption that may not be and usually isn’t entirely true:

That the only state aid available to cut is aid that is allocated in proportion to wealth and need across districts. That, because state aid is allocated in greater proportion – if not almost in its entirety – to poor and needy school districts, those districts must necessarily suffer most from the cuts! As a result, the best you can do is to spread those cuts evenly across wealthy and poor, higher and lower need districts and live with the fact that some will bounce back easier than others.

The fact is that state aid formulas may at their core be built on a seemingly logical foundation funding structure with state and local sharing as described above.  But rarely these days does a state aid formula make it into law without a multitude of adjustments and other PORK added on. Yes, Pork! School finance pork and lots of it. Clearly state reps from those towns that would otherwise get nothing from the general aid formula are going to search for a way to bring home some pork.

Here are a few examples of school finance pork from the New York State school finance formula:

1) Minimum Foundation Aid: Like many state funding formulas, even though the first (and most logical) iteration of calculations for estimating the district state share of funding would end up providing 0% state aid to many districts, those formulas include a floor of funding – a minimum guarantee of state aid. That’s right, even our district above that could raise double their target funding by applying the local fair share tax rate would get something. Perhaps this is a reasonable tradeoff when the money is there. But do we really want to keep allocating that money to a district that is a) fine on its own and b) could easily replace the money with modest increases to local taxes – and would do so?

Here, for example, is the effect of New York State’s minimum threshold factor on foundation aid. The red diamonds indicate the foundation state aid that would be received if districts got what is initially calculated to be their state share. State share hits 0 at an income/wealth index around 1.0 in the basic calculation. But, the actual calculation of state share includes a few adjustments shown in blue squares. First, between income/wealth ratios of about 1.0 to 2.0, the actual state share cuts the corner providing more gradually declining aid rather than going straight to 0. Then, above IWI of 2.0 it never hits 0, but rather levels off providing a minimum allotment of several hundred to about $1,000 per pupil to even the wealthiest districts in the state (which, by the way, are among the wealthiest in the nation!).

Figure 2: Application of Original Calculation and Adjusted Calculation for State Aid Share


Altogether, the adjustments – which also yield additional aid for New York City – add up to nearly $3.8 billion. That’s right… $3.8 billion (where’s that NY Mega Millions guy when you need him?). Now, assuming that it’s hard to get the sharing ratio correct for NYC to begin with and that it is a very high need district that in fact needs this aid, it’s really just over $2.0 billion in potential excess allocation that could be redistributed. That ain’t chump change.

But let’s go really conservative here, and just look at the minimum aid being shuffled out to the richest communities. That alone is still $134 million.

At the very least, if you’re going to cut state aid, cut this first! If you’re not going to cut, consider redistributing this.

2) School Tax Relief Aid: Many state aid formulas include a variety of other types of aid, some of which are distributed in flat amounts across all districts regardless of need, and some of which may even be allocated in inverse proportion to what most would consider needs – either local capacity related needs or educational programming and student needs. Such is the politics of school finance. For those really interested in this stuff, see the following two articles:

Baker, B.D., & Green, P.C. (2005). Tricks of the Trade: Legislative Actions in School Finance that Disadvantage Minorities in the Post-Brown Era. American Journal of Education, 111 (May), 372-413.

Baker, B.D., & Duncombe, W.D. (2004). Balancing District Needs and Student Needs: The Role of Economies of Scale Adjustments and Pupil Need Weights in School Finance Formulas. Journal of Education Finance, 29 (2), 97-124.

New York State’s pièce de résistance is a program called STAR, or the School Tax Relief program. In simple terms, STAR provides state aid in disproportionate amounts to wealthy communities to support property tax relief. Here’s the distribution of STAR aid with respect to district income/wealth ratios:

Figure 3: STAR Aid per pupil and District Income/Wealth Ratio


Excluding STAR aid to NYC, the aid program in 2008-09 provided $642 million in aid to districts with an income/wealth ratio over 1.0! Even in New York State, that’s not chump change. It’s over $150 million to the wealthiest districts. Add these hundreds of millions to those above and we’re getting somewhere.

Once again, at the very least, if you’re going to cut state aid, cut this first! If you’re not going to cut, consider redistributing this.

For more information on the equity consequences of STAR (as well as simulated solutions), see: http://eus.sagepub.com/content/40/1/36.abstract

A closer look at aid to the wealthy in New York State

Here’s a closer look at minimum foundation aid and STAR aid received by several of the state’s wealthier communities (at least statistically wealthier). Note that our recent report on school funding fairness (www.schoolfundingfairness.org) identified New York State (along with Illinois and Pennsylvania) as having one of the most regressively financed systems in the nation. On average, low poverty districts have greater state and local revenue than high poverty ones, yet the state is still allocating significant sums of aid to low poverty districts!

Table 1: Aid to the Wealthy in New York

This table shows that many of these wealthier communities are picking up millions in STAR aid and upwards of a thousand dollars per pupil in basic foundation aid. Yes, the state is subsidizing the spending – quite significantly – of some of the wealthiest districts in the nation, while maintaining a regressive system as a whole. And now, while cutting aid disproportionately from poor districts.

The distribution of proposed aid cuts

Here is the distribution of the Governor’s proposed cuts in foundation aid to NY State school districts, on a per pupil basis, with respect to Income/Wealth Ratios of school districts:

Figure 4: Governor’s proposed cuts to foundation aid and income/wealth ratios


The income/wealth ratio is along the horizontal axis. The per pupil cuts in aid are on the vertical axis. Here, I’ve represented district size by the size of the “bubble” in the graph. New York City is the bowling ball here! And NYC gets a larger per pupil cut than many much wealthier districts – wealthy districts that actually receive aid! Excuse me, PORK!

Yes, districts with very high income/wealth ratios will experience little cut at all. A cut of $100 per pupil or so may look large relative to their $1,000 per pupil in foundation aid, but some low wealth districts will actually see their foundation aid cut by $1,000 per pupil.

And with respect to the Pupil/Need Index:

Figure 5: Governor’s proposed cuts to foundation aid and pupil need index


In this figure, as student needs increase, per pupil cuts in aid increase. Yep, that’s right, districts that NY State itself identifies as having higher needs receive larger per pupil cuts in aid, even while the PORK… yes PORK is retained in the system! All the while, the state continues to allocate minimum foundation aid and STAR funding to wealthy districts.

Summing it up

Yes, it may suck for legislators or the Governor to tell Scarsdale, Pocantico Hills or Locust Valley that there’s just not enough aid to continue to subsidize their “tax relief” (PORK)  or to subsidize their very small class sizes and rich array of elective courses and special programs with add-ons such as minimum foundation aid. But aid programs for these districts are merely the bells and whistles – or PORK – of state school finance policies, the political tradeoffs which are often needed to get a formula passed. It’s not a politically easy task, but it’s time to collect that pork and redistribute it to where it’s actually needed.

Further, it is highly unlikely that these districts will actually go without those bells and whistles – that they will forgo their furs and Ferraris. Yes, we all know that school finance formulas are a complicated mix of political tradeoffs. The goal of this post is to make that painfully clear. Let’s call it what it is – school finance Pork. And let’s take that pork and make better use of it. And let’s make it absolutely clear that protecting the pork while slashing basic needs is entirely unacceptable.

There may not be enough pork in the system to either cover all of the proposed cuts, or to be redistributed to fully resolve the funding deficits of higher need districts. But in New York State and many others, there’s quite a bit – quite a bit of aid that could be either used to make the formula fairer to begin with or to buffer the neediest districts and children they serve from suffering the most harmful and real funding cuts.

Note: On Cuts and Caps

Now, some of what I discuss herein is complicated by the current bipartisan political preference to show affluent communities that we’re not going to push these costs off onto them in the form of property tax increases – or backhanded property tax increases through state aid cuts. Instead, we’ll tell them they simply can’t raise their property taxes to cover the difference. That’ll learn ’em!

Capping property taxes while cutting state aid is simply lying about the true cost of the programs and services desired by local citizens, who invariably have supported tax increases for their local schools and have reverted to major private giving when tax increases were not feasible. There are legitimate reasons to control local spending variation – a topic for another day – and states need to have such policy tools available. But slash-and-cap policies are generally shortsighted and ill-conceived.

Cutting and limiting property taxes merely shifts the burden in yet a different direction – increased fees, private fundraising, volunteering etc. All of which increase inequities in access to quality schooling.

Yeah, I know… all of the pundity types are saying that voters are fed up… they totally underestimate what’s being spent on their schools and they’re totally fed up with paying higher taxes. They don’t want any more of this school spending – bloated bureaucracy, etc. I might buy that if the local voter behavior in affluent communities with preferences for high quality schooling actually supported that argument. But it doesn’t!

If it doesn’t work, don’t do it! CAP’s ROI

The Center for American Progress released its new Return on Investment (ROI) Index for K-12 public school districts with more than 250 students this week. I should note in advance that I had the opportunity to provide advice on this project early on, and occasionally thereafter, and I do believe that at least some involved had and still have the best intentions in coming up with a useful way to represent the information at hand. I’ll get back to the validity of the information at hand in a moment.

First, I need to point out that the policy implications and proposals, and even the general findings presented in the report, cannot be supported by the analysis (however well or poorly done). The suggestion that billions of dollars might be saved nationally (little more than a back-of-the-napkin extrapolation) at no loss to performance outcomes, based on the models estimated, is a huge, unwarranted stretch and quite simply arrogant, ignorant and irresponsible.

The method used provides no reasonable basis for the claim that all low “rate of return” districts could simply replicate the behaviors of high “rate of return” districts and achieve the same or better outcomes at lower cost. The limitations of these methods, even when applied in their most rigorous and complete possible form – let alone in this woefully insufficient and incomplete form – simply do not allow for such extrapolation.

Further, given the crudeness of the models and adjustments used in the analysis, it is inappropriate to make too much, if anything, of supposed differences in the characteristics of districts with good versus bad rate of return indices. You can do some fun hunting and pecking through the maps, but that’s about it!

For example, one major finding is that districts with good ROIs spend less on administration. However, much more rigorous studies using better data and more appropriate models exploring precisely the same question have found the opposite. The CAP ROI methods are insufficient to draw any conclusion in this regard.

There is little basis in this analysis for the claim that states need to provide fairer funding – except that funding does appear to vary within states. But how funding varies is not explored. I like the idea of improving funding fairness, but a better basis for that argument can be found here: www.schoolfundingfairness.org.

And there is no basis for suggesting that fairer funding would be accomplished by student-based funding. Evidence to the contrary might be found here: http://epaa.asu.edu/ojs/article/view/5. This is report writing 101. Be sure that your policy implications follow logically from your findings, and that your findings are actually justified by your analysis.

It was also concluded that, on average, higher poverty districts are simply less efficient. When you estimate a model of this type – one that attempts to account for various cost factors outside the control of local school districts, without being sure you’ve accomplished the task – and you see a result like this, it is a rather basic step to ask yourself: Did I really control sufficiently for costs related to child poverty? Is this finding real? Or is it an indication of bias in my model? A failure to capture real, important differences in the characteristics of these districts?

That single bias – failure to fully account for poverty related costs – is pervasive throughout the entire CAP ROI analysis. There is a strong relationship between poverty and supposed inefficiency in most states in the analysis. That bias exists in states where the state has provided additional resources to higher poverty districts, making them higher spending on average, and that bias exists even in states where spending per pupil is systematically lower in higher poverty districts. And every map and scatterplot in the analysis must be viewed carefully with an understanding of the pervasive, uncontrolled bias against higher poverty districts. A bias that results largely from failure to fully account for cost variation.

Okay, now that I’ve said that rather bluntly, let’s walk through the three different ROI approaches, and what they are missing.

Basic ROI

In any of the ROI’s you’ve got two sides to the analysis. You’ve got the student outcome measures (which I’ll spend less time on), and you’ve got the per pupil spending measures. Within the per pupil spending measures, you’ve got cost adjustments for the “cost” of meeting student population needs and cost adjustments for addressing regional differences in competitive wages for school personnel. The Basic ROI uses an approach similar to that used by Education Week in Quality Counts as a basis for calculating “cost adjusted spending per pupil.”

Weighted Pupil Count = Enrollment + .4*Free Lunch Count + .4*ELL Count + 1.1*IEP Count

After using the weighted pupil count to generate a student need adjustment, CAP uses the NCES Comparable Wage Index to adjust for regional variation in wages. So, they try to adjust for student needs, using a series of arbitrary weights, and for regional wage variation.
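
Put together, the Basic ROI spending adjustment amounts to dividing total spending by the weighted pupil count and deflating by the wage index. A minimal sketch (the function name and the district figures are mine, purely illustrative):

```python
def basic_roi_adjusted_ppe(spending, enrollment, free_lunch, ell, iep, cwi):
    """Cost-adjusted per-pupil spending under the Ed Week-style weights:
    divide total spending by a need-weighted pupil count, then deflate
    by the regional Comparable Wage Index (CWI)."""
    weighted_pupils = enrollment + 0.4 * free_lunch + 0.4 * ell + 1.1 * iep
    return spending / weighted_pupils / cwi

# Illustrative district: $12M spending, 1,000 pupils, CWI of 1.1 (not real data)
ppe = basic_roi_adjusted_ppe(
    spending=12_000_000, enrollment=1_000,
    free_lunch=500, ell=100, iep=150, cwi=1.1,
)
print(f"${ppe:,.0f} per weighted pupil")
```

Everything here hinges on those hard-coded 0.4/0.4/1.1 weights, which is exactly where the trouble starts.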

The central problem with this approach is that it relies on setting rather arbitrary weights to account for the cost differences associated with poverty, ELL and special education. And in this case, CAP, like Ed Week, shot low – claiming those low weights to be grounded in research literature, but that claim is a stretch at best and closer to a complete misrepresentation. More below.

Adjusted ROI

For the adjusted ROI, CAP uses a regression equation which compares the actual spending of each district to the predicted spending of each district, given student population characteristics. Here’s their equation:

ln(CWI-adjusted PPE) = β0 + β1(% free lunch) + β2(% ELL) + β3(% special ed) + ε

Now, this method is a reasonable one for comparing how much districts spend, but has little or nothing to do with adjusting for the costs of achieving comparable educational outcomes – a true definition of cost. That is, one can use a spending regression model to determine if a state, on average spends more on high poverty than on low poverty districts. But this is a spending differential not a cost factor. It’s useful, and has meaning, but not the right meaning for this context. One would need to determine how much more or less needs to be spent in order to achieve comparable outcomes.

So, for example, using this approach it might be determined that within a state, higher poverty districts spend less on average than lower poverty districts. This negative or regressive poverty effect would become the cost adjustment. That is, it would be assumed that higher poverty districts have lower costs than lower poverty ones. NO. They have lower spending, but they still most likely have higher costs of achieving constant educational outcomes. Including outcomes and holding outcomes constant is the key – AND MISSING – step toward using this approach to adjust for costs.

Further, the overly simplistic equation above completely ignores significant factors that do affect cost differences and/or spending differences across districts, such as economies of scale and population sparsity as well as more fine grained differences in teacher wages needed to recruit or retain comparable teachers across districts of differing characteristics within the same labor market.
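
The mechanics of that spending regression can be sketched with ordinary least squares (a toy illustration with fabricated data; CAP’s actual estimation details may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
free_lunch = rng.uniform(0.0, 0.9, n)   # % free lunch
ell = rng.uniform(0.0, 0.3, n)          # % ELL
sped = rng.uniform(0.05, 0.2, n)        # % special ed
# Fabricated spending data: note this encodes a SPENDING pattern, not cost
ln_ppe = 9.2 + 0.3 * free_lunch + 0.2 * ell + 0.5 * sped + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), free_lunch, ell, sped])
beta, *_ = np.linalg.lstsq(X, ln_ppe, rcond=None)

# "Adjusted" spending is the residual: actual minus predicted log spending.
residual = ln_ppe - X @ beta
# The coefficients recover how spending varies with poverty -- which, as
# noted above, is a spending differential, not a cost adjustment.
print(beta)
```

If a state spends regressively, the fitted poverty coefficient is negative, and this approach would then treat high poverty as a cost *discount*: the regression faithfully describes spending while saying nothing about what it costs to achieve comparable outcomes.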

Predicted Efficiency

Finally, there’s the predicted efficiency regression equation, which attempts to generate a predicted achievement level based on a) cost adjusted per pupil spending and b) free lunch, ELL and special education shares. This one, like the others, doesn’t attempt to adjust for economies of scale or sparsity, and suffers from numerous potential problems in figuring out how and why each district’s actual performance differs from its predicted performance.

achievement = β0 + β1 ln(CWI-adjusted PPE) + β2(% free lunch) + β3(% ELL) + β4(% special ed) + ε

In this (dreadfully over-) simplified production function approach, any individual district’s actual outcomes could be much lower than predicted or much higher than predicted for any number of reasons. It would appear from scanning through the findings that this particular indicator is most biased with respect to poverty.
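
A minimal sketch of that production-function residual shows how an omitted poverty-related cost factor leaks into the “efficiency” reading (entirely fabricated data, built so that concentrated poverty imposes costs the model never sees):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
ln_ppe = rng.normal(9.3, 0.2, n)
free_lunch = rng.uniform(0.0, 0.9, n)
ell = rng.uniform(0.0, 0.3, n)
sped = rng.uniform(0.05, 0.2, n)

# Fabricated outcomes with an omitted cost factor: concentrated poverty
# (free lunch share above 60%) imposes extra costs the regression never sees.
concentrated = (free_lunch > 0.6).astype(float)
achievement = (40 + 5 * ln_ppe - 20 * free_lunch - 15 * concentrated
               - 5 * ell - 8 * sped + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), ln_ppe, free_lunch, ell, sped])
beta, *_ = np.linalg.lstsq(X, achievement, rcond=None)
residual = achievement - X @ beta  # read as "efficiency" in this approach

# The omitted cost factor shows up as apparent "inefficiency" in
# high-poverty districts, even though it is purely a modeling artifact.
print(residual[concentrated == 1].mean())  # systematically negative
```

The point of the sketch: nothing in the residual distinguishes genuine inefficiency from uncaptured cost, which is precisely the bias argued throughout this post.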

Summary of what’s missing or mis-specified

The table below summarizes the three ROI indices – or at least the “adjusted expenditure” side of those indices – with respect to what we know are the major cost factors that must be accounted for in any reasonable analysis of education spending data in relation to student outcomes. Here, the basic conception of cost, and cost difference is “what are the differences in cost toward achieving comparable outcome objectives?” Cost cannot be estimated without an outcome objective.

First, I would argue that the selected weights in the Basic ROI are simply too low, especially in certain parts of the country.

Second, none of the models address economies of scale. CAP notes this, but in a section of the report most will never read. Instead, we’ll all see the pretty maps that tell us that all of the rural districts in the upper Hudson Valley in NY State or in north Central Pennsylvania are really, really inefficient.

Third, recall that the “adjusted ROI” model really doesn’t control for cost at all, but rather for underlying spending variation, without respect for outcomes.

Table 1


Regarding pupil need weights in particular, there exists at least some literature – the most rigorous and direct literature on the question – which suggests the need for much higher weights than those used by CAP. For example, Duncombe and Yinger note that in two versions of their models:

Overall, this poverty weight ranges from 1.22 to 1.67 (x census poverty rate), the LEP weight ranges from 1.01 to 1.42, and the special education weight varies from 2.05 to 2.64.

Across several models produced in this particular paper, one might come to a rounded weight on Census poverty of about 1.5 or weight on subsidized lunch rates of about 1.0 (100% above average cost, or 2x average, more than double the CAP weight), a weight on limited English proficient students around 1.0 and on special education students over 2.0 (slightly less than double the CAP weight).
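
To see how much the weight choice alone matters, compare weighted pupil counts under the CAP/Ed Week weights and the higher research-based weights just described (illustrative numbers for a hypothetical high-poverty district, not real data):

```python
def weighted_count(enroll, free_lunch, ell, iep, w_pov, w_ell, w_iep):
    """Need-weighted pupil count: base enrollment plus weighted need counts."""
    return enroll + w_pov * free_lunch + w_ell * ell + w_iep * iep

# Hypothetical high-poverty district
enroll, fl, ell, iep = 1_000, 800, 200, 150
spending = 12_000_000

cap = weighted_count(enroll, fl, ell, iep, 0.4, 0.4, 1.1)       # CAP/Ed Week
research = weighted_count(enroll, fl, ell, iep, 1.0, 1.0, 2.0)  # higher, research-based

print(f"CAP weights:      ${spending / cap:,.0f} per weighted pupil")
print(f"Research weights: ${spending / research:,.0f} per weighted pupil")
```

With the higher weights, the same district’s adjusted spending per weighted pupil drops substantially, which is the mechanism behind the $4,000 to $5,000 per pupil swings for high-poverty Illinois districts discussed below.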

Other work by me, along with Lori Taylor and Arnold Vedlitz, done for the National Academies of Science, reviewing numerous studies also comes to higher average weights – using credible methods – for children in poverty.

While one can quibble over the selection of “cost” weights from literature, the bigger deal for me remains that the findings of the various ROIs reflect such a strong bias that any reasonable researcher would be obligated to explore further, and perhaps test out alternative research based weights as a way to reduce the bias. It’s a never-ending battle: when you’ve improved the distribution in one state, you’ve likely messed it up in another (because different patterns of poverty and distributions of ELL children lead to different appropriate weights in different settings – even within a state). And if it turns out to simply be unreasonable to identify a global method for estimating ROIs across school districts and across states, THEN STOP!!!!! DON’T DO IT!!!!! IT JUST DOESN’T WORK!!!!

Here is an example of how much a corrected cost adjustment might matter, when compared with the Basic ROI. The scatterplot below includes one set of dots (red triangles) which represent adjusted operating expenditures of Illinois school districts using the Basic ROI weights. The other set of dots (blue circles) uses a cost index derived from a more thorough statistical model of the costs of achieving statewide average outcomes for Illinois school districts. For the highest poverty districts, the adjusted spending figures drop by $4,000 to $5,000 per pupil when the more thorough cost adjustment method is used. This is substantial, and important, since the ROI is much more likely to identify these districts as inefficient and might be used by state policy makers to argue that cuts to these districts are appropriate (when they clearly are not).

Figure 1


How you specify models to identify efficient or inefficient districts matters, a lot!

Here’s one example of how this type of analysis can produce deceiving results, simply based on the “shape” of the line fit to the scatter of districts. Below is a scatterplot of the cost adjusted spending per pupil for Illinois school districts (unified K-12 districts) in 2008, and the proficiency rates (natural log) for those districts. In this case, I’m actually using much more fully cost adjusted spending levels, accounting for regional and more local wage variation, accounting for desired outcome levels, for poverty, language proficiency, racial composition and economies of scale. As a result, the graph actually shows a reasonable relationship between cost adjusted operating expenditures per pupil and actual outcomes. Spending – when appropriately adjusted – is related to outcomes.

Figure 2


Even then, it’s a bit hard to figure out what shape of “best fit” line/curve should go through this scatter. If I throw a straight line in there, and compare each district against the straight line, those districts below the line at the left hand side of the picture are identified as really inefficient – getting much lower outcomes than the trendline predicts. But if I were to fit a curve instead (I’ve simply drawn this one, for illustrative purposes), I might find that some districts previously identified as below the line are now above the line. Are they inefficient, or efficient? Who really knows, in this type of analysis!

My biggest problem with the CAP production function analysis is that they came to a result that is so strongly biased on the basis of poverty, and instead of questioning whether the model was simply biased – missing important factors related to poverty – they accepted it as truth, as a major finding, that higher poverty districts are less efficient. It is indeed possible that this is true, but the CAP analysis does not provide any compelling evidence to this effect.

Research literature on this stuff

Note that there is a relatively large literature on this stuff… on whether or not we can, with any degree of precision, classify the relative efficiency of schools or districts. There are believers and there are skeptics, but even among the believers and the skeptics, all are applying much more rigorous methods and refined models and more fully accounting for various cost factors than the present CAP analysis. Here are some worthwhile readings:

Robert Bifulco & William Duncombe (2000) Evaluating School Performance: Are we ready for prime time? In William Fowler (Ed) Developments in School Finance, 1999 – 2000. Washington, DC: National Center for Education Statistics, Office of Educational Research and Improvement.

Robert Bifulco & Stewart Bretschneider (2001) Estimating School Efficiency: A comparison of methods using simulated data. Economics of Education Review 20

Ruggiero, J. (2007) A comparison of DEA and Stochastic Frontier Model using panel data. International Transactions in Operational Research 14, 259-266

What, if anything, can we learn from those pretty maps and scatters?

Now, moving beyond all of my geeky technical quibbling, is there anything we actually can learn from the cool maps and scatters that CAP presents to us? First, and most important, any exploration of the data has to be undertaken with the understanding that all 3 ROIs suffer from a severe bias toward labeling high poverty urban districts as inefficient and affluent suburban districts as highly efficient (especially in Kansas!). But, with that in mind, one can find some interesting contrasts.

First, I think it would be useful for CAP to reframe and re-label their color schemes. Here’s my perspective on their scatters and color coding. The assumption with the ROI is that there exists an expected relationship between adjusted spending and student outcomes. That’s the diagonal line. Districts in the lower left and upper right are essentially where they are supposed to be. There is nothing particularly inefficient about being in the lower left or upper right. The use of orange to represent the lower left makes it seem as if the lower left is like the lower right. The lower left hand districts in the scatterplot, in theory, are those that got screwed on funding and have low outcomes. Arguably, the lower left hand quadrant of the scatterplots is where one should go looking for school districts wishing to sue their state over inequitable and inadequate funding. These districts aren’t to blame. They are getting what’s expected of them. They are getting slammed on funding and their kids are suffering the consequences – that is, if there really is any precision (a really, really suspect assumption) to these models.

Figure 3


Historically, Pennsylvania has operated one of the least equitable, most regressive state school finance formulas in the nation (www.schoolfundingfairness.org). Philadelphia has been one of the least well funded large poor urban core districts in the nation. Strangely, Pittsburgh has made out much better financially. Here’s what happens when we identify the locations of a few Pennsylvania school districts in the CAP ROI interactive tool. I’ve recreated the locations of 4 districts. The location of Philadelphia actually makes some sense on the basic ROI. Philly is royally screwed. Low funding and low outcomes. The implication of the orange shading seems problematic. But if we ponder the meaning of the lower left quadrant it all makes sense. Now, I’m not sure Pittsburgh is really overfunded and/or inefficient, as implied by being in the lower right quadrant – but at least relative to Philadelphia, it does make sense that Pittsburgh falls to the right of Philadelphia on the scatterplot. Lower Merion, an affluent high spending suburb of Philly seems to be in the right place too. I’m not sure, however, what to make of any of the districts, including affluent suburban Central Bucks, which fall in the upper left – the Superstars.

Figure 4

Because the various ROIs generally under-compensate for poverty related costs, if a district falls in that lower left hand quadrant, we can be pretty sure that the district is relatively underfunded as well as low performing. That is, the district shows up as underfunded even when we don’t fully adjust for costs. This is especially true for those districts that fall furthest into the lower left hand corner. The basic ROI is most useful in this regard, because you know what you’re getting (specific underlying weights). I’ve opened up the comments section so you all can help me identify those notable lower left quadrant districts!
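For what it’s worth, the quadrant sorting itself is trivial to implement. Here’s a sketch on made-up standardized data (the quadrant labels reflect my reading above, not CAP’s color scheme):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical standardized district data: outcomes loosely track adjusted spending
spend = rng.normal(0, 1, 40)
outcome = 0.6 * spend + rng.normal(0, 0.8, 40)

def quadrant(s, o):
    """Label a district relative to median spending and median outcome."""
    if s < np.median(spend):
        # Low funding AND low outcomes is an equity problem, not inefficiency
        return ("lower-left: underfunded, low outcomes" if o < np.median(outcome)
                else "upper-left: 'superstar'")
    return ("lower-right: well funded, low outcomes" if o < np.median(outcome)
            else "upper-right: well funded, high outcomes")

labels = [quadrant(s, o) for s, o in zip(spend, outcome)]
left = sum(l.startswith(("lower-left", "upper-left")) for l in labels)
```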

Finally

This type of analysis is an impossible task, especially across all states and dealing with vastly different student outcome data as well as widely varied cost structures. Only precise state by state analysis can yield more useful information of this type. A really important lesson one has to learn when working with data of this type is to realize when the original idea just doesn’t work. I’ve been there a lot myself, even trying this very activity on more than one occasion. There comes a point where you have to drop it and move on. Sometimes you just can’t make it do what you want it to. And sometimes what you want it to do is wrong to begin with. Releasing bad information can be very damaging, especially information of this type in the current political context.

But even more disconcerting, releasing bad data, acknowledging many of the relevant caveats, but then drawing bold and unsubstantiated conclusions that fuel the fire… that endorse slashing funds to high need districts and the children they serve – on a deeply flawed and biased empirical basis – is downright irresponsible.

References

Andrews, M., Duncombe, W., Yinger, J. (2002). Revisiting economies of size in American education: Are we any closer to consensus? Economics of Education Review, 21, 245-262.

Baker, B.D. (2005) The Emerging Shape of Educational Adequacy: From Theoretical Assumptions to Empirical Evidence. Journal of Education Finance 30 (3) 277-305

Baker, B.D., Taylor, L.L., Vedlitz, A. (2008) Adequacy Estimates and the Implications of Common Standards for the Cost of Instruction. National Research Council.

Duncombe, W. and Yinger, J.M. (2008) Measurement of Cost Differentials In H.F. Ladd & E. Fiske (eds) pp. 203-221. Handbook of Research in Education Finance and Policy. New York: Routledge.

Duncombe, W., Yinger, J. (2005) How Much more Does a Disadvantaged Student Cost? Economics of Education Review 24 (5) 513-532

Taylor, L. L., Glander, M. (2006). Documentation for the NCES Comparable Wage Index Data File (EFSC 2006-865). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

Unspinning Data on New Jersey Charter Schools

Today’s (okay…yesterday… I got caught up in a few other things) New Jersey headlines once again touted the supposed successes of New Jersey Charter Schools:

http://www.nj.com/news/index.ssf/2011/01/gov_christie_releases_study_sh.html

The Star Ledger reporters, among others, were essentially reiterating the information provided them by the New Jersey Department of Education. Here’s their story.

http://www.state.nj.us/education/news/2011/0118chart.htm

And here’s a choice quote from the press release:

“These charter schools are living proof that a firm dedication to students and a commitment to best education practices will result in high student achievement in some of New Jersey’s lowest-income areas,” said Carlos Perez, chief executive officer of the New Jersey Charter School Association. He pointed to NJASK data for third grade Language Arts, where more than half the charters outperformed the schools in their home districts, and of those, more than 75 percent were located in former Abbott districts.

No spin there. Right? Just a balanced summary of achievement data, with thoughtful interpretation of what they might actually mean. Not really.

There are many, many reasons why the comparisons released yesterday are deeply problematic, and well, quite honestly, pretty darn meaningless. I could not have said it better than Matt DiCarlo of Shanker Blog did here:

“Unfortunately, however, the analysis could barely pass muster if submitted by a student in one of the state’s high school math classes (charter or regular public).”

Here are some guidelines I have posted in the past, regarding appropriate ways to compare New Jersey Charter Schools to their host districts on various measures including outcome measures:

  1. When comparing across schools within a poor urban setting, compare on the basis of free lunch, not free or reduced lunch, so as to pick up variation across schools. The reduced lunch income threshold is too high to pick up variation.
  2. When comparing free lunch rates across schools, either compare against individual nearby schools, OR compare against district averages by GRADE LEVEL. Subsidized lunch rates decline in higher grade levels (for many reasons, to be discussed later). Most charter schools serve elementary and/or middle grades. As such, they should be compared to traditional public schools of the same grade levels. High school students bring district averages down.
  3. When comparing test score outcomes using NJ report card data, be sure to compare General Test Takers, not Total Test Takers. Total Test Takers include scores/pass rates for children with disabilities. But, as we have seen time and time again in the charts above, charters tend not to serve these students. Therefore, it is best to exclude the scores of these students from both the charter schools and traditional public schools.
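As a minimal sketch, here’s how rules 1 through 3 play out on entirely made-up school figures (no real NJ data here):

```python
# Hypothetical schools: (name, grade span, % free lunch, general-test-taker pass rate)
schools = [
    ("Charter A", "K-8",  0.55, 0.72),
    ("TPS 1",     "K-8",  0.78, 0.58),
    ("TPS 2",     "K-8",  0.81, 0.55),
    ("TPS HS",    "9-12", 0.60, 0.61),  # excluded by rule 2: different grade span
]

charter = schools[0]
# Rule 2: peer group = traditional schools serving the same grades
peers = [s for s in schools[1:] if s[1] == charter[1]]

# Rule 1 uses free lunch only; rule 3 uses general test takers only
mean = lambda xs: sum(xs) / len(xs)
gap_pass = charter[3] - mean([s[3] for s in peers])       # apparent advantage
gap_poverty = charter[2] - mean([s[2] for s in peers])    # demographic advantage
```

Note that the two gaps travel together: the charter’s apparent pass-rate advantage comes packaged with a much lower free lunch rate than its same-grade peers.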

Today’s (okay, yesterday – publication lag) primary violation involves #3 above, but also relates to the first two basic rules. Let’s do a quick walk through, using the 2009 data, because the 2010 school level school reports data are not yet posted on the NJDOE web site. The bottom line is that it is relatively meaningless to simply compare raw scores or proficiency rates of charter schools to host district schools – as done by NJDOE and the Star Ledger. That is, it is meaningless unless they actually serve similar student populations, which they do not.

Below, I walk through a few quick examples of student population differences in Newark, home to the state’s high-flying charter schools (North Star Academy and Robert Treat Academy). Next, I construct a statistical model of school performance including New Jersey Charter schools and traditional public schools in their host district, controlling for student demographics and location. I first used this same model here: Searching for Superguy in New Jersey. I use that model to show adjusted performance comparisons on a few of the tests, and then I use a variation of that model to test the proficiency rate difference – on average statewide – between charter schools and schools in the host district. Finally, I address one additional factor which I am unable to fully control for in the model – the fact that some New Jersey Charter Schools – high performing ones – seem to have unusually high rates of cohort attrition between grade 6 and 8, concurrent with rising test scores. I raise this point because pushing out of students is not an option available to traditional public schools. In fact, it is the traditional public schools that must take back those students pushed out.

Demographic Examples from Newark

Here are a few slides from previous posts on the demography of Newark Charter Schools in particular, compared to other Newark Public Schools. Here are the shares of kids who qualify for free lunch by school in Newark (city boundaries). Clearly, most of the charters fall toward the left hand side of the graph with far fewer of the lowest low-income children.

The shares of English Language Learners look similar if not more dramatic. Many NPS schools have very high rates of English Language Learners while few charters have even a modest share.

Finally, here’s a 4 year run of the most recent available special education classification rate data (more recent years of data have a dead link on the classification rates). This graph compares Essex County charter schools with Essex County public school districts. Charter schools invariably have low special education classification rates, except for those focused on serving children with disabilities.

 

One cannot reasonably ignore these differences when comparing performance outcomes of kids across schools. It’s just silly and not particularly useful.

The Outcomes Corrected for the Demographics

So then, what happens if we actually use some statistical adjustments to evaluate whether the charter schools outperform (on average proficiency rate) other schools in the same city on the same test? Well, I’ve done this for charter data from 2009 and previous years and will do it again for the 2010 data when available. I use variables available in the Fall Enrollment Files and from the School Report Card, and information on school location from the NCES Common Core of Data, in order to create a model of the expected scores for each charter school and each other school in the same city. In the model, I use only the performance of GENERAL TEST TAKERS, so as to exclude the scores of special education students (who, for the most part, don’t attend the charter schools). The model:

Outcome = f(Poverty, Race, Homelessness, City, Tested Grade, Subject)

I use the model to create a predicted performance level (proficiency rate) for each school, considering which grade level test we are looking at, in which subject, the race/ethnicity of the students (where Hispanic concentration is highly correlated with available ELL data, and Hispanic concentration data are more consistently reported), the share of students qualifying for free lunch, the percent identified as homeless, and the city of location for the school. That is, each charter school is effectively compared against only other schools in the same geographic context (city).

This is a CRUDE model, which can’t really account for other factors, such as the possibility that some charter schools actually shed, or push out, lower performing students over time.  More on that below. So, for each school, I get a predicted performance level – what that school is expected to achieve given the children it serves and the location. I can then compare the actual performance to the predicted performance to determine whether the school beats expectations or falls below expectations.
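For readers who want the mechanics, here’s a stripped-down sketch of this predicted-versus-actual approach on simulated data (all values below are made up; the real model also conditions on tested grade and subject, omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical school-level data (all names and values illustrative)
pct_free_lunch = rng.uniform(0.3, 0.9, n)
pct_hispanic = rng.uniform(0.0, 0.6, n)
pct_homeless = rng.uniform(0.0, 0.1, n)
city_newark = rng.integers(0, 2, n).astype(float)   # 1 = Newark, 0 = other host city

# Simulated proficiency: falls with poverty; noise stands in for school effects
prof = (0.9 - 0.5 * pct_free_lunch - 0.2 * pct_homeless
        - 0.05 * city_newark + rng.normal(0, 0.05, n))

# Outcome = f(Poverty, Race, Homelessness, City), fit by ordinary least squares
X = np.column_stack([np.ones(n), pct_free_lunch, pct_hispanic,
                     pct_homeless, city_newark])
beta, *_ = np.linalg.lstsq(X, prof, rcond=None)

predicted = X @ beta
residual = prof - predicted   # > 0: beats expectations; < 0: falls short
```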

The next two graphs provide a visual representation of schools beating the odds and schools under-performing with respect to expectations. Charters are identified in red and named. Blue circles are traditional public schools in the same district. Note that there are about the same number of charters beating expectations as there are falling short. The same is true for non-charters. On average, both groups appear to be about average.

8th Grade Math performance looks much like 4th grade. Charters are evenly split between “good” and “bad,” as are the traditional public schools in their host districts.

The Overall Charter Difference (Or Not?)

Now, the above graphs don’t directly test whether the average charter performance is better or worse than the average non-charter performance on the same test, same grade and in the same location. But, conducting that test (for these purposes) is as simple as adding into the statistical model an indicator of whether a school is a charter school. Doing so creates a simple (oversimplified, in fact) comparison of the average performance of charters to the average performance of non-charters in the same city (on the same test, in the same grade level), while “correcting” statistically for differences in the student population. I SHOULD POINT OUT THAT ONE CAN NEVER REALLY FULLY CORRECT FOR THOSE DIFFERENCES!

Using this oversimplified method, the analysis (statistical output) below shows that the charter average proficiency rate is about 3% higher than the non-charter average – BUT THAT DIFFERENCE IS NOT STATISTICALLY SIGNIFICANT. That is, there really isn’t any difference. THAT IS, THERE REALLY ISN’T ANY DIFFERENCE.
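Mechanically, that test is just the model above plus a charter dummy. A simulated sketch of my own (toy data generated with a true charter effect of zero) shows what a statistically insignificant coefficient looks like:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Hypothetical schools: poverty measure plus a charter indicator
pct_free_lunch = rng.uniform(0.3, 0.9, n)
charter = (rng.random(n) < 0.15).astype(float)

# Simulate a world with NO true charter effect after adjusting for poverty
prof = 0.9 - 0.5 * pct_free_lunch + rng.normal(0, 0.08, n)

X = np.column_stack([np.ones(n), pct_free_lunch, charter])
beta, *_ = np.linalg.lstsq(X, prof, rcond=None)

# Classical OLS standard error for the charter coefficient
resid = prof - X @ beta
sigma2 = float(resid @ resid) / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
t_charter = float(beta[2] / np.sqrt(cov[2, 2]))
# |t| below roughly 2 means the charter/non-charter gap is not significant
```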


Some Other Intervening Factors: Cohort Attrition, or Pushing Out

As I mentioned above, even the “tricky statistics” I used cannot sort out such things as a school that systematically dumps, or pushes out lower performing students, where those lower performing students end up back in the host district. Such an effect would simultaneously boost the charter performance and depress the host district performance (if enough kids were pushed back). I’ve written on this topic previously. So, I’ll reuse some of the older stuff – which isn’t really that old (last Fall).

In this figure, we can see that for the 2009 8th graders, North Star began with 122 5th graders and ended with 101 in 8th. The subsequent cohort also began with 122, and ended with 104. These are sizable attrition rates. Robert Treat, on the other hand, maintains cohorts of about 50 students – non-representative cohorts indeed – but without the same degree of attrition as North Star. Now, a school could maintain cohort size even with attrition if that school were to fill vacant slots with newly lotteried-in students. This, however, is risky to the performance status of the school, if performance status is the main selling point.

Here, I take two 8th grade cohorts and trace them backwards. I focus on General Test Takers only, and use the ASK Math assessment data in this case. Quick note about those data – Scores across all schools tend to drop in 7th grade due to cut-score placement (not because kids get dumber in 7th grade and wise up again in 8th). The top section of the table looks at the failure rates and number of test takers for the 6th grade in 2005-06, 7th in 2006-07 and 8th in 2007-08. Over this time period, North Star drops 38% of its general test takers. And, cuts the already low failure rate from nearly 12% to 0%. Greater Newark also drops over 30% of test takers in the cohort, and reaps significant reductions in failures (partially proficient) in the process.

The bottom half of the table shows the next cohort in sequence. For this cohort, North Star sheds 21% of test takers between grade 6 and 8, and cuts failure rates nearly in half  – starting low to begin with (starting low in the previous grade level, 5th grade, the entry year for the school). Gray and Greater Newark also shed significant numbers of students and Greater Newark in particular sees significant reductions in share of non(uh… partially)proficient students.
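The attrition arithmetic itself is simple enough to verify from the cohort counts quoted earlier:

```python
# Cohort attrition: share of the entering cohort gone by the later grade
def attrition(start_count, end_count):
    return (start_count - end_count) / start_count

# North Star enrollment counts quoted above (grade 5 entry through grade 8)
print(round(attrition(122, 101), 3))  # 2009 8th-grade cohort
print(round(attrition(122, 104), 3))  # subsequent cohort
```

That is roughly 17% and 15% of each entering cohort gone by 8th grade, before even looking at who leaves.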

My point here is not that these are bad schools, or that they are necessarily engaging in any particular immoral or unethical activity. But rather, that a significant portion of the apparent success of schools like North Star is a) attributable to the demographically different population they serve to begin with and b) attributable to the patterns of student attrition that occur within cohorts over time.

Understanding Differing Perspectives

Some will say, why should I care if charters are producing higher outcomes with similar kids? What matters to me is that they are producing higher outcomes! Anyone who produces higher outcomes in Newark or Trenton should be applauded, no matter how they do it. It’s one more high performing school where there wasn’t one previously.

It is important to understand that comparisons of student outcomes that ignore differences in student populations reward – in the public eye – those schools that manage to find a way to serve more advantaged populations, either by achieving a non-representative initial lottery pool or by selective attrition. As a result, there is a disincentive for charter operators to actually make greater effort to serve higher need populations – the ones who really need it! And there are many out there who see this as their real mission. Those charter operators who do try to serve more ELL children, more children in severe multi-generational poverty, and children with disabilities often find themselves answering tough questions from their boards of directors and the media regarding why they can’t produce the same test scores as the high-flying charter on the other side of town. These are not good incentives from a public policy perspective. They are good for the few, not the whole.

Further, one’s perspective on this point varies depending on whether one is a parent looking for options for his/her own child, or a policymaker looking for “scalable” policy options for improving educational opportunities for children statewide. From a parent (or child) perspective, one is relatively unconcerned whether the positive school effect is a function of selectivity of peer group and attrition, so long as there is a positive effect. But, from a public policy perspective, the “charter model” is only useful if the majority of positive effects are not due to peer group selectivity and attrition, but rather to the efficacy and transferability of the educational models, programs and strategies. Given the uncommon student populations served by many Newark charters and even more uncommon attrition patterns among some… not to mention the grossly insufficient data… we simply have no way of knowing whether these schools can provide insights for scalable reforms.

As they presently operate, however, many of the standout schools do not represent scalable reforms. And on average, New Jersey charters are still… just… average.

Understanding Education Costs versus “Inflation”

We often see pundits arguing that education spending has doubled over a 30 year period, when adjusted for inflation, and we’ve gotten nothing for it. We’ve got modest growth in NAEP scores and huge growth in spending. And those international comparisons… wow!

The assertion is therefore that our public education system is less cost-effective now than it was 30 years ago. But this assumption is based on layers of flawed reasoning, on both sides of the equation.

Here’s a bit of School Finance 101 on this topic:

First, what are the two sides of the equation, or at least the two parts of the fraction? The numerator here is education spending and how we measure it now compared to previously. The major flaw in the usual reasoning is that we are making our comparison of the education dollar now to then by simply adjusting the value of that dollar for the average changes in the prices of goods purchased by a typical consumer (food, fuel, etc.), or the Consumer Price Index.

Unfortunately, the consumer price index is relatively unhelpful (okay, useless) for comparing current education spending to past education spending, unless we are considering how many loaves of bread or gallons of gas can be purchased with the education dollar.

If we wanted to maintain constant quality education over time, the main thing we’d have to do is maintain a constant quality workforce in schools – mainly a teacher workforce, but also administrators, etc. At the very least, if quality lagged behind we’d have to be able to offset the quality losses with additional workers, but the trade-offs are hard to estimate.

The quality of the teacher workforce is influenced much more by the competitiveness of the wages for teachers, compared to other professions, than to changes in the price of a loaf of bread or gallon of gas. If we want to get good teachers, teaching must be perceived as a desirable profession with a competitive wage. That is, to maintain teacher quality we must maintain the competitiveness of teacher wages (which we have not over time) and to improve teacher quality, we must make teacher wages (or working conditions) more competitive. On average, non-teacher wage growth has far outpaced the CPI over time and on average, teacher wages have lagged behind non-teacher wages, even in New Jersey!
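A toy calculation makes the deflator point concrete. The spending figures and wage index below are purely hypothetical; the CPI-U levels are roughly the published annual averages:

```python
# Per-pupil spending in current dollars (hypothetical), deflated two ways
nominal_spend = {1980: 2_500, 2010: 10_000}
cpi_u = {1980: 82.4, 2010: 218.1}        # approximate CPI-U annual averages
wage_index = {1980: 100.0, 2010: 320.0}  # hypothetical professional-wage index

def real_growth(spend, deflator, base=1980, end=2010):
    """Growth in deflated spending between the base and end years."""
    return (spend[end] / deflator[end]) / (spend[base] / deflator[base]) - 1

cpi_growth = real_growth(nominal_spend, cpi_u)        # looks like big real growth
wage_growth = real_growth(nominal_spend, wage_index)  # far more modest
print(round(cpi_growth, 2), round(wage_growth, 2))
```

Same nominal dollars, very different story: deflating by consumer prices makes the growth look enormous, while deflating by what it costs to stay competitive for professional labor shrinks it considerably.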

Now to the denominator, or the outcomes of our education system. First of all, if we allow for a decline in the quality of the key input – teachers – we can expect a decline in the outcomes however we choose to measure them. But, it is also important to understand that if we wish to achieve either higher outcomes, or to achieve a broader array of outcomes, or to achieve higher outcomes in key areas without sacrificing the broader array of outcomes, costs will rise. In really simple terms, the cost of doing more is more, not less. And yes, a substantial body of rigorous peer-reviewed empirical literature supports this contention (a few examples below).

So, as we ask our schools to accomplish more we can expect the costs of those accomplishments to be greater. If we expect our children to compete in a 21st century economy, develop technology skills and still have access to physical education and arts, it will likely cost more, not less, than achieving the skills of 1970. But, we must also make sure we are adequately measuring the full range of outcomes we expect schools to accomplish. If we are expecting schools to produce engaged civic participants, we may or may not see the measured effects in elementary reading and math test scores.

An additional factor that affects the costs of achieving educational outcomes is the student inputs – or who is showing up at the schoolhouse door (or logging in to the virtual school). A substantial body of research (see chapter by Duncombe and Yinger, here) explains how child poverty, limited English proficiency, unplanned mobility and even school racial composition may influence the costs of achieving any given level of student outcomes. Differences in the ways children are sorted across districts and schools create large differences in the costs of achieving comparable outcomes and so too do changes in the overall demography of the student population over time. Escalating poverty, and mobility induced by housing disruptions, increased numbers of children not speaking English proficiently all lead to increases of the cost of achieving even the same level of outcomes achieved in prior years. This is not an excuse. It’s reality. It costs more to achieve the same outcomes with some students than with others.

In short, the “cost” of education rises as a function of at least 3 major factors:

  1. Changes in the incoming student populations over time
  2. Changes in the desired outcomes for those students, including more rigorous core content area goals or increased breadth of outcome goals
  3. Changes in the competitive wage of the desired quality of school personnel

And the interaction of all three of these! For example, changing student populations may make teaching more difficult (a working condition), meaning that a higher wage might be required simply to offset this change. Increasing the complexity of outcome goals might require a more skilled teaching workforce, requiring higher wages.

The combination of these forces often leads to an increase in education spending that far outpaces the consumer price index, and it should. Costs rise as we ask more of our schools, as we ask them to produce a citizenry that can compete in the future rather than the past. Costs rise as the student population inputs to our public schooling system change over time. Increased poverty, language barriers and other factors make even the current outcomes more costly to achieve. And the costs of maintaining the quality of the teacher workforce change as competitive wages in other occupations and industries change, which they have.

Typically, state school finance systems have not kept up with the true increased costs of maintaining teacher quality, increased outcome demands or changing student demography. Nor have states sufficiently targeted resources to districts facing the highest costs of achieving desired outcomes. See www.schoolfundingfairness.org. And many states, with significantly changing demography including Arizona, California and Colorado have merely maintained or even cut current spending levels for decades (despite what would be increased costs of even maintaining current outcome levels).

Evaluating education spending solely on the basis of changes in the price of a loaf of bread and/or gallon of gasoline is, well, silly.

Notably, we may identify new “efficiencies” that allow us to produce comparable outcomes, with comparable kids at lower cost. We may find some of those efficiencies through existing variation across schools and districts, or through new experimentation. But it is downright foolish to pretend that those efficiencies are simply out there (even if we can’t see them, or don’t know them) and we can simply squeeze the current system into achieving comparable or better outcomes at lower cost.

Readings

Baker, B.D., Taylor, L., Vedlitz, A. (2008) Adequacy Estimates and the Implications of Common Standards for the Cost of Instruction. National Research Council.  http://www7.nationalacademies.org/CFE/Taylor%20Paper.pdf

Duncombe, W., Lukemeyer, A., Yinger, J. (2006) The No Child Left Behind Act: Have Federal Funds been Left Behind? http://cpr.maxwell.syr.edu/efap/Publications/costing_out.pdf

This second one is a really fun article showing the vast differences in the costs of achieving NCLB proficiency targets in two neighboring states which happen to have very different testing standards. In really simple terms, Missouri has a hard test with low proficiency rates and Kansas an easy test with high proficiency rates. The authors show the cost implications of achieving the lower versus higher tested achievement standards.

Thinking through cost-benefit analysis and layoff policies


If you’re running a school district or a private school and you are deciding on what to keep in your budget and what to discard, you are making trade-offs. You are making trade-offs as to whether you want to spend money on X or on Y, or perhaps a more complicated mix of many options. How you come to your decision depends on a number of factors:

  1. The cost – the total costs of the various ingredients that go into providing X and providing Y. That is, how many people, at what salary and benefits, how much space at what overhead cost (per time used) and how much stuff (materials, supplies and equipment) and at what market prices?
  2. The benefits – the potential dollar return to doing X versus doing Y. For example, how much dollar savings might be generated in operating cost savings from reorganizing our staffing and use of space, if we spend up front (capital expenses) to reorganize and consolidate our elementary schools where they have become significantly imbalanced over time?
  3. The effects – the relative effectiveness of doing X versus doing Y. For example, in the simplest case, if we are choosing between two reading programs, what are the reading achievement gains, or effects, from each program? Or, more pertinent to the current conversation (but far more complex to estimate), what are the relative effects of reducing class size by 2 students when compared to keeping a “high quality” teacher?
  4. The utility – The utility of each option refers to the extent that the option in question addresses a preferred outcome goal. Utility is about preferences, or tastes. For example, in the current accountability context, one might be pressured to place greater “utility” on improving math or reading outcomes in grades 3 through 8. If the costs of a preferred program are comparable to the costs of a less preferred program… well… the preferred program wins. There are many ways to determine what’s “preferred,” and more often than not, public input plays a key role especially in smaller, more affluent suburban school districts. As noted above, federal and state policy have played a significant role in defining utility in the past decade (and arguably, distorting resource allocation to a point of significant imbalance in resource-constrained districts)

This basic cost analysis framework, laid out by Henry Levin back in 1983 and revisited by Levin and McEwan since, should provide the basis for important trade-off decisions in school budgeting and should provide the conceptual basis for arguments like those made by Petrilli and Roza in their recent policy brief. But such a framework is noticeably absent, likely because most of the proposals made by Petrilli and Roza:

  1. are not sufficiently precise to apply such a framework, largely because little is known about the likely outcomes (which may in fact be quite harmful); and
  2. have failed entirely to consider in detail the related costs of the proposed options, especially the up-front costs of many of them (like school reorganization or developing teacher evaluation systems). Note that the full-length book (from which the brief comes) is no more thoughtful or rigorous.

Back of the Napkin Application to Layoff Options

Allow me to provide a back-of-the-napkin example of some of the pieces that might go into determining the savings and/or benefits from the BIG suggestion made by Petrilli and Roza – which is to use quality based layoffs in place of seniority based layoffs when cutting budgets. This one would seem to be a no-brainer. Clearly, if we lay off based on quality, we’ll have better teachers left (greater effectiveness) and we’ll have saved a ton of money or a ton of teachers. That is, if we are determined to lay off X teachers, it will save more money to lay off more senior, more expensive teachers than to lay off novice teachers. However, that’s not the likely what-if scenario. More likely is that we are faced with cutting X% of our staffing budget, so the difference will be in the number of teachers we need to lay off in order to achieve that X%, and the benefit difference might be measured in terms of the change in average class size resulting from laying off teachers by “quality” measures versus laying off teachers by seniority.

Let’s lay out some of the pieces of this cost benefit analysis to show its complexity.

First of all, let’s consider how to evaluate the distribution of the different layoff policies.

Option 1 – Layoffs based on seniority

This one is relatively easy and involves starting from the bottom in terms of experience and laying off as many junior teachers as necessary to achieve 5% savings to our staffing budget.

Option 2 – Layoffs based on quality

Here’s the tricky part. Budget cuts and layoffs are here and now. Most districts do not have in place rigorous teacher evaluation systems that would allow them to make high stakes decisions based on teacher quality metrics. AND, existing teacher quality metrics where they do exist (NY, DC, LA) are very problematic. So, on the one hand, if districts rush to immediately implement “quality” based layoffs, districts will likely revert to relying heavily on some form of student test score driven teacher effectiveness rating, modeled crudely (like the LA Times model).  Recall that even in better models of this type, we are looking at a 35% chance of identifying an average teacher as “bad” and 20% chance of identifying a good teacher as “bad.”

In general, the good and bad value-added ratings fall somewhat randomly across the experience distribution. So, for simplicity in this example, I will assume that quality based firings are essentially random. That is, they would result in dismissals randomly distributed across the experience range. Arguably, value-added based layoffs are little more than random, given that a) there is huge year to year error even when comparing on the same test and b) there are huge differences when rating teachers using one test, versus using another.
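A quick simulation illustrates why noisy ratings behave almost like a lottery. This is my own toy model, not any of the cited studies: each teacher has a stable “true” effect, a single year’s value-added rating adds substantial noise (the noise share below is an assumption), and the bottom 20% of ratings get flagged as “bad.”

```python
import random

random.seed(0)
N = 10_000          # simulated teachers
NOISE_SHARE = 0.65  # assumption: most one-year rating variance is noise

# Standardized "true" effect plus one year's estimation noise.
true = [random.gauss(0, (1 - NOISE_SHARE) ** 0.5) for _ in range(N)]
rating = [t + random.gauss(0, NOISE_SHARE ** 0.5) for t in true]

cutoff = sorted(rating)[int(0.2 * N)]   # bottom 20% of ratings flagged "bad"
flagged = [r < cutoff for r in rating]

# Share of genuinely above-average teachers who get flagged anyway.
hits = [f for f, t in zip(flagged, true) if t > 0]
print(f"above-average teachers flagged 'bad': {sum(hits) / len(hits):.1%}")
```

The exact share depends on the assumed noise share, but the point stands: a nontrivial fraction of above-average teachers land in the “bad” bin in any given year, which is why treating such layoffs as roughly random is a reasonable simplification.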

Testing this out with Newark Public Schools – Elementary Classroom Teachers 2009-10

At the very least, one would think that randomly firing our way to a 5% personnel budget cut would create a huge difference when compared to firing our way to a 5% personnel budget cut by eliminating the newest and cheapest teachers. I’m going to run these numbers using salaries only, for illustrative purposes (one can make many fun arguments about how to parse out fixed vs. variable benefits costs, or deferred benefits vs. short run cost differences for pensions and deferred sick pay, etc.).

We start with just over 1,000 elementary classroom teachers in Newark Public Schools, and assume an average class size of 25 for simplicity. The number of teachers is real (at least according to state data) but the class sizes are artificially simplified. We are also assuming all students and classroom space to be interchangeable. A 5% cut is about $3.7 million. Let’s assume we’ve already done our best to cut elsewhere in the district budget, perhaps more than 5% across other areas, but we are left with the painful reality of cutting 5% from core classroom teachers in grades K-8. In any case, we’re hoping for some dramatic savings here – or at least benefits revealed in terms of keeping class sizes in check.

Figure 1: Staffing Cut Scenarios for Newark Public Schools using 2009-10 Data

If we lay off only the least experienced teachers to achieve the 5% cut, we lay off only teachers with 3 or fewer years of experience when using the Newark data. The average experience of those laid off is 1.8 years. And we end up laying off 72 teachers (a sucky reality no matter how you cut it).

If we use a random number generator to determine layoffs (really, a small difference from using Value-added modeling), we end up laying off only 54 teachers instead of 72. We save 18 teachers, or 1.7% of our elementary classroom teacher workforce.

What’s the class size effect of saving these 18 teachers? Well, under the seniority based layoff policy, class size rises from 25 to 26.86. Under the random layoff policy, class size rises from 25 to 26.37. That is, class size is affected by about half a student per class. This may be important, but it still seems like a relatively small effect for a BIG policy change. This option necessarily assumes no downside to the random loss of experienced teachers. Of course, the argument is that more of those classes now have a good teacher in front of them. But again, doing this here and now with the type of information available means relying not even on the “best” of teacher effectiveness models, but relying on expedited, particularly sloppy, not thoroughly vetted models. I would have continued concerns even with richer models, like those explored in the recent Gates/Kane report, which still prove insufficient.
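The two layoff rules can be sketched in a few lines. The salary schedule and experience distribution below are hypothetical stand-ins, not the actual Newark salary data, but the mechanics match: cut the same dollar amount under each rule, then compare head counts and class sizes.

```python
import random

random.seed(1)
STUDENTS = 25_000  # ~1,000 teachers at an average class size of 25

# Hypothetical staff list: salary rises with experience (not Newark's schedule).
teachers = [{"exp": e, "salary": 45_000 + 2_000 * e}
            for e in [random.randint(0, 25) for _ in range(1_000)]]
payroll = sum(t["salary"] for t in teachers)
target = 0.05 * payroll  # cut 5% of the salary budget

def layoffs(staff):
    """Lay off teachers in the given order until the savings target is met."""
    saved, cut = 0, 0
    for t in staff:
        if saved >= target:
            break
        saved += t["salary"]
        cut += 1
    return cut

seniority_cut = layoffs(sorted(teachers, key=lambda t: t["exp"]))  # juniors first
random_cut = layoffs(random.sample(teachers, len(teachers)))       # ~"quality" based

for label, cut in [("seniority", seniority_cut), ("random", random_cut)]:
    print(f"{label}: {cut} laid off, class size {STUDENTS / (1_000 - cut):.2f}")
```

With these made-up salaries, the seniority rule lays off noticeably more teachers than the random rule to hit the same 5% target, yet the class-size difference between the two works out to well under a student per class – the same order of magnitude as the actual Newark figures discussed above.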

Perhaps most importantly, how does this new policy affect the future teacher workforce in Newark – the desirability for up-and-coming teachers to pursue a teaching career in Newark, where their career might be cut off at any point, by random statistical error? And how does that tradeoff balance with a net difference of about half a student per classroom?

What about other costs?

Petrilli and Roza, among others, ignore entirely any potential downside to the teacher workforce – those who might choose to enter that workforce if school districts or states all of a sudden decide to rely heavily on error prone and biased measures of teacher effectiveness to implement layoff policies. This downside might be counterbalanced by increased salaries, on average and especially on the front end. That is, to achieve equal incoming teacher quality over time, given the new uncertainty, might require higher front end salaries. This cost is ignored entirely (or simply assumed to come from somewhere else, like cutting benefits… simply negating step increments, or supplements for master’s degrees, each of which has other unmeasured consequences).

I have assumed above that districts would rely heavily on available student testing data, creating error-prone, largely random layoffs, while ignoring the cost of applying the evaluation system to achieve the layoffs. Arguably, even contracting an outside statistician to run the models and identify the teachers to be laid off would cost another $50,000 to $75,000, leading to the reduction of at least one more teacher position under the “quality based” layoff model.

And then there are the legal costs of fighting the due process claims that the dismissals were arbitrary and the potential legal claims over racially disparate firings. Forthcoming law review article to be posted soon.

Alternatively, developing a more rigorous teacher evaluation system that might more legitimately guide layoff policies requires significant up-front costs, ignored entirely in the current overly simplistic, misguided rhetoric.

How can we implement quality based layoffs when we’re supposed to be laying off teachers NOT teaching math and reading in elementary grades?

Here’s another issue that Petrilli, Roza and others seem to totally ignore. They argue that we must a) dismiss teachers based on quality and b) must make sure we don’t compromise class sizes in core instructional areas, like reading and math in the elementary grades.

Let’s ponder this for a moment. The only teachers to whom we can readily assign (albeit deeply flawed) effectiveness ratings are those teaching math and reading between grades 3 and 8. So, the only teachers who we could conceivably layoff based on preferred “reformy” quality metrics are teachers who are directly responsible for teaching math and reading between grades 3 and 8.

That is, in order to implement quality based layoffs, as reformers suggest, we must be laying off math and reading teachers between grades 3 and 8, except that we are supposed to be laying off other teachers, not those teachers. WOW… didn’t think that one through very well… did they?

Am I saying seniority layoffs are great?

No. Clearly seniority layoffs are imperfect and arguably there is no perfect answer to layoff policies. Layoffs suck and sometimes that sucky option has to be implemented. Sometimes that sucky option has to be implemented with a blunt and convenient instrument, one that is easily defined, such as years of service. It is foolish to argue that teaching is the only profession where those who’ve been around for a while – those who’ve done their time – have greater protection when the axe comes down. Might I suggest that paying one’s dues even plays a significant role in many private sector jobs? Really? And it is equally foolish to argue that every other profession EXCEPT TEACHING necessarily makes precise quality decisions regarding employees when that axe comes down.

The tradeoff being made in this case is NOT between “keeping quality teachers” and “keeping old, dead wood,” as Petrilli, Roza and others would argue, but rather between laying off teachers on the unfortunately crude basis of seniority only, and laying off teachers on a marginally-better-than-random, roll-of-the-dice basis. I would argue the latter may actually be more problematic for the future quality of the teaching workforce! Yes, pundits seem to think that destabilizing the teaching workforce can only make it better. How could it possibly get worse, they argue? Substantially increasing the uncertainty of career earnings for teachers can certainly make it worse.

Bad Teachers Hurt Kids, but Salary Cuts Have no Down Side?

The assumption constantly thrown around in these policy briefs is that putting a bad teacher in front of the kids is the worst possible thing you could do. We have to fire those teachers. They are bad for kids. They hurt kids.

But, the same pundits argue that we should cut pay for the teachers in any number of ways (including paying for benefits) and subject teachers to layoff policies that are little more than random. Since so many teachers are bad teachers – and simply bad people – these policies are, of course, not offensive. Right? Kids good. Teachers bad. Treat kids well. Take it out on teachers. No harm to kids. Easy!

I’m having a hard time swallowing that. That’s just not a reasonable way to treat a workforce (if you want a good workforce), no less a reasonable way to treat a workforce charged with educating children. In fact, it’s bad for the kids, and just plain ignorant to assert that one can treat the teachers badly, lower their pay, morale and ultimately the quality of the teacher workforce and expect there to be no downside for the kids.

Petrilli and Roza make the assumption that there are big savings to be found from cutting teacher salaries directly and also indirectly by passing along benefits costs to teachers. That’s a salary cut! Or at least a cut to the total compensation package – and it’s a package deal! This argument seems to be coupled with an assumption that there is absolutely no loss of benefit or effectiveness from pursuing this cost-cutting approach (because we’ll be firing all of the sucky teachers anyway). That is, teacher quality will remain constant even if teacher salaries are cut substantially. A substantial body of research questions that assumption:

  • Murnane and Olsen (1989) find that salaries affect the decision to enter teaching and the duration of the teaching career;
  • Figlio (1997, 2002) and Ferguson (1991) find that higher salaries are associated with better qualified teachers;
  • Figlio and Reuben (2001) “find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits;”
  • Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”

To mention a few.

That is, in the aggregate, higher salaries (and better working conditions) can attract a stronger teacher workforce, and at a local level, having more competitive teaching salaries compared either to non-teaching jobs in the same labor market or compared to teaching jobs in other districts in the same labor market can help attract and especially retain teachers.

Allegretto, Corcoran and Mishel, among others, have shown that teacher wages have lagged over time – fallen behind non-teaching professions. AND, they have shown that the benefits differences are smaller than many others argue and certainly do not make up the difference in the wage deficit over time. I have shown previously on my blog that teacher wages in New Jersey have similarly lagged behind!

So, let’s assume we believe that teacher quality necessarily trumps reduced class size, for the same dollar spent. Sadly, this has been a really difficult trade-off to untangle in empirical research and while reformers boldly assume this, the evidence is not clear. But let’s accept that assumption. But let’s also accept the evidence that overall wages and local wage advantages lead to a stronger teacher workforce.

If that’s the case, then the appropriate decision to make at the district level would be to lay off teachers and marginally increase class sizes, while making sure to keep salaries competitive. After all, the aggregate data seem to suggest that over the past few decades we’ve increased the number of personnel more than we’ve increased the salaries of those personnel. That is, cut numbers of staff before cutting or freezing salaries. In fact, one might even choose to cut more staff and pay even higher salaries to gain competitive advantage in tough economic times. Some have suggested as much.  I’m not sold on that either, especially when we start talking about increasing class sizes to 30, 35 or even 50.  Note that class size may also affect the competitive wage that must be paid to a teacher in order to recruit and retain teachers of constant quality. Nonetheless, it is important to understand the role of teacher compensation in ensuring the overall quality of the teacher workforce and it is absurd to assume no negative consequences of slashing teacher pay across-the-board.

Take home point!

In summary, we should be providing thoughtful decision frameworks for local public school administrators to make cost-effective decisions regarding resource allocation rather than spewing laundry lists of reformy strategies for which no thoughtful cost-effectiveness analysis has ever been conducted.

Further, now is not the time to act in panic and haste to adopt these unfounded strategies without appropriate consideration of the up-front costs of making truly effective reforms.

A few references

Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of stay in teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55 267-271.

David N. Figlio (2002) Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review 55, 686-699.


Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation. 28 (2) 465-498.

Figlio, D.N., Reuben, K. (2001) Tax limits and the qualifications of new teachers Journal of Public Economics 80 (1) 49-71

Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York.  Public Finance Review 36 (1) 112-144

Stretching Truth, Not Dollars?

This week, Mike Petrilli (Thomas B. Fordham Institute) and Marguerite Roza (Gates Foundation) released a “policy brief” identifying 15 ways to “stretch” the school dollar. Presumably, what Petrilli and Roza mean by stretching the school dollar is to find ways to cut spending while either not harming educational outcomes or actually improving them. With that goal in mind, it’s pretty darn hard to see how any of the 15 proposals would lead to progress toward that goal.

The new policy brief reads like School Finance Reform in a Can. I’ve written previously about what I called Off-the-Shelf school finance reforms, which are quick and easy – generally ineffective and meaningless, or potentially damaging – revenue-neutral school finance fixes. In this new brief, Petrilli and Roza have pulled out all the stops. They’ve generated a list, which could easily have been generated by a random search engine scouring “reformy” think tank websites, excluding any ideas actually supported by research literature.

The policy brief includes some introductory ramblings about district level practices for “stretching” the school dollar, but it focuses on state policies that can stretch the school dollar at the state level and give local districts greater options to do the same. I will focus my efforts on the state policy list.

Here’s the state policy recommendation list:

1. End “last hired, first fired” practices.

2. Remove class-size mandates.

3. Eliminate mandatory salary schedules.

4. Eliminate state mandates regarding work rules and terms of employment.

5. Remove “seat time” requirements.

6. Merge categorical programs and ease onerous reporting requirements.

7. Create a rigorous teacher evaluation system.

8. Pool health-care benefits.

9. Tackle the fiscal viability of teacher pensions.

10. Move toward weighted student funding.

11. Eliminate excess spending on small schools and small districts.

12. Allocate spending for learning-disabled students as a percent of population.

13. Limit the length of time that students can be identified as English Language Learners.

14. Offer waivers of non-productive state requirements.

15. Create bankruptcy-like loan provisions.

This list can be lumped into four basic categories:

A) Regurgitation of “reformy” ideology for which there exists absolutely no evidence that the “reforms” in question lead to any improvement in schooling efficiency. That is, no evidence that these reforms either “cut costs” (meaning reduce spending without reducing outcomes) or improve benefits (or outcome effects).

  1. Creating a rigorous evaluation system
  2. Ending “last hired, first fired” practices
  3. Move toward weighted student funding

B) Relatively common “money saving” ideas, backed by little or no actual cost-benefit analysis – the kind of stuff you’d be likely to read in a personal finance column in a magazine in a dentist’s office.

  1. Pool health-care benefits.
  2. Create bankruptcy-like loan provisions. (???)
  3. Tackle pensions
  4. Cut spending on small districts and schools (consolidate?)

C) Reducing expenditures on children with special needs by pretending they don’t exist.

  1. Allocate spending for learning-disabled students as a percent of population.
  2. Limit the length of time that students can be identified as English Language Learners.

D) Un-regulation

  1. eliminate class-size limits
  2. provide waivers for ineffective mandates
  3. eliminate seat time requirements
  4. merge categorical programs
  5. eliminate work rules
  6. eliminate mandatory salary schedules

So, let’s walk through a few of these in greater detail. Let’s address whether there is any evidence whatsoever that these policies a) would actually lead to reduced short run costs while not harming, or even improving outcomes, or b) are for any other reason a good idea.

Creating an Evaluation System

This likely requires significant up-front spending: heavy front-end investment to design the system and put it into place. Yes, increased, not decreased spending. And in the short term, while money is tight. AND, there is little or no evidence that what is being recommended – a Tennessee or Colorado-style teacher evaluation model (50% on value-added scores) – would actually reduce spending and/or improve outcomes. Rather, I could make a strong case that such a model will lead to exorbitant legal fees for the foreseeable future (I have a forthcoming law review article on this topic). The likelihood of achieving long run benefits from these short run expenses is questionable at best. In fact, the likelihood of significant harm seems equal if not greater (see my previous post on this topic: value-added teacher evaluation).

Ending “Last Hired, First Fired” layoff policies

In very crude terms, this approach might simply allow a district – or entire state – to lay off senior, higher-salary teachers. Yeah… that could reduce the payroll. Good policy? Really questionable! Of course, Petrilli and Roza also argue that we simply shouldn’t be paying teachers for experience or degrees anyway. So I guess if we did that, we wouldn’t generate savings from this recommendation. Silly me. One or the other, I guess.

Now, we could generate performance increases (at lower spending, if we keep seniority pay, or at constant spending if we don’t) if, and only if, the future actually plays out as simulated in the various performance-based layoff simulations which I and others have recently discussed. The assumptions in these simulations are bold (unrealistic), and much of the logic circular.

And then there are those short-term legal costs of defending the racially disparate firings, and random error firings.

Eliminating Class Size Limits

Yes, larger classes require less spending – on a per pupil basis. Smaller classes have greater benefit (greater “bang for the buck” shall we so boldly say) in higher poverty settings. A labor market dynamic problem realized in the late 1990s, when CA implemented statewide class size reduction, was that the policy stretched the pool of highly qualified teachers and ultimately made it even harder for high poverty schools to get high quality teachers (a dreadfully oversimplified and disputable version of the story).

Removing class size limits might be reasonable if only affluent districts agreed to increase their class sizes, putting more “high quality” teachers into the available labor pool… who might then be recruited into high poverty districts (another dreadfully oversimplified, if not absurd scenario).  But who really thinks it will play out this way? We already know that affluent school districts a) have strong preferences for very small class sizes and b) have the resources to retain those small class sizes or reduce them further. See Money and the Market for High Quality Schooling.

Eliminating mandatory salary schedules

It seems that in this recommendation, Petrilli and Roza are arguing against state policies that mandate the adoption by local public school districts of specific step and lane salary schedules. They really only provide one brief paragraph with little or no explanation regarding what the heck they are talking about.

I’ve personally never been much of a fan of state rigidity regarding local negotiated agreements – at least in terms of steps and lanes. Many problems can occur where states enact policies as rigid as those of Washington State, where teachers statewide are on a single salary schedule.

The best work on this topic (and I’ve worked on the same topic with Washington data) is by Lori Taylor of Texas A&M who shows that the Washington single salary schedule leads to non-competitive wages for teachers in metro areas, and also leads to non-competitive wages for teachers in math and science relative to other career opportunities in metro areas. The statewide salary schedule in Washington is arguably too rigid. Here’s a link to Taylor’s study:

Taylor, L. (2008) Washington Wages: An Analysis of Educator and Comparable Non-educator Wages in the State of Washington. Washington State Institute for Public Policy.

But this does not mean, by any stretch of the imagination, that removing this requirement would save money, or “stretch” the education dollar. It might allow bargaining units in metro areas in Washington to scale up salaries over time as the economy improves. And it might lead to some creative differentiation across negotiated agreements, with districts trying to leverage different competitive advantages over one another for teacher recruitment.

But, these competitive behaviors among districts may also lead to ratcheting of teacher salaries across neighboring bargaining units, and may lead to increased salary expense with small marginal returns (as clusters of districts compete to pay more for an unchanging labor pool). For an analysis of this effect, see Mike Slagle’s work on spatial relationships in teacher salaries in Missouri. In short, Slagle finds that changes to neighboring district salary schedules are among the strongest predictors of an individual district’s salary schedule. Ratcheting upward of salaries in neighboring districts is likely to lead to adjustment by each neighboring district (to the extent resources are available). Ratcheting downward does not tend to occur (not reported in this article).

Slagle, M. (2010) A Comparison of Spatial Statistical Methods in a School Finance Policy Context. Journal of Education Finance 35 (3)

[note: this article is a shortened version of Mike’s dissertation. The article addresses only the ratcheting of per pupil spending, but the full dissertation also addresses teacher salaries]

In any case, we certainly have no evidence that removing state level requirements for mandatory salary schedules would save money while holding outcomes harmless – hence improving efficiency. Like I said, I’m not a big fan of such restrictions either, but I have no delusion that removing them will save any district a ton of money – or any for that matter.

This recommendation seems to also be tied up in the notion that we shouldn’t be paying teachers for experience or degree levels anyway. Therefore, mandating as much would clearly be foolish. I’ve addressed this idea previously in The Research Question that Wasn’t Asked.

In addition, this recommendation seems to adopt the absurd assumption that we could immediately just pay every teacher in the current system the bachelor’s degree base salary (okay, the salary of a teacher with 3 years and a bachelor’s degree, where marginal test-score returns to experience fade), that we could immediately recapture all of that salary money dumped into differentiation by experience or by degree, and that we could have massive savings with absolutely no harm to the quality of schooling – or of the teacher labor force – in the short run or the long term. Again, that’s the research question that was never asked. Previous estimates of all of the money wasted on the master’s degree salary “bump” are actually this crude.

For similarly absurd analysis by Marguerite Roza regarding teacher pay, see my previous post on “inventing research findings.”

Move toward Weighted Student Funding

Petrilli and Roza also advocate moving to Weighted Student Funding. They seem to argue that the “big” savings here will come from the ability of states and school districts to immediately take back funding as student enrollments decline. That is, a district in a state, or school in a district gets a certain amount per kid. If they lose the kid, they lose the money. This keeps us from wasting a whole lot of money on kids who aren’t there anymore.

Okay… Now… most state aid is allocated on a per pupil basis to begin with. And, in general, as enrollments fluctuate, state aid fluctuates. Lose a kid. Lose the state aid that is driven by that kid. Some states have recognized that the costs of providing education don’t actually decline linearly (or increase linearly) with changes in enrollment and have included safety valves to slow the rate of aid loss as enrollments decline. Such policies are reasonable.

Petrilli and Roza seem to be belligerently and ignorantly declaring that there is simply never a legitimate reason for a funding formula to include small school district or declining enrollment provisions. I have testified in court as an expert against such provisions when those provisions are completely “out of whack”, but would never say they are entirely unwarranted. That’s just foolish, and ignorant.

Local revenues in many states (and in many districts within states) still make up a large share of public school funding, and local revenues are typically derived from property taxes applied to the total taxable property wealth of the school district. As kids come and go, local revenues do not come and go. If a tax levy of X% on the district’s assessed property values raises $8,000 per pupil, and enrollment then declines while the total assessed value stays constant, the same tax raises more per pupil – perhaps $8,100. The district would lose state funding because it has fewer pupils (and perhaps also because it can generate a larger local share per pupil). But that’s really nothing new.
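The per-pupil arithmetic here is worth making explicit. The $8,000-per-pupil figure is the post’s own; the enrollment counts are invented for illustration:

```python
# A fixed levy on a fixed tax base raises more *per pupil* as enrollment falls.
levy_revenue = 8_000 * 1_000   # the levy raises $8,000/pupil at 1,000 pupils

for enrollment in (1_000, 988, 950):
    print(f"{enrollment:,} pupils -> ${levy_revenue / enrollment:,.0f} per pupil")
```

At 988 pupils, the same levy raises roughly $8,100 per pupil, as in the example; the levy’s total yield never changed, only the denominator did.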

There’s really no new “huge” savings to be had here.

UNLESS:

a) we are talking about kids moving to charter schools from the traditional public schools, and for each kid who moves to a charter school, we either require the district to pass along the local property tax share of funding associated with that child (Many states), or reduce state aid by the equivalent amount (Missouri).

b) there exists a property tax revenue limit tied specifically to the number of pupils served in the district (as in Wisconsin and other states) which then means that the district would have to reduce its local property taxes to generate only the per pupil revenue allowed. That’s not savings. It’s a state enforced local tax cut.

So then, why do Petrilli and Roza care about Weighted Student Funding as an option? The above two “Unless” scenarios are possible suspects. Blind reformy punditry regardless of logic is equally possible (WSF is cool… reformy… who cares what it does?).

It’s not really about “saving” money at all. Rather, it’s about creating mechanisms to enable local property tax revenues to be diverted in support of charter schools (even if the local taxpayers did not approve the charter), or to have local budgets forcibly reduced/capped when students opt-in to voucher programs (Milwaukee).

And this isn’t really a “weighted student funding” issue at all. In many states, it already works this way (WSF or not). Big savings? Perhaps an opportunity to reduce the state subsidy to charter schools by requiring greater local pass through – in those states where this doesn’t already occur. But these provisions face significant legal battles in some states. If a state is not already doing this, this policy change would also likely lead to significant up front legal expenses.

In fact, I can’t imagine a circumstance where adopting weighted student funding can be expected to either save money or improve outcomes for the same money. There’s simply no proof to this effect. Sadly, while it would seem, at the very least, that adopting weighted funding might improve transparency and equity of funding across schools or districts, that’s not necessarily the case either.

My own research finds that districts adopting weighted funding formulas have not necessarily done any better than districts using other budgeting methods when it comes to targeting financial resources on the basis of student needs. See: http://epaa.asu.edu/ojs/index.php/epaa/article/view/5

Petrilli and Roza’s Weighted Funding recommendation for “stretching” the dollar is strange at best. As a recommendation to state policymakers, adoption of weighted funding provides few options for “stretching” the dollar, but may provide a mechanism for diverting districts’ local revenues to support choice programs (potentially reducing state support for those programs).

As a recommendation to local school district officials, adoption of weighted funding really provides no options for “stretching” the dollar, and may, in fact, increase the centralized bureaucracy required to develop and manage the complex system of decentralized budgeting that accompanies WSF (see: http://epx.sagepub.com/content/23/1/66.short).

So,

No savings?

No improvements to equity?

No evidence of improved efficiency?

What then, does WSF have to do with “stretching” the school dollar?

Baker, B.D., Elmer, D.R. (2009) The Politics of Off‐the‐Shelf School Finance Reform. Educational Policy 23 (1) 66‐105

Baker, B.D. (2009) Evaluating Marginal Costs with School Level Data: Implications for the Design of Weighted Student Allocation Formulas. Education Policy Analysis Archives 17 (3)

Savings from Small Districts and Schools

I am one who believes in creating savings through consolidation of unnecessarily small schools and school districts. And, at the school or district level, some sizeable savings can be achieved by reorganizing schools into more optimal size configurations (elementary schools of 300 to 500 students and high schools of 600 to 900, for example; see Andrews, Duncombe and Yinger).

For other research on the extent to which consolidation can help cut costs, see Does School District Consolidation Cut Costs, also by Bill Duncombe and John Yinger (the leading experts on this stuff).

Petrilli and Roza, however, seem to imply that the savings from these consolidations – or simply from starving the small schools and districts – can perhaps help states to sustain the big districts – STRETCHING that small school dollar. Note that Petrilli and Roza ignore entirely the possibility that some of these small schools and districts (in states like Wyoming, western Kansas, Nebraska) might actually have no legitimate consolidation options. Kill them all! Get rid of those useless small schools and districts, I say!

Here’s the thing about de-funding small schools and districts to save big ones. The total amount of money often is not much… BECAUSE THEY ARE SMALL SCHOOLS!!!!!  I learned this while working in Kansas, a state which arguably substantially oversubsidizes small rural school districts, creating significant inequities between those districts and some of the state’s large towns and cities with high concentrations of needy students. While the inequity can (and should) be reduced, the savings don’t go very far.

So, let’s say we have 6 school districts serving 100 kids each, and spending $16,000 per pupil to do so. Let’s say we can lump them all together and make them produce equal outcomes for only $10,000 per pupil. A bold, bold assumption. We just saved $6,000 per pupil (really unlikely), across 600 pupils. That’s not chump change… it’s $3,600,000 (okay… in most state budgets that is chump change).

So, now let’s take this savings, and give it to the rest of the kids in the state – oh – about 400,000. Well, we just got ourselves about $9 per pupil. Even if we try to save the mid-sized city district of 50,000 students down the road, it’s about $72 per pupil. That is something. And if we can achieve that, then fine. But slashing small districts and schools to save big, or even average ones, usually doesn’t get us very far. BECAUSE THEY ARE SMALL! GET IT! SMALL DISTRICTS WITH SMALL BUDGETS!
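The back-of-the-envelope math in the two paragraphs above, using the same assumed figures, works out as follows:

```python
# Consolidation arithmetic from the text (all figures are the assumed ones).
small_districts = 6
pupils_each = 100
cost_before = 16_000   # $/pupil currently spent in the small districts
cost_after = 10_000    # assumed post-consolidation cost (the "bold assumption")

pupils = small_districts * pupils_each                # 600 pupils
total_savings = (cost_before - cost_after) * pupils   # $3,600,000

# Spread the savings over everyone else in the state, or over one city:
statewide = total_savings / 400_000  # ~$9 per pupil statewide
city = total_savings / 50_000        # ~$72 per pupil for a mid-sized city
print(total_savings, statewide, city)
```

Even under the most generous assumption about post-consolidation costs, the pot to redistribute is small, because the budgets being consolidated are small.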

Similar issues apply to elimination of very small schools in large urban districts. It’s an appropriate strategy – balancing and optimizing enrollment (reorganizing those too-small high schools created as a previous Gates-funded reform?). It should be done. But unless a district is a complete mess of tiny, poorly organized schools, the savings aren’t likely to go that far.

Let’s also remember that major reconfiguration of school level enrollments will require significant up front capital expense! Yep, here we are again with a significant increased expense in the short-term. Duncombe and Yinger discuss this in their work. Strangely, this slips right past Petrilli and Roza.

Use Census Based Funding for Special Education

So, what Petrilli and Roza are arguing here is that states could somehow save money by allocating their special education funding to school districts on an assumption that every school district has a constant share of its enrollment that qualifies for special education programs. Those districts that presently have more? Well, they’ve just been classifying every kid they can find so they can get that special education money. This flat-funding policy will bring them into line… and somehow “stretch” that dollar.

Let’s say we assume that every district has 16% (Pennsylvania) or 14.69% (New Jersey) children qualifying for special education. Let’s say we pick some number, like these, that is about the current average special education population. Our goal is really to reduce the money flowing to those districts that have higher than average rates. Of course, if we pick the average, we’ll be reducing money to the districts with higher rates and increasing money to the districts with lower rates, and you know what – WE’LL SPEND ABOUT THE SAME IN SPECIAL EDUCATION AID. “Stretching” how?

And will we have accomplished anything close to logical? Let’s see, we will have slammed those districts that have been supposedly over-identifying kids for decades just to get more special ed aid. That, of course, must be good.

BUT, we will also be providing aid for 14.69% of kids to districts that have only 7% or 8% children with disabilities. Funding on a census basis or flat basis requires that we provide excess special education aid to many districts – unless we fund all districts as if they have the same proportion of special education kids as the district with the fewest special education kids. That is, simply cut special education aid to all districts except the one that currently receives the least.

How is that smart “stretching?”

The only way to “save” money with this recommendation is simply to “cut funding” and “cut services.” And, unless cut to the bare minimum, the “flat allocation” strategy requires choosing to “overfund” some districts while “underfunding” others. One might try to argue that this policy change would at least reduce further growth in special ed populations. But the article below suggests that this is not likely the case either. The resulting inequities significantly offset any potential benefits.

There exist a multitude of problems with flat, or census-based special education funding, which have led to declining numbers of states moving in this direction in recent years, New Jersey being an exception. I discuss this with co-authors Matt Ramsey and Preston Green in our forthcoming chapter on special education finance in the Handbook on Special Education Policy Research.

Of course, there also exists the demographic reality that children with disabilities are simply not distributed evenly across cities, towns and rural areas within states, leading to significant inequities when using Census Based funding. CB Funding is, in fact, the antithesis of Weighted Student Funding. How does one reconcile that?

For a recent article on the problems with the underlying assumptions of Census Based special education funding, see:

Baker, B.D., Ramsey, M.J. (2010) What we don’t know can’t hurt us? Evaluating the equity consequences of the assumption of uniform distribution of needs in Census Based special education funding. Journal of Education Finance 35 (3) 245‐275

Here’s a draft copy of our forthcoming book chapter on special education finance: SEF.Baker.Green.Ramsey.Final

Limit Time for ELL/LEP

This one is both absurd and obnoxious. Essentially, Petrilli and Roza argue that kids should be given a time limit to become English proficient and should not be provided supplemental programs or services – or at least the money for them – beyond that time frame. For example, a child might be funded for supplemental services for 2 years, and 2 years only. Some states have done this. Again, there is no clear basis for such cutoffs, nor is it clear how one would even establish the “right” time limit, or whether that time limit would somehow vary based on the level of language proficiency at the starting time.

Yes, this approach, like cutting special education funding, can be used to cut spending and reduce the quality of services. But that’s all it is. It’s not “stretching” any dollar.

Other Stuff

Now, the brief does list other state policy options as well as other district practices. Some of these are rather mundane, typical ideas for “cost saving.” But, of course, no evidence or citation of actual cost-effectiveness, cost-benefit or cost-utility analysis is presented. Petrilli and Roza toss around ideas like a) pooling health care costs, b) redesigning sick leave policies or c) shifting health care costs to employees. These are the kind of things that are often on the table anyway.

I fail to see how this new policy brief provides any useful insights in this regard. Some actual cost-benefit analysis would be the way to go. As a guide for such analyses, I recommend Henry Levin and Patrick McEwan’s book on Cost Effectiveness Analysis in Education.

There are a handful of articles available on the topic of incentives associated with varied sick leave policies, including THIS ONE, School District Leave Policies, Teacher Absenteeism, and Student Achievement, by Ron Ehrenberg of Cornell (back in 1991).

One category I might have included above is that at least two of the recommendations embedded in the report argue for stretching the school dollar, so-to-speak, by effectively taxing school employees. That is, setting up a pension system that requires greater contribution from teacher salaries, and doing the same for health care costs. This is a tax – revenue generating (or at least a give-back). This is not stretching an existing dollar. This is requiring the public employees, rather than the broader pool of taxpayers (state and/or local), to pay the additional share. One could also classify it as a salary cut. But Petrilli and Roza have already proposed salary cuts in half of the other recommendations. Just say it. Hey… why not just take the “master’s bump” money and use that to pay for pensions and health care? No one will notice it’s even gone? We all know it was wasted and un-noticed to begin with.

I was particularly intrigued by the entirely reasonable point that school districts should NOT make the harmful cuts by narrowing their curriculum. I was intrigued by this point because this is precisely what Marguerite Roza has been arguing that poor districts MUST do in order to achieve minimum standards within their existing budgets. I wrote about this issue previously HERE. It is an interesting, but welcome, about-face to see Roza no longer arguing that poor, resource constrained school districts should dump all but the basics (while other districts, with more advantaged student populations and more adequate resources, need not do the same).

Utter lack of sources/evidence for any/all of this junk

Finally, I encourage you to explore the utter lack of support (or analysis) that the policy brief provides for any/all of its recommendations. It won’t take much time or effort. Read the footnotes. They are downright embarrassing, and in some cases infuriating. At the very least, they border on THINK TANKY MALPRACTICE.

There is a reference to the paper by Dan Goldhaber simulating seniority based layoffs, but that paper provides no analysis of cost/benefit, the central premise of the dollar stretching brief. The Petrilli/Roza (not Goldhaber) assumption is simply that the results will be good, and because we are firing more expensive teachers, it will cost less to get those good results.

The policy brief makes a reference to “typical teacher contracts” (FN2) regarding sick leave, with no citation… no supporting evidence, and phrased rather offensively (18 weeks a year off? For all teachers? Everywhere! OMG???)

FN2: Typical U.S. teacher contracts are for 36.5 weeks per year and include 2.5 weeks sick and personal days for a total work year of 34 weeks, or 18 weeks time off.

The brief refers to work by NCTQ (not the strongest “research” organization) for how to restructure teacher pay.

The report self-cites The Promise of Cafeteria Style Pay (by Roza, non-peer reviewed… schlock), and makes a bizarre generalized claim in footnote 5 that school districts uniformly defend the use of non-teaching staff as substitutes (no evidence/source provided).

FN5: Districts requiring non-teaching staff to serve as substitutes argue that it is good practice to have all staff in classrooms at least a few days a year.

The brief cites policy reports (and punditry) on pension gaps (including the Pew Center report), and those reports refer to alternative plans for closing gaps over time. These are important issues, but the question of how this “stretches” the school dollar is noticeably absent.

And that’s it. That’s the entire extent of “research” and “evidence” used to support this policy brief.

Introducing the Reform-Inator!

Introducing the Coolest New Gadget of the Year – just in time for last-day shopping! The Reform-inator!

  1. Can be used to instantly fire and/or de-tenurize teachers. However, in order to use the reform-inator for these purposes you must line up 100 teachers including all of the good, bad and average ones. The reform-inator is a bit touchy… and misfires quite frequently… hitting an average teacher instead of a truly bad one about 35% of the time, and hitting a good teacher instead of a truly bad one about 20% of the time. But what the heck… go for it. Thin the herd. Probabilities are in your favor, if only marginally. And besides, there will be plenty more teachers willing to step up and face the firing line next year.
  2. Can be used to instantly replicate (or new reformy term: scalify, or scalification) only the upper half of charter schools, because we all know that the upper half of charter schools are … well… better than average ones, and well… good charters are good… and bad ones bad (but no need to talk about those, just as there’s no need to talk about the good traditional public schools)… so we really want to replicate and expand only those good charters (primarily by reduced regulation, increased numbers of authorizers and reduced oversight requirements, even though the track record to date hasn’t really shown that to be easily accomplished).
  3. Can be used to take anything that is presently about 7% smaller than it was in the past, and make it disappear entirely – GONE… ALL GONE… just like all of the money for public schools. It’s not just recessed – temporarily diminished – It’s just gone. Vanished. Time to shut it all down! No more sweetheart deals (especially in those really crazy overspending states like Arizona and Utah)!
  4. Can instantly make value-added estimates of teacher effectiveness the “true” measure of teacher effectiveness, and further, can make value-added estimates of teacher effectiveness a stronger predictor of themselves… which of course, are the true measure of effectiveness (stronger than a weak to moderate correlation, that is). Use the special self-validation trigger for this particular effect. Also works for low self-esteem.
  5. Can be used to locate Superman (‘cuz I sure can’t find him in these scatterplots of NYC charter school performance compared to traditional public schools, or these from Jersey either).
  6. Will eliminate entirely anything that might be labeled as Status Quo! Because we all know that if it’s status quo – it’s got to go (or at the very least, the first reformy rule of logic: “anything is better than the status quo”)
  7. Most importantly, like any good REFORMY tool, it’s got a Trigger!

Other ideas?