Who would really want to spend more than that? (Ed Next & Spending Preferences)

When Paul Peterson asks “Do we really need to spend more on schools?” we already know what he thinks the answer is – an unequivocal NO! Knowing the answer you desire always makes it easier to frame the questions, and, as in previous years, this year’s Education Next survey of attitudes toward public education provides few surprises.

Before I even gained full access to Peterson’s most recent WSJ Op-ed (e-mailed to me by a family member), I was able to guess pretty much where he was going with it.  Here’s how Peterson explains the Ed Next public opinion survey findings:

At first glance, the public seems to agree with this position. In a survey released this week by Education Next, an education research journal, my colleagues and I reported that 65% of the public wants to spend more on our schools. The remaining 35% think spending should either be cut or remain at current levels. That’s the kind of polling data that the president’s political advisers undoubtedly rely upon when they decide to appeal for more education spending.

Yet the political reality is more complex than those numbers suggest. When the people we surveyed were told how much is actually spent in our schools—$12,922 per student annually, according to the most recent government report—then only 49% said they want to pony up more dollars. We discovered this by randomly splitting our sample in half, asking one half the spending question cold turkey, while giving the other half accurate information about current expenditure.

Later in the same survey, we rephrased the question to bring out the fact that more spending means higher taxes. Specifically, we asked: “Do you think that taxes to fund public schools around the nation should increase, decrease or stay about the same?” When asked about spending in this way, which addresses the tax issue frankly, we found that only 35% support an increase. Sixty-five percent oppose the idea, saying instead that spending should either decrease or stay about the same. The majority also doesn’t want to pay more taxes to support their local schools. Only 28% think that’s a good idea.

So there is the nation’s debt crisis in a nutshell. If people aren’t told that nearly $13,000 is currently being spent per pupil, or if they aren’t reminded that there is no such thing as a free lunch, they can be persuaded to think schools should be spending still more.

In other words… yeah… the ignorant general public thinks they want to spend more on schools, but only because they don’t realize how much we are already wasting on public schools! When we clue them into the egregious… no… outrageous… exorbitant spending already going on… and hold a gun to their head… and phrase our question just right… pointing out to them just how stupid we think they are… and how smart we are… then they fix their answer… and become much, much more reasonable!

This explanation is problematic at a number of levels. Let’s start by exploring the basic model of local voter preferences for spending on local public schools – specifically, the information on price and quality that informs those preferences. Local public school revenue comes from two primary sources: local property taxes paid on various types of properties within school districts, and state general funds derived largely from state sales and income taxes. The mix varies widely from state to state. Residential property owners frequently pay their property taxes embedded in monthly mortgage payments, and renters pay their landlords’ property taxes embedded in rent prices. Homeowners and renters have at least some feel for the reasonableness of their aggregate monthly housing payments, and some feel for the quality of public services they receive (schools, fire, police, parks, etc.) for the aggregate price they pay. They also have some feel for a) whether they would like those services improved and b) whether they are willing to pay a bit more to support those improvements. In short, a typical taxpayer/survey respondent has a reasonable gut feel regarding the “tax price” paid for the quality of public services provided.

The local taxpayer/voter/survey respondent sufficiently involved with local public schools (having children in the schools, working in the schools, having children who are recent graduates of the schools, or having recently graduated themselves) probably has some indicators of schooling quality in his/her head that guide his/her preference to pay more (or less). Has class size risen, or does it just seem too large? Has the district cut visible programs like music, arts or athletics of late, or has the district increased fees to cover the costs of these programs? As a result, the respondent is at least somewhat able to piece together whether they wish to spend a little more to decrease class sizes, expand programs or reinstate programs previously cut.

But the typical taxpayer/voter/survey respondent likely a) doesn’t give a damn about and b) is generally unable to contextualize the meaning of total per-pupil expenditures for a local public school district. It’s an abstract concept – a number that relates in any meaningful sense only to those who spend their days steeped in such figures, and one most likely to do little more than bias a response in this case (as it seems to, though it is hard to know precisely why).

Even worse is when those numbers are used totally out of context, as in Peterson’s argument above. Peterson’s description is actually even worse than the methods description provided at Ed Next (interestingly, Peterson also adds over $600 per pupil to the average spending figure reported by Ed Next in the paragraph below, and then rounds it up to $13,000 by the end of his op-ed):

A segment of those surveyed were asked the same ques­tion except that they were first told the level of per-pupil expenditure in their community, which averaged $12,300 for the respondents in our sample. For every subgroup con­sidered, this single piece of information dampened public enthusiasm for increased spending. Support for more spend­ing fell from 59 percent to 46 percent of those surveyed. Among the well-to-do, the level of support dropped dramati­cally, from 52 percent to 36 percent. Among teachers, sup­port for expenditure increases fell even more sharply—from 71 percent to 53 percent (see Figure 7).

Surely, it would be completely absurd to ask the average person in Tennessee whether their schools should spend more after telling that person what the average district spends nationally – implying to the respondent that the figure represents Tennessee spending (as seemingly implied by Peterson’s op-ed, and as in the online survey at Ed Next). It is only marginally more useful, however, to ask the average respondent in Tennessee whether they should spend more or less, given a completely out-of-context representation of their local spending per pupil.

Here’s how the 2008-09 actual national mean per pupil spending compares to the distribution of per pupil spending across Tennessee districts:

(national mean current spending per pupil in 2008-09 was $10,209.83 [w/outliers excluded])

Now, it might be interesting to show the average voter/respondent in Tennessee this graph and then ask whether he/she thinks more should be spent in Tennessee. This graph provides some context – context that is completely absent when informing a Tennessee respondent either of their own local district spending WITH NO OTHER CONTEXT AVAILABLE or of the national spending WITH NO OTHER CONTEXT AVAILABLE.
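To make the point concrete, here is a minimal sketch (using hypothetical district figures, not actual Tennessee data) of how the same dollar amount reads very differently once placed within a state’s own distribution of district spending:

```python
# Hypothetical per-pupil spending across ten districts in a low-spending state.
# These numbers are illustrative placeholders, not actual Tennessee figures.
district_spending = [7800, 8100, 8400, 8600, 8900, 9200, 9500, 9900, 10400, 11200]

def percentile_rank(value, distribution):
    """Share of districts spending at or below `value`."""
    return sum(1 for d in distribution if d <= value) / len(distribution)

# National mean cited above (2008-09 current spending, rounded)
national_mean = 10210

rank = percentile_rank(national_mean, district_spending)
print(f"The national mean exceeds per-pupil spending in {rank:.0%} of these districts.")
```

In this made-up distribution, a figure that sounds unremarkable nationally lands at the 80th percentile of the state’s districts – precisely the kind of context a bare dollar figure, presented cold, cannot convey.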

Put very simply, a per-pupil spending figure out of context is meaningless. $17,000, I say! $17,000… an abomination! It’s a huge number! Why would we ever consider spending more than that per pupil in New York City? Well, what if it just happened to turn out that, in the same year, that $17,000 per pupil was lower, on average, than spending in most of the surrounding districts with much less needy student populations? What if that $17,000 was only about 50% of what was being spent in private independent schools operating within the city? It doesn’t sound so big anymore, does it? How would survey respondents in New York City change their answers if this information were provided?

The Ed Next survey, while fun to ponder each year, isn’t particularly helpful for really understanding voters’ preferences or awareness regarding spending on public schools or perceived quality.

Actual data on local budget votes, including those involving tax increases (raising the more voter-distasteful local property tax), tend to be a much more useful barometer. Even in the worst of economic times, local voter support – especially where voters have the financial capacity to provide that support – remains overwhelmingly positive (Example NY State Data & previous NJ Blog Post [over 70% pass rate in wealthy districts in the worst year]). Matt Di Carlo provides further discussion of this topic here, explaining the general voter preferences. It is also worth noting that even the most poorly constructed and phrased polls find few respondents (if any) saying that less should be spent. Yet that is precisely the argument advanced by many pundits in response to these surveys.

More Flunkin’ out from Flunkout Nation (and junk graph of the week!)

Earlier today I stumbled across this brilliant post by RiShawn Biddle over at Dropout Nation.

Biddle boldly claims:

Despite the arguments (and the pretty charts) of such defenders as Rutgers’ Bruce Baker, there is no evidence that spending more on American public education will lead to better results for children.

Now, regarding the “no evidence” claim, I would recommend reading this article from Teachers College Record, this year, which summarizes a multitude of rigorous empirical studies of state school finance reforms finding generally that increased funding levels have been associated with improved outcomes and that more equitable distributions of resources have been associated with more equitable distributions of outcomes.

In fact, even the Spring 2011 issue of the journal Education Finance and Policy includes an article by Joydeep Roy supporting the positive results of state school finance reforms (using Michigan data).

Proposal A was quite successful in reducing interdistrict spending disparities. There was also a significant positive effect on student performance in the lowest-spending districts as measured in state tests. (from abstract)

As Kevin Welner and I point out in our article, this study is not unique in its findings. Here are a few others:

Card & Payne (2002)

Using micro samples of SAT scores from this same period, we then test whether changes in spending inequality affect the gap in achievement between different family background groups. We find evidence that equalization of spending leads to a narrowing of test score outcomes across family background groups. (p. 49)

Deke (2003)

Using panel models that, if biased, are likely biased downward, I have a conservative estimate of the impact of a 20% increase in spending on the probability of going on to postsecondary education. The regression results show that such a spending increase raises that probability by approximately 5% (p. 275).

Papke (2005)

Focusing on pass rates for fourth-grade and seventh-grade math tests (the most complete and consistent data available for Michigan), I find that increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance. (Papke, 2005, p. 821)

Downes (2004) on VT

All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post–Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance. (p. 312)

Downes, Zabel & Ansel (2009) on Mass

The achievement gap notwithstanding, this research provides new evidence that the state’s investment has had a clear and significant impact. Specifically, some of the research findings show how education reform has been successful in raising the achievement of students in the previously low-spending districts. Quite simply, this comprehensive analysis documents that without Ed Reform the achievement gap would be larger than it is today. (p. 5)

Guryan (2003) on Mass

Using state aid formulas as instruments, I find that increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students. The magnitudes imply a $1,000 increase in per pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures. (p. 1)

Goertz & Weiss (2009) on NJ

State Assessments: In 1999 the gap between the Abbott districts and all other districts in the state was over 30 points. By 2007 the gap was down to 19 points, a reduction of 11 points or 0.39 standard deviation units. The gap between the Abbott districts and the high-wealth districts fell from 35 to 22 points. Meanwhile performance in the low-, middle-, and high-wealth districts essentially remained parallel during this eight-year period (Figure 3, p. 23).

I could go on. But that’s a fair share of evidence right there.

And what does Biddle provide as counter-evidence to the body of evidence I summarize above (I’ve sent the article link to Biddle on more than one occasion, but he apparently doesn’t read this kind of academic stuff)?

Biddle counters with a link to this graph – a true gem (I’ve added some annotation, not in his original)!

Yes, Biddle’s entire counter to the body of research he has not and will not read is this graph of “promoting power” by student race group for Jersey City, NJ in 2004 and 2009. Note that the infusion of additional funds in NJ occurred mainly from 1998 to 2003, leveling off thereafter – but that’s a tangential point (not really). So, Biddle’s absolute proof that more money doesn’t matter is simply to assert, without verification, that Jersey City got a whole lot more money, and then to use this graph to argue that nothing improved!

First of all, that analysis wouldn’t pass muster as a master’s-level assignment (I teach a class on this stuff at that level), much less as a major research conclusion. From a graphing standpoint, I often criticize my students’ work for what I refer to as gratuitous use of 3D – especially where the use of 3D bars actually obscures the comparisons by making it hard to see where they align on the axis.

But the really funny, if not warped, part of this graph is that there appear to be significant gains for black males between 2004 and 2009 – gains that are obscured by hiding the 2009 black male bar behind the 2004 black female bar.

Note also that the graph contains no information regarding the actual shares of the student population that fall into each group. Not very useful. Pretty damn amateur. It certainly fails to make any particular point, and it certainly doesn’t refute the various citations above – all of which employ more rigorous analytic methods, apply to more than a single district, and most of which appear in rigorous peer-reviewed journals.

References:

Card, D., and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284.

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (ed), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA: MassINC.

Goertz, M., and Weiss, M. (2009). Assessing Success in School Finance Litigation: The Case of New Jersey. New York City: The Campaign for Educational Equity, Teachers College, Columbia University.

Guryan, J. (2003). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

Papke, L. (2005). The effects of spending on test pass rates: evidence from Michigan. Journal of Public Economics, 89(5-6), 821-839.

Roy, J. (2003). Impact of School Finance Reform on Resource Equalization and Academic Performance: Evidence from Michigan. Princeton University, Education Research Section Working Paper No. 8. Retrieved October 23, 2009 from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=630121 (Forthcoming in Education Finance and Policy.)

Private Choices, Public Policy & Other People’s Children

I don’t spend much if any time talking about my personal decisions and preferences on this blog. It’s mostly about data and policy. There’s been much talk lately about whether a Governor’s or President’s choice to send their children to elite private schools – or where Bill Gates, Mark Zuckerberg or prominent “ed reformers” attended school – is at all relevant to the current policy conversation around “reforming” public schools. When those choices have been questioned publicly, they’ve often been met with the backlash that those are personal choices of no relevance to the current policy debate – that such questions are just dirty personal attacks on personal, rational choices.

I have no problem with these personal choices. But these personal choices may, in fact, be relevant to the current policy debate. I keep my own personal choices and preferences in mind as I evaluate what I believe to be good policy for the children of others, and I try to keep in mind what I know from my background in research and policy when I make my personal choices. Like these prominent politicos and pundits, I too choose private independent schools – relatively expensive ones – for my children, and I have my reasons for doing so. As I’ve noted on this blog on a number of occasions, I taught at an exceptional private independent school in New York City, and I have relatives and friends who continue to be involved in (and with) high-quality private independent schools as teachers, administrators and parents. I did not, however, attend private school. I attended public school in Vermont, followed by private college (Lafayette College).

Why do I personally prefer private independent schools, which often come with a high price tag?  Here are a few reasons:

  1. The responsiveness that comes from a close-knit, small community, with not only small class sizes but also a lower total student load for teachers (at the middle and secondary level in particular).
  2. The depth and breadth of curricular offerings, ranging from Latin in the middle school to a diverse array of social science, advanced science and math courses at the high school level, and a plethora of opportunities in the arts and athletics.
  3. The lack of emphasis on standardized testing – bubble tests and overemphasis on tested curricular areas and state standards.

Yes, I do consider it important that these schools are not test-whipped, specifically that they are not obsessed with basic reading and math bubble tests alone, or even more disturbing, tests of science and social studies content where the balance (or absence) of content is a function of partisan preferences of ill-informed politically motivated elected officials (e.g. Kansas science standards, or Texas social studies/history standards – thankfully, I’m not in KS anymore).

These days, I consider it especially important that my children not be in a school where teachers have to hang their hopes of achieving a living wage (or getting a bonus to afford cosmetic surgery as in “Bad Teacher”[hope to see that one soon!]) on whether or not my child gains X+Y points on those reading or math tests. In fact, these may now be my main reasons for opting out.

So yes, you might try to call me a hypocrite for preferring private schools for my own children while apparently being such a staunch defender and supporter of the public system (including voting yes on local district budgets, even when encouraged to vote no by public officials). But that would be a dreadful oversimplification and misrepresentation of my position.

I have worked in both public and private schools – one good and one bad of each – over a 10+ year period prior to my life in higher education.  I’ve studied and compared public and private schools in various locations and of various types for over 15 years and published numerous articles, papers and reports. What I’ve learned most from these studies is that private and/or less regulated markets are simply more varied than public and/or more regulated markets. Neither better nor worse on average – simply more varied.

Top-notch private schools spend much more, and many financially strapped, average to very low academic quality private schools spend much less – much more and much less than one another, and much more and much less than nearby public schools. It is a massive bait and switch to suggest: look how great Sidwell Friends (DC), Dalton or Fieldston (NYC) are compared to public schools, and look how little the average Catholic parish elementary school spends compared to the urban public district! Of course, it’s never phrased quite so obviously as a bait and switch – suggesting that you can get a Sidwell or Dalton education at an urban Catholic elementary school price. You can’t! Yes, the average Catholic parish elementary school likely spends less per pupil than the public district. But that school is no Sidwell, Dalton or Fieldston – schools which spend close to, and in excess of, double what nearby public schools spend.

Private schools do not, as many assume, spend only about half what public schools do. This is urban legend, drawn from dated analyses that were misrepresented to begin with (over 10 years ago).  My extensive report on private school supply and spending covers these issues quite extensively.

To reiterate a major finding from my study of private school costs: private independent schools of the type I am talking about here (members of NAIS or NIPSA) spend, ON AVERAGE, 1.96 times the per-pupil amount of public schools in the same labor market (and have half the pupil-to-teacher ratio)!

I am quite convinced that many of the policy makers who choose elite private schools for their own and advocate for scaling back the public system, really don’t understand the difference. They really don’t know that their private schools outspend nearby traditional public schools – by a lot – despite serving more advantaged student populations. Heck, I’ve talked to administrators in private independent schools who feel that their own budgets are tight (legitimately so), and assume that the public schools around them spend much more per child. But they are simply naïve in this regard (while wise in many other ways). No intent to harm. They’ve simply bought into the misguided rhetoric that private schools spend less and get more and they’ve never double-checked the facts. But even a few minutes of pondering their own budgets and looking up local public school spending brings them around. (Part of this perception is likely driven by differences in access to funding for capital projects, where heads of private schools recognize the heavy lifting of major fundraising campaigns, and envy the taxing authority of public school districts for these purposes).

In my view, the hypocrisy lies in what those who choose elite private schools for their own argue are the best solutions for public education for the children of others.  If the preferences are the same, there is no hypocrisy. The problem is when those preferences are vastly different – completely at odds – as they tend to be in the present “ed reform” and “new normal” debate.

It is hypocritical for pundits who favor, for their own children, expensive schooling with a diverse curriculum, small class sizes and little standardized testing (freeing teachers to be professionals), to argue for less money, class size increases and increased standardized testing (and teacher evaluation based on those tests) when it comes to other people’s children.

Yes, I too personally favor expensive private schooling for the reasons I’ve indicated above. And yes, my private school significantly outspends both the elite suburban public school district where I live and New Jersey’s reasonably well funded urban districts (compared to other states, see: http://www.schoolfundingfairness.org).   The way I see it, I would not just be a hypocrite, but a complete a-hole if I used my pulpit (what little pulpit I have) as a school finance expert to argue that we should be spending less on others, advocating different policies for others than I desire for myself.  But it’s precisely because I spend my day buried in data on school finance and education policy that I see this glaring hypocrisy.

The difference is that I believe that other children – those whose parents are not able to make this expensive choice – should have access to well-funded schools that also provide small class sizes, diverse curriculum, and for that matter, place less emphasis on standardized tests, and treat teachers as responsible, knowledgeable professionals (not script reading stand-ins and test proctors).

To clarify, this is not a criticism of individuals with personal preferences for high quality education for their own children who are otherwise unconcerned with (or oblivious to) the broader public policy questions pertaining to the children of others. Rather, this is a direct criticism of those public officials and vocal “ed reformers” who prefer high quality, well funded education for their own and then loudly and publicly advocate for a very different quality (and type) of education for the children of others.

If we could actually close the gap between public school resources and resource levels of elite private schools, there might be less demand for those elite private schools (though some would indeed respond with an arms race to outpace public schools).  Presently, however, elite private schools stand to benefit significantly from the “ed reform” and “new normal” movement which will likely make more public schools – including those in more affluent ‘burbs – even less desirable for parents currently on the fence.

So, here’s my challenge to all those policymakers who also prefer elite private independent schools for their children.  I urge you to make a list of all of the reasons why you chose a private independent school. Notably, many if not most parents list class size as a major factor (and most schools advertise class size as a major benefit).  Make a list of the specific attributes of your private school including:

  1. Average class size
  2. Teacher education levels
  3. Numbers and types of elective and advanced course offerings
  4. Numbers and types of extracurricular activities
  5. Whether they pay more experienced teachers more than less experienced ones (or pay more for teachers holding advanced degrees)
  6. Whether they emphasize student test scores when evaluating or compensating teachers

and whatever else you might think of. (Here are a few sample NJ private schools.)

Get a copy of the school’s IRS Form 990 tax filing from the school (or from http://foundationcenter.org/ or http://www.guidestar.org) to find out roughly how much your school spends each year, and divide that by the total number of enrolled pupils.
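The arithmetic suggested above is simple enough to script. Here is a minimal sketch – all dollar and enrollment figures below are hypothetical placeholders, not any actual school’s 990 data:

```python
def per_pupil_spending(total_expenses, enrollment):
    """Rough per-pupil spending: total annual expenses divided by enrollment."""
    if enrollment <= 0:
        raise ValueError("enrollment must be positive")
    return total_expenses / enrollment

# Hypothetical figures for illustration only
private_990_expenses = 11_400_000  # total expenses from a private school's Form 990
private_enrollment = 600
public_budget = 95_000_000         # a nearby district's total annual budget
public_enrollment = 9_500

private_pp = per_pupil_spending(private_990_expenses, private_enrollment)
public_pp = per_pupil_spending(public_budget, public_enrollment)
print(f"private ~ ${private_pp:,.0f} per pupil; public ~ ${public_pp:,.0f} per pupil; "
      f"ratio ~ {private_pp / public_pp:.2f}")
```

With these made-up numbers the ratio lands near the 1.96 average reported in my study of private school costs – but the point of the exercise is to run it with your own school’s actual filings.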

Then gather similar information on surrounding public schools. Make your own comparisons. And after you’ve done so, let me know if you’re still comfortable making bold public proclamations that we need to rein in the absurd spending of public schools, increase class sizes and slash all of those frivolous extracurricular programs for other people’s children – but certainly not our own!


Paul Mulshine, Amoral Self-Indulgence & New Jersey School Finance

On most days, I can simply laugh off a ridiculous Paul Mulshine column in the Star Ledger. Most of his claims regarding education, taxation and the intersection of the two range from flat-out incorrect to wacky and misguided. But Mulshine’s claims in his column on Wednesday June 22nd necessitate a response.

For several years, I have been a professor whose primary responsibilities include training future school administrators. I believe strongly that well-informed, well-prepared and knowledgeable school administrators can and should play a critical role in guiding public education policy. As one might figure from the name of this blog, my emphasis is on teaching school finance – an inherently political and divisive topic that often pits one district against another, or even one school against another. As a result, I believe it is particularly important that leading voices in education policy in a state understand not only how policies affect their own district and children, but how those policies affect children statewide – that local school administrators can think beyond the boundaries of their own school district and local constituents, and be mindful of the good of the public as a whole.

Any local school administrator would likely want to find ways to manipulate the state formula for allocating aid in a way that drives more aid to their district. And over the years, I’ve seen many twisted and unethical arguments advocated and legislated to accomplish these goals – including Jackson, Wyoming – the wealthiest district in Wyoming – arguing (successfully) that it needs 30% more funding than any other district in the state simply because it is so wealthy. Kansas similarly adopted provisions that provide more funding to districts a) with higher-priced houses and b) with more children attending school in new facilities. I’ve seen more money driven to wealthier districts in South Carolina on the argument that they have more gifted children. And I’ve seen more money targeted to white schools than black schools in Alabama (still in effect) on the basis that white schools have more teachers with advanced degrees, and that teachers with advanced degrees cost more (built into the state aid calculation). I’ve written on this topic in peer-reviewed research.

I’ve often been frustrated to see local public school administrators in districts advantaged by these illogical policies either sit idly by, knowing the policies to be wrong, or advocate loudly on behalf of these policies, still knowing full well that the policies are built on flimsy if not absurd arguments.  In the politics of state school finance, self-interest is often hard to overcome.  It is a rare administrator who is able to balance these conflicts well – to not take the easy way out and accept an absurd or even unethical policy position simply because it drives more dollars to their constituents. Earl Kim of Montgomery Township is one of those rare administrators.

Mr. Mulshine’s view that the only role of the local public administrator is to get more for his or her constituents, and that local bureaucrats should never take any action to the contrary – regardless of ethical considerations – is not only absurd but is indicative of much of what is wrong in politics today and society in general.  Mulshine prefers his bureaucrats to be amoral sock puppets.

Here is a clip of what Mulshine had to say about Earl Kim:

“Let me offer a hint to this overpaid bureaucrat: An employee of the school board  has no say whatsoever in such public policy matters as the proper amount of property-tax relief.”

“If he did, however, he should not be advising his superiors to take a course of action that deprives the taxpayers of tens of millions of dollars that could lower their property taxes and help keep them in their houses.”

What Earl Kim understands, and what Mulshine clearly doesn't, is that while Doherty's "Fair School Funding" plan might drive a lot more money into Earl Kim's district, it would only do so at the expense of the system as a whole. And that is an ethical compromise that Earl Kim seems unwilling to support. To Mulshine, however, ethics seem inconsequential when traded for millions of dollars.

Let’s actually take a simulated look at why Earl Kim might be concerned about the Doherty plan. Let’s start with a quick look at how school finance formulas work.

Local public school districts receive varied amounts of state aid based on two major types of factors:

  1. Differences in local school districts’ ability to raise local tax revenue to pay for schools;
  2. Differences in the needs and costs of providing adequate educational services to widely varied student populations.

In simple terms, the current formula – SFRA – accounts for both, and the Doherty plan accounts for neither.  Aw… what the heck, all that math is too complicated anyway!

In a typical state school finance formula, there is a target amount of revenue to be raised by each school district – based on the estimated differences in the needs and costs of children attending each district and other factors such as variations in competitive wages for teachers. But even if the target funding per pupil were the same for each district, the state aid share would be very different. Why? Because some districts have far greater capacity to raise local property tax revenues than others.

Here’s a New Jersey SFRA simulated (oversimplified) example using data from 2009 and 2010. Under the 2010 SFRA, the average target budget per pupil for an Abbott district was $16,387, based on the greater needs of children in these districts and the fact that the largest Abbott districts were also in higher cost north Jersey labor markets.

Applying equitable tax effort, Abbott districts are only able to raise about $4,300 per pupil, compared to wealthy districts (top 2 deciles), which can raise, on average, over $13,000 (which is actually more than they would need). State aid, as it currently stands in NJ (and in other states with similarly structured formulas), is used to fill the gap between what can be fairly raised locally and what is estimated to be needed to provide an adequate education.
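The gap-filling mechanics just described reduce to a one-line calculation. Here's a minimal sketch using the rounded figures from the text; the wealthy-district target of $12,500 is a hypothetical figure I've supplied for illustration (it only needs to fall below that district's roughly $13,000 capacity):

```python
def foundation_aid(target_per_pupil, local_capacity_per_pupil):
    """State aid fills the gap between the adequacy target and what a
    district can raise locally at an equitable tax effort. Districts whose
    capacity exceeds their target simply receive no aid (floor at zero)."""
    return max(0, target_per_pupil - local_capacity_per_pupil)

# Abbott average: $16,387 target, ~$4,300 local capacity (figures from text)
abbott_aid = foundation_aid(16387, 4300)    # $12,087 per pupil in state aid

# Top-decile district: ~$13,000 capacity exceeds a hypothetical $12,500
# target, so the formula awards no aid
wealthy_aid = foundation_aid(12500, 13000)  # 0
```

Real formulas layer on cost adjustments (student needs, labor markets, and so on), but the core logic – aid as the difference between target and local capacity – is exactly this.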

Expressed as effective tax rates, the local share for wealthy I&J districts appears to be slightly higher when expressed relative to property values, but these districts have the lowest effective rate with respect to income – even under SFRA. Overall, the distributions are relatively fair. I’ve written previously about this.

So then, how would it work to simply give every district the same amount of state aid per pupil? The Doherty plan argues for giving every district $7,481 per pupil a) regardless of need and costs and b) regardless of ability to raise local revenue. That would be unprecedented, even in Kansas, Alabama or Wyoming.*

This table shows one perspective on the Doherty plan – if the state simply gave every district the same amount of state aid per pupil, but we then assumed that districts would need to raise the rest on their own if they really wanted to provide an adequate education (as estimated under SFRA) – that is, if high-need districts like the Abbotts still wished to try to raise what SFRA projected that they needed. Abbott districts would be expected to raise $8,906 per pupil toward their $16,387 target, and the wealthiest districts would be expected to raise, on their own, about $5,000 per pupil. This creates a nearly nine-fold difference in the effective income tax equivalent across districts! And that's "Fair School Funding?" One can understand Earl Kim's concern, even if the proposal would bring home millions to Montgomery Township!
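To make the flat-aid arithmetic concrete, here is a small sketch using the numbers above. The wealthy-district target of $12,481 is my hypothetical back-solve from the roughly $5,000 local share cited; everything else comes from the text:

```python
FLAT_AID = 7481  # Doherty plan: identical state aid per pupil, every district

def required_local_share(target_per_pupil, flat_aid=FLAT_AID):
    """Local revenue per pupil a district must raise on its own if it still
    aims for its SFRA adequacy target under a flat per-pupil aid allotment."""
    return max(0, target_per_pupil - flat_aid)

abbott_share = required_local_share(16387)   # 8906 -- vs. ~4,300 in capacity
wealthy_share = required_local_share(12481)  # 5000 -- vs. ~13,000 in capacity

# Relative effort: what share of its equitable-effort capacity each
# district must actually tax to reach its target
abbott_effort = abbott_share / 4300     # ~2.07x capacity
wealthy_effort = wealthy_share / 13000  # ~0.38x capacity
```

Even this crude capacity-based ratio shows a roughly five-fold disparity in required effort; expressed against income, as in the text above, the gap approaches nine-fold.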

Here’s what it looks like in pictures. In the first picture, we see how SFRA operates pretty much like any state school finance formula built on a “foundation formula” approach. Each district has a target revenue per pupil. And the poorest districts – those with the least local fiscal capacity – are expected to raise the least toward this total. Wealthy districts even when applying equitable tax effort can raise far more than they need!

Here’s what the Doherty plan would look like. Here, every district gets the same regardless of need or capacity. This is rather like arguing that we should distribute food stamps and other financial assistance to residents of the estates of Far Hills in equal amounts to the distributions in Camden, or that we should pave well-conditioned and little used roadways with comparable frequency to heavily worn, highly traveled ones. When we place Doherty aid on top of 2009 local revenues per pupil, we see that the lowest income districts end up having combined state and local revenue per pupil well under $10,000 and that the wealthy districts now have combined state and local revenue per pupil approaching $25,000.

Here’s how it looks with respect to children qualifying for free or reduced price lunch. New Jersey’s school finance system has been praised in several national reports, including this one, for most effectively targeting additional resources toward greater needs. And there exists a significant body of research to validate that such school finance reforms actually do matter (regardless of political rhetoric to the contrary). Indeed the Doherty plan would turn New Jersey school finance on its head – making the system among the most regressive in the nation. That is, a system where higher need districts have systematically fewer resources per pupil.

Now, I don’t expect that this proposal really has much broad-based support, and I would not have typically bothered to critique or debunk it. I’ve stated my reasons above for why I needed to take this particular issue on at this time and under these circumstances.  It would simply make no sense for a well-informed local public school administrator like Earl Kim to advocate on behalf of a policy that is so clearly wrongheaded, so obviously unfair, simply because that policy would drive money into the pockets of his constituents.

(Finally, as an interesting aside, we also know from a series of studies of property tax relief aid for wealthy districts in New York State that increasing state aid to wealthy districts is among the surest ways to increase inefficiency in school district spending.  I often use the analogy that it’s like giving out $100 gift cards to Scarsdale residents to shop at Neiman Marcus. They take the $100 and spend $500 for something they didn’t really need. That is, these policies seem to encourage inefficient spending as much if not more than they provide tax relief. Meanwhile, we might have reallocated those $100 gift cards for basic needs in nearby Yonkers or Mount Vernon.)

*Note: It is conceivable that a state would attempt to create a fully state financed education system (that is, eliminate local share) in which case there is no need to correct for differences in capacity to raise local share. But, a completely flat allocation under these circumstances would fail to address differences in needs and costs.  Relying entirely on state source revenues (sales and income taxes) can, however, reduce the stability of revenue flow to schools (property tax revenues tend to be more stable in economic downturns).

School Finance through Roza-Tinted Glasses: 5 School Funding Myths from a Single Misguided Source

I’ve reached a point after these past few years where I feel that I’ve spent way too much time  critiquing poorly constructed arguments and shoddy analyses that seem to be playing far too large a role in influencing state and federal (especially federal) education policy. I find this frustrating not because I wish that my own work got more recognition. I actually think my own work gets too much recognition as well, simply because I’ve become more “media savvy” than some of my peers in recent years.

I find it frustrating because there are numerous exceptional scholars doing exceptional work in school finance and the economics of education whose entire body of rigorous, disciplined research seems drowned out by a few prolific hacks with connections in the current policy debate. It may come as a surprise to readers of popular media, but individuals like Mike Petrilli, Eric Osberg, Rick Hess (all listed on the USDOE resource web site) or Bryan Hassel wouldn't generally be considered credible scholars in school finance or economics of education. I'd perhaps have less concern – and be able to blow this off – if many of the assertions being made by these individuals – and others – weren't so often completely unsupported by reasonable analysis, and if those assertions didn't lead to potentially dangerous and damaging policies.

This post is specifically about the body of methodologically flimsy research produced in recent years by Marguerite Roza, previously of the Center on Reinventing Public Education and currently an advisor to the Gates Foundation.

Why this post now? I’ve simply lost my patience.

This post is in part a response to the recent unveiling of the U.S. Dept. of Education web site on improving educational productivity: http://www.ed.gov/oii-news/resources-framing-educational-productivity. Amazingly, this site lists primarily non-peer-reviewed, shoddy work by Marguerite Roza and colleagues and bypasses entirely more serious research on educational productivity or methods for evaluating it. The quality of some of the examples on this site is particularly abysmal. Yet it is presented as "the work of leading thinkers in the field." (Interesting that "thinkers" is used in place of "researchers.") Among the worst examples, this site lists as a credible resource the Center for American Progress Return on Investment analysis (by Ulrich Boser, a great writer on the topic of art theft but, in this case, a bit out of his field).

I don’t mind so much that this stuff exists. But it certainly doesn’t belong in a serious policy conversation, nor does it represent “the work of leading thinkers in the field.”

Let’s start with a few common attributes of the worst-of-the-worst types of policy research floating around out there and warping and misguiding the education policy debates in general and school finance debates in particular. For lack of a better term, let’s just call it “hack research.”

Perhaps most importantly, hack research fails to recognize all of the credible work that’s already been done on a topic, typically because the research hack who produced it lacks entirely the discipline to bother to understand that body of work and how to build on it in order to come to new, credible findings and conclusions.

Further, hack research displays little regard for the connection between rigorous analysis and conclusions that may be drawn from it. This stems in part from the lack of discipline to actually conduct rigorous analyses.

Particularly effective hacks will not just ignore the body of existing scholarship but will do so belligerently, proclaiming that no good work has ever been done, that no credible methods of analysis exist, and that therefore the time is right for their own creative and new perspective! The hack research method substitute is usually some seemingly intuitive, completely shallow, poorly conceived back-of-the-napkin approach. In other words, the hack research motto is that we must think outside the box, because it's just too much work to open and unpack that box!

Many of us start as hacks, but eventually grow out of it as we realize that there's a lot of great stuff out there to read and exceptional scholars from whom to learn. And some non-hacky researchers will occasionally hack. Hack happens. It's only really problematic when it's a persistent pattern of hackiness, or when it gets worse over time.

The most dangerous hacks use their shtick to influence policy with catchy anecdotes, convincing policymakers and major players that they need look no further (at real research, for example) than their own hacky “research.”  And the most effective hacks can spin findings that never were into pure urban legend – well-accepted myths turned realities – with serious policy implications!

Let’s take a look at a number of mythical findings from shoddy research produced by Marguerite Roza in recent years, including a few sources cited on the USDOE resources page.

Myth #1: States have largely solved between-district funding disparities and within-district disparities are the remaining problem of the day.

Sources of the myth: See references in Baker/Welner article (cited below)

A now common myth in school finance reiterated in numerous sources produced by the Education Trust, Center for American Progress and other DC think tanks and pundits is that states have largely resolved disparities in funding between districts and that persistent disparities are primarily within districts, between schools – a function of illogical district allocation formulas.

In a recent article Kevin Welner and I tackle this argument and dig deeply into the sources behind it, which invariably find their way back to Marguerite Roza, then of the Center on Inventing Research Findings – excuse me – Center on Reinventing Public Education (CRPE).

Kevin and I conclude in our article:

 Two interlocking claims are being increasingly made around school finance: that states have largely met their obligations to resolve disparities between local public school districts and that the bulk of remaining disparities are those that persist within school districts. These local decisions are described as irrational and unfair school district practices in the allocation of resources between individual schools. In this article, we accept the basic contention of within-district inequities. But we offer a critique of the empirical basis for the claims that within-district gaps are the dominant form of persistent disparities in school finance, finding instead that claims to this effect are largely based on one or a handful of deeply flawed analyses.

Kevin Welner and I dissect in detail the problematic, “non-traditional” methods Roza and colleagues use for conducting their analyses (ignoring real methods used by real researchers in real publications), but perhaps more interesting are those cases where a narrow, measured finding pertaining to one specific estimate in one specific context becomes a national trend, a dominant reality soon thereafter. Op-Ed columns by Roza on the topic of within versus between district funding disparities include particularly egregious examples. Kevin Welner and I explain:

Following a state high court decision in New York mandating increased funding to New York City schools, Roza and Hill (2005) opined: “So, the real problem is not that New York City spends some $4,000 less per pupil than Westchester County, but that some schools in New York [City] spend $10,000 more per pupil than others in the same city.” That is, the state has fixed its end of the system enough.

This statement by Roza and Hill is even more problematic when one dissects it more carefully. What they are saying is that the average of per pupil spending in suburban districts is only $4,000 greater than spending per pupil in New York City but that the difference between maximum and minimum spending across schools in New York City is about $10,000 per pupil. Note the rather misleading apples-and-oranges issue. They are comparing the average in one case to the extremes in another.

In fact, among downstate suburban[1] New York State districts, the range of between-district differences in 2005 was an astounding $50,000 per pupil (between the small, wealthy Bridgehampton district at $69,772 and Franklin Square at $13,979). In that same year, New York City as a district spent $16,616 per pupil, while nine downstate suburban districts spent more than $26,616 (that is, more than $10,000 beyond the average for New York City). Pocantico Hills and Greenburgh, both in Westchester County (the comparison County used by Roza and Hill), spent over $30,000 per pupil in 2005.[2] These numbers dwarf even the purported $10,000 range within New York City (a range that we agree is presumptively problematic); our conclusion based on this cursory analysis is that the bigger problem likely remains the between-district disparity in funding.

For the full take down, see:

Baker, B. D., & Welner, K. G. (2010). “Premature celebrations: The persistence of interdistrict funding disparities” Educational Policy Analysis Archives, 18(9). Retrieved [date] from http://epaa.asu.edu/ojs/article/view/718

Myth #2: America’s public school system suffers from something called Baumol’s disease, therefore the only solutions must be found outside of public education

Source: Curing Baumol’s Disease: In Search of Productivity Gains in K–12 Schooling Paul Hill, Marguerite Roza

While I don’t think this one ever really caught on, it’s so absurd that it must be addressed. Further, it’s actually cited on the USDOE educational productivity resources page despite the fact that it offers no useful guidance whatsoever on the topic.

The objective of this policy brief by Paul Hill and Marguerite Roza of CRPE is to explain how American public education suffers from Baumol’s disease, or “the tendency of labor-intensive organizations to become more expensive over time but not any more productive.” Hill and Roza’s attempt at empirical validation that American public education suffers from Baumol’s disease is presented in two oversimplified figures, a graph showing increased number of staff who are not core teachers (Figure 1) and a graph showing that student test scores on the National Assessment of Educational Progress have remained flat over time (Figure 2).  The latter claim that we’ve seen no improvement in NAEP scores over time is contested.[1] And the former claim, when aggregated nationally is not particularly meaningful. The authors provide no empirically rigorous link between the two.

Rather, the casual reader is simply to assume that public schools have added a lot of non-teaching staff and have, on average, nationally seen no yield for those increased costs. Hill and Roza posit:

“While these indicators clearly point to increased costs for education, efforts to quantify productivity changes have been hampered by measurement challenges on the outputs side of the equation. By most accounts, key indicators of outcomes have not shown comparable gains. A thirty-year look at NAEP performance for seventeen year-olds, for instance, suggests that test scores have changed very little.” (p. 3)

While this may, in fact, not be entirely untrue, the authors provide no rigorous validation that “Baumol’s Disease” is a persistent problem of American public schools.

However, without a disease with a catchy name, there would be little reason for their proposed cure. But the proposed cure is no more thoroughly vetted or precisely articulated than the disease.  A central assumption in the Baumol’s disease policy brief is that American public education systems take on one single form, as represented by national averages in the TWO graphs provided, that there is little or no variation within the public education system in terms of resource use or outcomes achieved (e.g. that it all suffers Baumol’s disease), and that therefore the only possible cures are those that come from outside the public education system or at its fringes. That is, that we have nothing to learn from variation within the public education system itself, because there is no such variation. Instead, for example, the authors suggest a closer look at “home schooling, distance learning systems, foreign language learning, franchise tutoring programs, summer content camps, parent-paid instructional programs (music, swimming lessons, etc.), armed services training, industry training/development, apprentice programs, education systems abroad.” (p. 10)

Numerous more credible researchers have spent a great deal of time learning from the heterogeneity of how schools, school districts, and charter schools operate, as well as across states, including studying the relative efficiency of schools that either operate differently or change how they operate. The assumption that the only solutions must come from outside the system is patently absurd, when the “system” consists of 51 policy contexts, over 100,000 schools, 5,000 charter schools and about 15,000 public districts. And it’s just lazy, hack thinking.

While one might gain insights from other labor-intensive industries, or education at the fringes of the current public system, it would be foolish to ignore the extent of variation within the current American public education system, and across traditional public, magnet, charter and private schooling. Arguably, the authors present the view that there is little or nothing to learn from the current system specifically in order to avoid the need for conducting rigorous analysis of it. Further, while such policy briefs may be generously considered as useful conversation starters, we take serious issue with the U.S. Department of Education’s identification of sources of this type, which are purely speculative, and severely lacking in intellectual or empirical rigor, as “Key Readings on Educational Productivity.”

Myth #3: Poor, failing school districts have plenty of money but are squandering too much on Cheerleading and Ceramics when they need to be spending on basics!

Original Source of (the anecdote behind the) myth: “Now is a Great Time to Consider the Per Unit Cost of Everything in Education.”

As I explain in my recent conference paper:

Authors including Marguerite Roza and colleagues of the Center for Reinventing Public Education encourage public outrage that any school district not presently meeting state outcome standards would dare to allocate resources to courses like ceramics or activities like cheerleading. To support their argument, the authors provide anecdotes of per pupil expense on cheerleading being far greater than per pupil expense on core academic subjects like math or English.

  • Imagine a high school that spends $328 per student for math courses and $1,348 per cheerleader for cheerleading activities. Or a school where the average per student cost of offering ceramics was $1,608; cosmetology, $1,997; and such core subjects as science, $739.1

These shocking anecdotes, however, are unhelpful for truly understanding resource allocation differences and reallocation options, and are an unfortunate and unnecessary distraction. For example, the major reason why cheerleading or ceramics expenses per pupil are seemingly high is the relatively small class sizes, compared to those in English or Math. In total, the funds allocated to either cheerleading or ceramics are unlikely to have much if any effect if redistributed to reading or math.

Now, this myth is a rather strange one, because the source from which it comes, authored by Marguerite alone, really isn’t totally unreasonable. It’s not useful in any way, shape, or form, but it’s not unreasonable either. This wacky anecdote about cheerleading and ceramics spending comes from a piece in which Roza is trying to explain the importance of comparing the unit costs of providing specific programs/opportunities. This is a rather “no duh” idea, and the working paper and eventual book chapter are built on uninteresting anecdotes, at best. The original point of the paper is that if administrators look at the per-unit cost of everything, they might find some things that stand out, and some things that might reasonably be reorganized to be offered at a lower unit cost (for example, the cost of cheerleading was reduced by moving it from a class period drawing on salaried time to an after-school activity paid by a small stipend).
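The small-denominator effect driving those per-unit anecdotes is easy to see with invented numbers (everything below is hypothetical, for illustration only):

```python
# Hypothetical: one salaried class period for a 15-member cheer squad vs.
# five math sections of 28 students each (all dollar figures invented)
cheer_cost, cheer_n = 20000, 15
math_cost, math_n = 60000, 5 * 28

cheer_per_pupil = cheer_cost / cheer_n  # ~$1,333 per cheerleader: looks huge
math_per_pupil = math_cost / math_n     # ~$429 per math student

# But the total dollars at stake are small: shifting the entire
# cheerleading budget into the math program adds little per student
added_per_math_student = cheer_cost / math_n  # ~$143
```

The lopsided per-unit figures come almost entirely from the small denominator, which is precisely why reallocating those dollars buys so little in the core subjects.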

But, the spin from this piece has been that this is all that low performing, poor urban districts need to do. They’ve all got enough. They themselves are responsible for the most persistent inequities – not the states. And they are the ones wasting way too much on things like cheerleading and ceramics. Given that this spin has had far more traction than the more reasonable paper behind it, one might assert that this is precisely what Roza intended.

In my paper, I conclude:

Rather, the emergent story from the data in both states was the contrast between high spending, high outcome districts, and low spending low outcome districts and their respective high schools. On average, high spending, high outcome districts were as one might expect much lower in student poverty concentration and low spending, low outcome districts much higher in poverty. That is, after applying thorough cost adjustment including adjustments for differences in student needs. Interestingly, the most striking differences between these groups of districts were not in the availability of assigned teachers or courses in the arts, but rather in the distribution of advanced versus basic course offerings in curricular areas such as math and physical science.

Note that to begin with, low spending, low outcome schools had fewer teacher main assignments and fewer course assignments per pupil. As such, they were, from the outset, more constrained in their allocation options. Further, there is at least some evidence that when evaluating district wide resource allocation, low resource, low outcome districts see greater necessity or feel greater pressure to allocate a larger overall share of resources to elementary classrooms (based on Illinois findings).

For more thorough analysis of this issue, see:

Baker, B.D. (2011) Cheerleading, Ceramics and the Non-Productive Use of Educational Resources in High Need Districts: Really? Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA 2011

Myth #4: High schools in Washington State pay math and science teachers less than other teachers despite public interest and state policies which encourage paying them more

Source: Washington State High Schools Pay Less for Math and Science Teachers than for Teachers in Other Subjects Jim Simpkins, Marguerite Roza, Cristina Sepe

This is one that suffers from both major issues identified at the beginning of this rant. First, the disconnect between the “study” and the press release:

The Press Release
http://www.crpe.org/cs/crpe/view/news/111

The analysis finds that in twenty-five of the thirty largest districts, math and science teachers had fewer years of teaching experience due to higher turnover—an indication that labor market forces do indeed vary with subject matter expertise. The subject-neutral salary schedule works to ignore these differences.

The Study
http://www.crpe.org/cs/crpe/download/csr_files/rr_crpe_STEM_Aug10.pdf

That said, the lower teacher experience levels are indicative of greater turnover among the math and science teaching ranks, lending support to the hypothesis that math and science teachers may have access to more compelling non-teaching opportunities than do their peers. (p. 5)

That is, the conclusions of the study itself and the press release are, well, not consistent. But this pattern of behavior is entirely consistent for Roza and CRPE.

In a previous post I address just how ridiculous the methods in this analysis are, in which she compares STEM teacher salaries with non-STEM teacher salaries without any controls for other factors that affect salaries (on the argument that salaries shouldn’t be based on those things – experience and degree level – anyway).

All that Roza really found in this paper was that STEM teachers tend to be younger and as a result have lower average salaries than non-STEM teachers. From that, she spun the argument that because STEM teachers don’t earn more than other teachers, but STEM fields are more competitive, STEM teachers must be leaving teaching at a higher rate, leading to a less experienced pool and lower average salaries (a vicious cycle indeed! But one that’s never validated by the ridiculous analysis).

In my post, I actually evaluate several years of teacher level data on all teachers in Washington State, finding most of her conclusions to be flat out wrong. Here’s the figure on mean STEM and non-STEM teacher salaries over time: https://schoolfinance101.com/wp-content/uploads/2010/08/slide42.jpg

I also point out that credible researchers like Lori Taylor of Texas A&M have actually done better analyses of Washington teacher wages and addressed variations in labor market competitiveness by field:

Report on Taylor Study:

http://www.wsipp.wa.gov/rptfiles/08-12-2201.pdf

Taylor Study:

http://www.leg.wa.gov/JointCommittees/BEF/Documents/Mtg11-10_11-08/WAWagesDraftRpt.pdf

Somehow, not surprisingly, Roza was unaware of either this better research or the more credible methods used in this research.

For the full take down, see: https://schoolfinance101.wordpress.com/2010/08/20/new-from-the-center-on-inventing-research-findings/

Myth #5: With our handy-dandy basket of reformy fixes, we can cut significant funding from American public schools and dramatically increase productivity!

Source: Petrilli and Roza

Stretching the School Dollar (Brief)

http://www.edexcellence.net/publications-issues/publications/stretching-the-school-dollar-policy-brief.html

In their policy brief on Stretching the School Dollar, Mike Petrilli of Thomas B. Fordham Institute and Marguerite Roza of the Gates Foundation provide a lengthy laundry list of strategies by which school districts and states might arguably increase their productivity at lower expense, or “stretch the dollar” so to speak.  This policy brief is an extension of the Frederick Hess (American Enterprise Institute) and Eric Osberg (Fordham Institute) edited book by the same title.  We highlight this source because of repeated specific references to this source in Secretary Duncan’s “New Normal” speeches during the Fall of 2010.[2]

Because this policy brief and book specifically list strategies that are intended to improve productivity at comparable or lower expense, it would be particularly relevant for the book or brief to either provide directly, or summarize from other sources, rigorous cost-effectiveness analysis of these options, or relative efficiency comparisons of schools and districts employing these options. But that is apparently asking way too much of Roza or Petrilli. I’ll cut Mike some slack here, because he isn’t the one actually presenting himself as a school finance expert/scholar. That’s Roza’s role in this partnership; therefore the burden falls on her. But after reading enough work by Roza and colleagues, I’m no longer convinced that she is even aware that there is a body of research out there on cost-effectiveness analysis or relative efficiency (more on this later). I certainly encourage her to go buy a copy of Hank Levin and Patrick McEwan’s book, not so subtly titled Cost-Effectiveness Analysis: Methods and Applications. It’s a relatively easy, non-academic read.

I’ll offer a primer on these methods and their application to these questions in a future post. There’s no need to beat a dead horse on this topic. I’ve taken down Roza and Petrilli’s reformy gift basket in two previous posts to which you can refer.

For the full take down, see:

Part 1 – Stretching the Truth, Not Dollars: School Finance in a Can: Unproven and Unsubstantiated Dollar-Stretching State Policies

Part 2 – Stretching the Truth, Not Dollars: Considering the Application of Cost-Benefit Analysis to Teacher Layoff Alternatives

Addendum (and a catchy tune): Ethics, Social Science Research and VAMing Teachers

A few days ago, I posted my concerns regarding the contorted logic of the Brookings report on evaluating teacher evaluation systems. More recently, NEPC posted a slightly revised version of that blog post here: http://nepc.colorado.edu/files/Passing%20muster%20fails%20muster.pdf

Below is an addition to the NEPC version which was not in my original post, but rather, a comment I had made in response to a comment in my post.

The awkward issue here is that this brief and calculator are prepared by a truly exceptional group of scholars, and not just reform-minded pundits. It strikes me that we technocrats have started to fall for our own contorted logic – that the available metric is the true measure – and that the quality of all else can only be evaluated against that measure. We’ve become myopic in our analysis, and we’ve forgotten all of the technical caveats of our own work, simply assuming the technical caveats of any/all alternatives to be far greater.

Beyond all of that, I fear that technicians working within the political arena are deferring judgment on important technical concerns that have real ethical implications. When a technician knows that one choice is better (or worse) than another, one measure or model better than another, and that these technical choices affect real lives, the technician should – MUST – be up front/honest about these preferences.

Of course, this all got me thinking about our responsibilities as social science researchers and especially as social science researchers attempting to use complex statistical models to affect public policy in ways that in turn has real consequences for real people.

Now, I’m no expert in ethics, so I’ll not opine much further on the topic. However, I believe that I’ve become somewhat sensitized to ethical concerns and dilemmas that occur in such contexts, perhaps by various interactions with some pretty good ethical thinkers over time and perhaps even by my time working at the Ethical Culture Schools in NYC. Interestingly, one noted alum of ECS was J. Robert Oppenheimer (“father of the atomic bomb”), for whom a physics lab at the school is named.

This all reminds me of a song

(RE)Ranking New Jersey’s Achievement Gap

New Jersey’s current commissioner of education seems to stake much of his argument for the urgency of implementing reform strategies on the claim that while New Jersey ranks high on average performance, it ranks 47th in the achievement gap between low-income and non-low-income children (video here: http://livestre.am/M3YZ). To be fair, this is classic political rhetoric with few or no partisan boundaries.

As I have been discussing on this blog, comparisons of achievement gaps across states between children in families above the arbitrary 185% income level and below that level are very problematic. In my last post on this topic, I showed that in states where there is a larger gap in income between these two groups (the above- and below-the-line groups), there is also a larger gap in achievement. That is, the size of the achievement gap is largely a function of the income distribution in each state.

Let’s take this all one last step and ask – if we correct for the differences in income between low- and higher-income families, how do the achievement gap rankings change? And let’s do this with an average achievement gap for 2009 across NAEP Reading and Math for Grades 4 and 8.

First, here are the differences in income for lower and higher income children, with states ranked by the income gap between these groups:

Massachusetts, Connecticut and New Jersey have the largest income gaps between families above and below the arbitrary Free or Reduced Price Lunch income cut off.

Now, let’s take a look at the raw achievement gaps averaged across the four tests:

New Jersey has a pretty large gap, coming in 5th among the lower 48 states (note there are other difficulties in comparing the income distributions in Alaska and Hawaii, in relation to free/reduced lunch cut points). Connecticut and Massachusetts also have very large achievement gaps.

One can see here, anecdotally, that states with larger income gaps in the first figure are generally those with larger achievement gaps.

Here’s the relationship between the two:

In this graph, a state that falls ON THE LINE is a state where the achievement gap is right on target for the expected achievement gap, given the difference in income for those above and below the arbitrary free or reduced price lunch cut-off. New Jersey falls right on that line. States falling on the line have relatively “average” (or expected) achievement gaps.

One can take this a step further and rank the “adjusted” achievement gaps based on how far above or below the line a state falls. States below the line have achievement gaps smaller than expected, and states above the line have achievement gaps larger than expected. At this point, I’m not totally convinced that this adjustment is capturing enough about the differences in income distributions and their effects on achievement gaps. But it makes for some fun adjustments/comparisons nonetheless. In any case, the raw achievement gap comparisons typically used in political debate are pretty meaningless.
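For anyone curious about the mechanics, here is a minimal sketch of that adjustment in Python, with entirely invented state names and numbers (the real analysis uses ACS income gaps and the 2009 NAEP gaps): fit a simple regression of achievement gap on income gap, then re-rank states by the residual – how far above or below the line each state falls.

```python
# Hypothetical illustration: the state names and figures below are made up.
def adjusted_gap_ranking(states):
    """Rank states by the residual from an OLS fit of achievement gap
    on income gap (above-line minus below-line mean family income)."""
    xs = [s[1] for s in states]   # income gap, $ thousands
    ys = [s[2] for s in states]   # raw NAEP achievement gap, scale points
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    # Residual > 0: gap larger than expected, given the income gap.
    resid = [(name, y - (intercept + slope * x)) for name, x, y in states]
    return sorted(resid, key=lambda r: r[1], reverse=True)

states = [  # (state, income gap, raw NAEP gap) -- all hypothetical
    ("State A", 60, 30),
    ("State B", 45, 29),
    ("State C", 30, 22),
    ("State D", 70, 31),
]
for name, r in adjusted_gap_ranking(states):
    print(name, round(r, 2))
```

The raw-gap ranking and the residual-based ranking can differ substantially, which is the whole point of the adjustment.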

Here are adjusted achievement gap rankings:

Here, if I counted my bars right, NJ comes in 27th in achievement gap. That is 27th from largest. That is, New Jersey’s adjusted achievement gap between higher and lower-income students, when correcting for the size of the income gap between those students, is smaller than the gap in the average state.

More on NAEP Poverty Gaps & Why State Comparisons Don’t Work

This post is a follow-up to a recent post on how income distributions differ across states and how those income distributions thwart our ability to make reasonable comparisons across states in the size of achievement gaps in relation to low-income status. This series of posts on NAEP poverty gaps comes in response to a tweet on May 4 from Lisa Fleisher of the WSJ.  Lisa was quoting NJ Education Commissioner Cerf on NJ school performance.

  • @lisafleisher Lisa Fleisher
  • Cerf on performance of NJ schools compared w/nation: 5th best in country. But gap btwn rich/poor = 47th highest gap. An “astounding figure”

Cerf has had some difficulties in the past making reasonable (honest) presentations of achievement data – specifically with respect to the influence of poverty measurement.

To review (so you don’t have to necessarily go back and read the other post, which is here):

Here’s the basic framing adopted by most who report on this stuff:

Non-Poor Child Test Score – Poor Child Test Score = Poverty Achievement Gap

Non-Poor Child in State A = Non-Poor Child in State B

Poor Child in State A = Poor Child in State B

These conditions have to be met for there to be any validity to rankings of achievement gaps.

Now, here’s the problem.

Poor = child from a family with income below 185% of the poverty income threshold

Therefore, the measurement of an achievement gap between “poor” and “non-poor” is:

Average NAEP of children above 185% poverty threshold – Average NAEP of children below 185% poverty threshold = “Poverty” Achievement Gap

But, the income level for poverty is not varied by state or region. See: https://schoolfinance101.com/wp-content/uploads/2011/03/slide1.jpg

As a result, the distribution of children and their families above and below the specified threshold varies widely from state to state, and comparing the average performance of the groups of children above that threshold and below it is not particularly meaningful.  Comparing those gaps across states is really problematic.
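To see why, consider a toy simulation (my own illustration, not the actual NAEP analysis): give two states the exact same relationship between family income and test scores, but different income distributions. The measured gap between the above-threshold and below-threshold groups still comes out very different.

```python
# Illustrative simulation only: both "states" share an identical mapping
# from family income to test score, yet their measured poverty gaps differ
# purely because their income distributions differ.
CUTOFF = 40  # stand-in for the 185%-of-poverty line, in $ thousands

def score(income):
    # Same income-to-score relationship in both states.
    return 200 + 0.5 * income

def measured_gap(incomes):
    above = [score(i) for i in incomes if i >= CUTOFF]
    below = [score(i) for i in incomes if i < CUTOFF]
    return sum(above) / len(above) - sum(below) / len(below)

# State A: incomes spread far above and far below the line.
state_a = [10, 20, 30, 60, 90, 120]
# State B: incomes clustered near the line on both sides.
state_b = [30, 35, 38, 42, 45, 50]

print(measured_gap(state_a))  # much larger measured gap
print(measured_gap(state_b))  # much smaller gap, same underlying relationship
```

Neither state is doing anything "better" for its poor children here; the gap difference is an artifact of where families sit relative to a single fixed cut point.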

While I showed how different the poverty and income distributions were in Texas and New Jersey as an example, I didn’t necessarily go far enough in that post to explain how/why these distributional differences thwart comparisons of low-income vs. non-low-income achievement gaps. Yes, it should be clear enough that the above-the-line and below-the-line groups just aren’t similar across these two states, or across nearly any other pair.

A logical extension of the analysis in that previous post would be to look at the relationship between:

Gap in average family total income between those above and below the free or reduced price lunch cut-off

AND

Gap in average NAEP scores between children from families above and below the free or reduced price lunch cut-off

If there is much of a relationship between the income gaps and the NAEP gaps – that is, states with larger income gaps between the poor and non-poor groups also have larger achievement gaps – such a finding would call into question the usefulness of state comparisons of these gaps.
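A quick sketch of that test, with invented numbers standing in for the ACS income gaps and NAEP gaps: compute the correlation between the two sets of state-level gaps. A strong positive correlation is exactly the finding that undermines the raw rankings.

```python
# Hypothetical sketch; all figures invented. The real analysis uses
# ACS (2005-2009) income gaps and 2009 NAEP Grade 8 Math gaps by state.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

income_gaps = [60, 45, 30, 70, 50]  # $ thousands, one value per state
naep_gaps = [30, 27, 21, 32, 28]    # NAEP scale points, one per state

r = pearson_r(income_gaps, naep_gaps)
print(round(r, 3))
```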

So, let’s walk through this step by step.

First, here is the relationship across states between the  NAEP Math Grade 8 scores and family total income levels for children in families ABOVE the free or reduced cutoff:

There is a modest relationship between income levels of non-low income children and NAEP scores. Higher income states generally have higher NAEP scores. No adjustments are applied in this analysis to the value of income from one location to another, mainly because no adjustments are applied in the setting of the poverty thresholds. Therein lies at least some of the problem. The rest lies in using a simple ABOVE vs. BELOW a single cut point approach.

Second, here’s the relationship between the average income of families below the free or reduced lunch cut point and the average NAEP scores on 8th Grade Math (2009).

This relationship is somewhat looser than the previous relationship, and for logical reasons – mainly that we have applied a single low-income threshold to every state, and the average income of individuals below that single income threshold does not vary as widely across states as the average income of individuals above that threshold. Further, the income threshold is arbitrary and not sensitive to the differences in the value of any given income level across states. But still, there is some variation, with some states having much larger clusters of very low-income families below the free or reduced price lunch threshold (Mississippi).

BUT, HERE’S THE PUNCHLINE:

This graph shows the relationship between income gaps estimated using the American Community Survey data (www.ipums.org) from 2005 to 2009 and NAEP Gaps. This graph addresses directly the question posed above – whether states with larger gaps in income between families above and below the arbitrary low-income threshold also have larger gaps in NAEP scores between children from families above and below the arbitrary threshold.

In fact, they do. And this relationship is stronger than either of the two previous relationships. As a result, it is somewhat foolish to try to make any comparisons between achievement gaps in states like Connecticut, New Jersey and Massachusetts versus states like South Dakota, Idaho or Wyoming. It is, for example, more reasonable to compare New Jersey and Massachusetts to Connecticut, but even then, other factors may complicate the analysis.

Grading the Governors’ Cuts: Cuomo vs. Kasich vs. Corbett (revised AGAIN!)

Here’s a quick data-driven post on governors’ state aid cuts – or aid changes. So far, I’ve been able to compile data from a few states which make it relatively easy to access and download district-by-district runs of state aid (and one state that does not, but I have good sources of assistance). Here, I compare changes in state aid to K-12 public school districts in Ohio, Pennsylvania and New York.

Let’s start with a review of types of cuts or distributions of cuts that might be applied:

First, cuts might be implemented as a percent of state aid, but might be implemented across different aid programs. States typically have different clumps of state aid that go out to school districts, some of which are progressively allocated with respect to need and wealth and others of which may be allocated flat across districts regardless of local capacity or wealth. And some states, like New York, actually still maintain very large aid programs that are distributed in greater amounts to wealthier districts (STAR aid). If one makes proportionate cuts to need-based aid, or equalized aid, that generally means making larger cuts to needier districts (on a per pupil basis). The cuts alone are regressive on their face, and because the cuts are larger for districts with less capacity to replace the state cuts locally, the effect tends to be highly regressive. Smaller cuts on wealthier districts are easily replaced with local source funds.

Alternatively, a state might cut a flat percent of flatly allocated aid, or a state might distribute aid cuts as a flat percent of per pupil budgets. The distributional effects – at face value – of these cuts do depend on the distribution of state budgets. If the overall system is progressive to begin with (higher need districts having larger per pupil budgets), then the cuts are larger on a per pupil basis in higher need districts. If the overall system is flat, or neutral, the proportionate cuts will be flat or neutral on their face. If applied to flatly allocated aid, the cuts are flat, on their face. However, because wealthier districts can more easily replace the same size cut, the distributional effect will likely remain regressive – though not as absurdly regressive as the first option.

Most cuts fall into these two above categories (first three in table), but the possibility exists that a state would actually cut state aid in greater amounts to those districts that either have less need to begin with or districts that can most easily replace that aid with local resources. These would, on their face, be progressively distributed cuts. But, because those districts receiving the largest cuts would be the ones with greatest capacity to bounce back on their own, the distribution effect would likely be flat.
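As a rough illustration of the first two cut types (all district figures below are invented), here is how they play out per pupil:

```python
# Sketch of the cut taxonomy described above, with invented districts.
# Equalized aid is need-based; flat aid goes out regardless of wealth.
districts = [
    # (name, enrollment, equalized aid/pupil, flat aid/pupil)
    ("High need", 10_000, 8_000, 1_000),
    ("Middle",    10_000, 4_000, 1_000),
    ("Low need",  10_000, 1_000, 1_000),
]

def cut_pct_of_equalized(d, pct=0.10):
    """Option 1: proportional cut to need-based aid. Regressive on its
    face -- high-need districts lose the most per pupil."""
    return pct * d[2]

def cut_pct_of_flat(d, pct=0.10):
    """Option 2: proportional cut to flatly allocated aid. Flat on its
    face, but harder for lower-wealth districts to replace locally."""
    return pct * d[3]

for d in districts:
    print(d[0], cut_pct_of_equalized(d), cut_pct_of_flat(d))
```

Under option 1, the high-need district loses eight times as much per pupil as the low-need district; under option 2 the face-value cut is identical everywhere, but only the wealthier district can easily backfill it with local revenue.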

The baseline conditions in a state matter!

This table draws on the School Funding Fairness report I worked on and released last year, which characterizes the baseline conditions for states. It would be particularly problematic, for example, to make the first type of cuts on a state school finance system that is regressive to begin with. It would arguably also be quite offensive to make flat cuts on a regressive system. For more explanation regarding these baseline conditions, see http://www.schoolfundingfairness.org.

New York, while having high average spending per pupil, IS AMONG THE MOST REGRESSIVELY FUNDED STATE EDUCATION SYSTEMS IN THE NATION. In fact, funding in New York State is only as high as it is because of the very high spending of very affluent suburban districts – suburban districts that, by the way, continue to receive substantial state aid for property tax relief. New Jersey and Ohio are two of the only states which, in our report, showed systematic positive relationships between funding (state and local) and poverty, albeit Ohio’s funding was much less systematic than New Jersey’s and less progressive overall. Still, Ohio was far more progressive on funding distribution than many other states. Pennsylvania was right down there with New York, among the most regressive in the nation – but PA had begun to phase in a new basic education funding formula which would, if implemented, lead to improvements.

How do the governors’ cuts play out? Who’s “best” and who’s “worst”?

Below are the district by district distributions of per pupil aid changes with respect to student need measures, for Ohio, NY and PA.

In New York, the aid cuts per pupil ARE REGRESSIVE ON THEIR FACE, and fall into the first and worst category above. Higher need districts will have their aid cut nearly $500 per pupil, while many very low need districts see negligible cuts per pupil.

AND NOW FOR THE REAL KASICH CUTS. IF THE CORBETT CUTS WERE SUSPECT AS REPORTED, IT ONLY MADE SENSE TO TAKE A SECOND LOOK AT THE KASICH GAME. AND THE PLAYBOOK IS THE SAME!

The playbook is to ignore that federal stabilization money that was intended to be replaced with state aid as it disappeared. Well, here are Kasich’s REAL regressive cuts when comparing 2012 to 2011, with 2011 including the stabilization money:


Ohio’s cuts are particularly interesting. On a per pupil basis, the cuts are systematically smaller in higher poverty districts. The cuts are actually larger in lower need and higher wealth districts (but for a few outliers). These cuts are, on their face, progressive, and will likely lead to a relatively flat distribution of overall per pupil budget changes. I’ve not yet run the second year of aid changes though.

As reported on the PA state portal web site, Basic Education Funding is set to increase by about 2% across PA districts. I’ve certainly heard news of cuts, but the data and official documentation at this point do not show those cuts. The overall state budget data do show huge cuts to other areas of the budget. But BEF funding receives a small boost and SEF (special education funding) is frozen. Because the boost is proportionate to 2010-11 BEF funding, which is equalized, the boost is larger in higher need districts. Nonetheless, the boost is quite small.

NOW FOR THE REAL PENNSYLVANIA CORBETT CUTS, COURTESY OF THE ED LAW CENTER OF PA:

Why the big difference? Well, I should have caught this one. Indeed, the first graph above, which shows a 2% increase over the prior year, in fact shows a 2% increase over only the STATE + FED JOBS portions of prior-year BEF. What they failed to mention is that they chose not to replace the FEDERAL STABILIZATION FUNDING. In 2010-11:

BEF = STATE AID + SFSF + JOBS

The idea was that as SFSF disappeared, state aid would be raised to replace that money, or else districts would face substantial budget holes. Corbett’s 2012 funding is:

Corbett BEF Aid = 1.02 x (STATE AID 2010-11 + JOBS 2010-11)

This leaves out the other $650 million or so that was also in BEF (from SFSF) in the prior year and was distributed through the equalized formula.
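The arithmetic is easy to sketch. Only the roughly $650 million SFSF figure comes from the discussion above; the state-aid and JOBS amounts below are hypothetical placeholders:

```python
# Worked arithmetic with one real figure (the ~$650M of SFSF money noted
# above) and hypothetical placeholders for the other BEF components.
SFSF = 650     # $ millions of stabilization money in 2010-11 BEF (from the post)
STATE = 4_700  # hypothetical 2010-11 state-aid portion of BEF
JOBS = 250     # hypothetical federal JOBS portion of BEF

bef_2011 = STATE + SFSF + JOBS            # what districts actually received
corbett_bef_2012 = 1.02 * (STATE + JOBS)  # the advertised "2% increase" base

apparent_change = corbett_bef_2012 / (STATE + JOBS) - 1  # vs. state + JOBS only
real_change = corbett_bef_2012 / bef_2011 - 1            # vs. actual 2010-11 BEF

print(f"apparent: {apparent_change:+.1%}, real: {real_change:+.1%}")
```

Under these placeholder numbers, the reported change reads +2.0% while districts’ actual year-over-year BEF change is a cut of nearly 10%.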

PA ELC Spreadsheet here!

So, the winner of the worst cuts award in ROUND 1 – the battle of Corbett, Kasich, Cuomo – is Cuomo. Cuomo’s cuts are large, and Cuomo’s cuts are regressive on their face! That’s one heck of an accomplishment!

SO, AS IT TURNS OUT BOTH KASICH AND CORBETT ACTUALLY DO MARGINALLY WORSE THAN CUOMO.


BONUS GRAPH – CHRISTIE’s Prior Year New Jersey Cuts


Why comparing NAEP poverty achievement gaps across states doesn’t work

Pundits love to make cross-state comparisons and rank states on a variety of indicators (I’m guilty too). A favorite activity is comparing NAEP test scores across subjects, including comparing which states have the biggest test score gaps between children who qualify for subsidized lunch and children who don’t. The simple conclusion – states with big gaps are bad – inequitable – and states with smaller gaps must be doing something right!

It is generally assumed by those who report these gaps and rank states on achievement gaps that these gaps are appropriately measured – comparably measured – across states. That a low-income child in one state is similar to a low-income child in another. That the average low-income child or the average of low-income children in one state is comparable to the average of low-income children in another, and that the average of non-low income children in one state is comparable to the average of non-low income children in another.  LITTLE COULD BE FURTHER FROM THE TRUTH.

Let’s review the assumption. Here’s the basic framing adopted by most who report on this stuff:

Non-Poor Child Test Score – Poor Child Test Score = Poverty Achievement Gap

Non-Poor Child in State A = Non-Poor Child in State B

Poor Child in State A = Poor Child in State B

These conditions have to be met for there to be any validity to rankings of achievement gaps.

Now, here’s the problem.

Poor = child from a family with income below 185% of the poverty income threshold

Therefore, the measurement of an achievement gap between “poor” and “non-poor” is:

Average NAEP of children above 185% poverty threshold – Average NAEP of children below 185% poverty threshold = “Poverty” Achievement Gap

But, the income level for poverty is not varied by state or region. See: https://schoolfinance101.com/wp-content/uploads/2011/03/slide1.jpg

As a result, the distribution of children and their families above and below the specified threshold varies widely from state to state, and comparing the average performance of the groups of children above that threshold and below it is not particularly meaningful.  Comparing those gaps across states is really problematic.

Here are graphs of the poverty distributions (using a poverty index where 100 = 100%, or income at the poverty level) for families of 5 to 17 year olds in New Jersey and in Texas. These graphs are based on data from the 2008 American Community Survey (from http://www.ipums.org). They include children attending either/both public and private school.

To put it really simply, comparing the above-the-line and below-the-line groups in New Jersey means something quite different from comparing the above-the-line and below-the-line groups in Texas, where the majority are actually below the line… but where being below the line may not by any stretch of the imagination be associated with comparable economic deprivation. Further, in New Jersey, much larger shares of the population are distributed toward the right hand end of the distribution – the distribution is overall “flatter.” These distributional differences undoubtedly have significant influence on the estimation of achievement gaps. As I often point out, the size of an achievement gap is as much a function of the height of the highs as it is a function of the depth of the lows.

For further explanation of the problems with poverty measurement across states, using constant thresholds, and proposed solutions see:

Renwick, Trudi. Alternative Geographic Adjustments of U.S. Poverty Thresholds: Impact on State Poverty Rates. U.S. Census Bureau, August 2009.

https://xteam.brookings.edu/ipm/Documents/Trudi_Renwick_Alternative_Geographic_Adjustments.pdf

Income distributions for each state: