
The When, Whether & Who of Worthless Wonky Studies: School Finance Reform Edition

I’ve previously written about the growing body of rigorous, peer-reviewed and other studies that tend to show positive effects of state school finance reforms. But what about all of those accounts to the contrary? The accounts that seem so dominant in policy conversations on the topic. What is this vast body of research that suggests school finance reforms don’t matter? That it’s all money down the rat-hole? That judicial orders to increase funding for schools actually hurt children?

Beyond utterly absurd graphs and tables like Bill Gates’ “turn the curve upside down” graph, and Dropout Nation’s even more absurd graph, there have been a handful of recent studies and entire books dedicated to proving that court-ordered school finance reforms simply have no positive effect on children. Some appear in peer-reviewed journals, despite egregious (and really obvious) methodological flaws. And yes, some really do go so far as to claim that court-ordered school finance reforms “harm our children.”[1]

The premise that additional funding for schools, often leveraged toward class size reduction, additional course offerings or increased teacher salaries, causes harm to children is, on its face, absurd. Further, no rigorous empirical study of which I am aware actually validates the claim that increased funding for schools in general, or targeted to specific populations, has led to any substantive, measured reduction in student outcomes or other “harm.”

Questions regarding the measurement and validation of positive effects versus non-effects are complex. But while designing good research analyses can be quite difficult, the flaws of bad analyses are often absurdly simple to spot. As simple as asking three questions: a) whether the reform in question actually happened; b) when it happened and for how long; and c) who was to be affected by the reform.

  • Whether: Many analyses purport to show that school funding reforms had no positive effects on outcomes, but fail to measure whether substantive school funding reforms were ever implemented or whether they were sustained. Studies of this type often simply look at student outcome data in the years following a school-funding-related ruling, creating crude classifications of who won or lost the ruling. Yet the question at hand is not whether a ruling in and of itself leads to changes in outcomes, but whether reforms implemented in response to a ruling do. One must, at the very least, measure whether reform actually happened!
  • When: Many analyses simply pick two end points, or a handful of points of student achievement, to cast as a window, or envelope, around a supposed occurrence of school finance reform or court order, often combining this strategy with the first flaw (never measuring the reform itself). For example, one might take NAEP scores from 1992 and 2007 for a handful of states and indicate that sometime in that window each state implemented a reform or had a court order. Then one might compare the changes in outcomes from 1992 to 2007 for those states to other states that supposedly did not implement reforms or have court orders. This, of course, provides no guarantee that states in the non-reform group (a non-controlled control group?) didn’t actually do something more substantive than the reform group. But, that aside, casting the same large time window across states ignores the fact that reforms may come and go within that window, or may be sufficiently scaled up only during the latter portion of the window. It makes little sense, for example, to evaluate the effects of New Jersey’s school finance reforms, which experienced their most significant scaling up between 1998 and 2003, by also including six years prior to any scaling up of reform. Similarly, some states that aggressively implemented reforms at the beginning of the window may have seen those reforms fade within the first few years. When matters! (The sketch following this list puts rough numbers to this problem.)
  • Who: Many analyses also address only imprecisely the question of “who” is expected to benefit from the reforms. Back to the “whether” question: if there was no reform, then the answer to this question is no one. No one is expected to benefit from a reform that never happened. Further, no one is expected to benefit today from a reform that may happen tomorrow, nor is it likely that individuals will benefit twenty years from now from a reform that is implemented this year and gone within the next three. Beyond these concerns, it is also relevant to consider whether the school finance reform in question, if and when it did happen, benefited specific school districts or specific children. Reforms that benefit poorly funded school districts may not also uniformly benefit low-income children, who may be distributed, albeit unevenly, across well-funded and poorly funded districts. Not all achievement data are organized for appropriate alignment with funding reform data. And if they are not, we cannot know whether we are measuring the outcomes of those we would actually expect to benefit.
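To make the “when” problem concrete, here is a minimal back-of-the-envelope simulation. Every number below is invented for illustration; nothing is drawn from actual state data:

```python
# Hypothetical illustration (invented numbers): how an arbitrary 1992-2007
# window dilutes a reform that only operated from 1998 to 2003.
years = list(range(1992, 2008))          # the full "open window"
reform_years = set(range(1998, 2004))    # when the reform was actually scaled up

baseline_gain = 0.5   # assumed annual score gain with no reform (scale points)
reform_bonus = 1.0    # assumed extra annual gain while the reform is in place

score = 200.0
for y in years[1:]:
    score += baseline_gain + (reform_bonus if y in reform_years else 0.0)

per_year_gain = (score - 200.0) / (len(years) - 1)
print(f"Average annual gain over the full window: {per_year_gain:.2f}")
# ~0.90 points/year, versus 1.50 points/year during the reform years
# themselves -- the nine non-reform years water down the apparent effect.
```

Under these made-up assumptions, the full-window average understates the during-reform gain by about 40 percent, before we even get to the question of whether anything happened in the “control” states.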

In 2011, Kevin G. Welner of the University of Colorado and I published an extensive review of the good, the bad and the ugly of research on the effectiveness of state school finance reforms.[2] In our article we identify several specific examples of empirical studies claiming not merely to find, but to prove outright, that school funding reforms and judicial orders simply don’t matter. That is, that they don’t have any positive effects on measured student outcomes. But, as noted above, many of those studies suffer from basic flaws of logic in their research design, flaws which center on questions of whether, when and who.

As one example of a whether problem, consider an article published by Greene and Trivitt (2008), who claim to have found “no evidence that court ordered school spending improves student achievement” (p. 224). The problem is that the authors never actually measured “spending” and instead only measured whether there had been a court order. Kevin Welner and I explain:

The Greene and Trivitt article, published in a special issue of the Peabody Journal of Education, proclaimed that the authors had empirically estimated “the effect of judicial intervention on student achievement using standardized test scores and graduation rates in 48 states from 1992 to 2005” and had found “no evidence that court ordered school spending improves student achievement” (p. 224, emphasis added). The authors claim to have tested for a direct link between judicial orders regarding state school funding systems and any changes in the level or distribution of student outcomes that are statistically associated with those orders. That is, the authors asked whether a declaration of unconstitutionality (nominally on either equity or adequacy grounds) alone is sufficient to induce change in student outcomes. The study simply offers a rough indication of whether the court order itself, not “court-ordered school spending,” affects outcomes. It certainly includes no direct test of the effects of any spending reforms that might have been implemented in response to one or more of the court orders.

Kevin Welner and I also raise questions regarding “who” would have benefited from specific reforms and “when” specific reforms were implemented and/or faded out. In our article, much of our attention to the who and when questions focused on Chapter 6, “The Effectiveness of Judicial Remedies,” of Eric Hanushek and Alfred Lindseth’s book Schoolhouses, Courthouses and Statehouses.[3] A downloadable version of the same graphs and arguments can be found here: http://edpro.stanford.edu/Hanushek/admin/pages/files/uploads/06_EduO_Hanushek_g.pdf. Specifically, Hanushek and Lindseth identify four states (Kentucky, Massachusetts, New Jersey and Wyoming) that have, by order of their court systems, (supposedly) infused large sums of money into school finance reforms over the past 20 years. Given this simple classification, Hanushek and Lindseth take the National Assessment of Educational Progress (NAEP) scores for these states, including scores for low-income children and racial subgroups, and plot those scores against national averages from 1992 to 2007.

No statistical tests are performed, but graphs are presented to illustrate that there would appear to be no difference in the growth of scores in these states relative to national averages. Of course, there is also no measure of whether and how funding changed in these states compared to others. Additionally, there is no consideration of the fact that in Wyoming, for example, per-pupil spending increased largely as a function of enrollment decline and less as a function of infused resources (the denominator shrank more than the numerator grew).
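The denominator point deserves a quick illustration. Here is a sketch with hypothetical figures, not Wyoming’s actual numbers:

```python
# Hypothetical numbers showing how per-pupil spending can rise mostly because
# enrollment falls, not because new money arrives.
spending_before, enrollment_before = 900_000_000, 95_000
spending_after,  enrollment_after  = 950_000_000, 85_000  # modest growth, sharp decline

pp_before = spending_before / enrollment_before  # ~$9,474 per pupil
pp_after  = spending_after / enrollment_after    # ~$11,176 per pupil

print(f"Total spending grew {spending_after / spending_before - 1:.1%}")  # ~5.6%
print(f"Per-pupil spending grew {pp_after / pp_before - 1:.1%}")          # ~18.0%
# Most of the per-pupil "increase" here comes from the shrinking denominator.
```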

Setting aside these other major concerns, which alone entirely undermine the thesis of Hanushek and Lindseth’s chapter, Kevin Welner and I explain the problem of using a wide time window to evaluate school finance reforms that may ebb and flow within that window:

As noted earlier, the appropriate outcome measure also depends on identifying the appropriate time frame for linking reforms to outcomes. For example, a researcher would be careless if he or she merely analyzed average gains for a group of states that implemented reforms over an arbitrary set of years. If a state included in a study looking at years 1992 and 2007 had implemented its most substantial reforms from 1998 to 2003, the overall average gains would be watered down by the six pre-reform years – even assuming that the reforms had immediate effects (showing up in 1998, in this example). And, as noted earlier, such an “open window” approach may be particularly problematic for evaluating litigation-induced reforms, given the inequitable and inadequate pre-reform conditions that likely led to the litigation and judicial decree.

There also exist logical, identifiable, time-lagged effects for specific reforms. For example, the post-1998 reforms in New Jersey included implementation of universal pre-school in plaintiff districts. Assuming the first relatively large cohorts of preschoolers passed through in the first few years of those reforms, a researcher could not expect to see resulting differences in 3rd or 4th grade assessment scores until four to five years later.

Further, as noted previously, simply disaggregating NAEP scores by race or low income status does not guarantee by any stretch that one has identified the population expected to benefit from specific reforms. That is, race and poverty subgroups in the NAEP sample are woefully imprecise proxies for students attending districts most likely to have received additional resources. Kevin Welner and I explain:

This need to disaggregate outcomes according to distributional effects of school funding reforms deserves particular emphasis since it severely limits the use of the National Assessment of Educational Progress – the approach used in the recent book by Hanushek and Lindseth. The limitation arises as a result of the matrix sampling design used for NAEP. While accurate when aggregated for all students across states or even large districts, NAEP scores can only be disaggregated by a constrained set of student characteristics, and those characteristics may not be well-aligned to the district-level distribution of the students of interest in a given study.

Consider, for example, New Jersey – one of the four states analyzed in the recent book. It might initially seem logical to use NAEP scores to evaluate the effectiveness of New Jersey’s Abbott litigation, to examine the average performance trends of economically disadvantaged children. However, only about half (54%) of New Jersey children who receive free or reduced-price lunch – a cutoff set at 185% of the poverty threshold – attend the Abbott districts. The other half do not, meaning that they were not direct beneficiaries of the Abbott remedies. While effects of the Abbott reforms might, and likely should, be seen for economically disadvantaged children given that sizeable shares are served in Abbott districts, the limited overlap between economic disadvantage and Abbott districts makes NAEP an exceptionally crude measurement instrument for the effects of the court-ordered reform.
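The arithmetic of that crudeness is worth spelling out. A minimal sketch, using the 54% figure from the passage above and an invented effect size:

```python
# Hypothetical attenuation arithmetic. The 54% share comes from the passage
# above; the effect sizes are made up for illustration.
share_treated = 0.54     # share of FRL students attending Abbott districts
true_effect = 10.0       # assumed NAEP-scale effect on treated students
effect_untreated = 0.0   # assumed effect on FRL students outside Abbott districts

observed = share_treated * true_effect + (1 - share_treated) * effect_untreated
print(observed)  # 5.4 -- the statewide subgroup trend shows about half the true effect
```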

Hanushek and Lindseth are not alone in making bold assertions based on insufficient analyses, though Chapter 6 of their recent book goes to new lengths in this regard. Kevin Welner and I address numerous comparably problematic studies with more subtle whether, who and when problems, including the Greene and Trivitt study noted above. Another example is a study by Florence Neymotin of Kansas State University, which purports to find that the substantial infusion of funding into Kansas school districts that supposedly occurred between 1997 and 2006 as a function of the Montoy rulings never led to substantive changes in student outcomes. I blogged about this study when it was first reported. But the most relevant court orders in Montoy did not come until January of 2005, June of 2005 and eventually July of 2006. Remedy legislation arguably began as early as 2005-06, but took hold primarily from 2006-07 onward, before being dismantled starting in 2008. Regarding the Neymotin study, Kevin Welner and I explain:

A comparable weakness undermines a 2009 report written by a Kansas State University economics professor, which contends that judicially mandated school finance reform in Kansas failed to improve student outcomes from 1997 to 2006 (Neymotin, 2009). This report was particularly egregious in that it did not acknowledge that the key judicial mandate was issued in 2005 and thus had little or no effect on the level or distribution of resources across Kansas schools until 2007-08. In fact, funding for Kansas schools had fallen behind and become less equitable from 1997 through 2005. Consequently, an article purporting to measure the effects of a mandate for increased and more equitable spending was actually, in a very real way, measuring the opposite.[4]

Kevin Welner and I also review several studies applying more rigorous and appropriate methods for evaluating the influence of state school finance reforms. I have discussed those studies previously here. On balance, it is safe to say that a significant body of rigorous empirical literature, conscious of whether, who and when concerns, validates that state school finance reforms can have substantive positive effects on student outcomes, including reductions in outcome disparities and increases in overall outcome levels.

Further, it is even safer to say that analyses like the book chapter by Hanushek and Lindseth (2009) and the research articles by Neymotin (2009) and Greene and Trivitt (2008) provide no credible evidence to the contrary, due to significant methodological omissions. Finally, even the boldest, most negative publications regarding state school finance reforms provide no support for the contention that school finance reforms actually “harm our children,” as indicated in the title of a 2006 volume by Eric Hanushek.

Sometimes, even when a research report or article seems really complicated, relatively simple questions like when, whether and who allow the less geeky reader to quickly evaluate and possibly debunk the study entirely. Sometimes the errors of reasoning regarding when, whether and who are so absurd that it’s hard to believe anyone would actually present such an analysis. But these days, I’m rarely shocked. My personal favorite “when” error remains the Reason Foundation’s claim that numerous current reforms positively affected past results! http://nepc.colorado.edu/bunkum/2010/time-machine-award. It just never ends!

Further reading:

Baker, B.D., and Welner, K.G. (2011). School Finance and Courts: Does Reform Matter, and How Can We Tell? Teachers College Record, 113(11). http://www.tcrecord.org/content.asp?contentid=16106

Card, D., and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

Roy, J. (2003). Impact of School Finance Reform on Resource Equalization and Academic Performance: Evidence from Michigan. Princeton University, Education Research Section Working Paper No. 8. Retrieved October 23, 2009 from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=630121 (Forthcoming in Education Finance and Policy.)

Papke, L. (2005). The effects of spending on test pass rates: evidence from Michigan. Journal of Public Economics, 89(5-6), 821-839.

Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA: MassINC.

Guryan, J. (2003). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284.

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (ed), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

Resch, A. M. (2008). Three Essays on Resources in Education (dissertation). Ann Arbor: University of Michigan, Department of Economics. Retrieved October 28, 2009, from http://deepblue.lib.umich.edu/bitstream/2027.42/61592/1/aresch_1.pdf

Goertz, M., and Weiss, M. (2009). Assessing Success in School Finance Litigation: The Case of New Jersey. New York City: The Campaign for Educational Equity, Teachers College, Columbia University.


[1] See, for example: E.A. Hanushek (2006) Courting Failure: How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm Our Children. Hoover Institution Press.  Reviewed here: http://www.tcrecord.org/Content.asp?ContentId=13382

[2] Baker, B.D., and Welner, K.G. (2011). School Finance and Courts: Does Reform Matter, and How Can We Tell? Teachers College Record, 113(11).

[3] Hanushek, E. A., and Lindseth, A. (2009). Schoolhouses, Courthouses and Statehouses. Princeton, N.J.: Princeton University Press.

[4] Baker, B.D., and Welner, K.G. (2011). School Finance and Courts: Does Reform Matter, and How Can We Tell? Teachers College Record, 113(11). http://www.tcrecord.org/content.asp?contentid=16106

Who would really want to spend more than that? (Ed Next & Spending Preferences)

When Paul Peterson asks “Do we really need to spend more on schools?” we already know what he thinks the answer is – an unequivocal NO! Knowing the answer you desire always makes it easier to frame the questions, and as in previous years, this year’s Education Next survey of attitudes toward public education provides few surprises.

Before I even gained full access to Peterson’s most recent WSJ Op-ed (e-mailed to me by a family member), I was able to guess pretty much where he was going with it.  Here’s how Peterson explains the Ed Next public opinion survey findings:

At first glance, the public seems to agree with this position. In a survey released this week by Education Next, an education research journal, my colleagues and I reported that 65% of the public wants to spend more on our schools. The remaining 35% think spending should either be cut or remain at current levels. That’s the kind of polling data that the president’s political advisers undoubtedly rely upon when they decide to appeal for more education spending.

Yet the political reality is more complex than those numbers suggest. When the people we surveyed were told how much is actually spent in our schools—$12,922 per student annually, according to the most recent government report—then only 49% said they want to pony up more dollars. We discovered this by randomly splitting our sample in half, asking one half the spending question cold turkey, while giving the other half accurate information about current expenditure.

Later in the same survey, we rephrased the question to bring out the fact that more spending means higher taxes. Specifically, we asked: “Do you think that taxes to fund public schools around the nation should increase, decrease or stay about the same?” When asked about spending in this way, which addresses the tax issue frankly, we found that only 35% support an increase. Sixty-five percent oppose the idea, saying instead that spending should either decrease or stay about the same. The majority also doesn’t want to pay more taxes to support their local schools. Only 28% think that’s a good idea.

So there is the nation’s debt crisis in a nutshell. If people aren’t told that nearly $13,000 is currently being spent per pupil, or if they aren’t reminded that there is no such thing as a free lunch, they can be persuaded to think schools should be spending still more.

In other words… yeah… the ignorant general public thinks they want to spend more on schools, but only because they don’t realize how much we are already wasting on public schools! When we clue them into the egregious… no… outrageous… exorbitant spending already going on… and hold a gun to their head… and phrase our question just right… pointing out to them just how stupid we think they are… and how smart we are… then they fix their answer… and become much, much more reasonable!

This explanation is problematic at a number of levels. Let’s explore the basic model of local voter preferences for spending on local public schools – specifically the information on price and quality that informs those preferences. First, local public school revenue comes from two primary sources: local property taxes paid on various types of properties within school districts, and state general funds derived largely from state sales and income taxes. The mix varies widely from state to state. Residential property owners frequently pay their property taxes embedded in monthly mortgage payments, and renters pay their landlords’ property taxes embedded in rent prices. Homeowners and renters have at least some feel for the reasonableness of their aggregate monthly housing payments, and some feel for the quality of public services they receive (schools, fire, police, parks, etc.) for the aggregate price they pay. They also have some feel for a) whether they would like those services improved and b) whether they are willing to pay a bit more to support those improvements. In short, a typical taxpayer/survey respondent has a reasonable gut feel regarding the “tax price” paid for the quality of public service provided.

The local taxpayer/voter/survey respondent sufficiently involved with local public schools (having children in the schools, working in the schools, having children who are recent graduates of the schools, or having recently graduated themselves) probably has some indicators of schooling quality in his/her head that guide his/her preference to pay more (or less). Has class size risen, or does it just seem too large? Has the district cut visible programs like music, arts or athletics of late, or has the district increased fees to cover the costs of these programs? As a result, the respondent is at least somewhat able to piece together whether they wish to spend a little more to decrease class sizes, expand programs or reinstate programs previously cut.

But the typical taxpayer/voter/survey respondent likely a) doesn’t give a damn about, and b) is generally unable to contextualize, the total per-pupil expenditures of a local public school district. It’s an abstract concept. A number that relates in a meaningful sense only to those who really spend their days steeped in such numbers. A number most likely to do little more than bias a response in this case, and it seems to, though it is hard to know precisely why.

Even worse is when those numbers are used totally out of context, as in Peterson’s argument above. His description is actually worse than the methods description provided at Ed Next (interestingly, Peterson also adds over $600 per pupil to the average spending figure reported in the paragraph below from Ed Next, then rounds it up to nearly $13,000 by the end of his op-ed):

A segment of those surveyed were asked the same question except that they were first told the level of per-pupil expenditure in their community, which averaged $12,300 for the respondents in our sample. For every subgroup considered, this single piece of information dampened public enthusiasm for increased spending. Support for more spending fell from 59 percent to 46 percent of those surveyed. Among the well-to-do, the level of support dropped dramatically, from 52 percent to 36 percent. Among teachers, support for expenditure increases fell even more sharply—from 71 percent to 53 percent (see Figure 7).

Surely, it would be completely absurd to ask the average person in Tennessee if their schools should spend more after telling that person what the average district spends nationally – implying to the respondent that the figure represents Tennessee spending (as Peterson’s op-ed seems to do, and as in the online survey at Ed Next). It is only marginally more useful, however, to ask the average respondent in Tennessee whether they should spend more or less, given a completely out-of-context representation of their local spending per pupil.

Here’s how the 2008-09 actual national mean per pupil spending compares to the distribution of per pupil spending across Tennessee districts:

(national mean current spending per pupil in 2008-09 was $10,209.83 [w/outliers excluded])

Now, it might be interesting to show the average voter/respondent in Tennessee this graph and then ask whether he or she thinks more should be spent in Tennessee. This graph provides some context. Context that is completely absent when informing a Tennessee respondent either of their own local district spending WITH NO OTHER CONTEXT AVAILABLE or of the national spending WITH NO OTHER CONTEXT AVAILABLE.
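Here is a minimal sketch of that missing context. The district figures are invented; only the national mean comes from the note above:

```python
# Hypothetical illustration: the same national average means very different
# things depending on where it falls in a state's own district distribution.
import statistics

tn_district_spending = [7800, 8100, 8400, 8600, 8900, 9200, 9500, 9900,
                        10300, 10800, 11400, 12100]  # invented per-pupil figures
national_mean = 10_210  # ~2008-09 national mean current spending (from the text)

below = sum(1 for s in tn_district_spending if s < national_mean)
print(f"{below} of {len(tn_district_spending)} districts spend below the national "
      f"mean (state median: ${statistics.median(tn_district_spending):,.0f})")
# A respondent told only "$10,210" has no way of knowing whether their own
# district sits well below, near, or well above that figure.
```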

Put very simply, a per-pupil spending figure out of context is meaningless. $17,000 I say! $17,000… an abomination! It’s a huge number! Why would we ever consider spending more than that per pupil in New York City? Well, what if it just happened to turn out that, in the same year, that $17,000 per pupil was lower, on average, than spending in most of the surrounding districts with much less needy student populations? What if that $17,000 was only approximately 50% of what was being spent in private independent schools operating within the city? It doesn’t sound so big anymore, does it? How would survey respondents in New York City change their answers if this information were provided?

The Ed Next survey, while fun to ponder each year, isn’t particularly helpful for really understanding voters’ preferences or awareness regarding spending on public schools or perceived quality.

Actual data on local budget votes, including those involving tax increases (increasing the more voter-distasteful local property tax), tend to be a much more useful barometer, and even in the worst of economic times, local voter support – especially where voters have the financial capacity to provide that support – remains overwhelmingly positive (Example NY State Data & previous NJ Blog Post [over 70% pass rate in wealthy districts in the worst year]). Matt Di Carlo provides further discussion of this topic here, explaining the general voter preferences. It is also worth noting that even the most poorly constructed and phrased polls do not find significant shares (if any) responding that less should be spent. Yet that is precisely the argument advanced by many pundits in response to these surveys.

Inexcusable Inequalities! This is NOT the post funding equity era!

I’ve heard it over and over again from reformy pundits. Funding equity? Been there, done that. It doesn’t make a damn bit of difference. It’s all about teacher quality! (Which of course has little or nothing to do with funding equity?) The bottom line is that equitable and adequate financing of schools is a NECESSARY UNDERLYING CONDITION FOR EVERYTHING ELSE!

I’m sick of hearing it from pundits who’ve never run a number themselves and have merely passed along copies of the meaningless NCES table showing national average spending in high-poverty districts to be slightly greater than that in lower-poverty ones.

I’m sick of the various iterations of the “we’ve tripled spending and gotten nothing for it” argument and accompanying bogus graphs. And further, the implication put forward by pundits that these graphs and tables, taken together, mean that we’ve put our effort into the finance side for kids in low-income schools, but it’s their damn lazy, overpaid teachers who just aren’t cutting it.

I’m intrigued by those pundits who would point out that perhaps outcomes of low-income children have improved over the past few decades, and that the improvement is entirely attributable to increased accountability measures (when the same pundits have argued previously that the massive increases in funding led to no improvement). Perhaps there has been improvement, and perhaps there has been some increase in funding on average… and perhaps that’s the connection? More insights on achievement gap closure and shifting resources here!

I’m also sick of those who would so absurdly argue that districts serving low-income and minority children really have more than enough money to deliver good programs, but they’ve squandered it all on useless stuff like cheerleading and ceramics.

Anyway, the goal of this post is to point out some of the inexcusable inequalities that persist in K-12 education, inequalities that have real consequences for kids. Let’s take a look, for example, at two states that have persistently large achievement gaps between low-income and non-low-income students: Illinois and Connecticut. These two states have somewhat different patterns of overall funding disparity, but suffice it to say, both states have their winners and losers, and the differences between them are ugly and unacceptable.

Let’s start with Connecticut. Below is a graph of Connecticut school district “need- and cost-adjusted current spending per pupil” and standardized test outcomes on the Connecticut Mastery Test (CMT). Expenditures are adjusted for differences in competitive wages across labor markets, and for shares of children qualifying for free or reduced-price lunch and children with limited English language proficiency (based on estimates reported here). I’ve used essentially the same methods I discussed in this previous post.
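For readers curious what such an adjustment looks like mechanically, here is a minimal sketch. This is not the actual cost model behind the graph; the weights and indices below are invented for illustration:

```python
# A sketch (invented weights) of a need- and cost-adjusted spending figure:
# deflate nominal spending by a labor-cost index and a pupil-need index.
def adjusted_spending(per_pupil, wage_index, frl_share, lep_share,
                      frl_weight=0.4, lep_weight=0.5):
    need_index = 1 + frl_weight * frl_share + lep_weight * lep_share
    return per_pupil / (wage_index * need_index)

# Two districts with identical nominal spending, very different adjusted spending:
print(adjusted_spending(15_000, wage_index=1.05, frl_share=0.05, lep_share=0.01))  # ~$13,940
print(adjusted_spending(15_000, wage_index=1.05, frl_share=0.80, lep_share=0.25))  # ~$9,890
```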

What we see here is that resources – after adjustment for needs and costs – vary widely. Heck, they vary quite substantially even without these adjustments! What we also see is that we’ve got some really high flyers, like Weston, New Canaan and Westport, and we’ve got some that, well, are a bit behind in both equitable resources and outcomes (Bridgeport and New Britain in particular). To be blunt, THIS MATTERS! Yeah… okay, the reformy pundits are saying, but they really have enough anyway. Why put anything else into those rat-holes?

Let’s break it down a bit further. Here are the characteristics of a few of the most advantaged and most disadvantaged districts in the above figure.

But of course, all we need to do is reshuffle the deck chairs in Bridgeport and New Britain – fire their bottom 5% (heck, let’s go for 20%) of teachers, pay the new ones based on test scores… and all will be fixed! Those deficits in average salaries might be a bit problematic. And even the nominal (unadjusted) spending figures fall well short of those of their advantaged neighbors. But bring on those reformy fixes, and throw in some funding cuts while you’re at it!

I’m sure… absolutely sure… that the only reason those salaries are low is that they’ve wasted too much money on administrators and on reducing class size… which we all know doesn’t accomplish anything???? But wait, here are the elementary class sizes:

Well, there goes that ridiculous reformy assumption. Class sizes are actually larger in these higher-need districts! And salaries lower. Damn cheerleading costs! Killing us! Perhaps it’s even going into junk like band and art, which are obviously a waste of time and money on these kids!

Well, here are the staffing structures of the schools, with staffing positions reported per 100 pupils.

Hmmm… disadvantaged districts have far fewer total positions per child, and if we click and blow up the graph, we can see some striking discrepancies! The high-need districts have far more special education and bilingual education teachers (squeezing out other options from their smaller pot!). They have only about half the access to teachers in physical education assignments or art, much less access to band (little or none to orchestra), and significantly less access to math teachers!

But, okay… this Connecticut thing is a freakin’ anomaly, right? These kinds of disparities – savage inequalities – are surely a thing of the past. This is, after all, THE POST-FUNDING EQUITY ERA? Been there and done that!

Let’s do the same walk-through for a few Illinois districts. First, here are the graphs of need- and cost-adjusted operating expenditures (based on a cost model used in my previous post and related working paper) and outcomes –

For unified K-12 districts


For high school districts

Here are the basic stats on these districts

In this case, imagine trying to recruit and retain teachers of comparable quality in JS Morton to those in New Trier at $20k less on average, or in Aurora East compared to Barrington, at nearly $20k less. Ahh… you say… Baker… you’re making way too much of the funding issue. First, we know they’re wasting it all on small class sizes and cheerleading. Second, Baker… you’re missing the point that if we fire the bad teachers and pay the good teachers based on student test scores, those New Trier teachers will be banging down the door to get into JS Morton! That’s real reform, dammit! And we know it works (even though we don’t have an ounce of freakin’ evidence to that effect!).

Clearly, if schools in Aurora East and JS Morton are slated for closure under NCLB (I’ve not checked this actually), it’s not because of poverty. It’s not for lack of resources… Clearly it’s their lazy, overpaid teachers who refuse to pull all-nighters with their kids to beat those odds????? To get those kids into calculus and trig classes presently filled with empty seats (and their own overpaid under-worked teachers!)

So, here’s what the staffing ratios look like.

First, those advantaged districts just have a lot more teacher assignments (position assignments) than the disadvantaged ones. And they especially have far more assignments in advanced math, advanced science, library/media, art and music. There’s not a whole lot of squandering on extras going on in JS Morton and Aurora East. Like CT, though, the disadvantaged districts do have bilingual education and special education teachers! The staffing disparities are baffling – savage, in fact!

In fact, I must be making this stuff up, right? After all, THIS IS THE POST-FUNDING DISPARITY ERA? This kind of stuff is just pulled from the chapters of an old Kozol book! Teachers matter. Not funding. We all know that (except perhaps the various researchers who’ve actually explored the relationship between school funding reforms and student outcomes, only to find that funding does matter).

Clearly, this matters. These funding disparities are substantial. And while these examples are selected from the extremes of the distributions, these districts have plenty of company at the extremes, and these districts fall along a clearly patterned continuum. And, with enough data and enough space, I could keep going and going here. CT and IL are not unique – though IL is clearly among the worst in the nation. New York anyone?

Utica is quite possibly one of the most financially screwed local public school districts in the nation (Poughkeepsie isn’t far behind)!

Arguably, there are entire states – like Tennessee and Arizona – that are approaching (if they’ve not already surpassed) the conditions of districts like Utica, JS Morton, Bridgeport or New Britain.

Until we take these disparities seriously and stop counting on miracles and Superman to give us a free ride, we’re not likely to make real progress on the “Scarsdale-Harlem” achievement gap.

Treating teachers like crap, cutting state funding, and basing teacher salaries on student test scores will do nothing to correct these disparities, and will likely only make them worse. Nor can we expect to close the gap by simply replacing the current underfunded schools with comparably underfunded schools under new management (or simply paying parents of kids in these districts a discount rate to just go somewhere else, and never following up on the kids). This reformy goo is a dangerous distraction from the real issues!

THIS IS NOT THE POST FUNDING EQUITY ERA.

FUNDING MATTERS.

GOOD EDUCATION IS EXPENSIVE & WORTH IT!

EQUITABLE AND ADEQUATE FUNDING IS A NECESSARY UNDERLYING CONDITION FOR THE FUTURE SUCCESS OF AMERICAN PUBLIC EDUCATION.


Teacher Selection: Smart Selection vs. Dumb Selection

I had a Twitter argument the other day about a blog posting that compared the current debate around “de-selection” of bad teachers to eugenics. It is perhaps a bit harsh to compare Hanushek (cited author of papers on de-selecting bad teachers) to Hitler, if that was indeed the intent. However, I did not take that as the intent of the posting by Cedar Riener. Offensive or not, I felt that the blog posting made three key points about errors of reasoning that apply both to eugenicists and to those promoting empirical de-selection of fixed shares of the teacher workforce. Here’s a quick summary of those three points:

  • The first error is a deterministic view of a complex and uncertain process.
  • The second common error becomes apparent once the need arises to concretely measure quality.
  • The third error is a belief that important traits are fixed rather than changeable.

These are critically important, and help us delineate between smart selection and, well, dumb selection. These three errors of reasoning are the basis of dumb selection: selection that is, as the author explains, destined to fail. But I do not see this particular condemnation of dumb selection as a condemnation of selection more generally. By contrast, the reformy pundit with whom I was arguing continued to claim that Riener’s blog was condemning any and all forms of selection as doomed to fail, a seemingly absurd proposition (and not how I read it at all).

Clearly, selection can and should play a positive role in the formation of the teacher workforce or in the formation of that team of school personnel that can make a school great.

Smart Selection: Some form of selection exists in nearly every human endeavor and in any workforce or labor market activity. Selection of individuals into specific careers, jobs and roles, and de-selection of individuals out of careers, jobs and roles. Selection in and of itself is clearly not a bad thing. In fact, the best organizations necessarily select the best available individuals over time to work within those organizations. And individuals attempt to select the best organizations, careers, jobs and roles to suit their interests, motivations and needs. That is self-selection. Teacher selection, or any education system employee selection, is no different. And good teacher selection is obviously important for having good schools. Like any selection process in the labor market, teacher selection involves a two-sided match. On the one hand, there are the school leaders and existing employees (to the extent they play a role in recruitment and selection) who may determine, among a pool of applicants, which ones are the best fit for their school and the specific job in question. On the other hand, there are the signals sent out by the school (some within and some outside the control of existing staff and leaders) which influence the composition of the applicant pool and, for that matter, whether an individual who is selected decides to stay. These include signals about compensation, job characteristics and work environment. Managing this complex system well is key to having a great school. Sending the right signals. Creating the right environment. Making the right choices among applicants. Knowing when a choice was wrong. And handling difficult decisions with integrity.

There has also been much discussion of late about a recent publication by Brian Jacob of the University of Michigan, who found that when given the opportunity to play a strong role in selecting which probationary teachers should continue in their schools, principals generally selected teachers who later proved to generate good statistical outcomes (test scores). Note that this approach to declaring successful decision making suffers from the circular logic I’ve frequently bemoaned on this blog. But at the very least, Jacob’s findings suggest that decisions made by individuals – human beings considering multiple factors – are not counterproductive when measured against our current batch of narrow and noisy metrics. Specifically, Jacob found:

Principals are more likely to dismiss teachers who are frequently absent and who have previously received poor evaluations. They dismiss elementary school teachers who are less effective in raising student achievement. Principals are also less likely to dismiss teachers who attended competitive undergraduate colleges. It is interesting to note that dismissed teachers who were subsequently hired by a different school are much more likely than other first-year teachers in their new school to be dismissed again.

That to me seems like good selection. And it seems that principals are doing it reasonably well when given the chance. And this is why I also support using principals as the key leverage point in the process (with the caveat that principal quality itself is very unequally distributed, and must be improved).

Dumb “Selection”: Dumb selection, on the other hand – the kind of selection that is destined to fail if applied en masse in public schooling or any other endeavor – suffers the three major flaws of reasoning addressed by Cedar Riener in his blog post. Now, you say to yourself, but who is really promoting dumb selection, and what, more specifically, are the elements of dumb selection when it comes to the teacher workforce? Here are the elements:

  1. Heavy weight (especially a fixed, predefined, large share) placed on value-added metrics in teacher evaluation, compensation or dismissal decisions; such metrics can be corrupted, may suffer severe statistical bias, and are highly noisy and error-prone.
  2. Explicit prior specification of the exact share of teachers to be de-selected in any given year (or year after year over time), or prior specification of exact scores or rating categories derived from those scores that require action to be taken, including de-selection. (The simulation sketch following this list illustrates the problem.)
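Here is that simulation sketch. The reliability assumption is mine, roughly in the ballpark often reported for single-year value-added estimates; everything else is invented:

```python
# Hypothetical simulation: fixed cut points on noisy value-added scores
# misclassify teachers. Noise variance equals signal variance (reliability ~0.5).
import random
random.seed(1)

n = 10_000
true_quality = [random.gauss(0, 1) for _ in range(n)]
measured = [t + random.gauss(0, 1) for t in true_quality]  # noise as large as signal

cut = sorted(measured)[int(0.05 * n)]  # the "bottom 5%" cut score
flagged = [i for i in range(n) if measured[i] <= cut]
truly_bottom = set(sorted(range(n), key=lambda i: true_quality[i])[:int(0.05 * n)])
hits = len(set(flagged) & truly_bottom)
print(f"Of {len(flagged)} teachers flagged, only {hits} ({hits / len(flagged):.0%}) "
      f"are actually in the true bottom 5%.")
```

Under these assumptions, most of the teachers falling below the cut score in any given year are not actually in the true bottom 5%; they are simply unlucky that year.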

Sadly, several states have already adopted into policy the first of these dumb selection concepts: the mandate of a fixed weight to be placed on problematic measures. See this post by Matt Di Carlo at ShankerBlog for more on this topic.

Thus far, I do not know of states or districts that have, for example, required that the bottom-scoring 5% of teachers in any given year be de-selected. But states and districts have established categorical rating systems sorting teachers from high to low rating groups, based on arbitrary cut points applied to these noisy measures, and have required that dismissal, intervention and compensation decisions be based on where teachers fall in the fixed, arbitrary classification scheme in a given year, or sequence of three years.

To some extent, the notion of de-selecting fixed shares of the teacher workforce based on noisy metrics comes more from economists’ simulations, built on the convenience of available measures, than from active policy conversations. But in the past year, the lines between these simulations and reality have become blurred, as policy conversations have indeed drifted toward actually using fixed values based on noisy achievement measures in place of seniority as a blunt tool to de-select teachers during times of budget cuts. If and when these simplified social science thought exercises are applied as public policy involving teachers, they do reek of the disturbingly technocratic, “value-neutral” mindset pervasive in eugenics as well.

One other recent paper that’s gotten attention applies this technocratic (my preferred term over “eugenic”) approach to determine whether using performance measures instead of seniority would result in a) different patterns of layoffs and b) different average “effectiveness” scores (again, that circular logic rears its ugly head). Now, of course, if you lay off based on effectiveness scores rather than seniority, the average effectiveness scores of those left should be higher. The deck is stacked in this reformy analysis. But even then, the authors find very small differences, largely because a) seniority-based layoffs seem to affect mainly first- and second-year teachers, and b) effectiveness scores tend to be lower for first- and second-year teachers. Overall, the authors find:

We next examine our value-added measures of teacher effectiveness and find that teachers who received layoff notices were about 5 percent of a standard deviation less effective on average than the average teacher who did not receive a notice. This result is not surprising given that teachers who received layoff notices included many first and second-year teachers, and numerous studies show that, on average, effectiveness improves substantially over a teacher’s first few years of teaching.
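Back to the stacked deck: here is a quick sketch, with invented data, of why score-based layoffs mechanically “win” this comparison:

```python
# Illustration (invented data) of the circularity: if layoffs are made on the
# effectiveness score itself, survivors' average score is higher by construction.
import random
random.seed(2)

scores = [random.gauss(0, 1) for _ in range(1000)]
seniority = list(range(1000))   # assume score is unrelated to seniority here
random.shuffle(seniority)

k = 100  # number of layoffs
survivors_by_score = sorted(scores)[k:]                            # cut the k lowest scores
survivors_by_seniority = [s for _, s in sorted(zip(seniority, scores))[k:]]  # cut the k least senior

mean = lambda xs: sum(xs) / len(xs)
print(f"Survivor mean score, score-based layoffs:     {mean(survivors_by_score):+.3f}")
print(f"Survivor mean score, seniority-based layoffs: {mean(survivors_by_seniority):+.3f}")
# Judging layoff rules by the very score one rule optimizes guarantees that
# the score-based rule "wins" on that metric.
```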

Perhaps most importantly, these thought experiments, not ready for policy implementation prime time (nor will they ever be?), necessarily ignore the full complexity of the system to which they are applied and, as Riener noted, assume that individuals’ traits are fixed: how you are rated by the statistical model today is assumed to be correct (despite a huge chance it’s not) and assumed to be sufficient for classifying your usefulness as an employee, now and forever (be it a one- or three-year snapshot). In that sense, Riener’s comparison, while offensive to some, was right on target.

To summarize: Smart selection good. Dumb selection bad. Most importantly, selection itself is neither good nor bad. It all depends on how it’s done.

More Flunkin’ out from Flunkout Nation (and junk graph of the week!)

Earlier today I stumbled across this brilliant post by RiShawn Biddle over at Dropout Nation.

Biddle boldly claims:

Despite the arguments (and the pretty charts) of such defenders as Rutgers’ Bruce Baker, there is no evidence that spending more on American public education will lead to better results for children.

Now, regarding the “no evidence” claim, I would recommend reading this article from Teachers College Record this year, which summarizes a multitude of rigorous empirical studies of state school finance reforms, finding generally that increased funding levels have been associated with improved outcomes, and that more equitable distributions of resources have been associated with more equitable distributions of outcomes.

In fact, even the Spring 2011 issue of the journal Education Finance and Policy includes an article by Joydeep Roy supporting the positive results of state school finance reforms (using Michigan data).

Proposal A was quite successful in reducing interdistrict spending disparities. There was also a significant positive effect on student performance in the lowest-spending districts as measured in state tests. (From the abstract.)

As Kevin Welner and I point out in our article, this study is not unique in its findings. Here are a few others:

Card & Payne (2002)

Using micro samples of SAT scores from this same period, we then test whether changes in spending inequality affect the gap in achievement between different family background groups. We find evidence that equalization of spending leads to a narrowing of test score outcomes across family background groups. (p. 49)

Deke (2003)

Using panel models that, if biased, are likely biased downward, I have a conservative estimate of the impact of a 20% increase in spending on the probability of going on to postsecondary education. The regression results show that such a spending increase raises that probability by approximately 5% (p. 275).

Papke (2005)

Focusing on pass rates for fourth-grade and seventh grade math tests (the most complete and consistent data available for Michigan), I find that increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance. (Papke, 2005, p. 821.)

Downes (2004) on VT

All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post–Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance. (p. 312)

Downes, Zabel & Ansel (2009) on Mass

The achievement gap notwithstanding, this research provides new evidence that the state’s investment has had a clear and significant impact. Specifically, some of the research findings show how education reform has been successful in raising the achievement of students in the previously low-spending districts. Quite simply, this comprehensive analysis documents that without Ed Reform the achievement gap would be larger than it is today. (p. 5)

Guryan (2003) on Mass

Using state aid formulas as instruments, I find that increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students. The magnitudes imply a $1,000 increase in per pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures. (p. 1)

Goertz & Weiss (2009) on NJ

State Assessments: In 1999 the gap between the Abbott districts and all other districts in the state was over 30 points. By 2007 the gap was down to 19 points, a reduction of 11 points or 0.39 standard deviation units. The gap between the Abbott districts and the high-wealth districts fell from 35 to 22 points. Meanwhile performance in the low-, middle-, and high-wealth districts essentially remained parallel during this eight-year period (Figure 3, p. 23).

I could go on. But that’s a fair share of evidence right there.

And what does Biddle provide as counter-evidence to the body of evidence I summarize above (I’ve sent the article link to Biddle on more than one occasion, but he apparently doesn’t read this kind of academic stuff)?

Biddle counters with a link to this graph – a true gem (I’ve added some annotation, not in his original)!

Yes, Biddle’s entire counter to the body of research he has not and will not read is to use this graph of “promoting power” by student race group for Jersey City, NJ in 2004 and 2009. Note that the infusion of additional funds in NJ occurred mainly from 1998 to 2003, leveling off thereafter. But that’s a tangential point (not really). So, Biddle’s absolute verification that more money doesn’t matter is to simply assert, without verification, that Jersey City got a whole lot more money, and then to use this graph to argue that nothing improved!

First of all, that analysis wouldn’t pass muster as a master’s-degree-level assignment (I teach a class on this stuff at that level), much less as a major research conclusion. From a graphing standpoint, I often criticize my students’ work for what I refer to as gratuitous use of 3-D – especially where the use of 3-D bars actually obscures the comparisons by making it hard to see where they align on the axis.

But, the really funny if not warped part of this graph is that there appear to be significant gains for black males between 2004 and 2009, but those gains are obscured by hiding the 2009 black male score behind the 2004 black female score.

Note that the graph also contains no information regarding the actual shares of the student population that fall into each group. Not very useful. Pretty damn amateur. It certainly fails to make any particular point, and certainly doesn’t refute the various citations above – all of which employ more rigorous analytic methods, apply to more than a single district, and most of which appear in rigorous peer-reviewed journals.
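For what it’s worth, here is a minimal sketch of how the same comparison could be drawn as flat, 2-D grouped bars, so that no group’s bar can hide behind another’s. The values are invented placeholders, not Biddle’s actual figures:

```python
# Grouped 2-D bars avoid the occlusion problem of 3-D charts.
import matplotlib.pyplot as plt

groups = ["Black male", "Black female", "Hispanic male", "Hispanic female"]
rates_2004 = [45, 58, 50, 60]  # invented "promoting power" percentages
rates_2009 = [55, 60, 53, 63]

x = range(len(groups))
w = 0.38
plt.bar([i - w / 2 for i in x], rates_2004, width=w, label="2004")
plt.bar([i + w / 2 for i in x], rates_2009, width=w, label="2009")
plt.xticks(list(x), groups, rotation=20)
plt.ylabel("Promoting power (%)")
plt.legend()
plt.tight_layout()
plt.show()
```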

References:

Card, D., and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284.

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (ed), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA. MassINC.

Goertz, M., and Weiss, M. (2009). Assessing Success in School Finance Litigation: The Case of New Jersey. New York City: The Campaign for Educational Equity, Teachers College, Columbia University.

Guryan, J. (2003). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

Papke, L. (2005). The effects of spending on test pass rates: evidence from Michigan. Journal of Public Economics, 89(5-6). 821-839.

Roy, J. (2003). Impact of School Finance Reform on Resource Equalization and Academic Performance: Evidence from Michigan. Princeton University, Education Research Section Working Paper No. 8. Retrieved October 23, 2009 from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=630121 (Forthcoming in Education Finance and Policy.)

Private Choices, Public Policy & Other People’s Children

I don’t spend much if any time talking about my personal decisions and preferences on this blog. It’s mostly about data and policy. There’s been much talk lately about whether a governor’s or president’s choice to send their children to elite private schools, or where Bill Gates, Mark Zuckerberg or prominent “ed reformers” attended school, is at all relevant to the current policy conversation around “reforming” public schools. When those choices have been questioned publicly, they’ve often been met with the backlash that those are personal choices of no relevance to the current policy debate – and that raising them amounts to dirty personal attacks on personal, rational choices.

I have no problem with these personal choices. But these personal choices may, in fact, be relevant to the current policy debate. I do keep in mind my own personal choices and preferences as I evaluate what I believe to be good policy for the children of others. And I try to keep in mind what I know from my background in research and policy when I make my personal choices. Like these prominent politicos and pundits, I too choose private independent schools – relatively expensive ones – for my children, and I have my reasons for doing so. As I’ve noted on my blog on a number of occasions, I taught at an exceptional private independent school in New York City, and have relatives and friends who continue to be involved in (and with) high-quality private independent schools as teachers, administrators and parents. I did not, however, attend private school. I attended public school in Vermont, followed by private college (Lafayette College).

Why do I personally prefer private independent schools, which often come with a high price tag?  Here are a few reasons:

  1. The responsiveness that comes from a close-knit small community with not only small class sizes but also lower total student load for teachers (at middle and secondary level in particular)
  2. The depth and breadth of curricular offerings ranging from Latin in the middle school, to a diverse array of social science, advanced science and math courses at the high school level and a plethora of opportunities in the arts and athletics.
  3. The lack of emphasis on standardized testing – bubble tests and overemphasis on tested curricular areas and state standards.

Yes, I do consider it important that these schools are not test-whipped, specifically that they are not obsessed with basic reading and math bubble tests alone, or even more disturbing, tests of science and social studies content where the balance (or absence) of content is a function of partisan preferences of ill-informed politically motivated elected officials (e.g. Kansas science standards, or Texas social studies/history standards – thankfully, I’m not in KS anymore).

These days, I consider it especially important that my children not be in a school where teachers have to hang their hopes of achieving a living wage (or getting a bonus to afford cosmetic surgery as in “Bad Teacher”[hope to see that one soon!]) on whether or not my child gains X+Y points on those reading or math tests. In fact, these may now be my main reasons for opting out.

So yes, you might try to call me a hypocrite for preferring private schools for my own children while apparently being such a staunch defender and supporter of the public system (including voting yes on local district budgets, even when encouraged to vote no by public officials). But that would be a dreadful oversimplification and misrepresentation of my position.

I have worked in both public and private schools – one good and one bad of each – over a 10+ year period prior to my life in higher education.  I’ve studied and compared public and private schools in various locations and of various types for over 15 years and published numerous articles, papers and reports. What I’ve learned most from these studies is that private and/or less regulated markets are simply more varied than public and/or more regulated markets. Neither better nor worse on average – simply more varied.

Top notch private schools spend much more, and many financially strapped, relatively average to very low academic quality private schools spend much less. Much more and much less than one another, and much more and much less than nearby public schools. It is a massive bait-and-switch to suggest: look how great Sidwell Friends (DC), Dalton or Fieldston (NYC) are compared to public schools, and look how little the average Catholic parish elementary school spends compared to the urban public district. Of course, it’s never phrased so obviously as a bait-and-switch – suggesting that you can get a Sidwell or Dalton education at an urban Catholic elementary school price. You can’t! Yes, the average Catholic parish elementary school likely spends less per pupil than the public district. But that school is no Sidwell, Dalton or Fieldston, which spend closer to and in excess of double what the public schools in their area spend.

Private schools do not, as many assume, spend only about half what public schools do. This is an urban legend, drawn from dated analyses (now over 10 years old) that were misrepresented to begin with. My extensive report on private school supply and spending covers these issues in detail.

To reiterate a major finding from my study of private school costs, private independent schools of the type I am talking about here (members of NAIS or NIPSA), spend ON AVERAGE, 1.96 times the average per pupil amount of public schools in the same labor market! (and have half the pupil to teacher ratio)

I am quite convinced that many of the policy makers who choose elite private schools for their own and advocate for scaling back the public system, really don’t understand the difference. They really don’t know that their private schools outspend nearby traditional public schools – by a lot – despite serving more advantaged student populations. Heck, I’ve talked to administrators in private independent schools who feel that their own budgets are tight (legitimately so), and assume that the public schools around them spend much more per child. But they are simply naïve in this regard (while wise in many other ways). No intent to harm. They’ve simply bought into the misguided rhetoric that private schools spend less and get more and they’ve never double-checked the facts. But even a few minutes of pondering their own budgets and looking up local public school spending brings them around. (Part of this perception is likely driven by differences in access to funding for capital projects, where heads of private schools recognize the heavy lifting of major fundraising campaigns, and envy the taxing authority of public school districts for these purposes).

In my view, the hypocrisy lies in what those who choose elite private schools for their own argue are the best solutions for public education for the children of others.  If the preferences are the same, there is no hypocrisy. The problem is when those preferences are vastly different – completely at odds – as they tend to be in the present “ed reform” and “new normal” debate.

It is hypocritical for pundits who favor, for their own children, expensive schooling with a diverse curriculum, small class sizes and little standardized testing (freeing teachers to be professionals), to argue for less money, class size increases and increased standardized testing (and teacher evaluation based on those tests) when it comes to other people’s children.

Yes, I too personally favor expensive private schooling for the reasons I’ve indicated above. And yes, my private school significantly outspends both the elite suburban public school district where I live and New Jersey’s reasonably well funded urban districts (compared to other states, see: http://www.schoolfundingfairness.org).   The way I see it, I would not just be a hypocrite, but a complete a-hole if I used my pulpit (what little pulpit I have) as a school finance expert to argue that we should be spending less on others, advocating different policies for others than I desire for myself.  But it’s precisely because I spend my day buried in data on school finance and education policy that I see this glaring hypocrisy.

The difference is that I believe that other children – those whose parents are not able to make this expensive choice – should have access to well-funded schools that also provide small class sizes, diverse curriculum, and for that matter, place less emphasis on standardized tests, and treat teachers as responsible, knowledgeable professionals (not script reading stand-ins and test proctors).

To clarify, this is not a criticism of individuals with personal preferences for high quality education for their own children who are otherwise unconcerned with (or oblivious to) the broader public policy questions pertaining to the children of others. Rather, this is a direct criticism of those public officials and vocal “ed reformers” who prefer high quality, well funded education for their own and then loudly and publicly advocate for a very different quality (and type) of education for the children of others.

If we could actually close the gap between public school resources and resource levels of elite private schools, there might be less demand for those elite private schools (though some would indeed respond with an arms race to outpace public schools).  Presently, however, elite private schools stand to benefit significantly from the “ed reform” and “new normal” movement which will likely make more public schools – including those in more affluent ‘burbs – even less desirable for parents currently on the fence.

So, here’s my challenge to all those policymakers who also prefer elite private independent schools for their children.  I urge you to make a list of all of the reasons why you chose a private independent school. Notably, many if not most parents list class size as a major factor (and most schools advertise class size as a major benefit).  Make a list of the specific attributes of your private school including:

  1. Average class size
  2. Teacher education levels
  3. Numbers and types of elective and advanced course offerings
  4. Numbers and types of extracurricular activities
  5. Whether they pay more experienced teachers more than less experienced ones (or more for teachers holding advanced degrees)
  6. Whether they emphasize student test scores when evaluating or compensating teachers

and whatever else you might think of. (Here are a few sample NJ private schools.)

Get a copy of the school’s IRS 990 tax filing from the school (or from:  http://foundationcenter.org/, or http://www.guidestar.org) to find out roughly how much your school spends each year, and divide that by the number of total enrolled pupils.
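If you’d rather script this step than do it by hand, the arithmetic is a one-liner. Here’s a minimal Python sketch; the dollar figure and enrollment below are hypothetical placeholders, not any particular school’s numbers:

```python
# Rough per-pupil spending from an IRS 990 filing.
# Total functional expenses (Form 990, Part IX) is a reasonable first
# approximation of annual spending; enrollment comes from the school's
# own published counts. Both figures below are hypothetical.
total_expenses = 21_500_000  # hypothetical total functional expenses
enrollment = 950             # hypothetical total enrolled pupils

per_pupil = total_expenses / enrollment
print(f"Approximate spending per pupil: ${per_pupil:,.0f}")  # ~$22,632
```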

Then, gather similar information on surrounding public schools. Make your own comparisons. And after you’ve done so, let me know if you’re still comfortable making bold public proclamations that we need to rein in the absurd spending of public schools, increase class sizes and slash all of those frivolous extracurricular programs for other people’s children, but certainly not our own!


Zip it! Charters and Economic Status by Zip Code in NY and NJ

There’s no mystery or proprietary secret among academics or statisticians and data geeks as to how to construct simple comparisons of school demographics using available data.  It’s really not that hard. It doesn’t require bold assumptions, nor does it require complex statistical models. Sometimes, all that’s needed to shed light on a situation is a simple descriptive summary of the relevant data.  Below is a “how to” (albeit sketchy) with links to data for doing your own exploring of charter and traditional public school demographics, by grade level and location.

Despite the value of a simple, direct and relevant comparison using accessible data that allows easy replication, many continue to obscure charter/non-charter comparisons with convoluted presentations of less pertinent information. Matt DiCarlo recently published a very useful post (at Shanker Blog) explaining the various convoluted descriptions in Caroline Hoxby’s research on charter schools that make it difficult to discern whether the charter schools in her comparisons really had student populations comparable to those of nearby, same grade level traditional public schools.

As I’ve discussed in the past, charter advocate researchers tend to avoid these basic comparisons, instead showing that students selected through the lottery were comparable to those not selected but who still entered the lottery (excluding all those who didn’t enter the lottery). While this information is relevant to the research question at hand (comparing effectiveness among lottery winners and losers), it skips over entirely another potentially relevant tidbit – whether, on average, the charter students are comparable to students in surrounding schools.

Alternatively, charter advocate researchers will compare charter characteristics to district wide averages, or whatever comparison sheds the most favorable light.  For example, Matt DiCarlo explains of Caroline Hoxby’s NYC charter research that:

“The authors compare the racial composition of charter students to that of students throughout the whole city – not to that of students in the neighborhoods where the charters are located, which is the appropriate comparison (one that is made in neither the summary nor the body of the report). For example, NYC charter schools are largely concentrated in Harlem, central Brooklyn and the South Bronx, where regular public schools are predominantly non-white and non-Asian (just like the charters).”

The better approach is, of course, to compare against the most comparable schools – those serving similar grade levels in the same general proximity – or, better still, to identify each individual school (so that one can match comparable grade levels) across districts in similar locations.

Here’s my general guide to making your own comparisons using a readily available data source.

Go to: www.nces.ed.gov/ccd

Use the Build a Table function: http://nces.ed.gov/ccd/bat/

  1. Select as many years of data as you want/need (first screen toggle)
  2. Select the “school” as your unit of analysis for your data (first screen, drop down)
  3. Select “contact information” from the drop down menu on next screen
    1. Select location zip code
    2. Select location city
  4. Select “classification information” from the drop down menu
    1. Select the “charter” indicator
    2. Select the “magnet” indicator (in case you want to include/exclude these)
  5. Select “total enrollment” from the drop down menu
    1. Select total enrollment
  6. Select “students in special programs” from the drop down menu
    1. Select students qualifying for free lunch
    2. Select students qualifying for reduced price lunch
  7. Select “Grade Span Information” from the drop down menu
    1. Select “school level” identifier
    2. Select “High Grade” and “Low Grade” indicators if you want more flexibility in comparing “like” schools
  8. Pick the state or states you want (you can’t use this tool to pull all schools nationally because the data set will be too large for this tool. Complete data are downloadable at: http://nces.ed.gov/ccd/pubschuniv.asp )

Calculate Percent Free Lunch and Percent Free & Reduced Lunch (divide groups by total enrollment)!

Play…
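For those who prefer code to spreadsheets, here’s a minimal pandas sketch of that last step. The column names are placeholders – match them to whatever labels your Build a Table export actually uses:

```python
import pandas as pd

# Load the CCD extract downloaded from Build a Table.
# Column names below are hypothetical; rename to match your export.
df = pd.read_csv("ccd_extract.csv")

# Percent free lunch, and percent free & reduced price lunch, by school.
df["pct_free"] = 100 * df["free_lunch"] / df["total_enrollment"]
df["pct_frl"] = 100 * (df["free_lunch"] + df["reduced_lunch"]) / df["total_enrollment"]

# Compare charters to other schools sharing the same zip code.
summary = df.groupby(["zip_code", "charter"])["pct_frl"].mean().unstack()
print(summary.head())
```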

Here are some examples…

First, here are a handful of New Jersey Charter Schools compared to other schools (comparable and not) in their same zip code.

In this first figure, from a Newark, NJ zip code, we can see quite plainly that the share of children qualifying for free lunch at Robert Treat Academy is much lower than at all other surrounding schools, including the high school in the zip code (Barringer) – and high schools typically have lower rates of students qualifying (or filing the relevant forms) for free lunch.

Here are a few more.

Other “high flying” charters in Newark including North Star Academy, Gray Charter School and Greater Newark Academy, in a zip code with fewer traditional public schools, tend to have poverty concentrations more similar to specialized/magnet schools than to neighborhood schools in Newark. Other charter schools like Maria Varisco Rogers and Adelaide Sanford have populations more comparable to traditional neighborhood schools.  But, we don’t tend to hear as much about these schools – or their great academic successes.

Things aren’t too different over in Jersey City.  In the area (zip code) around Learning Community Charter School, other charters and neighborhood schools have much higher rates of children qualifying for free lunch than LCCS. Only the special Explore 2000 school has a lower rate.

Ethical Community Charter also stands out like a sore thumb when compared to all other schools in the same zip code, including those serving upper grades which typically have lower rates.

But what about those NYC KIPP schools? How about some KIPP BY ZIP?

So much has been made of the successes of KIPP middle schools, coupled with much contentious debate over whether KIPP schools really serve representative populations and/or whether they are advantaged by selective attrition. I included some links to relevant studies on those points here. But even those studies, which make many relevant and interesting comparisons, don’t give the simple demographic comparison to other middle schools in the same neighborhood. So here it is:

Paul Mulshine, Amoral Self-Indulgence & New Jersey School Finance

On most days, I can simply laugh off a ridiculous Paul Mulshine column in the Star Ledger. Most of his claims regarding education, taxation and the intersection of the two range from flat-out incorrect to wacky and misguided. But Mulshine’s claims in his column on Wednesday June 22nd necessitate a response.

For several years, I have been a professor whose primary responsibilities include training future school administrators. I believe strongly that well-informed, well prepared and knowledgeable school administrators can and should play a critical role in guiding public education policy. As one might figure from the name of this blog, my emphasis is on teaching school finance – an inherently political and divisive topic that often pits one district against another or even one school against another. As a result, I believe it is particularly important that leading voices in education policy in a state understand not only how policies affect their own district and children but how those policies affect children statewide – that local school administrators can think beyond the boundaries of their own school district and local constituents, and be mindful of the good of the public as a whole.

Any local school administrator would likely want to find ways to manipulate the state formula for allocating aid in a way that drives more aid to their district. And over the years, I’ve seen many twisted and unethical arguments advocated and legislated to accomplish these goals – including Jackson, Wyoming – the wealthiest district in the state – arguing (successfully) that it needs 30% more funding than any other district simply because it is so wealthy. Kansas similarly adopted provisions which provide for more funding in districts a) with higher priced houses and b) with more children attending school in new facilities. I’ve seen more money driven to wealthier districts in South Carolina on the argument that they have more gifted children. And I’ve seen more money targeted to white schools than black schools in Alabama (still in effect) on the basis that white schools have more teachers with advanced degrees and that teachers with advanced degrees cost more (built into the state aid calculation). I’ve written on this topic in peer-reviewed research.

I’ve often been frustrated to see local public school administrators in districts advantaged by these illogical policies either sit idly by, knowing the policies to be wrong, or advocate loudly on behalf of these policies, still knowing full well that the policies are built on flimsy if not absurd arguments.  In the politics of state school finance, self-interest is often hard to overcome.  It is a rare administrator who is able to balance these conflicts well – to not take the easy way out and accept an absurd or even unethical policy position simply because it drives more dollars to their constituents. Earl Kim of Montgomery Township is one of those rare administrators.

Mr. Mulshine’s view that the only role of the local public administrator is to get more for his or her constituents, and that local bureaucrats should never take any action to the contrary – regardless of ethical considerations – is not only absurd but is indicative of much of what is wrong in politics today and society in general.  Mulshine prefers his bureaucrats to be amoral sock puppets.

Here is a clip of what Mulshine had to say about Earl Kim:

“Let me offer a hint to this overpaid bureaucrat: An employee of the school board  has no say whatsoever in such public policy matters as the proper amount of property-tax relief.”

“If he did, however, he should not be advising his superiors to take a course of action that deprives the taxpayers of tens of millions of dollars that could lower their property taxes and help keep them in their houses.”

What Earl Kim understands and what Mulshine clearly doesn’t, is that while Doherty’s “Fair School Funding” plan might drive a lot more money into Earl Kim’s district, it would only do so at the expense of the system as a whole. And that is an ethical compromise that Earl Kim seems unwilling to support. To Mulshine, however, ethics seem inconsequential, when traded for millions of dollars.

Let’s actually take a simulated look at why Earl Kim might be concerned about the Doherty plan. Let’s start with a quick look at how school finance formulas work.

Local public school districts receive varied amounts of state aid based on two major types of factors:

  1. Differences in local school districts’ ability to raise local tax revenue to pay for schools;
  2. Differences in the needs and costs of providing adequate educational services to widely varied student populations.

In simple terms, the current formula – SFRA – accounts for both, and the Doherty plan accounts for neither.  Aw… what the heck, all that math is too complicated anyway!

In a typical state school finance formula, there is a target amount of revenue to be raised by each school district – based on the estimated differences in needs and costs of children attending each district and other factors such as variations in competitive wages for teachers. But, even if the target funding per pupil was the same for each district, the state aid share would be very different. Why? Because some districts have far greater capacity to raise local property tax revenues than others.

Here’s a New Jersey SFRA simulated (oversimplified) example using data from 2009 and 2010. Under the 2010 SFRA, the average target budget per pupil for an Abbott district was $16,387, based on the greater needs of children in these districts and the fact that the largest Abbott districts were also in higher cost north Jersey labor markets.

Applying equitable tax effort, Abbott districts are only able to raise about $4,300 per pupil, compared to wealthy districts (top 2 deciles), which can raise, on average, over $13,000 (which is actually more than they would need). State aid, as it currently stands in NJ (and in other states with similarly structured formulas), is used to fill the gap between what can be fairly raised locally and what is estimated to be needed to provide an adequate education.
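The underlying logic is simple enough to sketch in a few lines of code. This is an oversimplification by design, using the simulated averages above; the wealthy-district target is a hypothetical figure, consistent with the observation that their capacity exceeds their need:

```python
# Foundation-formula logic: state aid fills the gap between a district's
# adequacy target and what it can raise locally at equitable tax effort.
def state_aid(target_per_pupil, local_capacity_per_pupil):
    return max(0.0, target_per_pupil - local_capacity_per_pupil)

abbott_aid = state_aid(16_387, 4_300)    # ~$12,087 per pupil in state aid
# Wealthy districts have lower targets (lower-need students); with a
# hypothetical ~$12,000 target and ~$13,000 capacity, aid bottoms out at zero.
wealthy_aid = state_aid(12_000, 13_000)  # 0.0
print(abbott_aid, wealthy_aid)
```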

Expressed as effective tax rates, the local share for wealthy I&J districts appears to be slightly higher when expressed relative to property values, but these districts have the lowest effective rate with respect to income – even under SFRA. Overall, the distributions are relatively fair. I’ve written previously about this.

So then, how would it work to simply give every district the same amount of state aid per pupil? The Doherty plan argues for giving every district $7,481 per pupil a) regardless of need and costs and b) regardless of ability to raise local revenue. That would be unprecedented, even in Kansas, Alabama or Wyoming.*

This table shows one perspective on the Doherty plan – if the state simply gave every district the same amount of state aid per pupil, but if we then assumed that districts would need to raise the rest on their own if they really wanted to provide an adequate education (as estimated under SFRA). That is, if high need districts like Abbott districts still wished to try to raise what SFRA projected that they needed. Abbott districts would be expected to raise $8,906 per pupil toward their $16,387 and the wealthiest districts would be expected to raise on their own, about $5,000 per pupil. This creates a nearly nine-fold difference in the effective income tax equivalent across districts! And that’s “Fair School Funding?” One can understand Earl Kim’s concern, even if the proposal would bring home millions to Montgomery Township!
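Extending the sketch above, here’s the same arithmetic under a flat allocation. The Abbott figure reproduces the $8,906 noted above; the wealthy-district target is again hypothetical, chosen to be consistent with the roughly $5,000 local share described:

```python
FLAT_AID = 7_481  # Doherty plan: identical state aid per pupil everywhere

def required_local_share(target_per_pupil, flat_aid=FLAT_AID):
    # Under flat aid, a district must raise whatever its adequacy target
    # leaves uncovered, regardless of its actual capacity to do so.
    return max(0.0, target_per_pupil - flat_aid)

print(required_local_share(16_387))  # Abbott districts: $8,906 per pupil
print(required_local_share(12_500))  # hypothetical wealthy target: ~$5,019
```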

Here’s what it looks like in pictures. In the first picture, we see how SFRA operates pretty much like any state school finance formula built on a “foundation formula” approach. Each district has a target revenue per pupil. And the poorest districts – those with the least local fiscal capacity – are expected to raise the least toward this total. Wealthy districts even when applying equitable tax effort can raise far more than they need!

Here’s what the Doherty plan would look like. Here, every district gets the same regardless of need or capacity. This is rather like arguing that we should distribute food stamps and other financial assistance to residents of the estates of Far Hills in equal amounts to the distributions in Camden, or that we should pave well-conditioned and little used roadways with comparable frequency to heavily worn, highly traveled ones. When we place Doherty aid on top of 2009 local revenues per pupil, we see that the lowest income districts end up having combined state and local revenue per pupil well under $10,000 and that the wealthy districts now have combined state and local revenue per pupil approaching $25,000.

Here’s how it looks with respect to children qualifying for free or reduced price lunch. New Jersey’s school finance system has been praised in several national reports, including this one, for most effectively targeting additional resources toward greater needs. And there exists a significant body of research to validate that such school finance reforms actually do matter (regardless of political rhetoric to the contrary). Indeed the Doherty plan would turn New Jersey school finance on its head – making the system among the most regressive in the nation. That is, a system where higher need districts have systematically fewer resources per pupil.

Now, I don’t expect that this proposal really has much broad-based support, and I would not have typically bothered to critique or debunk it. I’ve stated my reasons above for why I needed to take this particular issue on at this time and under these circumstances.  It would simply make no sense for a well-informed local public school administrator like Earl Kim to advocate on behalf of a policy that is so clearly wrongheaded, so obviously unfair, simply because that policy would drive money into the pockets of his constituents.

(Finally, as an interesting aside, we also know from a series of studies of property tax relief aid for wealthy districts in New York State that increasing state aid to wealthy districts is among the surest ways to increase inefficiency in school district spending.  I often use the analogy that it’s like giving out $100 gift cards to Scarsdale residents to shop at Neiman Marcus. They take the $100 and spend $500 for something they didn’t really need. That is, these policies seem to encourage inefficient spending as much if not more than they provide tax relief. Meanwhile, we might have reallocated those $100 gift cards for basic needs in nearby Yonkers or Mount Vernon.)

*Note: It is conceivable that a state would attempt to create a fully state financed education system (that is, eliminate local share) in which case there is no need to correct for differences in capacity to raise local share. But, a completely flat allocation under these circumstances would fail to address differences in needs and costs.  Relying entirely on state source revenues (sales and income taxes) can, however, reduce the stability of revenue flow to schools (property tax revenues tend to be more stable in economic downturns).

School Finance through Roza-Tinted Glasses: 5 School Funding Myths from a Single Misguided Source

I’ve reached a point after these past few years where I feel that I’ve spent way too much time  critiquing poorly constructed arguments and shoddy analyses that seem to be playing far too large a role in influencing state and federal (especially federal) education policy. I find this frustrating not because I wish that my own work got more recognition. I actually think my own work gets too much recognition as well, simply because I’ve become more “media savvy” than some of my peers in recent years.

I find it frustrating because there are numerous exceptional scholars doing exceptional work in school finance and the economics of education whose entire body of rigorous, disciplined research seems drowned out by a few prolific hacks with connections in the current policy debate. It may come as a surprise to readers of popular media, but individuals like Mike Petrilli, Eric Osberg, Rick Hess (all listed on the USDOE resource web site) or Bryan Hassel wouldn’t generally be considered credible scholars in school finance or economics of education. I’d perhaps have less concern – and be able to blow this off – if many of the assertions being made by these individuals and others weren’t so often completely unsupported by reasonable analysis, and if those assertions didn’t lead to potentially dangerous and damaging policies.

This post is specifically about the body of methodologically flimsy research produced in recent years by Marguerite Roza, previously of the Center on Reinventing Public Education and currently an advisor to the Gates Foundation.

Why this post now? I’ve simply lost my patience.

This post is in part a response to the recent unveiling of the U.S. Dept. of Education web site on improving educational productivity: http://www.ed.gov/oii-news/resources-framing-educational-productivity. Amazingly, this site lists primarily non-peer reviewed, shoddy work by Marguerite Roza and colleagues and bypasses entirely more serious research on educational productivity or methods for evaluating it. The quality of some of the examples on this site is particularly abysmal. Yet it is presented as “the work of leading thinkers in the field.” (Interesting that “thinkers” is used in place of “researchers.”) Among the worst examples, this site lists as a credible resource the Center for American Progress Return on Investment analysis (by Ulrich Boser, a great writer on the topic of art theft, but in this case, a bit out of field).

I don’t mind so much that this stuff exists. But it certainly doesn’t belong in a serious policy conversation, nor does it represent “the work of leading thinkers in the field.”

Let’s start with a few common attributes of the worst-of-the-worst types of policy research floating around out there and warping and misguiding the education policy debates in general and school finance debates in particular. For lack of a better term, let’s just call it “hack research.”

Perhaps most importantly, hack research fails to recognize all of the credible work that’s already been done on a topic, typically because the research hack who produced it lacks entirely the discipline to bother to understand that body of work and how to build on it in order to come to new, credible findings and conclusions.

Further, hack research displays little regard for the connection between rigorous analysis and conclusions that may be drawn from it. This stems in part from the lack of discipline to actually conduct rigorous analyses.

Particularly effective hacks will not just ignore the body of existing scholarship but will do so belligerently, proclaiming that no good work has ever been done, no credible methods of analysis do exist, and therefore the time is right for their own creative and new perspective! The hack research method substitute is usually some seemingly intuitive, completely shallow, poorly conceived back-of-the-napkin approach. In other words, the hack research motto is that we must think outside the box, because it’s just too much work to open and unpack that box!

Many of us start as hacks, but eventually grow out of it as we realize that there’s a lot of great stuff out there to read and exceptional scholars from whom to learn. And some non-hacky researchers will occasionally hack. Hack happens. It’s only really problematic when it’s a persistent pattern of hackiness, or one that gets worse over time.

The most dangerous hacks use their shtick to influence policy with catchy anecdotes, convincing policymakers and major players that they need look no further (at real research, for example) than their own hacky “research.”  And the most effective hacks can spin findings that never were into pure urban legend – well-accepted myths turned realities – with serious policy implications!

Let’s take a look at a number of mythical findings from shoddy research produced by Marguerite Roza in recent years, including a few sources cited on the USDOE resources page.

Myth #1: States have largely solved between district funding disparities and within district disparities are the remaining problem of the day.

Sources of the myth: See references in Baker/Welner article (cited below)

A now common myth in school finance reiterated in numerous sources produced by the Education Trust, Center for American Progress and other DC think tanks and pundits is that states have largely resolved disparities in funding between districts and that persistent disparities are primarily within districts, between schools – a function of illogical district allocation formulas.

In a recent article, Kevin Welner and I tackle this argument and dig deeply into the sources behind it, which invariably find their way back to Marguerite Roza, then of the Center on Inventing Research Findings – excuse me – Center on Reinventing Public Education (CRPE).

Kevin and I conclude in our article:

 Two interlocking claims are being increasingly made around school finance: that states have largely met their obligations to resolve disparities between local public school districts and that the bulk of remaining disparities are those that persist within school districts. These local decisions are described as irrational and unfair school district practices in the allocation of resources between individual schools. In this article, we accept the basic contention of within-district inequities. But we offer a critique of the empirical basis for the claims that within-district gaps are the dominant form of persistent disparities in school finance, finding instead that claims to this effect are largely based on one or a handful of deeply flawed analyses.

Kevin Welner and I dissect in detail the problematic, “non-traditional” methods Roza and colleagues use for conducting their analyses (ignoring real methods used by real researchers in real publications), but perhaps more interesting are those cases where a narrow, measured finding pertaining to one specific estimate in one specific context becomes a national trend, a dominant reality soon thereafter. Op-Ed columns by Roza on the topic of within versus between district funding disparities include particularly egregious examples. Kevin Welner and I explain:

Following a state high court decision in New York mandating increased funding to New York City schools, Roza and Hill (2005) opined: “So, the real problem is not that New York City spends some $4,000 less per pupil than Westchester County, but that some schools in New York [City] spend $10,000 more per pupil than others in the same city.” That is, the state has fixed its end of the system enough.

This statement by Roza and Hill is even more problematic when one dissects it more carefully. What they are saying is that the average of per pupil spending in suburban districts is only $4,000 greater than spending per pupil in New York City but that the difference between maximum and minimum spending across schools in New York City is about $10,000 per pupil. Note the rather misleading apples-and-oranges issue. They are comparing the average in one case to the extremes in another.

In fact, among downstate suburban[1] New York State districts, the range of between-district differences in 2005 was an astounding $50,000 per pupil (between the small, wealthy Bridgehampton district at $69,772 and Franklin Square at $13,979). In that same year, New York City as a district spent $16,616 per pupil, while nine downstate suburban districts spent more than $26,616 (that is, more than $10,000 beyond the average for New York City). Pocantico Hills and Greenburgh, both in Westchester County (the comparison County used by Roza and Hill), spent over $30,000 per pupil in 2005.[2] These numbers dwarf even the purported $10,000 range within New York City (a range that we agree is presumptively problematic); our conclusion based on this cursory analysis is that the bigger problem likely remains the between-district disparity in funding.

For the full take down, see:

Baker, B. D., & Welner, K. G. (2010). “Premature celebrations: The persistence of interdistrict funding disparities” Educational Policy Analysis Archives, 18(9). Retrieved [date] from http://epaa.asu.edu/ojs/article/view/718

Myth #2: America’s public school system suffers from something called Baumol’s disease, therefore the only solutions must be found outside of public education

Source: Curing Baumol’s Disease: In Search of Productivity Gains in K–12 Schooling, by Paul Hill and Marguerite Roza

While I don’t think this one really ever caught on, it’s so absurd that it must be addressed. Further, it’s actually cited on the USDOE educational productivity resources page despite the fact that it offers no useful guidance whatsoever on the topic.

The objective of this policy brief by Paul Hill and Marguerite Roza of CRPE is to explain how American public education suffers from Baumol’s disease, or “the tendency of labor-intensive organizations to become more expensive over time but not any more productive.” Hill and Roza’s attempt at empirical validation that American public education suffers from Baumol’s disease is presented in two oversimplified figures: a graph showing an increased number of staff who are not core teachers (Figure 1), and a graph showing that student test scores on the National Assessment of Educational Progress have remained flat over time (Figure 2). The latter claim – that we’ve seen no improvement in NAEP scores over time – is contested.[1] And the former claim, when aggregated nationally, is not particularly meaningful. The authors provide no empirically rigorous link between the two.

Rather, the casual reader is simply to assume that public schools have added a lot of non-teaching staff and have, on average, nationally seen no yield for those increased costs. Hill and Roza posit:

“While these indicators clearly point to increased costs for education, efforts to quantify productivity changes have been hampered by measurement challenges on the outputs side of the equation. By most accounts, key indicators of outcomes have not shown comparable gains. A thirty-year look at NAEP performance for seventeen year-olds, for instance, suggests that test scores have changed very little.” (p. 3)

While this may, in fact, not be entirely untrue, the authors provide no rigorous validation that “Baumol’s Disease” is a persistent problem of American public schools.

However, without a disease with a catchy name, there would be little reason for their proposed cure. But the proposed cure is no more thoroughly vetted or precisely articulated than the disease.  A central assumption in the Baumol’s disease policy brief is that American public education systems take on one single form, as represented by national averages in the TWO graphs provided, that there is little or no variation within the public education system in terms of resource use or outcomes achieved (e.g. that it all suffers Baumol’s disease), and that therefore the only possible cures are those that come from outside the public education system or at its fringes. That is, that we have nothing to learn from variation within the public education system itself, because there is no such variation. Instead, for example, the authors suggest a closer look at “home schooling, distance learning systems, foreign language learning, franchise tutoring programs, summer content camps, parent-paid instructional programs (music, swimming lessons, etc.), armed services training, industry training/development, apprentice programs, education systems abroad.” (p. 10)

Numerous more credible researchers have spent a great deal of time learning from the heterogeneity of how schools, school districts, and charter schools operate, as well as across states, including studying the relative efficiency of schools that either operate differently or change how they operate. The assumption that the only solutions must come from outside the system is patently absurd, when the “system” consists of 51 policy contexts, over 100,000 schools, 5,000 charter schools and about 15,000 public districts. And it’s just lazy, hack thinking.

While one might gain insights from other labor-intensive industries, or education at the fringes of the current public system, it would be foolish to ignore the extent of variation within the current American public education system, and across traditional public, magnet, charter and private schooling. Arguably, the authors present the view that there is little or nothing to learn from the current system specifically in order to avoid the need for conducting rigorous analysis of it. Further, while such policy briefs may be generously considered as useful conversation starters, we take serious issue with the U.S. Department of Education’s identification of sources of this type, which are purely speculative, and severely lacking in intellectual or empirical rigor, as “Key Readings on Educational Productivity.”

Myth #3: Poor, failing school districts have plenty of money but are squandering too much on Cheerleading and Ceramics when they need to be spending on basics!

Original Source of (the anecdote behind the) myth: “Now is a Great Time to Consider the Per Unit Cost of Everything in Education.”

As I explain in my recent conference paper:

Authors including Marguerite Roza and colleagues of the Center for Reinventing Public Education encourage public outrage that any school district not presently meeting state outcome standards would dare to allocate resources to courses like ceramics or activities like cheerleading. To support their argument, the authors provide anecdotes of per pupil expense on cheerleading being far greater than per pupil expense on core academic subjects like math or English.

  • Imagine a high school that spends $328 per student for math courses and $1,348 per cheerleader for cheerleading activities. Or a school where the average per student cost of offering ceramics was $1,608; cosmetology, $1,997; and such core subjects as science, $739.[1]

These shocking anecdotes, however, are unhelpful for truly understanding resource allocation differences and reallocation options, and are an unfortunate and unnecessary distraction. For example, the major reason why cheerleading or ceramics expenses per pupil are seemingly high is the relatively small class sizes, compared to those in English or Math. In total, the funds allocated to either cheerleading or ceramics are unlikely to have much if any effect if redistributed to reading or math.

Now, this myth is a rather strange one, because the source from which it comes, which is authored by Marguerite alone, really isn’t totally unreasonable. It’s not useful in any way, shape or form, but it’s not unreasonable either. This wacky anecdote about cheerleading and ceramics spending comes from a piece in which Roza is trying to explain the importance of comparing unit costs of providing specific programs/opportunities. This is a rather “no duh” idea, and the working paper and eventual book chapter are built on uninteresting anecdotes, at best. The original point of the paper is that if administrators look at the per unit cost of everything, they might find some things that stand out, and some things that might reasonably be reorganized to be offered at a lower unit cost (for example, the cost of cheerleading was reduced by moving it from a class period drawing on salaried time to an after school activity, paid by a small stipend).
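To put rough numbers on why redistributing these line items buys so little, here’s a back-of-the-envelope sketch using the per-cheerleader figure quoted above; the squad size and school enrollment are hypothetical:

```python
# Per-unit costs look alarming mainly because the denominators are small.
# Only the per-participant cost below comes from the anecdote; the squad
# size and enrollment are hypothetical round numbers.
cheer_cost_per_participant = 1_348
cheer_squad_size = 20
total_cheer_budget = cheer_cost_per_participant * cheer_squad_size  # $26,960

hs_enrollment = 1_000
# Redistributing the entire cheerleading budget across all students:
print(total_cheer_budget / hs_enrollment)  # ~$27 per pupil - a rounding error
```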

But, the spin from this piece has been that this is all that low performing, poor urban districts need to do. They’ve all got enough. They themselves are responsible for the most persistent inequities – not the states. And they are the ones wasting way too much on things like cheerleading and ceramics. Given that this spin has had far more traction than the more reasonable paper behind it, one might assert that this is precisely what Roza intended.

In my paper, I conclude:

Rather, the emergent story from the data in both states was the contrast between high spending, high outcome districts and low spending, low outcome districts and their respective high schools. On average, high spending, high outcome districts were, as one might expect, much lower in student poverty concentration, and low spending, low outcome districts much higher in poverty. That is, after applying thorough cost adjustment, including adjustments for differences in student needs. Interestingly, the most striking differences between these groups of districts were not in the availability of assigned teachers or courses in the arts, but rather in the distribution of advanced versus basic course offerings in curricular areas such as math and physical science.

Note that to begin with, low spending, low outcome schools had fewer teacher main assignments and fewer course assignments per pupil. As such, they were, from the outset, more constrained in their allocation options. Further, there is at least some evidence that when evaluating district wide resource allocation, low resource, low outcome districts see greater necessity or feel greater pressure to allocate a larger overall share of resources to elementary classrooms (based on Illinois findings).

For more thorough analysis of this issue, see:

Baker, B.D. (2011) Cheerleading, Ceramics and the Non-Productive Use of Educational Resources in High Need Districts: Really? Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA 2011

Myth #4: High schools in Washington State pay math and science teachers less than other teachers despite public interest and state policies which encourage paying them more

Source: Washington State High Schools Pay Less for Math and Science Teachers than for Teachers in Other Subjects, by Jim Simpkins, Marguerite Roza and Cristina Sepe

This is one that suffers from both major issues identified at the beginning of this rant. First, the disconnect between the “study” and the press release:

The Press Release
http://www.crpe.org/cs/crpe/view/news/111

The analysis finds that in twenty-five of the thirty largest districts, math and science teachers had fewer years of teaching experience due to higher turnover—an indication that labor market forces do indeed vary with subject matter expertise. The subject-neutral salary schedule works to ignore these differences.

The Study
http://www.crpe.org/cs/crpe/download/csr_files/rr_crpe_STEM_Aug10.pdf

That said, the lower teacher experience levels are indicative of greater turnover among the math and science teaching ranks, lending support to the hypothesis that math and science teachers may have access to more compelling non-teaching opportunities than do their peers. (p. 5)

That is, the conclusions of the study itself and the press release are, well, not consistent. But this pattern of behavior is entirely consistent for Roza and CRPE.

In a previous post I address just how ridiculous the methods in this analysis are, in which she compares STEM teacher salaries with non-STEM teacher salaries without any controls for other factors that affect salaries (on the argument that salaries shouldn’t be based on those things – experience and degree level – anyway).

All that Roza really found in this paper was that STEM teachers tend to be younger and, as a result, have lower average salaries than non-STEM teachers. From that, she spun the argument that because STEM teachers don’t earn more than other teachers, but STEM fields are more competitive, STEM teachers must be leaving teaching at a higher rate, leading to a less experienced pool and lower average salaries (a vicious cycle indeed – but one that’s never validated by the ridiculous analysis).
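A quick simulation shows how an uncontrolled comparison manufactures exactly this sort of “finding.” In the synthetic data below, STEM and non-STEM teachers sit on an identical salary schedule; the STEM pool is merely younger on average (all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic teachers: the STEM pool skews less experienced, but everyone
# is paid off the same schedule (base salary plus a fixed step per year).
stem = rng.random(n) < 0.3
experience = np.where(stem,
                      rng.integers(0, 15, n),   # STEM: younger pool
                      rng.integers(0, 30, n))   # non-STEM: broader range
salary = 40_000 + 1_200 * experience            # identical schedule for both

print(salary[stem].mean(), salary[~stem].mean())  # raw means: STEM looks "underpaid"
for yrs in (5, 10):                               # conditional on experience: no gap
    m = experience == yrs
    print(yrs, salary[m & stem].mean(), salary[m & ~stem].mean())
```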

In my post, I actually evaluate several years of teacher level data on all teachers in Washington State, finding most of her conclusions to be flat out wrong. Here’s the figure on mean STEM and non-STEM teacher salaries over time: https://schoolfinance101.com/wp-content/uploads/2010/08/slide42.jpg

I also point out that credible researchers like Lori Taylor of Texas A&M have actually done better analyses of Washington teacher wages and addressed variations in labor market competitiveness by field:

Report on Taylor Study:

http://www.wsipp.wa.gov/rptfiles/08-12-2201.pdf

Taylor Study:

http://www.leg.wa.gov/JointCommittees/BEF/Documents/Mtg11-10_11-08/WAWagesDraftRpt.pdf

Somehow, not surprisingly, Roza was unaware of either this better research or the more credible methods used in this research.

For the full take down, see: https://schoolfinance101.wordpress.com/2010/08/20/new-from-the-center-on-inventing-research-findings/

Myth #5: With our handy-dandy basket of reformy fixes, we can cut significant funding from American public schools and dramatically increase productivity!

Source: Petrilli and Roza

Stretching the School Dollar (Brief)

http://www.edexcellence.net/publications-issues/publications/stretching-the-school-dollar-policy-brief.html

In their policy brief on Stretching the School Dollar, Mike Petrilli of Thomas B. Fordham Institute and Marguerite Roza of the Gates Foundation provide a lengthy laundry list of strategies by which school districts and states might arguably increase their productivity at lower expense, or “stretch the dollar” so to speak.  This policy brief is an extension of the Frederick Hess (American Enterprise Institute) and Eric Osberg (Fordham Institute) edited book by the same title.  We highlight this source because of repeated specific references to this source in Secretary Duncan’s “New Normal” speeches during the Fall of 2010.[2]

Because this policy brief and book specifically list strategies that are intended to improve productivity at comparable or lower expense, it would be particularly relevant for the book or brief to either provide directly, or summarize from other sources, rigorous cost-effectiveness analysis of these options, or relative efficiency comparisons of schools and districts employing these options. But that is apparently asking way too much of Roza or Petrilli. I’ll cut Mike some slack here, because he isn’t the one actually presenting himself as a school finance expert/scholar. That’s Roza’s role in this partnership, therefore the burden falls on her. But after reading enough work by Roza and colleagues, I’m no longer convinced that she is even aware that there is a body of research out there on cost-effectiveness analysis or relative efficiency (more on this later). I certainly encourage her to go buy a copy of Hank Levin and Patrick McEwan’s book, not so subtly titled Cost-Effectiveness Analysis: Methods and Applications. It’s a relatively easy, non-academic read.

I’ll offer a primer on these methods and their application to these questions in a future post. There’s no need to beat a dead horse on this topic. I’ve taken down Roza and Petrilli’s reformy gift basket in two previous posts to which you can refer.

For the full take down, see:

Part 1 – Stretching the Truth, Not Dollars: School Finance in a Can: Unproven and Unsubstantiated Dollar-Stretching State Policies

Part 2 – Stretching the Truth, Not Dollars: Considering the Application of Cost-Benefit Analysis to Teacher Layoff Alternatives

The Offensively Defensive Ideology of Charter Schooling

There now exists a fair amount of evidence that charter schools in many locations, especially high performing charter schools in New Jersey and New York, tend to serve much smaller shares of low income, special education and limited English proficient students (see various links that follow). And in some cases, high performing charter schools, especially charter middle schools, experience dramatic attrition between 6th and 8th grade – often the same grades over which student achievement climbs – suggesting that a “pushing out” form of attrition partly accounts for charter achievement levels.

As I’ve stated many times on this blog, the extent to which we are concerned about these issues is a matter of perspective. It is entirely possible that a school – charter, private or otherwise – can achieve not only high performance levels but also greater achievement growth by serving a selective student population, including selection of students on the front end and attrition of students along the way. After all, one of the largest “within school effects on student performance” is the composition of the peer group.

From a parent (or child) perspective, one is relatively unconcerned with whether the positive school effect is a function of selectivity of peer group and attrition, so long as there is a positive effect.

But, from a public policy perspective, the model is only useful if the majority of positive effects are not due to peer group selectivity and attrition, but rather to the efficacy and transferability of the educational models, programs and strategies. To put it very bluntly, charters (or magnet schools) cannot dramatically improve overall performance in low income communities by this approach, because there simply aren’t enough less poor, fluent English speaking, non-disabled children to go around. They are not a replacement for the current public system, because their successes are in many cases based on doing things they couldn’t if they actually tried to serve everyone.

Again, this is not to say that some high performing charters aren’t essentially effective magnet school programs that do provide improved opportunities for a select few. But that’s what they are.

But rather than acknowledging these issues and recognizing charters and their successes for what they are (or aren’t), charter pundits have developed a series of very intriguing (albeit largely unfounded) defensive responses (read: excuses) to the available data. These include the arguments that:

  1. Lotteries don’t discriminate and charters have to use lotteries, therefore they couldn’t possibly discriminate!
  2. Charters only appear to have fewer children with disabilities because they actually just provide better, more inclusive programming and choose not to label kids who would get labeled in the public system! In particular, charters do so much better at early grades interventions that they keep kids out of special education in later grades!
  3. While one might think charters are advantaged by having fewer low income children, in reality, Charters suffer significantly from “negative selection.” That is, the parents who choose charters are invariably the parents of kids who are having the most trouble in the public system.
  4. While it appears that Charter middle schools have high rates of attrition between 6th and 8th grade, all schools really do. Charters are no different.
  5. The data are always biased against charters and never in their favor on these issues.

The foundation for these arguments is flimsy in some cases, and manipulative in others.


1. Lotteries don’t discriminate

True, lotteries alone don’t – really can’t – discriminate. They are random draws. Among those students whose parents enter them into a lottery for a specific school, those who get picked should be comparable to those who don’t get picked. But that does not by any stretch of the imagination – or by much of the available data – mean that those who end up in charter schools through the lottery system are in any way representative of students who live in the surrounding neighborhoods or attend traditional public schools in the local district.

In other words:

Lotteried In = Lotteried Out

Not the same as:

Charter School Enrollment = Nearby Public School Enrollment

Why aren’t these the same? Well, those who enter the lottery to begin with are only a subset of those who might otherwise attend the local public schools. That subset can be influenced by a number of things, including, quite simply, the motivation of a parent to sign up for the lottery, or parental impressions regarding the “fit” of the school to the child. So, if the lottery pool is selective, then those lotteried into charters are merely a random draw from that selective group.
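This distinction is easy to demonstrate with a toy simulation: the lottery below is perfectly fair, yet because application is non-random, charter enrollment still doesn’t mirror the neighborhood. All rates are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# A neighborhood where 80% of children qualify for free lunch (hypothetical).
free_lunch = rng.random(n) < 0.80

# Application is non-random: suppose less-poor families apply at twice the rate.
apply_prob = np.where(free_lunch, 0.10, 0.20)
applied = rng.random(n) < apply_prob

# The lottery itself is a fair coin flip among applicants.
won = applied & (rng.random(n) < 0.5)

print(free_lunch.mean())                  # neighborhood rate: ~0.80
print(free_lunch[won].mean())             # lotteried in:  ~0.67
print(free_lunch[applied & ~won].mean())  # lotteried out: ~0.67 (matches the
                                          # winners, not the neighborhood)
```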

Pundits frequently point to lottery based studies of charter school effects to make their case that lotteries don’t discriminate and that therefore charter schools serve the same students as traditional public schools.

Richard Ferris and I, in our recent study of New York City charters, note:

As one would expect, Hoxby found no differences between those who were randomly selected and those who entered the lottery but were not selected. This is not the same, however, as saying that the overall population in the charter schools is demographically similar to comparison groups or non-charter public school students. While they do compare the demographics of the charter “applicant pool” to those of the city schools as a whole (see Hoxby’s Table IIA, page II-2), they never compare charter enrollment demographics with those of the nearest similar schools or even schools citywide serving the same grade ranges.

http://nepc.colorado.edu/publication/NYC-charter-disparities

2. Charters are just better at dealing with children with disabilities in their regular programs and therefore don’t classify them

This story takes two different forms:

Version 1: Charters simply don’t identify kids because they provide better inclusive programming

This is perhaps conceivable when addressing children with mild specific learning disabilities and/or mild behavioral problems, but much less likely where more severe disabilities are concerned. In New Jersey and in New York City, many charter schools serve few or no children with disabilities (see: https://schoolfinance101.com/wp-content/uploads/2011/01/charter-special-ed-2007.jpg ). That pattern is possible only if the children with disabilities who enrolled to begin with had the mildest disabilities – the only cases where choosing not to classify is even plausible. Perhaps more importantly, while charter advocates make this claim, I am aware of no rigorous large scale or even individual case study research that provides any validation of it.

Version 2: Charters provide better early intervention programs such that by third grade, children don’t need to be classified when they reach the grades where they typically would be classified.

I've only heard this argument on a few occasions, and it is simply a variation on the first. But it has important additional holes that make it even more suspect. Most notably, a very large share of charter schools – including many with disproportionately low shares of children with disabilities – don't serve the lower grades at all, starting instead in the upper elementary or middle grades. A school that enrolls no students before 4th or 5th grade cannot plausibly attribute its low classification rates to its own early grades interventions. In fact, nationally, 44% of charters start after 3rd grade, and in New Jersey, for example, these are precisely the schools with very low rates of children classified for special education services.

And again, I am aware of no rigorous large scale or even individual case study research that validates this claim.

3. Not only do charters not cream skim, they actually are disadvantaged by negative selection!

That is, among poor children, or among non-poor children, some statistical models show the average entry performance of those choosing charters to be slightly lower. The only potential validation of this I can find comes from a study of charter high schools in Florida (and a similar study of high school voucher recipients in Florida), though some other studies speculate that a small negative selection effect exists, without strong empirical validation.

But even if we see negative selection as typically reported in these studies, we have to consider what it is that is being reported. Typically, it is:

Initial Performance of Non-Disadvantaged Students in Charters <= Initial Performance of Non-Disadvantaged Students in Traditional Publics

&

Initial Performance of Disadvantaged Students in Charters <= Initial Performance of Disadvantaged Students in Traditional Publics

And likewise across other categories of student need (to the extent they attend charters). This could be problematic for making statistical comparisons where one can control for various categories of disadvantage but cannot capture "negative selection" within those categories (lower initial performance). That would create model bias working to the disadvantage of charters.

But that’s not what the pundits are claiming. This punditry is rather like the punditry about lotteries not discriminating. The above comparisons do not address the simpler issue of:

% Disadvantaged in Charters < % Disadvantaged in Traditional Public Schools

Rather, they compare initial achievement only among subgroups.

If the traditional public school is 90% low income and 10% non-low income, and the charter school is only 50% low income and 50% non-low income, the populations are still different – significantly and substantially. The entry performance of the charter's 50% low income students is being compared to that of the 90% low income students in the traditional public school. But this does not change the fact that the schools are, overall, very different, and that the average entry performance of their overall populations is very different. That is, cream-skimming is indeed occurring on the basis of income and other factors – and, as a result, on the basis of entry performance overall – even if charters aren't necessarily getting the strongest students within those groups.
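Here is a back-of-the-envelope sketch, with all shares and scores invented purely for illustration, of how within-group "negative selection" can coexist with an overall intake advantage:

```python
# Hypothetical (enrollment share, mean entry score) by subgroup --
# all numbers invented for illustration.
charter = {"low_income": (0.50, 48), "non_low_income": (0.50, 58)}
public  = {"low_income": (0.90, 50), "non_low_income": (0.10, 60)}

def overall_mean(school):
    # Enrollment-share-weighted average of subgroup mean entry scores.
    return sum(share * score for share, score in school.values())

print(f"Charter overall entry mean: {overall_mean(charter):.1f}")  # 53.0
print(f"Public overall entry mean:  {overall_mean(public):.1f}")   # 51.0
# Within BOTH subgroups the charter entrants score two points lower
# (the "negative selection" pundits cite), yet because the charter
# enrolls far fewer low income students, its overall intake still
# outscores the traditional public school's.
```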


4. Traditional public schools have attrition too

This is largely true, but with a few qualifiers attached. In general, children residing in lower income communities tend to make more unplanned moves from school year to school year, and even within school years. So, mobility is a problem in high poverty settings, and it is perhaps reasonable to assume that these poverty induced – housing disruption induced – mobility patterns affect both traditional public school and charter students in some settings. But this is only one component of mobility and attrition in the urban schooling setting.

This has been a hot topic lately, largely because of a report released by Gary Miron that used national school enrollment data to look at attrition patterns in KIPP middle schools. Many who immediately shot back at Miron cited the KIPP study done by Mathematica, which was able to more precisely address which students were "retained" (held back a grade) versus which actually left. Of course, Gary Miron also cited this study and acknowledged its greater precision in some respects, but further explained how, by his own calculations, it was simply infeasible that all of the attrition could be explained by retention – that is, that the entire difference between the size of the 8th grade cohorts and the 6th grade cohorts could be attributed to holding kids back in 6th grade. Unfortunately, while the original Mathematica KIPP study provided some additional insights, it did not provide sufficient disaggregation or precision in explaining the different types of mobility and attrition occurring across KIPP and nearby public schools.
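To see why the "it's all retention" account strains plausibility, consider a back-of-the-envelope calculation with purely hypothetical cohort counts (not Miron's or Mathematica's actual figures):

```python
# Purely hypothetical cohort counts, invented for illustration.
sixth_grade = 90   # students entering 6th grade
eighth_grade = 60  # the same cohort counted two years later

gap = sixth_grade - eighth_grade  # 30 "missing" students
# If NO student ever left the school, every missing student would
# have to still be enrolled but held back a grade, implying:
implied_retention = gap / sixth_grade
print(f"Share of the cohort that must have been retained: {implied_retention:.0%}")  # 33%
# Holding back a third of each entering class, year after year, is
# not a plausible account; a substantial share of the gap must be
# students actually leaving.
```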

Mathematica subsequently released a more detailed descriptive analysis of student mobility and attrition, which did largely confirm similar aggregate rates of attrition between KIPP and matched public schools. But while this study does allay some of the concerns regarding perceptions of attrition in KIPP schools, further untangling of inter-school, within-district mobility is warranted, and the findings that pertain to KIPP middle schools in the Mathematica analysis do not necessarily pertain to any and all charter schools or host districts showing comparable attrition rates.

5. The Data are Always/Only Biased against Charters (never in their favor)

This is one of my favorites because I love data, but recognize their fallibility. The data are what they are. There may be explanations for why one set of schools is more or less likely to have accurate data than another, and why these differences may compromise comparisons. But the data are what they are, with all relevant caveats attached.  What is NOT reasonable is to use the existing data to make a comparison, find that the result isn’t what you wanted it to be, and then explain why the data aren’t what they are… but do so without alternative data.

For example, it is unreasonable to compare host district rates of special education classification with charter special education classification rates, find that charters have far fewer classified students, and then merely offer reasons why the charter classification rates must be wrong… implying that despite what the data say… there really aren't differences in classification rates… or in ELL/LEP concentrations… or in low income student concentrations. Yes, there may be problems with the data, but data-proof speculation about those problems, with corrections applied only in charters' favor, is unhelpful and dishonest.

Hoxby & Murarka spend two pages of their report making arguments for why the dramatically lower reported rates of special education and ELL students in New York charter schools simply must be wrong – systematically under-reported. While some of their arguments may be true and seem reasonable, there is no clear evidence to support the implied conclusion that, in spite of the data, we should assume charters are actually comparable to traditional public schools. Rather, the data they use show a finding they don't like – a finding that NYC charters appear to under-serve ELL children and children with disabilities.

One example of a common data bias that does cut the other way, as I've shown on multiple occasions, occurs when comparing rates of low income students in charters and traditional public schools using only qualification for "free or reduced price lunch." When this measure is used alone, charters often do look the same as nearby traditional public schools (at least in NY and NJ). But when a lower income threshold is used – qualification for free lunch alone – we see that charters actually serve far fewer of the poorest students. The "free or reduced lunch" measure is insufficient for the comparison, and the bias makes charters look more comparable than they really are.
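A quick sketch, with enrollment counts invented purely for illustration, of how the choice of poverty threshold changes the comparison:

```python
# Hypothetical enrollment counts, invented for illustration.
# "free" = qualifies for free lunch (deeper poverty);
# "reduced" = qualifies only for reduced price lunch.
charter = {"free": 30, "reduced": 40, "neither": 30}
public  = {"free": 55, "reduced": 15, "neither": 30}

def share(school, *categories):
    # Share of total enrollment in the given lunch-status categories.
    return sum(school[c] for c in categories) / sum(school.values())

for name, school in [("charter", charter), ("public", public)]:
    print(f"{name}: free or reduced = {share(school, 'free', 'reduced'):.0%}, "
          f"free only = {share(school, 'free'):.0%}")
# Both schools look identical at 70% "free or reduced," but the
# public school serves far more of the poorest (free-lunch) students.
```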

Oh, and finally: Charter schools are public schools!  Or are they?

Charter pundits get particularly irked when anyone frames the dichotomy as "charter schools vs. public schools" in referring to charter schools versus "traditional" district schools. Charter pundits will often immediately interrupt to correct the speaker's supposed error, proclaiming ever so decisively – "let's get this straight first – CHARTER SCHOOLS ARE PUBLIC SCHOOLS!"

Well, at least in terms of liability under 42 U.S.C. § 1983, in cases involving employee dismissal (and deprivation of liberty interests without due process), the 9th Circuit Court of Appeals has decided that charter schools are not state actors. That is, at least in some regards, they are not public entities, even if they provide a "public" service. Or, at least, the companies responsible for managing them and their boards of directors are not held to the same standards as official state actors – public officials and/or employees.

Horizon is a private entity that contracted with the state to provide students with educational services that are funded by the state. The Arizona statute, like the Massachusetts statute in Rendell-Baker, provides that the sponsor “may contract with a public body, private person or private organization for establishing a charter school,” Ariz. Rev. Stat. § 15-183(B), to “provide additional academic choices for parents and pupils . . . [and] serve as alternatives to traditional public schools,” id. § 15-181(A). The Arizona legislature chose to provide alternative learning environments at public expense, but, as in Rendell-Baker, that “legislative policy choice in no way makes these services the exclusive province of the State.”

Merely because Horizon is “a private entity perform[ing] a function which serves the public does not make its acts state action.”

http://www.ca9.uscourts.gov/datastore/opinions/2010/01/04/08-15245.pdf