Blog

Learning from Really Bad Graphs & Ill-informed Conclusions: Thoughts on the New PEPG “Catching Up” Report

A new policy paper from Eric Hanushek, Paul Peterson & Ludger Woessmann has been receiving considerable attention – this despite the numerous completely outlandish assertions, drawn from junk charts, that fill the pages of this reformy manifesto.

Look, I’ve said it before and will say it again. Eric Hanushek has contributed a great deal of high quality research to the fields of education policy and economics of education over the years, and I have in the past relied heavily on much of it – and continue to this day – to inform my own analyses and thinking in education policy. But this kind of stuff is really just infuriating. Rather than spend too much time venting, let’s try to use this new report for instructive purposes – to instruct the casual reader in how to debunk and distill complete and utter BS when presented with pretty scatterplots and glossy formatting.

First, for your reading pleasure, the complete brief may be found here: http://www.hks.harvard.edu/pepg/PDF/Papers/PEPG12-03_CatchingUp.pdf

Before I go down this road, allow me to point out that it’s one thing to offer up this type of analysis as a conversation starter… or even as a provocation with all relevant caveats and disclaimers. It’s yet another to present information of this caliber (or lack thereof) as a serious attempt at immediate influence over policy. There’s a huge freakin’ difference there. And it is certainly my impression that this brief, by its framing, is indeed intended to shape the immediate policy conversation as much if not more so than to generate speculative, intellectual musings over the various possible meanings of the charts.

Further, I’m particularly concerned with the way in which much of the information is presented and the way in which conclusions are drawn from it. This is where this brief can be useful and illustrative – where we can turn this clumsy manifesto into a teaching moment. I’ll tackle three specific issues here:

  1. measures matter, especially when we are dealing with money and test scores,
  2. the complexity of educational systems is difficult to untangle two measures at a time,
  3. always watch out for the ol’ bait and switch! (sometimes it’s really obvious!)

The report presents numerous international comparisons (that’s the focus) of similar rigor to the state level comparisons I critique here. I’m just a bit pressed for time, and had the state data more readily available.

Measures Matter!

Okay… so here’s the first graph that drove me up the freakin’ wall. This graph is a classic extension of what I refer to as the Hanushekian cloud of uncertainty.

Figure 1 – State Spending Increases & Test Score Gains (from report)

For decades, Hanushek has been presenting deceptively oversimplified scatter plots of school district, state level and international data on education spending and outcome measures. These scatterplots in and of themselves are invariably freakin’ meaningless.  I evaluate this body of literature by Hanushek as a whole in my policy brief Revisiting the Age Old Question: Does Money Matter in Education?  

This graph provides a new twist, comparing the dollar increases in spending to the NAEP average annual gain. Hanushek uses this graph to draw the following conclusions:

 According to another popular theory, additional spending on education will yield gains in test scores. To see whether expenditure theory can account for the interstate variation, we plotted test-score gains against increments in spending between 1990 and 2009. As can be seen from the scattering of states into all parts of Figure 9, the data offer precious little support for the theory.

On average, an additional $1,000 in per-pupil spending is associated with a trivial annual gain in achievement of one-tenth of 1 percent of a standard deviation.

Michigan, Indiana, Idaho, North Carolina, Colorado, and Florida made the most achievement gains for every incremental dollar spent over the past two decades.

(keep an eye on Michigan and Indiana – we’ll hear from them again later. Here, they are AWESOME – getting bang for the buck… Of course, one can look good on this indicator by simply not spending much more and showing commensurately paltry outcome gains!)

I love the sarcastic use of “precious” in this quote. But I digress.

But there are at least a few small – okay… pretty damn big … okay … huge… completely undermining – problems with using this scatterplot to draw these conclusions.

Let’s set aside the outcome measure for now and focus on two other not-so-trivial issues. First and foremost, a $1,000 increase in spending in Louisiana and a $1,000 increase in spending in New Jersey or Connecticut may… just may… not be worth the same. Does $1,000 more go as far toward improving the competitiveness of teacher salaries in New Jersey as it does in New Mexico? Uh… not so much. In fact, the National Center for Education Statistics Education Comparable Wage Index indicates that competitive wages in New Jersey are substantially greater than in Louisiana, significantly altering the value of the additional dollar. Second… isn’t it possible that other factors actually play a role too?

Let’s shatter the spending measure & related conclusions first! Here’s an alternate view – taking the current expenditures per pupil for 2008-09 over the current expenditures for 1990-91 – that is, expressing spending effectively as a percent increase over the base year (albeit not inflation adjusted – see this post for more on this topic).
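For readers who want the mechanics: the percent-over-base-year measure is just simple arithmetic. Here is a minimal Python sketch, using hypothetical spending figures (not the actual state data), showing why the ranking can flip relative to raw dollar increases:

```python
# Hypothetical per-pupil spending figures (NOT the actual state data):
# express growth as a percent increase over the 1990-91 base year.
base = {"State A": 4000.0, "State B": 9000.0}    # 1990-91 current expenditures per pupil
later = {"State A": 8000.0, "State B": 15000.0}  # 2008-09 current expenditures per pupil

for state in base:
    dollar_increase = later[state] - base[state]
    pct_increase = 100.0 * dollar_increase / base[state]
    print(f"{state}: +${dollar_increase:,.0f} = {pct_increase:.0f}% over base year")
# State A: +$4,000 = 100% over base year
# State B: +$6,000 = 67% over base year
```

Note that hypothetical State B adds more dollars but grows less as a percent of its base – exactly the kind of flip at issue here.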

Figure 2

Hmmm… as it turns out, New Jersey spending really didn’t increase much as a percent over the base year. Louisiana, however, did. In fact, Louisiana actually had among the highest growth among states.  Well then, that would mean that New Jersey really kicked some butt! Not much spending increase at all… and some pretty damn good outcome gain!

The bottom line, however, is that either scatterplot is pretty meaningless, with mine arguably slightly less meaningless than the original! But neither is really useful for making any bold statements about state aggregate spending and outcome gains. Again, in my policy brief on whether money matters, I explore these issues in much greater detail. Referring to more rigorous studies attempting to link spending and outcome measures, I explain:

They [more recent studies] also, however, raised new, important issues about the complexities of attempting to identify a direct link between money and student outcomes. These difficulties include equating the value of the dollar across widely varied geographic and economic contexts, as well as in accurately separating the role of expenditures from that of students’ family backgrounds, which also play some role in determining local funding.

http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

I can’t pass up this seemingly tangential point.  I took particular enjoyment in this finding from Hanushek’s new report:

Maryland, Massachusetts, and New Jersey enjoyed substantial gains in student performance after committing substantial new fiscal resources.

Hanushek went to great lengths in an earlier book and in related policy papers to make the case that New Jersey was a classic example of failed massive spending increases, and he has repeatedly cited New Jersey’s failures (as recently as this spring – my rebuttal here!) as a reason why other states should not increase funding for schools. Kevin Welner and I discuss this Hanushekian claim extensively in a recent article in Teachers College Record.

Isn’t that precious?

Two Measures Are Generally Insufficient for Anything but Playful Speculation & Exploration!

As I noted above, the second reason why we should NOT take the Hanushekian cloud seriously – nor the other graphs in the new report – is that they attempt to draw inappropriately bold conclusions from graphs involving only two variables at a time. This approach can be useful for exploring patterns and/or raising questions. We all should spend much time exploring visual representations of our data – getting to know our data, our measures and how they relate. But to take this information and assert that spending matters little, or to go even further and claim that the South is rising again… and that the accountability-driven policies of southern states are leading to disproportionate gains while curmudgeonly anti-reformy, anti-accountability Midwest states are suffering, is just absurd. I’ll dig into these conclusions a bit more in the next and final section.

What else might be going on here? Well, one likely issue requiring at least some more exploration is whether there are any substantive changes in the demography of these states. Yeah… it’s just possible that states that saw greater improvement saw less increase in poverty. Uh… and yeah… it’s possible that states that started lower gained more. Now, the authors acknowledge this latter point, but then brush it off. Instead, they assert that a likely alternative explanation is that Midwest states were riding high on their past successes and great universities, and simply got complacent.

Here are a few figures to chew on.

Figure 3 – Demographics and Outcome Change

Note that Hanushek, Peterson and Woessmann make a big deal about the great performance of Louisiana, Delaware, Maryland and Florida and the particularly sucky performance of Michigan, Indiana, Minnesota and Wisconsin. Uh… wait, weren’t Indiana and Michigan awesome above – for getting those paltry outcome gains for little or no additional investment? Yeah… but now they suck. Really… suck… because… they’re complacent… and not reformy. As it turns out, the states referred to as generally awesome by the authors also had generally smaller increases in % low income students.

Figure 4 – Starting Performance Level and Outcome Change

While the authors acknowledge that starting performance levels are associated with outcome change, they go to great lengths to blow off this issue, arguing a) that it explains a relatively small share of the variation (uh… only about a quarter of it… which is actually quite large for this type of data/analysis) and b) that other plausible explanations involving the southern reformyness vs. midwestern complacency dichotomy may explain much of the rest of the difference (without any evidence to support this notion!).

Yes. Starting level does seem to matter! And that can’t be overlooked, or brushed aside.

Together, change in % free lunch and the 1992 8th grade math score explain about 41% of the variation in annual gain across the 34 states for which each measure is available.
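Since I keep throwing around “percent of variation explained,” here is a minimal sketch of where such a number comes from in a two-predictor regression. The data below are simulated, not the actual NAEP and free lunch figures; only the mechanics carry over:

```python
import numpy as np

# Simulated state-level data (NOT the actual NAEP/free-lunch figures):
# x1 = change in % free lunch, x2 = 1992 8th grade math scale score,
# y  = annualized NAEP gain. The point is the mechanics of a two-predictor R^2.
rng = np.random.default_rng(0)
n = 34
x1 = rng.normal(5, 3, n)       # change in % free lunch
x2 = rng.normal(265, 8, n)     # 1992 starting score
y = 2.0 - 0.08 * x1 - 0.04 * (x2 - 265) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), x1, x2])    # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None) # ordinary least squares fit
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()               # share of variation explained
print(f"R^2 = {r2:.2f}")
```

With an intercept in the model, R² is guaranteed to fall between 0 and 1; the 27% and 41% figures in the text are the same quantity computed on the real state data.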

Ye Ol’ Bait & Switch

But there are bigger and more obvious problems with the conclusions drawn in this report… that don’t really even require much statistical digging. A classic deceptive strategy used in this type of reporting is ye ol’ bait and switch and/or conflating one group identification with another.

Ye ol’ bait and switch is often used in voucher debates where pundits will point to elite private schools as examples of the choices that all children/families should have and will then point to the average tuition of Catholic elementary schools (circa 1999) as an example of the cost of private education (see: http://nepc.colorado.edu/publication/private-schooling-US). Uh… 1999 national average Catholic elementary school tuition won’t cover much of the tuition at Sidwell Friends in 2012!

An entire subsection of the Hanushek, Peterson and Woessmann report is titled Is the South Rising Again? Much attention is paid in the report to the premise that southern states are staging an impressive comeback and that this impressive comeback is a function of their forward thinking in the 1990s and 2000s.

Specifically, the authors laud the achievement gains of Louisiana, Delaware, Maryland and Florida! All, of course, “southern.”

And specifically, the authors laud the early reformyness of Tennessee, North Carolina, Florida, Texas, and Arkansas – as providing possible explanations for the high performance of southern states!

Wait a second…. Those aren’t the same freakin’ states are they? What’s up with that? Did they really do that? Did they really frame it that way?

Here’s what the report says:

Five of the top-10 states were in the South, while no southern states were among the 18 with the slowest growth. The strong showing of the South may be related to energetic political efforts to enhance school quality in that region. During the 1990s, governors of several southern states—Tennessee, North Carolina, Florida, Texas, and Arkansas—provided much of the national leadership for the school accountability effort, as there was a widespread sentiment in the wake of the civil rights movement that steps had to be taken to equalize educational opportunity across racial groups. The results of our study suggest those efforts were at least partially successful.

Meanwhile, students in Wisconsin, Michigan, Minnesota, and Indiana were among those making the smallest average gains between 1992 and 2011. Once again, the larger political climate may have affected the progress on the ground. Unlike in the South, the reform movement has made little headway within midwestern states, at least until very recently. Many of the midwestern states had proud education histories symbolized by internationally acclaimed land-grant universities, which have become the pride of East Lansing, Michigan; Madison, Wisconsin; St. Paul, Minnesota; and Lafayette, Indiana. Satisfaction with past accomplishments may have dampened interest in the school reform agenda sweeping through southern, border, and some western states.

Keep in mind that Louisiana and Delaware didn’t get all reformy until the Race to the Top era. Further, as shown above, Louisiana actually had one of the largest proportionate increases in funding and relatively low growth in low income students.

Here’s a look at the BAIT and at the SWITCH, where I consider the bait to be those precious high outliers – the over-performers in the analysis – and the switch to be the states lauded as implementing the policies supposedly behind this performance. As it turns out, while those early accountability/reform states also saw pretty good gains, their gains are more or less in line with the gains of other states that had similar starting points – at least on 8th grade math (my apologies for simply not having the time to combine all NAEP scores, but the 8th grade math starting point explains 27% of the variation in gain, and along with free lunch change explains 41% of the variation in gain – not bad, and more than Hanushek, Peterson and Woessmann suggest!).

Figure 5 – The BAIT… and the SWITCH!

Why is this relevant? The assertion being made in this report is essentially that the SWITCH group of states were implementing desired policies… policies that sucky states like Michigan and Indiana should perhaps consider – or at least should have, instead of resting on their laurels. Then, perhaps, they could have looked more like the precious bait. The problem is that the only overlap between the BAIT and the SWITCH is Florida – hardly a stereotypical “southern” state… and one whose reformyness and NAEP gains have been discussed & critiqued extensively by others in recent years (no time for that here). And then, of course, we have the proclamation of the suckyness of Michigan and Indiana. Okay… which is it?

The bottom line in all of this is that this new report doesn’t tell us much. I don’t really have a problem with that. What I have a problem with is assuming that it does.

I do have a problem with particularly junky charts/analysis like the one asserting that spending increases have no relationship to outcome increases – with no consideration at all for the regional differences in the value of those increases – and all of the other variables that may… just may… play some role! That’s just lazy and sloppy and inexcusable.

But, at least I’ve got a new handout for discussion & critique for the first week of my fall semester class on data analysis and reporting!

Moneyball, Superman, Angry Royals Fans and Education Reform?

These past few days have been interesting, as I’ve followed, more than usual, the festivities around the Major League Baseball All Star Game. I’ve followed the festivities in part because the game was in Kansas City this year and I lived in the Kansas City ‘burbs for 11 years up until 2008. I’m an east coast guy – born & raised Vermonter, livin’ in Jersey – college in PA, masters in CT, Doc in NYC… also taught in NH. I love east coast cities, and I probably fit the typical east coast snob profile. But some of the events that went down this week at the ASG left me feeling a bit uneasy. Now, even as a kid, I kind of liked the Royals. They were pretty damn good when I was growing up, and had that cool stadium with the fountains. While we lived in KC, we went to quite a few games… ‘cuz tickets were cheap and accessible.[1]

As I sat down to watch the Home Run Derby, I happened to be checking twitter – where I still follow some Kansas City media folks. I started seeing tweets with the hashtag #boocano… along with links to explanations as to why KC fans should boo when Yankee Robinson Cano came to bat. Even as the booing actually happened… and it was quite impressive… the story I was getting from ESPN was strangely disconnected from the story I was getting from my KC tweets.

In case you missed it here’s some video from the stands at the K:

http://www.youtube.com/watch?v=LZlQk861C5c&feature=plcp

http://www.youtube.com/watch?v=sPl9Ez8dE6w&feature=plcp

In fact, ESPN wasn’t sharing much of anything… rather, it was suggesting that the KC fans were being inappropriate and expressing sour grapes simply because their guy (who must suck, because he’s a Royal) didn’t get picked for the Home Run Derby. Eventually, ESPN and Fox would post stories on their websites about how Kansas City fans were “classless” and rude, while never actually sharing the details behind why Royals fans booed Cano. For my east coast peers, here’s a Kansas City rundown of what actually happened, since the national media found it far more convenient to demonize the rough and tumble, classless meanies in Kansas City than the upstanding and esteemed Yankee Cano.

As someone from the east, who headed to KC for 11 years after living in Yonkers, teaching and attending grad school in NYC… I found KC… and its sports fans to be frustratingly mild & passive, but still enthusiastic. Rough and tumble, rude, classless meanies? Nah… those are attributes of the fan base of my team – the Red Sox (remember, I’m a born/raised New Englander) – and we’re damn proud of it!

The national media spin was that KC fans were over-reacting because Billy Butler wasn’t picked for this inconsequential event. There was no mention of the fact that Cano had said he would likely pick Butler – for this inconsequential event. That’s what fueled the whole #boocano movement in social media. So, the whole Boo Cano thing was itself about a lie and a broken promise [whether obnoxious and condescending or simply oblivious on Cano’s part] and was really directed at Cano himself. This wasn’t about some misguided, misplaced Yankee envy from a poor Midwestern team that just can’t get its own act together.

What does this have to do with Education Reform?

The subsequent national media spin was both interesting and disturbing to me –  and I began to see all sorts of parallels between a) the national media coverage of this event and the national media coverage of (and spin on) “education reform” (such as NBC’s Education Nation & Waiting for Superman), and b) the real inequities of major league baseball that thwart any possibility that it will ever be a legitimate, fair competition, and the real inequities of American education that thwart any possibility that kids, regardless of where they grow up will ever have equal opportunity for social mobility.

I was particularly struck by how the national media constructed a storyline that allowed them to generate sympathy for Cano while demonizing Royals fans, blatantly suppressing the actual reasons why those Royals fans were so angry. It’s rather like the demonization of teachers in the ed reform debates (finding the right visuals of teachers as angry mobs protesting, carrying pickets decrying salary cuts & furloughs, etc.). It’s just bizarre. Teachers tend to be about as angry & aggressive and threatening… on average, as, well… Royals fans!

Why, then, are the Royals fans the preferred demons in this story line, and the Yankees and Cano the upstanding victims?  This one particular blog post seems to have nailed it best:

It’s perfectly fine for Phillies fans to be passionate for their team. It’s a crime for the Royals faithful to do the same. Why? Because we’re supposed to be the doormats. Doormats do not speak out about being walked out. They do not protest their role as a cleaner of the feet of the social elite. They do their jobs quietly.

http://kingsofkauffman.com/2012/07/10/we-will-remain-silent-no-longer/?utm_source=twitterfeed&utm_medium=twitter

Even worse, doormats are supposed to feel lucky they are allowed to be the doormats for the elite. Doormats are supposed to know their place, sit down, shut up and take it. Questioning one’s place, as a doormat, is certainly out of the question! [again… this isn’t what the Cano thing was about initially… it wasn’t about salary equity… Yankee envy… etc. It was about Cano. The media response – referred to by one Boston outlet as “yankee Jazeera”, however, was all too illustrative of the media interest in preserving the inequities of baseball – and the status of the Kansas City Royals as doormats!]

What Do Moneyball and Superman Have in Common?

There was a time when Royals fans were legitimately angry and outspoken about the financial inequities of Major League Baseball. They even had the gall to stage a protest against the Yankees when they came to town in 1999. Royals fans donned t-shirts which said “share the wealth” on their backs, and about 3,000 fans with the shirts turned their backs to the Yankees.

Arguments over making baseball more legitimately competitive by capping salaries and/or aggressively sharing revenue seem to have died down since that time – much like arguments about school funding equity or adequacy that were more prominent a few decades ago. I guess this is because, in both cases, we have simply come to realize that money really doesn’t matter. Low payroll teams have as much chance as anyone else of winning? And of course we all know about those charter schools serving low income kids that consistently beat the odds with so few resources?

Hmmm… that still doesn’t make a whole lot of sense. Why would public sentiment shift so sharply away from these glaring inequities? Clearly, even if other stuff in addition to money matters, having a level financial playing field is still relevant. As I explained in a recent post, there is certainly no evidence that more equitable student outcomes are attainable in a less financially equitable system. And there’s certainly no evidence that baseball is fairer by virtue of its huge salary inequities!

When did we become so distracted? How? Why?

Moneyball and Superman!

The American public has to a large degree been duped by clever media portrayals of statistical anomalies and superhero disinformation.

First, let’s take a look at some of the baseball evidence. Here’s the relationship for the current year between win/loss percent and team salaries up to the All Star Break, for the American League (where salary disparities are greatest).

FIGURE 1

Now, here is a look at cumulative salaries and cumulative won/loss percentages from 2009 to the all star break of 2012.

FIGURE 2

Yeah… there’s actually a pattern here. In fact, in the AL, salary variation alone explains nearly half of the variation in won/loss percentage, when taken over time. Money may not be “everything” but it’s clearly something!

But… but… but… MONEYBALL! The concept of Moneyball and its popularity provide MLB an excuse to ignore that which makes the entire sport illegitimate. The idea that if teams just got clever with their statistical analysis – thought about baseball differently – they could realize that this salary stuff is really completely meaningless. Who needs to pay big bucks? It’s about being smart! Yeah… exactly what the big dollar teams would like everyone else to think.

Those wishing to maintain the distraction will often use more anecdotal and less relevant characterizations of the numbers – such as pointing out that in most years the highest payroll team does not win the World Series – and/or that sometimes low payroll teams do really well – MONEYBALL!

Two important points are in order here. First, even if a team does come up with a clever strategy that works well in one season – like finding the cheapest players who add value to the team – as other teams catch on and adopt similar strategies, the market adjusts and those with the big bucks still win.

Second, outliers and/or outlier seasons are not a basis for making judgments about what is better policy for achieving a legitimate competitive playing field for Major League Baseball.

This is much the same argument – and a similar distraction being used in the education reform debates. The argument is that parents and kids in low income districts need to shut up and sit down, not ask for a fair share of funding. Instead, they should play moneyball! Or… uh… no money… ball. And, since they are incapable of determining the rules for themselves, we shall impose upon them a statistical system of teacher reshuffling and deselection!  We’ll moneyball their schools for them – through ill-conceived reformy state mandates… with few or no additional resources attached!

Let’s take a look at two of our least equitable states, New York and Illinois. I’ve used these graphs before in posts, and they come from this recent paper: https://aefpweb.org/sites/default/files/webform/Baker.AEFP_.NY_IL.Unpacking.Jan_2012.pdf

FIGURE 3: ILLINOIS PUBLIC SCHOOL DISTRICTS 2008-09

FIGURE 4: NEW YORK PUBLIC SCHOOL DISTRICTS 2008-09

Each of these graphs (statistical analysis explained in the linked paper) shows that in each state there are districts that have very high resource levels – after adjusting for student needs and district cost factors – and there are districts that have lower resource levels.

In each case, higher need districts, serving very low income populations and lacking the resources to get the job done have systematically lower outcomes.  In really simple terms, there are winners and there are losers – there are Royals and there are Yankees – and there are resource disparities that match.

The whole idea behind Waiting for Superman, like that behind Moneyball, is to assert (read: deceive) that there are clever, costless strategies out there being used by (mainly charter) schools that simply beat the odds, while serving the very same kids and while having no special, additional resources upon which to draw.

It’s got nothing at all to do with money! Instead, like the 2002 Oakland A’s, schools that beat the odds know how to buck the standard practices of the game, recruit exceptional team players, and callously – I mean efficiently – dump those who don’t immediately produce.

Unfortunately, many modern reform strategies and rhetoric are little more than distractions from the root issues of inequity in the American Education System – just like Moneyball was a convenient distraction from the inequities that plague MLB. While there might be some legitimate lessons to be learned in each case (including lessons on using statistics in decision making, where relevant), neither moneyball nor superman validate a claim that money really doesn’t matter.  It does.

Again, it’s utterly foolish to assert that baseball is fairer by maintaining salary inequity, and similarly ridiculous to assert that equitable schooling can be more easily achieved with vastly inequitable funding.

How Education is Different from Baseball

Now, here’s the big difference between public schooling and Major League Baseball:

Educating future generations of children isn’t a freakin’ game!

Yeah – Major League Baseball will never have any credibility as a legitimate competitive sport as long as it permits some teams to spend more than 3.5 times what other teams do. Arguably, MLB has little interest in favoring such credibility over generating revenues. MLB likely benefits more as a commercial for-profit entity by maintaining the disparity than by quashing it. TV revenues are likely higher when the World Series includes big market teams. So it’s in the interest of MLB to increase the odds that big market teams make the series.  So, I accept that the revenue interests of the sport override any efforts to make it a legitimate competition. So be it.

One can make a similar case that it’s in the interest of those who have the resources in elementary and secondary education to suppress the odds of children from lower income families competing for admission to colleges and universities. But while it may be reasonable to overlook such interests in Baseball, I find it somewhat more offensive when it comes to kids and their schools.

So, yeah… I think the Royals fans were just fine when they booed Cano, and the media was simply wrong for demonizing them while selectively presenting facts.

But those Royals fans were even more right when they donned those t-shirts back in 1999.  Yeah… it is the money. Money matters. Equity matters.

And don’t let Moneyball or Superman convince you otherwise.

 


[1] Funny tangent – being an east coast snob [having just finished my doc work at Columbia the previous year] and understanding how ticket access works back east, when I went to get our first Royals tickets I called in a favor through a friend in the MLB central office to get us some extra-special seats… they gave me the phone # of someone in the Royals front office… who seemed to think I was being a total ass by trying to get a favor… free tickets… from a team that could really use the ticket revenue! In retrospect, he was totally right!

Friday Finance 101: School Finance Formula & Money Matters Basics

Modern state school finance formulas – aid distribution formulas – typically strive (but fail) to achieve two simultaneous objectives: 1) accounting for differences in the costs of achieving equal educational opportunity across schools and districts, and 2) accounting for differences in the ability of local public school districts to cover those costs. Local district ability to raise revenues might be a function of local taxable property wealth, of the incomes of local property owners (and thus their ability to pay taxes on those properties), or of both.

Figure 1 presents a hypothetical example of the distribution of state and local revenue per pupil across school districts, sorted by poverty concentration. The hypothetical relies on the simplified assumption that districts with weaker local revenue raising capacity also tend to be higher in poverty concentration. While that’s not uniformly true, there is often at least some correlation between the two [it serves to make this hypothetical a bit more straightforward]. Accepting this oversimplified characterization, Figure 1 shows that the typical low poverty and high local fiscal capacity district would likely raise the vast majority of the cost of providing its children with equal educational opportunity through local tax dollars. There may be some small share of state general aid assuming that the total cost of providing equal educational opportunity exceeds the local resources raised with a fair tax rate.

Figure 1

 

This pattern is usually arrived at (if it is arrived at) through some overly complicated formula requiring multiple inefficiently and illogically laid out spreadsheets of calculations and based on measures for which each state chooses its own, completely distinct and unrecognizable nomenclature. A short version might go as follows:

Step 1 – determine target funding level (need & cost adjusted foundation level) per pupil for each district

Target Funding per Pupil = Foundation Level x Student Need Adjustments x Geographic Cost Adjustments

Where the foundation level is some specified per pupil dollar amount. Where student need adjustments include adjustments for individual student educational needs, as for children with limited English language proficiency and children with one or more disabilities, and collective characteristics of the student population such as poverty, homelessness and/or mobility/transiency rates. Where geographic costs refer to geographic variations in competitive wages, and factors such as economies of scale and population sparsity.

Step 2 – determine the share of target funding to be raised by local communities

State Aid per Pupil = Target Funding per Pupil – Local Fair Share

Yep. That’s it. Student needs and costs are accommodated in Step 1, and differences in local wealth and/or capacity to pay are accommodated in Step 2! Now convert that into about 2,000+ separate calculations and create incomprehensible names for each measure (like calling a weight on “low income students” a “student success factor”) and you’ve got a state school finance formula.
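To make the two steps concrete, here is a minimal sketch in Python. The foundation level, need weights, cost index, and local share figures are purely illustrative – no actual state’s parameters:

```python
# Hypothetical sketch of the two-step foundation aid calculation described
# above. All dollar figures and weights are illustrative.

def target_funding_per_pupil(foundation_level, need_weight, geo_cost_index):
    """Step 1: need- and cost-adjusted foundation level per pupil."""
    return foundation_level * need_weight * geo_cost_index

def state_aid_per_pupil(target_per_pupil, local_fair_share_per_pupil):
    """Step 2: state aid fills the gap between the target and the local share.
    Aid is floored at zero -- wealthy districts don't send money back
    under a typical (non-recapture) foundation formula."""
    return max(0.0, target_per_pupil - local_fair_share_per_pupil)

# A high-poverty, low-wealth district: higher need weight, small local share.
target_hi = target_funding_per_pupil(10_000, need_weight=1.30, geo_cost_index=1.05)
aid_hi = state_aid_per_pupil(target_hi, local_fair_share_per_pupil=3_000)

# A low-poverty, high-wealth district: base need weight, large local share.
target_lo = target_funding_per_pupil(10_000, need_weight=1.00, geo_cost_index=1.05)
aid_lo = state_aid_per_pupil(target_lo, local_fair_share_per_pupil=12_000)

print(round(target_hi), round(aid_hi))  # 13650 10650
print(round(target_lo), round(aid_lo))  # 10500 0
```

Note how student needs enter only in Step 1 and local fiscal capacity only in Step 2 – which is the core logic that the 2,000-calculation spreadsheet versions tend to obscure.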

But I digress.

Implicit in the design of state school finance systems is that money may be leveraged for improving both the measured and unmeasured outcomes of children.  That is, that money matters to the quality of schooling that can be provided in general and that money matters toward the provision of special services for children with greater educational needs. That is, money can be an equalizer of educational opportunity.

In a typical foundation aid formula, it is implied that a foundation level of “X” should be sufficient for producing a given level of student outcomes in an average school district. It is then assumed that if one wishes to produce a higher level of outcomes, the foundation level should be increased. In short, it costs more to achieve higher outcomes[1] and the foundation level in a state school finance formula is the tool used for determining the overall level of support to be provided.

Further, it is assumed that resource levels may be adjusted in order to permit districts in different parts of the state to recruit and retain teachers of comparable quality. That is, the wages paid to teachers affect who will be willing to work in any given school. In other words, teacher wages affect teacher quality and in turn they affect school quality and student outcomes. This is plain common sense, and this teacher wage effect operates at two levels. First, in general, teacher wages must be sufficiently competitive with other career opportunities for similarly educated individuals. The overall competitiveness of teacher wages affects the overall academic quality of those who choose to enter teaching.[2] Second, the relative wages for teachers across local public school districts determine the distribution of teaching quality.[3] Districts with more favorable working conditions (more desirable facilities, fewer low income and minority students) can pay a lower wage and attract the same teacher. Wages matter, therefore, money matters.

Finally, those student need adjustments in state school finance formulas assume that the additional resources can be leveraged to improve outcomes for low income students, or students with limited English language proficiency. First, note that some share of the additional resources is needed in higher poverty settings simply to provide for “real resource” equity – or to pay the wage premium for doing the more complicated job. Second, resource intensive strategies such as reduced class sizes in the early grades, high quality (using qualified teaching staff)[4] early childhood programs, intensive tutoring and extended learning time programs may significantly improve outcomes of low income students. And these strategies all come with significant additional costs (even when adopted under the veil of “no excuses charterdom“).

But, because providing more money to support public schools often means raising more tax dollars and because providing supplemental resources to children whose own communities may lack local revenue raising capacity often means more aggressive redistribution of state tax revenues, whether and how money  matters in education is often hotly politically contested.

School finance is a political minefield, which is arguably why so many pundits have tried to distract from school finance issues by advancing ludicrous arguments that education equity and overall quality can be improved by altering teacher labor markets via statistical deselection without ever addressing funding deficiencies and wage disparities or by expanding charter schooling and ignoring the role of philanthropic contributions (while counting on them).  Unfortunately for those political pundits, school finance is a minefield they must eventually walk through if they ever expect to make real progress in resolving quality or equity concerns.

In a recent report titled Revisiting the Age Old Question: Does Money Matter in Education?[5] I review the controversy over whether, how and why money matters in education, evaluating the current political rhetoric in light of decades of empirical research.  I ask three questions, and summarize the response to those questions as follows:

Does money matter? Yes. On average, aggregate measures of per pupil spending are positively associated with improved or higher student outcomes. In some studies, the size of this effect is larger than in others and, in some cases, additional funding appears to matter more for some students than others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent – in other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.

Do schooling resources that cost money matter? Yes. Schooling resources which cost money, including class size reduction or higher teacher salaries, are positively associated with student outcomes. Again, in some cases, those effects are larger than others and there is also variation by student population and other contextual variables. On the whole, however, the things that cost money benefit students, and there is scarce evidence that there are more cost-effective alternatives.

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

While there may in fact be better and more efficient ways to leverage the education dollar toward improved student outcomes, we do know the following:

  • Many of the ways in which schools currently spend money do improve student outcomes.
  • When schools have more money, they have greater opportunity to spend productively. When they don’t, they can’t.
  • Arguments that across-the-board budget cuts will not hurt outcomes are completely unfounded.

In short, money matters, resources that cost money matter and more equitable distribution of school funding can improve outcomes. Policymakers would be well-advised to rely on high-quality research to guide the critical choices they make regarding school finance.

Regarding the politicized rhetoric around money and schools, which has become only more bombastic and less accurate in recent years, I explain the following:

Given the preponderance of evidence that resources do matter and that state school finance reforms can effect changes in student outcomes, it seems somewhat surprising that not only has doubt persisted, but the rhetoric of doubt seems to have escalated. In many cases, there is no longer just doubt, but rather direct assertions that schools can do more than they are currently doing with less than they presently spend; that money is not a necessary underlying condition for school improvement; and, in the most extreme cases, that cuts to funding might actually stimulate improvements that past funding increases have failed to accomplish.

To be blunt, money does matter. Schools and districts with more money clearly have greater ability to provide higher-quality, broader, and deeper educational opportunities to the children they serve. Furthermore, in the absence of money, or in the aftermath of deep cuts to existing funding, schools are unable to do many of the things they need to do in order to maintain quality educational opportunities. Without funding, the efficiency tradeoffs and innovations being broadly endorsed are suspect. One cannot trade off spending money on class size reductions against increasing teacher salaries to improve teacher quality if funding is not there for either – if class sizes are already large and teacher salaries non-competitive. While these are not the conditions faced by all districts, they are faced by many.

It is certainly reasonable to acknowledge that money, by itself, is not a comprehensive solution for improving school quality. Clearly, money can be spent poorly and have limited influence on school quality. Or, money can be spent well and have substantive positive influence. But money that’s not there can’t do either. The available evidence leaves little doubt: Sufficient financial resources are a necessary underlying condition for providing quality education.

There certainly exists no evidence that equitable and adequate outcomes are more easily attainable where funding is neither equitable nor adequate. There exists no evidence that more adequate outcomes will be attained with less adequate funding. Both of these contentions are unfounded and quite honestly, completely absurd.

 


[1] Duncombe, W. and Yinger, J.M. (1999). Performance Standards and Education Cost Indexes: You Can’t Have One Without the Other. In H.F. Ladd, R. Chalk, and J.S. Hansen (Eds.), Equity and Adequacy in Education Finance: Issues and Perspectives (pp.260-97). Washington, DC: National Academy Press.

[2] Allegretto, S.A., Corcoran, S.P., Mishel, L.R. (2008) The Teaching Penalty: Teacher Pay Losing Ground. Washington, DC: Economic Policy Institute. Murnane, R.J., Olsen, R. (1989) The Effects of Salaries and Opportunity Costs on Length of Stay in Teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352. Figlio, D.N. (2002) Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review 55, 686-699. Figlio, D.N. (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271. Ferguson, R. (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation 28 (2) 465-498. Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408. Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics, April, 49-71.

[3] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144. Lankford, H., Loeb, S., Wyckoff, J. (2002) Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis 24 (1) 37-62. Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy 6 (3) 399-438. Clotfelter, C.T., Glennie, E., Ladd, H.F., Vigdor, J.L. (2008) Would Higher Salaries Keep Teachers in High-Poverty Schools? Evidence from a Policy Intervention in North Carolina. Journal of Public Economics 92, 1352-70.

[5] Baker, B.D. (2012) Revisiting the Age Old Question: Does Money Matter in Education. Shanker Institute. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

More thoughts on Charter Punditry & Declarations of Certainty

I’m a little late in pouncing on this one. JerseyJazzMan beat me to the punch with some relevant points.  A short while back, the Wall Street Journal posted an op-ed by Deborah Kenny, CEO of New York-based charter chain Harlem Village Academies. Kenny’s op-ed purported to explain why charter schools are successful.  Of course, we could spend all day on that contention alone, since it is relatively well understood that charter results have been mixed at best. Indeed, I have explained in my published work and in blog posts that the track record for certain charter chains and in certain settings seems stronger than in others.

Here is how Deborah Kenny explained why charters succeed (implicitly where traditional public schools do not):

Critics claim that charter schools are successful only because they cherry-pick students, because they have smaller class sizes, or because motivated parents apply for charter lotteries and non-motivated parents do not. And even if charters are successful, they argue, there is no way to scale that success to reform a large district.

None of that is true. Charters succeed because of their two defining characteristics—accountability and freedom. In exchange for being held accountable for student achievement results, charter schools are generally free from bureaucratic and union rules that prevent principals from hiring, firing or evaluating their own teams.

http://online.wsj.com/article_email/SB10001424052702303703004577472422188140892-lMyQjAxMTAyMDIwNDEyNDQyWj.html?mod

As is par for the course of late in such arguments, Kenny’s chartery punditry is completely devoid of any data or contextual information that might provide insights as to why, or even whether, charter schools “succeed.” Yet, while bafflingly devoid of substantiation, Kenny’s punditry is disturbingly decisive & hyper-confident.

It is yet another case of declaring to know absolutely what we absolutely don’t know!

For the moment, let’s accept Kenny’s proposition that at least in New York City, many charter schools affiliated with high profile management organizations have posted solid test scores (not entirely the case… but let’s accept that proposition…).

So then, let’s compare New York City charter schools from these CMO chains to traditional public schools in the city on a handful of key parameters – a) how much they spend and b) which kids they serve – each relative to the schools they supposedly far outshine.  These are things that actually matter. Now… if they do spend the same as NYC traditional public schools and serve similar student populations, we might be able to make the case that their “success” is a function of something different that they are doing with the same dollar – more bang for the buck. A relevant question… but a hard one to distill. But, if they serve very different student populations, then it’s even harder to distill what the heck is really going on.[1]

Further, if they are outspending NYC public schools that do serve similar populations, their access to resources may be what allows them to do different stuff… which may then explain their supposed “success.”  It would certainly be hard to make the above claims without looking at any of this, wouldn’t it?

So, here’s the stat sheet:

For each of these comparisons I have used a three year panel of data on NYC charter schools and all NYC traditional public schools, from 2008 to 2010. To compare spending, I have used the estimates generated in our recent report on charter school spending:

  • Baker, B.D., Libby, K., & Wiley, K. (2012). Spending by the Major Charter Management Organizations: Comparing charter school and local public district financial resources in New York, Ohio, and Texas. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/spending-major-charter.

Further discussion of the spending comparisons for NYC can be found here: https://schoolfinance101.wordpress.com/2012/05/07/no-excuses-really-another-look-at-our-nepc-charter-spending-figures/

In short, each of these charter chains spends more per pupil than NYC public schools that serve similar student populations. Some, like KIPP and Uncommon Schools, spend a lot more!

Further, when compared against same grade level schools citywide, each of these charter chains serves fewer children with disabilities (and I lack data on the type of disabilities, which may also matter).

Finally, when compared against same grade level schools in the same zip code, each of these charter chains serves far fewer low income children and FAR fewer children with limited English language proficiency.

These substantive differences in resources and student populations make it difficult if not impossible to assert that these charter school chains operating in New York City have somehow identified a magic formula for success that is neither resource dependent nor dependent on serving very different student populations than city district schools.
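For readers curious about the mechanics, the matched-comparison logic used above – lining each charter up against district schools serving the same grade span in the same zip code – can be sketched roughly as follows. The handful of school records here are invented for illustration only, not actual NYC data:

```python
# Sketch of a matched comparison: each charter vs. the average of district
# schools serving the same grade span in the same zip code.
# All school records below are hypothetical.
from statistics import mean

schools = [
    {"name": "Charter A",  "sector": "charter",  "zip": "10027", "grades": "K-8", "pct_low_income": 62},
    {"name": "District 1", "sector": "district", "zip": "10027", "grades": "K-8", "pct_low_income": 88},
    {"name": "District 2", "sector": "district", "zip": "10027", "grades": "K-8", "pct_low_income": 91},
    {"name": "District 3", "sector": "district", "zip": "10031", "grades": "K-8", "pct_low_income": 85},  # different zip: excluded
]

def matched_gap(charter, schools, measure):
    """Charter value minus the mean of same-zip, same-grade district schools."""
    comparison = [s[measure] for s in schools
                  if s["sector"] == "district"
                  and s["zip"] == charter["zip"]
                  and s["grades"] == charter["grades"]]
    return charter[measure] - mean(comparison)

print(matched_gap(schools[0], schools, "pct_low_income"))  # -27.5
```

A negative gap on the low-income measure means the charter serves a smaller share of low-income children than its matched district neighbors; the same function could be applied to per pupil spending, disability rates, or English language proficiency rates.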

There is certainly no basis whatsoever for asserting that accountability and freedom – specifically freedom from bureaucratic and union rules – are necessarily the determinants of charter success. In fact, these broad principles apply similarly to all independent charters, but while some are good, others suck – and many are allowed to persistently suck despite supposed heightened accountability. Indeed, the upper half is better than average! And the lower half… is not!

It’s hard to suggest that either of these factors – accountability or freedom – are the determinants of charter success when success varies so widely across charters. What does tend to vary across charters is a) access to philanthropic resources and b) student populations served. AND… it may also be the case that some charters have adopted unique strategies…… some of which may actually come with additional costs!

There may be some cool stuff going on in some of these schools, just as there may be some cool stuff going on in NYC district schools.  It may well be that freedom from bureaucratic rules permits schools to do cool stuff.  It would certainly seem advantageous in the context of New York State moving forward to be able to skip out on complying with new, ill-conceived teacher evaluation legislation.

We need to figure out what works and for whom, whether those ideas come from traditional public schools, charter schools or private schools.

We need to figure out the costs of doing these things. Ken Libby, Kathryn Wiley and I discuss these issues in our recent policy brief (read it! It’s not some anti-charter propaganda. It’s an actual study of spending data… with detailed documentation & extensive lit review).

Unfortunately, the tendency among charter “defenders” is to simply deny, deny, deny… ignore costs (make bizarre, unfounded excuses, present half-assed, back of the napkin estimates, or sidestep them)… ignore substantive contextual issues, etc., etc., etc. (certainly, the tendency among the attackers is to declare all charter operators/supporters to be union-busting privatizing profiteers – also an unhelpful characterization for a diverse array of institutions).

It’s time to start digging deeper into what makes schools tick and for whom and how to provide the mix of schooling that best serves the largest share of children.


[1] As I explained in a recent post, even in a lottery study – of students lotteried in/lotteried out – those lotteried out likely attend schools with substantively different classroom peers than those lotteried in, and it remains difficult if not impossible to distill school/teacher effect from peer effect since both operate at the classroom level.


How much does Federal Title I Funding Affect Fairness in State School Finance Systems?

About this much!

These funding profiles are based on the methodology used in our reports on school funding fairness. The reports can be found here: http://schoolfundingfairness.org/ and the technical appendix can be found here: http://schoolfundingfairness.org/

This graph is based on an updated model which includes data from 2007-08, 2008-09 and 2009-10 (these are linear projections of otherwise messy distributions… hence the fact that some of the lines cross at/around 0% poverty).

The bottom line is that while Federal Title I programs certainly provide much needed funds to many high poverty districts, in the big picture, they are a drop in the bucket. They are now, and have been for some time.

The states in this figure are among the least equitable in the nation. And Title I aid simply isn’t sufficient to fix that. Equitable and adequate financing of local public school districts remains the responsibility of the states, and these states have some work to do!

Friday Finance 101: What Can we Learn about Education Costs & Efficiency by Studying Existing Public Schools?

One pervasive reformy argument is that our entire education system may be instantly transformed to be more productive and efficient by instantly adopting untested reformy policies and/or untested solutions from sectors other than education. Further, that we must take these bold leaps of faith because the public education system itself is too corrupt, too bloated, too inefficient to provide any useful lessons! Perhaps the whole system can be replaced with YouTube videos. Or perhaps we can just fire all of the teachers with more than 10 years of experience and pay the rest based on the test scores they produce! Or perhaps some other lessons of industry can cure the (unsubstantiated) ills of American public schooling!

Kevin Welner and I addressed this issue in our critique of materials provided on the U.S. Department of Education’s website on improving educational productivity.  Specifically, in a working paper titled Curing Baumol’s Disease, Marguerite Roza and Paul Hill argue that the entire public schooling system suffers from a disease of inefficiency and thus any lessons for improving educational productivity must be sought outside of the current system.

Similar arguments have been used by those who claim that state legislatures and state courts should never rely on cost analyses based on current practices of existing educational systems in order to either guide the design of state school finance systems through reform legislation, or to evaluate whether state school finance systems are equitable or adequate.

Researchers and policy analysts tend to use one of two general approaches to study education costs – that is, to identify spending levels that should generally be sufficient for achieving desired outcomes, and to identify how those costs vary across districts within a state and with the needs of varied student populations. One approach involves gathering focus groups of informed constituents to specify the inputs to schooling they believe are needed to get the job done. These professional judgment panels are essentially proposing a hypothesis of the programs and services needed, under varied conditions and for varied student populations, to achieve desired outcomes. The alternative is to construct statistical models which estimate the relationship between current district spending levels and current student outcomes, with consideration for various factors that affect the cost of achieving desired outcomes (student characteristics, district characteristics, labor market pressures) and for factors that influence whether districts are more or less likely to spend inefficiently.

This approach, called education cost function modeling, has been used extensively in peer-reviewed studies of education costs and cost variation.[1]  As Tom Downes, an economist from Tufts University, explained back in 2004: “Given the econometric advances of the last decade, the cost-function approach is the most likely to give accurate estimates of the within-state variation in the spending needed to attain the state’s chosen standard, if the data are available and of a high quality” (p. 9).[2]
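A drastically simplified version of the idea – regressing district spending on an outcome measure and a cost factor – might look like the sketch below. It uses synthetic data and plain ordinary least squares; actual cost function studies involve many more controls, adjustments for inefficiency, and corrections for the endogeneity of outcomes:

```python
# Toy "cost function": regress per-pupil spending on an outcome index and a
# poverty rate across hypothetical districts. All data are synthetic.

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Columns: intercept, outcome index, % poverty. Spending is constructed as
# 8000 + 50*outcome + 30*poverty, so OLS should recover those coefficients.
districts = [(60, 40), (70, 20), (80, 10), (65, 50), (75, 30)]
X = [[1.0, o, p] for o, p in districts]
y = [8000 + 50 * o + 30 * p for o, p in districts]

intercept, per_outcome, per_poverty = ols(X, y)
print(round(intercept), round(per_outcome), round(per_poverty))  # 8000 50 30
```

With exact synthetic data the regression recovers the construction coefficients; with real data, estimates come with error ranges – which is why cost model results should be read as reasonable marks rather than precise predictions.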

But, because these methods are sometimes used beyond academic journals, and in the highly political context of estimating not only how much money is possibly needed to achieve certain outcomes, but also how that money should be distributed across districts and children, they are not without controversy. These methods become the subject of more heated debate when they are introduced as evidence to assist judges in their evaluation of the constitutionality of state school finance systems. Heck, as explained below, a few authors have gone to great lengths to try to explain/argue how such information should never be used to either guide policy development or evaluate the rationality of current policies. Those assertions are completely unjustified.

The goal of education cost modeling – or any form of cost analysis – whether applied for evaluating equal educational opportunity or for producing adequacy cost estimates, is to establish “reasonable marks” to provide guidance in developing more rational state school finance systems. Only with reasonable marks in hand can one make informed judgments as to whether existing policies are wide of those marks.

Historically, funding levels for state school finance systems have largely been determined by taking the total revenue generated for schooling as a function of statewide tastes for taxation and dividing that funding by the number of students in the system. That is, the budget constraint – or total available revenue – and total student enrollment have been the key determinants of the foundation level, or basic allotment. To some degree, this will always be true. But reasonable estimates of the “cost” of producing desired outcomes, given current technologies of production (the range of practices actually used/tested), may influence the taste for additional taxes by revealing that the preferences regarding taxation and the preferences regarding desired quality of public education are misaligned, meaning that one or the other should be adjusted. That is, if we find out that higher outcomes are going to cost us more, we can then have a more reasonable discussion of whether we are willing to pay that amount more for the expected gain in quality, or whether to lower our expectations. Alternatively, we can simply fly blind!

It’s rather like the individual who wishes to buy a Cadillac Escalade but expects only to spend about $25,000. After a little research, he finds that he can either buy a Ford F-150 for $25,000 or an Escalade for $65,000. That’s where that little bit of research comes in handy – identifying the gap between uninformed assumptions and reasonably informed ones, albeit with greater precision (actual prices) in this example than in cost estimation in education. Heck, if one wants to get really crazy with this, one could fit a statistical model relating prices with various features of existing makes and models of “comparable” vehicles.

Reasonable estimates of cost may also assist courts in determining whether current funding levels and distributions are wide of a reasonable mark, or substantially misaligned with constitutional standards. Cost model estimates are not meant to be exact predictions of what student outcomes will necessarily occur next year if we suddenly adopt a state school finance system based on the cost model estimates. Cost models provide guidance regarding the general levels (predictions with error ranges) of funding increases that would be required to produce measured outcomes at a certain level, assuming that districts are able to absorb the additional resources without efficiency loss.

Studies of state school finance reform also suggest that the key to successful school finance reforms is that they are both substantive and sustained. If additional dollars to high need districts are best leveraged toward high quality preschool programs and/or early grades class size reduction, we are unlikely to see changes to college readiness outcomes the following year (or following five years). If the additional dollars are best leveraged toward increasing teacher salaries for teachers in their optimal years of experience, allowing districts to recruit and retain “better” teachers over time, we are also unlikely to see immediate returns in student test scores.

Importantly, cost model estimates are estimates based on the actual production technologies of schooling. They are based on the outcomes schools and/or districts produce under different circumstances, for different children – the actual children they serve, based on the actual assessments given, and based on the real conditions under which children attend school.

Some critics of education cost analysis in general, and cost function modeling in particular assert that all local public school districts are simply inefficient, mainly because they pay their personnel based on parameters not associated with improved student outcomes.[3] Therefore, they assert that it is useless to consider the spending practices of current districts when trying to determine how much needs to be spent to achieve desired outcomes. A common version of this argument goes that if schools/districts paid teachers based on test scores they produce and if schools/districts systematically excessed ineffective teachers, productivity would increase dramatically and spending would decline. Thus, educational adequacy could be achieved at much lower cost, and therefore, estimating costs based on current conditions/practices is a meaningless endeavor.[4]

The most significant problem with this logic is that there exists absolutely no empirical evidence to support it. It is entirely speculative, frequently based on the assertion that teacher workforce quality can be improved with no increase to average wages, simply by firing the bottom 5% each year and paying the rest based on the student test scores they produce.  To return to the car purchasing analogy above, this is like assuming that somewhere out there is a car/truck with all the features of the Escalade, but the price of the F-150 – specifically, a version of the Escalade itself produced by a new, yet-to-be-discovered technology with materials not yet invented that allow that vehicle to be sold at less than 1/3 its original price.

In fact, the logical way to test these very assertions would be to permit or encourage some schools/districts to experiment with alternative compensation strategies, and other “reforms,” and to include these schools and districts among those employing other strategies (production technologies) in a cost model, and see where they land along the curve. That is, do schools/districts that adopt these strategies land in a different location along the curve? In fact, some schools and districts do experiment with different strategies and those schools carry their relevant share of weight in any statewide cost model. Thus far, what we seem to be seeing is that the more productive experimental approaches being used a) aren’t that bold and b) cost quite a bit!

Pure speculation that some alternative educational delivery system would produce better outcomes at much lower expense is certainly no basis for making a judicial determination regarding constitutionality of existing funding, and is an unlikely (though not unheard of) basis for informing statewide mandates or legislation.  Cost model estimates, as well as recommendations of professional judgment and expert panels can serve to provide useful, meaningful information to guide the formulation of more rational, more equitable and more adequate state school finance systems.


[1] Duncombe, W., Yinger, J. (2008) Measurement of Cost Differentials In H.F. Ladd & E. Fiske (eds) pp. 203-221. Handbook of Research in Education Finance and Policy. New York: Routledge.  Duncombe, W., Yinger, J. (2005) How Much More Does a Disadvantaged Student Cost? Economics of Education Review 24 (5) 513-532. Duncombe, W.D. and Yinger, J.M. (2000).  Financing Higher Performance Standards: The Case of New York State. Economics of Education Review, 19 (3), 363-86. Duncombe, W., Yinger, J. (1999). Performance Standards and Education Cost Indexes: You Can’t Have One Without the Other. In H.F. Ladd, R. Chalk, and J.S. Hansen (Eds.), Equity and Adequacy in Education Finance: Issues and Perspectives (pp.260-97). Washington, DC: National Academy Press. Duncombe, W., Yinger, J. (1998) “School Finance Reforms: Aid Formulas and Equity Objectives.” National Tax Journal 51, (2): 239-63. Duncombe, W., Yinger, J. (1997). Why Is It So Hard to Help Central City Schools? Journal of Policy Analysis and Management, 16, (1), 85-113. Imazeki, J., Reschovsky, A. (2004b) Is No Child Left Behind an Un (or under)funded Federal Mandate? Evidence from Texas. National Tax Journal 57 (3) 571-588.

[2] Downes (2004) What is Adequate? Operationalizing the Concept of Adequacy for New York State. http://www.albany.edu/edfin/Downes%20EFRC%20Symp%2004%20Single.pdf

[3] Hanushek, E. (2005, October). The alchemy of ‘costing out’ an adequate education. Paper presented at the Adequacy Lawsuits: Their Growing Impact on American Education conference, Cambridge, MA. Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223.

[4] For elaboration on this argument, see: Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223

Friday Finance 101: Equitable and Adequate Funding and Teacher Quality is Not an Either-Or choice!

In recent years, the casual observer of debates over public education policy might be led to believe that improving teacher quality and ensuring that low income and minority school children have access to high quality teachers has little or nothing to do with the equity or adequacy of school financing. The casual observer might be led to believe that there actually exists a sizable body of empirical research confirming a) that high quality teachers matter, b) that money doesn’t matter and c) that, by extension, money has nothing to do with recruiting, retaining or redistributing teacher quality. These arguments, while politically convenient for those hoping to avoid thorny questions of tax policy and state aid formulas, are not actually grounded in any body of decisive, empirical research. Rather, to the contrary, it is reasonably well understood that while teacher quality does indeed matter, teacher wages and teacher working conditions matter too, both for the level of quality of the overall teacher workforce and for the distribution of quality teachers.

The modern debate over the role of teachers and teaching quality in improving student outcomes dates back to findings of the Coleman report in the 1960s. The Coleman report looked at a variety of specific schooling resource measures, most notably teacher characteristics, finding positive relationships between these traits and student outcomes. A multitude of studies on the relationship between teacher characteristics and student outcomes have followed, producing mixed messages as to which matter most and by how much.[1] Inconsistent findings on the relationship between teacher “effectiveness” and how teachers get paid – by experience and education – added fuel to the “money doesn’t matter” fire. Since a large proportion of school spending necessarily goes to teacher compensation, and (according to this argument) since we’re not paying teachers in a manner that reflects or incentivizes their productivity, then spending more money won’t help.[2] In other words, the assertion is that money spent on the current system doesn’t matter, but it could if the system were to change.

Of course, in a sense, this is an argument that money does matter. But it also misses the important point about the role of experience and education in determining teachers’ salaries, and what that means for student outcomes.

While teacher salary schedules may determine pay differentials across teachers within districts, the simple fact is that where one teaches is also very important in determining how much he or she makes.[3] Arguing over attributes that drive the raises in salary schedules also ignores the bigger question of whether paying teachers more in general might improve the quality of the workforce and, ultimately, student outcomes. Teacher pay is increasingly uncompetitive with that offered by other professions, and the “penalty” teachers pay increases the longer they stay on the job.[4]

A substantial body of literature has accumulated to validate the conclusion that both teachers’ overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olsen (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[5] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[6] In addition, more recent studies have tackled the specific issue of relative pay noted above. Loeb and Page showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[7]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[8]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[9]

With regard to teacher quality and school racial composition, Hanushek, Kain, and Rivkin (2004) note: “A school with 10 percent more black students would require about 10 percent higher salaries in order to neutralize the increased probability of leaving.”[10] Others, however, point to the limited capacity of salary differentials to counteract attrition by compensating for working conditions.[11]

Finally, it bears noting that those who criticize the use of experience and education in determining teachers’ salaries must of course produce a better alternative, and there is even less evidence behind increasingly popular ways to do so than there is to support the policies they intend to replace. In a perfect world, we could tie teacher pay directly to productivity, but contemporary efforts to do so, including performance bonuses based on student test results,[12] have thus far failed to produce concrete results in the U.S. More promising efforts to measure productivity, such as new teacher evaluations that incorporate heavily-weighted teacher productivity measures based on their students’ test scores, are still a work in progress, and there is not yet evidence that they will be any more effective (or cost-effective) in attracting, developing or retaining high-quality teachers.

To summarize, despite all the uproar about paying teachers based on experience and education, and its misinterpretations in the context of the “Does money matter?” debate, this line of argument misses the point. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.


[1] Hanushek, E.A. (1971) Teacher Characteristics and Gains in Student Achievement: Estimation Using Micro Data. American Economic Review 61 (2) 280-288, Clotfelter, C.T., Ladd, H.F., Vigdor, J.L. (2007) Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review 26 (2007) 673–682, Goldhaber, D., Brewer, D. (1997) Why Don’t Schools and Teachers Seem to Matter? Assessing the Impact of Unobservables on Educational Productivity. The Journal of Human Resources, 32 (3) 505-523, Ehrenberg, R. G., & Brewer, D. J. (1994). Do school and teacher characteristics matter? Evidence from High School and Beyond. Economics of Education Review, 13(1), 1-17, Ehrenberg, R. G., & Brewer, D. J. (1995). Did teachers’ verbal ability and race matter in the 1960s? Economics of Education Review, 14(1), 1-21, Jepsen, C. (2005). Teacher characteristics and student achievement: Evidence from teacher surveys. Journal of Urban Economics, 57(2), 302-319, Jacob, B. A., & Lefgren, L. (2004). The impact of teacher training on student achievement: Quasi-experimental evidence from school reform. Journal of Human Resources, 39(1), 50-79, Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417-458, Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains. Review of Educational Research, 73(1), 89-122. For a recent review of studies on the returns to teacher experience, see: Rice, J.K. (2010) The Impact of Teacher Experience: Examining the Evidence and Policy Implications. National Center for Analysis of Longitudinal Data in Educational Research.

[2] Some go so far as to argue that half or more of teacher pay is allocated to “non-productive” teacher attributes, and so it follows that that entire amount of funding could be reallocated toward making schools more productive. See, for example, a recent presentation to the NY State Board of Regents from September 13, 2011 (page 32), slides by Stephen Frank of Education Resource Strategies: http://www.p12.nysed.gov/mgtserv/docs/SchoolFinanceForHighAchievement.pdf

[3] Lankford, H., Loeb., S., Wyckoff, J. (2002) Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis 24 (1) 37-62

[4] Allegretto, S.A., Corcoran, S.P., Mishel, L.R. (2008) The Teaching Penalty: Teacher Pay Losing Ground. Washington, DC: Economic Policy Institute.

[5] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of stay in teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[6] David N. Figlio (2002) Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review 55, 686-699. David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271. Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation. 28 (2) 465-498.

[7] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[8] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics. April, 49-71. See also: Downes, T. A., Figlio, D. N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[9] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

[10] Hanushek, Kain, Rivkin, “Why Public Schools Lose Teachers,” Journal of Human Resources 39 (2) p. 350

[11] Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy 6 (3) 399–438, Clotfelter, C.T., Glennie, E., Ladd, H.F., Vigdor, J.L. (2008) Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92, 1352–70.

[12] For recent studies specifically on the topic of “merit pay,” each of which generally finds no positive effects of merit pay on student outcomes, see: Glazerman, S., Seifullah, A. (2010) An Evaluation of the Teacher Advancement Program in Chicago: Year Two Impact Report. Mathematica Policy Research Institute. 6319-520, Springer, M.G., Ballou, D., Hamilton, L., Le, V., Lockwood, J.R., McCaffrey, D., Pepper, M., and Stecher, B. (2010). Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching. Nashville, TN: National Center on Performance Incentives at Vanderbilt University, Marsh, J. A., Springer, M. G., McCaffrey, D. F., Yuan, K., Epstein, S., Koppich, J., Kalra, N., DiMartino, C., & Peng, A. (2011). A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses. Final Evaluation Report. RAND Corporation & Vanderbilt University.

Which states screw the largest share of low income children? Another look at funding fairness

Here’s a little Friday afternoon fun with the updated Census Fiscal Survey data through 2009-2010. I’ve written recently about the national school funding fairness report card, which I work on with colleagues from the Education Law Center. The report card can be found here:

http://www.schoolfundingfairness.org/

I also recently wrote a blog post about America’s Most Screwed City School Districts. It was clear to some readers that the most screwed city school districts happen to be concentrated in certain states like Illinois and Pennsylvania, and also in Connecticut, which is often perceived as a reasonably well funded and fairer state (than the other two).

Par for the course, as soon as we release the School Funding Fairness report card using data from 06-07 to 08-09 (most recent available at the time we put it together), the Census Bureau releases their 2009-10 district level finance figures… leading to the usual flurry of misinterpretations of data (which I’ll get to another day). Not being able to resist the temptation, despite a heavy backlog of other work to do, I decided I had to play with the updated fiscal data. I also decided for fun to take an alternative look at the data, bridging the idea I presented on my blog about screwed city schools with the general idea of state school funding systems. I decided to ask which states screw the most low income kids.

Here’s my operational definition of screwed for this post. A district is identified as screwed (new technical term in school finance… as of a few posts ago) if a) the district has more than 50% higher census poverty than other districts in the same labor market and b) lower per pupil state and local revenues than other districts in the same labor market. As I’ve explained on numerous previous occasions, it is well understood that districts with higher poverty rates (among other factors) have higher costs of providing equal educational opportunity to their students.

I then tally the percent of statewide enrollments that are concentrated in these screwed districts to determine the share of kids screwed by their state. And here are the rankings… or at least the short list of states that screw the largest share of low income students:

Not much new here. The same culprits make up the list. Nebraska is elevated to its position of disgrace by its systematic underfunding of Omaha Public Schools, which seemed to improve for a fleeting few years, but recent data don’t look so good. Woonsocket and Pawtucket bring Rhode Island into the mix… and raise additional fun questions regarding placement of blame (another post, another day… but should city managers/local officials have the authority to deprive children in their jurisdiction of state constitutional rights? Under what circumstances and by what mechanism should the state step in? Can it?).
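For readers who want the mechanics, here’s a minimal sketch of the classification and tally described above. The district records below are entirely hypothetical (the real analysis uses Census fiscal and poverty data), and I’m assuming the peer comparison is to the unweighted mean of the other districts in the same labor market:

```python
# Hypothetical district records: (name, labor_market, poverty_rate,
# state_local_revenue_per_pupil, enrollment). All figures are made up
# for illustration only.
districts = [
    ("City A",   "Metro 1", 0.30,  9000, 40000),
    ("Suburb B", "Metro 1", 0.08, 13000, 10000),
    ("Suburb C", "Metro 1", 0.10, 12000, 12000),
    ("City D",   "Metro 2", 0.25, 11000, 25000),
    ("Suburb E", "Metro 2", 0.20, 10000,  8000),
]

def is_screwed(d, all_districts):
    """'Screwed' = poverty more than 50% above the mean of the other
    districts in the same labor market, AND state+local revenue per
    pupil below that peer mean."""
    peers = [p for p in all_districts if p is not d and p[1] == d[1]]
    if not peers:
        return False
    peer_poverty = sum(p[2] for p in peers) / len(peers)
    peer_revenue = sum(p[3] for p in peers) / len(peers)
    return d[2] > 1.5 * peer_poverty and d[3] < peer_revenue

screwed = [d for d in districts if is_screwed(d, districts)]
# Share of statewide enrollment attending "screwed" districts:
screwed_share = sum(d[4] for d in screwed) / sum(d[4] for d in districts)
print([d[0] for d in screwed], round(screwed_share, 3))  # ['City A'] 0.421
```

In this toy example only City A clears both hurdles, but because it is a large district it alone accounts for over 40% of statewide enrollment – which is exactly how a state earns a high “share of kids screwed.”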

Here are a few graphs showing the distributions of individual districts in Illinois, Pennsylvania and Connecticut. On the horizontal axis is the relative poverty rate of districts compared to all other districts in the same core based statistical area. On the vertical axis is the state and local revenue per pupil relative to the average for all other districts in the core based statistical area.

Again, Allentown, Reading and Philadelphia are massively screwed (yep… a new school finance classification). Meanwhile… Lower Merion… in the Philly ‘burbs is not screwed at all. An intriguing contrast in Pennsylvania school finance is that Pittsburgh has long had far more adequate funding than Philadelphia for a variety of reasons. It is important to understand here that the highest poverty districts – those with 3x the average for their labor market – likely need FAR MORE revenue per pupil than their neighbors to get by – not just the same. So, while York and Harrisburg are decidedly less screwed than Allentown or Reading, they too are not in particularly good shape. They have about the same revenue per pupil as surrounding districts, and 3x the poverty rate.

Here’s Illinois:

Waukegan and Aurora East, along with Round Lake hold the coveted spots of “most screwed” but Chicago Public Schools isn’t far behind (with over 400k students). A multitude of smaller high poverty districts in the Chicago metro not shown here also have very low relative revenue per pupil.

Finally, here’s Connecticut once again:

Again, Bridgeport and New Britain, along with Waterbury (among others) remain substantially screwed. Recall from my previous post that Hartford and New Haven funding is somewhat distorted by magnet school aid.

So why does any of this matter anyway? Well, at face value it’s patently unfair to systematically deprive these districts of resources comparable to their less needy neighbors. If money doesn’t matter to New Britain or Bridgeport, then why does it matter to Greenwich or Westport? Really, if money is so damn trivial for improving schooling quality, then why don’t all those districts in the upper left hand corner of these graphs just give all that useless money to those in the lower right hand corner? Oh, wait… perhaps money does matter…???…!!!

One thing about school finance that’s really important to understand is that the relative position of districts matters a great deal. It matters a great deal because education is a labor intensive industry. It is about getting a sufficient quantity of sufficient quality teachers in front of kids who need them. The spending behavior and negotiated agreements, and working conditions in districts like Westport and Greenwich matter for the teacher recruitment potential for Bridgeport.  The distribution of quality teachers across districts in a labor market depends on numerous factors, many of which tie back to available resources. And in these states, large numbers of children attend high need districts that simply lack resources to compete.

Notably, those districts sitting pretty in the upper left hand corner of these figures have also had, for years, traditional teacher contracts, tenure, seniority preferences and likely other policies that would make “reformers” cringe. But most are doin’ just fine. So too are the even higher spending and lower poverty elite private schools in the same labor markets! Most don’t use test scores as the basis for providing merit pay and I’m quite sure that few if any of them use test scores as the basis for firing the bottom 5% of their teachers every year. They haven’t been and aren’t being subjected to manipulative, heavy handed takeovers, school closures and massive charter school expansion.

None of that reformy junk would likely do much good for the Westports, Greenwiches or Lower Merions of the US school system.  And none of that reformy junk is likely to be much good for the Bridgeports, New Britains, Allentowns, Readings, Philadelphias or Chicagos!

I find it particularly infuriating when I hear news of these “most screwed” districts being blamed for their own failure by the state officials who have deprived them systematically of resources for decades.

What these districts need as a baseline – a fair starting point – is equitable & adequate funding. Once that has been accomplished, then, and only then can we start having a reasonable conversation about how to best leverage that funding to improve student outcomes. But without the funding, there are no options for leveraging it.


Deconstructing Funding Fairness: Comments on the release of our latest report

Today I, along with colleagues at the Education Law Center, released the second round report on school funding fairness, which can be found here:

http://www.schoolfundingfairness.org

We cover much ground in this report and develop what we believe are a useful set of indicators for comparing state school finance systems. In this new version of the report, we also include interactive tables and graphs thanks to the efforts and expertise of Danielle Farrie.

http://www.schoolfundingfairness.org/ia_reports.htm

But there’s always more to the story. There’s always more to be discussed/addressed that can’t be fully captured in a short policy report. Specifically, I would like to address a handful of potential misconceptions regarding funding fairness.

First, it is important to understand that unfair conditions may occur even in states that would appear in our updated report to be generally fair.

Second, it is really important to understand that the percent of money that comes from the state – from state tax revenue sources – does not seem to predict/influence the overall level of fairness. Fairness is not achieved by pushing all funding away from property tax revenues and onto state source revenues. In fact, such a move might do little to improve fairness while substantially increasing revenue volatility (income tax revenues, which fuel state general funds, are typically far more volatile – elastic to economic conditions – than are property tax revenues). The real key to a good school finance formula is to figure out how to integrate the revenue sources into a system that is, overall, fair and stable.

Third, federal revenues make things only marginally fairer. Their effect is minor. Yes, they are targeted to higher poverty districts generally. And yes, for those districts the resources are needed and may seem substantial. But, in the big picture of funding fairness, it comes down to providing that right mix of state and local funds to achieve a system that is overall fair.

Let’s take a closer look at each of these issues.

There are unfair conditions even in states that appear fairer!

Let’s begin with a look at Connecticut, a state that appears to a) spend a fair amount on its schools and b) spend marginally more on higher poverty districts. Or at least so the federal data on state and local revenues which we use in the funding fairness report indicate.

Connecticut is a particularly interesting case. As it turns out, the fairness we find in our report is selective in two ways. First, the progressive tilt to the formula overall is significantly influenced by special aid provided primarily to Hartford and New Haven. Other high poverty districts lack this benefit; it is selectively applied. Second, the aid to which I refer is aid targeted for magnet schools, which partly serve children from other districts in an effort to integrate minority and non-minority, low income and non-low income students. That this aid shows up in the expenditures of Hartford and New Haven also creates some distortion in the calculation of per pupil spending.

Here is an arguably more accurate portrayal of the selective fairness of funding in Connecticut. To clarify – selective fairness is… well… unfair.

This graph shows current spending per pupil (net current expenditures per ADM, 2011) after removing magnet aid from district expenditures. Overall, Hartford and New Haven remain better funded than other high poverty districts, but less so than with magnet aid included. Further, several very high poverty Connecticut districts have very low funding compared to their surroundings, precisely what landed them on my previous list of most screwed school districts.

Figure 1

Allocating more state aid doesn’t make it fairer if aid is allocated unfairly!

There exists a common assertion that disparities in school funding across districts are largely caused by disparities in property tax base – local property wealth – and the failure of states to allocate enough aid to offset those disparities. At times, I even hear advocates suggesting that if we could just do away with property tax funding of schools, and move all of the funding to state taxes and make the system completely state controlled, all of these equity concerns would be resolved. Wrong. Wrong… and double Wrong.

First, as a tangent which I mentioned above, allow me to point out that property tax revenues actually play a really important role in stabilizing school revenues over time and acting as a counterbalancing force to state aid fluctuations. State school finance systems require a balanced portfolio of revenue sources!  State income tax revenues are much more volatile to economic cycles.

That aside, the figure below shows, as we did in the first edition of the report, that states where districts on average receive a higher share of funding from the state (either as actual state disbursements, or in some cases as cleverly reclassified local property tax revenues raised by state mandated minimum tax rates, perhaps with revenue sharing) do not necessarily have fairer – more progressive – distributions of state aid.

Figure 2

So, how can this be? The implication is that state aid itself is being allocated unfairly. Is that possible? How might a state allocate aid in ways that fail to improve the fairness of the overall distribution of state and local revenue?

Well, let’s start with a hypothetical of what should be, or the distribution of aid as it might appear in a progressively funded state like New Jersey or Ohio. The figure below shows that state aid must counter two forces of local economics. First, state aid must be allocated in higher amounts to districts with less local capacity to raise that aid on their own. Second, to achieve progressiveness, aid must be allocated in higher supplemental amounts – or weighted amounts – to districts with greater student needs. If we totally oversimplify these issues and assume that low capacity districts also tend to have higher needs, it might look something like this:

Figure 3
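In formula terms, these two forces are what classic “foundation aid” formulas try to counter: aid fills the gap between a need-adjusted spending target and what a district can be expected to raise locally. Here’s a minimal sketch – the base amount, need weight, tax rate and property values are all hypothetical, not any particular state’s parameters:

```python
def foundation_aid(base, need_weight, property_value_pp, required_tax_rate):
    """Classic foundation-aid logic: aid = need-adjusted spending target
    minus the expected local contribution, floored at zero."""
    target = base * need_weight                             # more aid for higher need
    expected_local = property_value_pp * required_tax_rate  # less capacity -> less raised
    return max(0.0, target - expected_local)

# Low-capacity, high-need district vs. high-capacity, low-need district
print(foundation_aid(10000, 1.4, 300000, 0.01))   # 11000.0
print(foundation_aid(10000, 1.0, 1200000, 0.01))  # 0.0
```

Under this logic, aid rises as local capacity falls and as need rises – producing the downward-sloping pattern sketched in the figure above.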

And, as it turns out, states like New Jersey actually do look something like that:

Figure 4

But even New Jersey isn’t “perfect” in this regard. Note that middle wealth districts actually drop below the highest wealth districts. The pattern “dips” when it should perhaps climb more consistently from left to right.

So then, what the heck is going on in other states? Well, here are a few examples of the state aid distributions in states that scrape the bottom of the fairness barrel in our updated report. I will have a new report out this fall supported by the Center for American Progress in which I dissect how states actually use their aid formula to make things worse! Unbelievable, but true. Some states actually allocate state aid so inequitably as to make funding gaps bigger! (see this post for an explanation of the pig!)

Figure 5. North Carolina

Figure 6. Texas

In this final figure, I show how New York State “tweaks” its aid formula from its initial calculations to its final calculations in ways that actually increase the funding gap from lower to higher poverty districts. The first cut (left hand side of the figure) calculations of state aid in New York would have many districts getting little or no state general foundation aid. But the state aid formula then tweaks that amount by guaranteeing minimum aid of $500 per pupil and an upward adjustment to the aid share for districts that are middle to upper middle wealth. Then, as I’ve discussed in previous posts, the state allocates disproportionate property tax relief aid to the wealthiest districts. Overall, these adjustments have the effect of increasing the low poverty to high poverty funding gap from $1,100 per pupil to $2,300 per pupil. Yep… using state aid to double the funding gap! The politics of state school finance systems at work!

Figure 7. New York
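To see how a minimum-aid floor plus wealth-favoring adjustments can widen rather than narrow a funding gap, here’s a toy calculation. Every number and the `final_aid` function are hypothetical, chosen only to mimic the mechanics described above – not New York’s actual formula or figures:

```python
def final_aid(first_cut_aid, wealthy, minimum=500, wealthy_bump=400):
    """Toy version of the 'tweaks': a per-pupil minimum-aid floor, plus an
    extra allotment for wealthier districts (share adjustments, tax relief)."""
    aid = max(first_cut_aid, minimum)
    if wealthy:
        aid += wealthy_bump
    return aid

# Hypothetical per-pupil figures for a high-poverty and a low-poverty district
poor_aid_first_cut, rich_aid_first_cut = 8000, 0
poor_local, rich_local = 4000, 14000

gap_before = (rich_local + rich_aid_first_cut) - (poor_local + poor_aid_first_cut)
gap_after = ((rich_local + final_aid(rich_aid_first_cut, wealthy=True))
             - (poor_local + final_aid(poor_aid_first_cut, wealthy=False)))
print(gap_before, gap_after)  # 2000 2900 -- the tweaks widen the gap
```

The floor and the bump go entirely to the district that needed no aid in the first cut, so every dollar of “tweak” lands on the rich side of the ledger and the gap grows.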

Federal aid is no substitute for a sound, well designed, progressive state school finance system!

Finally, what about that federal aid – specifically, the biggest chunk of federal aid, allocated to local districts primarily on the basis of poverty? Doesn’t that do the trick? Doesn’t the federal aid create the necessary upward tilt? Well… uh… no… it doesn’t. It helps, indeed. But federal Title I aid creates only marginal improvements.

Consider that, according to the most rigorous empirical research on the topic, it generally costs about double to achieve comparable outcomes in a district that is 100% low income versus one that is 0% low income. That is, each low income child would warrant a “weight” of about 1.0 if counting low income as qualifying for free/reduced lunch (185% income threshold). When using the more stringent 100% poverty threshold, the required weight is about 1.5.
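The weighting arithmetic can be sketched as follows; the $10,000 base figure is hypothetical, and only the weights come from the research cited above:

```python
def needed_per_pupil(base, low_income_share, weight):
    """Per-pupil funding needed under a simple pupil-weight adjustment:
    base * (1 + weight * low_income_share)."""
    return base * (1 + weight * low_income_share)

base = 10000  # hypothetical base cost per pupil

# Free/reduced-lunch measure (185% threshold), weight ~1.0:
# a 100% low-income district needs roughly double the base.
print(needed_per_pupil(base, 1.00, 1.0))  # 20000.0

# Stricter 100%-poverty measure, weight ~1.5:
# a 30% poverty district needs ~45% more than the base.
print(needed_per_pupil(base, 0.30, 1.5))  # 14500.0
```

That implied 45% need adjustment for a 30% poverty district is the benchmark against which Title I’s actual tilt should be judged.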

The following figure and table show that, on average nationally, federal Title I funding tilts revenues per pupil upward by about 5% for a district that is 30% in poverty (100% poverty level). This would be comparable to about a 5% adjustment for a district that is 70% or more “low income” (qualifying for free or reduced price lunch; see page 31). That’s a relatively modest and far from sufficient adjustment!

Figure 8

Figure 9

So, while Federal Title I Aid is not entirely irrelevant, it is far from sufficient for achieving the extent of need-based targeting required for high poverty settings.

A sound, well-designed, progressive state school finance system is required.

Sadly, far too few of such systems presently exist.

Equitable and adequate financing of public school systems in the U.S. remains largely a state responsibility, and some states continue to either throw their entire education systems under the bus (Arizona, Tennessee), or selectively disregard children living in high poverty settings. Put simply, money matters. School funding equity and school finance reforms matter.

It’s not sexy and it’s not reformy. In fact, it’s quite possibly anti-reformy, but the reality is that equitable and adequate financing of state education systems remains the necessary underlying condition for providing quality schooling and achieving equal educational opportunity for all children.

Five Ridiculously Reformy “Copy & Paste” Policies & Why They’re Misguided

65 Cent Solution (now defunct?)

What is it? It was (thankfully, this one is pretty much dead!) a policy proposal pitched in the mid-2000s that would require, through state mandate, legislation, or regulation, that local public school districts show on paper that they spend 65% of their total budgets on “instruction.”

The argument was that the average district nationally allocates somewhat less than 65% to instruction. Instruction is good. Private sector businesses use benchmarks, therefore education should use benchmarks. 65% is a benchmark. Therefore it should be used! Voila… freakin’ brilliant?

Backers of this proposal argued that the policy allowed state legislators to claim they were increasing classroom spending without actually allocating more money.

But the backers were caught with their pants down when a memo was leaked to the Austin American-Statesman newspaper in Texas. In an article in Educational Policy (full citation below), Doug Elmer and I summarize the whole memo debacle:

In addition to these criticisms of what qualified as instructional spending, many opponents of the bill questioned the motives behind FCE (First Class Education) and the 65% solution proposal. These suspicions were in part confirmed by a memo written by Mooney to Republican legislators and obtained by the Austin American-Statesman in 2005 (Embry, 2005). In the memo, Mooney (2003) listed several political benefits of the 65% Solution, including the following:

  • Splitting of the Education Union. The 1st Class Education proposal pits administrators and teachers at odds with one another. . . .
  • Direct Fix for Public Education. While voucher and charter school proposals have great merit, large segments of the voting public—especially suburban, affluent women voters—view these ideas as an abandonment of public education . . . targeted segments of voters may be more greatly predisposed to supporting voucher and charter school proposals, as Republicans address the voting public with greater credibility on public education issues. . . .
  • Allows the Use of Unlimited Non-Personal Money for Political Position Advantages. The aforementioned benefits can be achieved with funding in any amount and from any source.
  • It Wins! As with initiatives proposing tax limits, term limits, and the definition of marriage, ballot successes for the 1st class is exceedingly likely.

Of course, one thing that never seemed to get discussed in this process was that empirical research on instructional spending shares, student outcomes, and other school quality measures suggests little if any relationship among them – especially with respect to the 65% threshold.

Thankfully, this particular bit of copy and paste education policy foolishness seems to have come and gone!

Research

Taylor, L., Grosskopf, S. (2007). Is a Low Instructional Share an Indicator of School Inefficiency? Exploring the 65-Percent Solution. http://bush.tamu.edu/research/workingpapers/ltaylor/The_65_Percent_Solution.pdf

Baker, B.D., Elmer, D.R. (2009) The Politics of Off-the-Shelf School Finance Reform. Educational Policy 23 (1) 66-105

Parent Trigger

The Parent Trigger is perhaps even more obnoxious and deceptive than the 65 cent debacle. What is it? Well, the Parent Trigger is a policy that allows the parents of students in any failing (generally meaning high-poverty/minority-concentration) school to vote, by simple majority, to have the school taken over by a private company or charter operator, or to simply fire all of the teachers and the principal and start fresh (options may vary). The assertion is that this mechanism gives low income and minority parents “rights” that they are simply unable to assert through bloated and non-responsive urban district bureaucracies. While it may be true that some urban district bureaucracies are less than responsive, the parent trigger sure as hell isn’t the solution.

The parent trigger basically permits a simple majority of parents of children who happen to attend a given school for a period of time to stage a takeover of that school, and this could be done for a variety of motives, in a variety of ways, with a plethora of possible distorted, negative consequences. A group of middle school parents (during their 3 to 4 year window) might, for example, take a year to take over their school and turn it over to a private charter management company. The parent majority might, for example, have a gripe against LGBT students, or students of a particular race, culture, or religion. Charter takeover would allow the simple majority to make over the school into a themed school – like a school for traditional family values, or an English-only academy. The simple majority could easily use this tool to oppress any minority population (and don’t give me that crap about this being better than the tyranny of the district oppressing everyone).

Further, if the simple majority of parents do forcibly convert the school to a privately managed charter, it may turn out that all parents and children lose important statutory and constitutional rights, as I have discussed in previous posts regarding parental/student/teacher rights in privately managed charter schools.

Notably, this hostile takeover could have occurred under the majority rule of parents on one cohort of students and have lasting adverse effects on subsequent cohorts of children whose parents had little input.

Further, this mechanism removes from the process any/all other residents of the community surrounding the school (who contribute tax dollars to the school), placing all control in the hands of the simple majority of parents with children attending the school at any one point in time.

It’s a ridiculous approach granting disproportionate, ill-defined power to an ill-defined majority constituency seemingly intended to do little more than stimulate infighting among low income and minority populations as a distraction from the larger policy issues.

Blog Posts

Potential abuses of the Parent Trigger

https://schoolfinance101.wordpress.com/2010/12/07/potential-abuses-of-the-parent-trigger/

Public/Private Status of Charter Schools

https://schoolfinance101.wordpress.com/2012/05/02/charter-schools-are-public-private-neither-both/

Why Public/Private Status Matters: Legal Issues

https://schoolfinance101.wordpress.com/2012/05/04/follow-up-on-why-publicnessprivateness-of-charter-schools-matters/

Weighted Student Funding Reformy edition

This one is an example of a totally reasonable policy concept that has been dreadfully abused, over-emphasized as a panacea for urban district budgeting and management, and conflated with many other management strategies.

Weighted student funding itself is simply an approach to calculating the need- and cost-based funding to be delivered to schools or districts. Several states use weighted student formulas to drive money to districts. Several districts use weighted student formulas to allocate budgets out to schools.

But, in the reformy world, weighted student funding has taken on the meaning of a Money-Follows-the-Child model coupled with decentralized school-site control. Put simply, there really isn’t much evidence that decentralized governance across schools within districts is particularly effective policy for improving productivity and efficiency, or equity for that matter! (see the Baker & Elmer article below).
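To be clear about what the defensible core of WSF actually is – a calculation, not a governance model – here’s a minimal sketch (the weights and school figures are hypothetical, for illustration only):

```python
# A minimal sketch of weighted student funding as a *calculation*.
# Weights and school data here are hypothetical, for illustration.

WEIGHTS = {"base": 1.0, "low_income": 0.4, "ell": 0.5, "sped": 1.1}

def weighted_enrollment(school):
    """Total pupil count after adding need-based increments."""
    return (school["enrollment"] * WEIGHTS["base"]
            + school["low_income"] * WEIGHTS["low_income"]
            + school["ell"] * WEIGHTS["ell"]
            + school["sped"] * WEIGHTS["sped"])

def allocate(budget, schools):
    """Split the district budget in proportion to weighted enrollment."""
    total = sum(weighted_enrollment(s) for s in schools)
    return {s["name"]: budget * weighted_enrollment(s) / total
            for s in schools}

schools = [
    {"name": "A", "enrollment": 500, "low_income": 400, "ell": 100, "sped": 50},
    {"name": "B", "enrollment": 500, "low_income": 100, "ell": 10,  "sped": 50},
]
alloc = allocate(10_000_000, schools)
# Same enrollment, but the higher-need school draws the larger share.
assert alloc["A"] > alloc["B"]
```

Note that nothing in this arithmetic requires decentralized school-site governance; that’s a separate policy bolted on under the same label.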

But, the most frustrating part of the WSF discussion for me has been that it has encouraged many to argue that the big problem with school funding today is disparities in budgets across schools within large city districts. Yeah… there are some significant problems there. But those are not the biggest problems. Between-district disparities and state school finance systems continue to severely constrain districts’ ability to target funds to their highest-need schools.

Sadly, despite some real virtues of WSF as a funding approach, the reformy takeover and complete misrepresentation of the issue has led to some truly baffling think-tanky reporting on WSFs. Take, for example, 2010 Bunkum Award winner the Reason Foundation:

http://nepc.colorado.edu/bunkum/2010/time-machine-award

This Reason Foundation report has multiple features that make it an award winner. It engages in definitional acrobatics, pouring a kitchen sink’s worth of assorted reforms into a vessel it calls Weighted Student Formula (WSF) reforms. And, in a truly breathtaking innovation, the report enters its time machine and attributes positive reform outcomes to policy changes that had not yet been implemented. In broad terms, WSF reforms involve linking funding to each student, with that funding calculated as the student’s base allocation and any additional funds for special needs, economic deprivation or other reasons. The Reason report somehow manages to squeeze into this WSF concept three additional reforms: (a) site-based management; (b) site-based budgeting; and (c) school choice. The expert third party reviewer said this about the Reason “umbrella labeled as WSF:” “[it] deceptively suggests that all related policies are necessarily good—even going so far as to credit those policies for improvements that took place before the policies were implemented.”

“The report then irresponsibly recommends untested, cherry picked policy elements, some of which may substantially undermine equity for children in the highest-need schools within major urban districts.” For example, the plan suggests that extra funds for economically deprived students be eliminated but that added money should be given to gifted and talented students. The report also ignores a large body of relevant literature on within-district equity and school site management in its uncritical effort to find support for the foundation’s ideological policy preferences.

Look… a good weighted student formula is not a bad idea at all. Pretending that district weighted student formulas and decentralized governance will solve the most pressing equity issues in education today, however, is totally ridiculous!

Research on WSF

Baker, B. D., & Welner, K. G. (2010). “Premature celebrations: The persistence of interdistrict funding disparities” Educational Policy Analysis Archives, 18(9). Retrieved [date] from http://epaa.asu.edu/ojs/article/view/718

Baker, B. (2009). Review of “Weighted Student Formula Yearbook 2009.” Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit. Retrieved [date] from http://epicpolicy.org/thinktank/review-Weighted-Student-Formula-Yearbook

Baker, B.D., Elmer, D.R. (2009) The Politics of Off-the-Shelf School Finance Reform. Educational Policy 23 (1) 66-105

Baker, B.D. (2009) Evaluating Marginal Costs with School Level Data: Implications for the Design of Weighted Student Allocation Formulas. Education Policy Analysis Archives 17 (3)

Baker, B.D. (2012) Re-arranging deck chairs in Dallas: Contextual constraints on within district resource allocation in large urban Texas school districts. Journal of Education Finance 37 (3) 287-315

Toxic Trifecta Teacher Evaluation Policies

Another type of cut-and-paste policy that’s been driving me up the wall lately is what I refer to as the Toxic Trifecta Teacher Evaluation Framework. I have explained in previous posts the issues associated with Value Added Models for determining teacher effects on student outcomes. I have also explained how Student Growth Percentiles are not appropriate for the task at all. But I have also explained how this information might be responsibly used, for example, for exploring patterns across teachers within a school or district, while retaining the option to decide that the data were simply wrong.

Toxic trifecta policies, in very simple terms, MANDATE THE MISUSE OF STATISTICAL INFORMATION FOR MAKING TENURE AND DISMISSAL DECISIONS.

They negate responsible human judgment altogether and replace it with rigid, ill-conceived frameworks reflecting a baffling degree of statistical ignorance (and educational and management ignorance).  

Here are the elements to look out for in Toxic Trifecta Teacher Evaluation Policies:

  1. Mandating that potentially invalid VAM or necessarily invalid SGP scores be used as a fixed share in determining personnel decisions. That fixed share necessarily becomes an overriding factor!
  2. Forcing precise cut-point determinations through data with absurdly wide error ranges (creating categories of performance with defined cut points for VAM or SGP estimates).
  3. Forcing that personnel decisions be made on the basis of this information, on strict timelines, without consideration of any other contextual factors (or the possibility that the estimates are simply WRONG)

Really, any one of these elements alone is bad enough. But in combination, they are a complete disaster (except for the legal profession)!

As I’ve explained on many occasions, simply saying that the VAM or SGP measure of teacher effect on student test score change is “only 20%” or “only 40%” of the evaluation is unhelpful. It is still assumed to be valid and important and it may be neither.

Further, the element that varies most in the overall scheme is likely to tip the scales on most decisions. And the variance in VAM or SGP estimates is a mix of a) real effect, b) noise, and c) bias (likely heavy bias in SGPs). Further, noise and bias are quite likely to dominate any “real effect” (and the real effect may not be an important effect). And we simply can’t know what share is real effect, bias, or noise.

On the second element, it is utterly foolish to try to set cut scores for defined performance categories (with a point or two of difference changing the category) given the extent of noise and bias in the measures. How can one say that a 25 is unacceptable and a 26 is okay, when both have error ranges of 50 points on either end? It then stands to reason that it is even more foolish to tie high stakes decisions to falling just above or below these cut scores from year to year.
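A quick simulation makes the point (my own illustration – the cut score and error magnitude are hypothetical, on the order described above):

```python
# Hedged illustration (mine, not any state's actual model):
# if a teacher's "true" score sits at the cut point and the estimate
# carries wide error, the assigned category is close to a coin flip.
import random

random.seed(0)
CUT = 26          # hypothetical cut score: below 26 is "unacceptable"
TRUE_SCORE = 26   # a teacher exactly at the boundary
SD = 25           # error on the order the text describes

trials = 100_000
flagged = sum(random.gauss(TRUE_SCORE, SD) < CUT for _ in range(trials))
print(flagged / trials)  # ~0.5
```

A teacher whose true performance sits right at the boundary lands in the “unacceptable” category roughly half the time – year-to-year category assignment is essentially a coin flip.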

Now, I don’t know what the current TEACHNJ (Ruiz) bill includes from the toxic trifecta, but on my last read it included all three components, the worst of which was the absolute requirement that teachers lose tenure after two bad evaluations, stated rigidly as follows:

Notwithstanding any provision of law to the contrary, the principal, in consultation with the panel, shall revoke the tenure granted to an employee in the position of teacher, assistant principal, or vice-principal if the employee is evaluated as ineffective in two consecutive annual evaluations. (p. 10)

Further, in an effort to rub salt in the wound following this mandated misuse of statistical information, the versions of the bill which I had reviewed indicated that teachers could only appeal these decisions on procedural grounds. Several other states have already adopted trifecta elements in part or entirely.

As I’ve mentioned in a few recent blog posts, there might (though I’m increasingly pessimistic) exist some reasonable uses of VAM estimates or SGPs for informing management decision making in schools. Those reasonable uses invariably acknowledge that these measures are not only noisy but may also simply be wrong, and permit human judgment to make that call. Toxic trifecta policies prohibit those reasonable uses, and ultimately mandate that bad decisions be made based on inadequate information.

Related Articles

Green, P.C., Baker, B.D., Oluwole, J. (2012) Legal implications of dismissing teachers on the basis of value-added measures based on student test scores. BYU Education and Law Journal 2012 (1)

Blog Posts

Toxic Trifecta: https://schoolfinance101.wordpress.com/2012/04/19/the-toxic-trifecta-bad-measurement-evolving-teacher-evaluation-policies/

If it’s not Valid, Reliability Doesn’t Matter: https://schoolfinance101.wordpress.com/2012/04/28/if-its-not-valid-reliability-doesnt-matter-so-much-more-on-vam-ing-sgp-ing-teacher-dismissal/

Video Post: https://schoolfinance101.wordpress.com/2012/05/23/video-thoughts-on-test-scores-vam-sgp-teacher-evaluation/

Mutual Consent Hiring/Assignment/Dismissal

This is one of those policies that had seemed relatively pointless and innocuous. Originally it was mostly about district human resource management policies, not about state requirements. But, as a state mandate and in conjunction with other teacher evaluation policies (the toxic trifecta), mutual consent policies take on new meaning.

Mutual consent policies – when adopted as state legislation or regulation – require that principals have the “last word” on which teachers are assigned to or hired to work within their buildings. These policies have been driven by two ideas/purposes. On the one hand, there were the outrage-invoking news stories of principals in large city districts being forced to draw from pools of excess teachers (implied [without validation] to be awful teachers and completely unqualified), when they supposedly knew they could get someone better from the outside. Second… and originally… these policies were intended to improve the distribution of teacher qualifications across more and less advantaged schools within districts. Both purposes are virtuous to the extent that a) the problem is real and b) the solution works.

But, there are many problems of both basic logic and of operational reality when it comes to mutual consent policies. Here’s a short list:

  1. Mutual consent assumes that only good decisions are made at the building level and only bad ones at the district level;
  2. Mutual consent ignores that district officials hire/fire and assign principals;
  3. Mutual consent sets up a scenario where the central office may wish to assign “good teachers” to a weak school but the principal could reject them (the district might even be trying to groom new leaders for the school);
  4. Research suggests that it doesn’t actually accomplish much, if anything!

In really simple terms, mutual consent causes administrative chaos, by mandating that the subordinate has final word, when the subordinate never really has the final word. What kind of silly crap is that? At least as a state policy mechanism?

It’s one thing if a district decides to have a collaborative process, or even a policy of collaboration on personnel decisions between building leaders and central office. But having the state mandate that building leaders have authority over central office, when central office ultimately has authority over building leaders, is ludicrous. Suggesting that this is based on how big business in the private sector works is even more ludicrous!

Further… what’s particularly warped is when a mutual consent policy is proposed in the same legislation as the toxic trifecta elements above. The toxic trifecta mandates whom the principal must fire, or at least de-tenure, under what specific circumstances, based on measures over which the principal has no control and may have limited statistical understanding. And then the mutual consent policy “empowers” the principal? Are you kidding me?

Look, districts should design personnel policies such that school leaders can build good teams. I’m all for that, and have conducted and published research on that very topic. I favor building level involvement in personnel policy toward the goal of building effective teams. State mandated “mutual consent” does little or nothing to advance this goal.

As for the research on mutual consent, the one study done on a large district that used the policy found that it did not achieve its goal of improving the distribution of teacher quality:

We conduct an interrupted time-series analysis of data from 1998-2005 and find that the shift from a seniority-based hiring system to a “mutual consent” hiring system leads to an initial increase in both teacher turnover and share of inexperienced teachers, especially in the district’s most disadvantaged schools. For the most part, however, these initial shocks are corrected within four years leaving little change in the distribution of inexperienced teachers or levels of turnover across schools of different advantage. http://www.nctq.org/docs/Mutual_Concent_8049.pdf

Blog Posts

Regarding research on mutual consent: https://schoolfinance101.wordpress.com/2010/10/08/nctq-were-sure-it-will-work-even-if-research-says-it-doesnt/