Effects of Charter Enrollment on Newark District Enrollment

In numerous previous posts I have summarized New Jersey charter school enrollment data, frequently pointing out that the highest performing charter schools in New Jersey tend to be demographically very different from schools in their surrounding neighborhoods and from similar grade level schools throughout their host districts or cities. I have tried to explain, over and over, that these differences matter because they constrain the scalability of charter schooling as a replicable model of “success.” Again, to the extent that charter successes are built on serving vastly different student populations, we can simply never know (even with the best statistical analyses attempting to sort out peer factors, control for attrition, etc.) whether the charter schools themselves (their instructional strategies and models) are effective, or would be effective with larger numbers of more representative students.

Here, I take a quick look at the other side of the picture, again focusing on the city of Newark. Specifically, I thought it would be interesting to evaluate the effect of the shift of students into charter schools on Newark district enrollment, now that charters have taken on a substantial portion of students in the city. If charter enrollments are – as they seem to be – substantively different from district school enrollments, then as those charter populations grow and remain different from district schools, we can expect the district school population to change. In particular, given the demography of charter schools in Newark, we would expect those schools to be leaving behind a district of escalating disadvantage – but still a district serving the vast majority of kids in the city. I’m not sure why I never got around to looking into this issue. I’ve certainly explored it in Pennsylvania with respect to special education populations (where there exists an incentive for PA charters to serve low need special education students, leaving high need ones behind for the district to serve with fewer resources).

First, here are the data sources on which I am relying for this analysis:

1) school level enrollment data 2010-11: http://www.nj.gov/education/data/enr/enr11/stat_doc.htm

2) school directory (for identifying city location): http://education.state.nj.us/directory/schoolDL.php

3) special education classification & placement: http://www.nj.gov/education/specialed/data/ADR/2010/EligibilitybyPlacement/PlacementByElig6-21.xls

[Placement by eligibility, 6 to 21 year olds. I left out 3 to 5 year olds for now.]

This is a rough first cut at an analysis that should be done in greater depth at some point and for more than just Newark. Consider this to be illustrative.

Here’s my usual starting point – % free lunch and % ELL by school, for schools with their city of location listed as Newark.

Again, most of the charter schools in Newark have very low % Free Lunch or % ELL when compared with other schools in Newark, except for a handful of NPS specialized and magnet schools. Indeed, the district does impose a significant degree of segregation on itself.

The real trick in all of this is to figure out how to balance the presence of these specialized schools, charter schools and district schools to create the best set of opportunities for the largest share of children.

If we take the school level enrollments for charter schools in Newark and for NPS schools in Newark and sum them up we get the following distribution of students:

Table 1. Summed School Level Enrollments* from Enrollment File

SLI = speech/language impairment, SLD = specific learning disability.

*Note that if we look at the district enrollment data, NPS enrollment is actually greater than the figure above, and greater in each of the other categories. Some of the difference is a function of special education out of district placements, where many of those students are both disabled and low income. District reported totals are enrollment = 33,279, free lunch = 26,320, ELL = 2,665 (leading to slightly higher disadvantaged shares than above). See: http://www.nj.gov/cgi-bin/education/data/enr11plus.pl
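The tabulation in Table 1 is just a grouped sum over the school-level enrollment file: group each school row by sector (charter vs. NPS) and sum the count columns. A minimal sketch in plain Python; the school names and counts here are invented stand-ins, not rows from the NJDOE file:

```python
# Sum school-level enrollment counts by sector (charter vs. district).
# Rows are hypothetical stand-ins for the NJDOE school-level enrollment file.
rows = [
    {"school": "District School A", "sector": "NPS", "enrollment": 500, "free_lunch": 420, "ell": 60},
    {"school": "District School B", "sector": "NPS", "enrollment": 400, "free_lunch": 350, "ell": 55},
    {"school": "Charter School X", "sector": "charter", "enrollment": 300, "free_lunch": 180, "ell": 2},
]

def sum_by_sector(rows, columns):
    """Return {sector: {column: summed count}} over all school rows."""
    totals = {}
    for r in rows:
        sector = totals.setdefault(r["sector"], {c: 0 for c in columns})
        for c in columns:
            sector[c] += r[c]
    return totals

totals = sum_by_sector(rows, ["enrollment", "free_lunch", "ell"])
print(totals["NPS"])      # {'enrollment': 900, 'free_lunch': 770, 'ell': 115}
print(totals["charter"])  # {'enrollment': 300, 'free_lunch': 180, 'ell': 2}
```

The same grouped sum, run over the real file with the directory used to flag city of location, produces the charter and NPS columns of Table 1.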

A noticeable feature of Table 1 is that for the most part, charter schools aren’t serving many children with disabilities to begin with. But, they are especially not serving children with disabilities other than mild specific learning disabilities or speech/language impairment.

Table 2 puts Table 1 into percentages.

Table 2. District school and charter school enrollment characteristics by percent

Here, we see that few charters in Newark come anywhere near the % free lunch share of the district as a whole. The differences are especially large for Robert Treat, North Star and Greater Newark. The differences are even more striking for LEP and special education classifications, except for TEAM, which enrolls a sizable share of SLD/SLI students but very few students with more severe disabilities.

Now, here’s another angle on the student populations. Table 3 shows the effect of extracting these charter enrollments from the district enrollments. In the first column, I include the summed enrollments of all schools in the city of Newark, including charter schools (but not private schools). In the second column, I include the summed enrollments of Newark Public Schools within the city of Newark.

The fourth column is particularly important. This column shows that:

  1. The charter schools listed above have absorbed about 15% of the district’s total enrollment.
  2. But these charter schools have absorbed only 13% of the district’s lowest income children.
  3. Further, they have absorbed less than 1% of the district’s ELL population.
  4. They have absorbed only 8.3% of the district’s low need special education population.
  5. And they have absorbed only 2% of the district’s higher need special education population (most of these being students listed in the broad “other health impairment” category and attending TEAM Academy).

For charter schools not to be having a negative effect on district enrollment characteristics, they would have to be – in the aggregate – absorbing 15% of each special needs group. But clearly they are not. Thus, we can expect that those left behind in district schools will become a higher and higher need group as charter enrollments expand (unless charter enrollments become more representative in the aggregate).
Table 3. District & Charter Enrollments & Effect of Charter Enrollments on District

  • Thus far, growth in enrollment of the charters included here has led to an increase in the district schools’ % free lunch of about 2 percentage points.
  • Thus far, growth in enrollment of the charters included here has led to an increase in the district schools’ % ELL of more than 1 percentage point (given that the rate is only around 7%, this is sizable).
  • Charter enrollment growth has also led to growth in concentrations of children with lower and especially higher cost disability classifications.
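The arithmetic behind Table 3 is straightforward: the charter share of each group is the charter count over the citywide count, and the district-only percentage is what remains after pulling the charter students out of both numerator and denominator. A sketch with hypothetical round numbers chosen only to mimic the rough shares described here (about 15% of enrollment, well under 1% of ELL) – these are not the actual NJDOE figures:

```python
# Charter "absorption" share of each student group, and the resulting shift
# in the district-only percentage once charter students are pulled out.
# All counts below are hypothetical, not the actual Newark data.
citywide = {"enrollment": 40000, "free_lunch": 32000, "ell": 2700}
charter  = {"enrollment": 6000,  "free_lunch": 4200,  "ell": 20}

def absorption_share(charter, citywide, group):
    """Share of the citywide group that charters have absorbed."""
    return charter[group] / citywide[group]

def district_pct(citywide, charter, group):
    """District-only percentage of a group after removing charter students."""
    return (citywide[group] - charter[group]) / (citywide["enrollment"] - charter["enrollment"])

for g in ("free_lunch", "ell"):
    print(g,
          "charter share:", round(absorption_share(charter, citywide, g), 3),
          "citywide pct:", round(citywide[g] / citywide["enrollment"], 3),
          "district-only pct:", round(district_pct(citywide, charter, g), 3))
```

Because the charters in this toy example absorb 15% of total enrollment but only 13% of free lunch students and under 1% of ELL students, the district-only percentages come out higher than the citywide percentages – exactly the "escalating disadvantage" effect described above.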

Again – this is just a cursory, preliminary cut at these data based on simply summing up the available enrollment data from 2010 and 2011.

There are numerous potential additional complexities here. For example, does the presence of some high flying charters keep some families in the district that might otherwise seek to move elsewhere? That is, if the charters weren’t there, would the district lose its less needy students to out-migration? That’s possible, but likely in smaller shares than seen here.

My main point here is that this is yet another issue of New Jersey charter schooling that requires much more in-depth investigation… with improved data on specific student level mobility… and also exploring the effect of charter enrollment attrition mid-year on nearby school population characteristics.

These issues are particularly worthy of additional exploration as NJDOE considers massive charter expansion in other cities such as Camden. Again, if the successes of some of these charters are largely contingent on the selective populations they serve, those successes (a) may be limited in their replicability and (b) may be coming at significant expense to other children left behind.

Further, these issues are of critical importance when determining the appropriate approach to financing charter schools and their host districts. As I have noted previously, Pennsylvania has chosen among the worst approaches for dealing with special education charter school financing. New Jersey must avoid a comparable debacle (and thus far, has largely done so). All student needs based funding must be distributed with respect to the actual needs of students served – especially in the case of children with disabilities. That is, if a charter school serves a district student with a mild, specific, low cost disability, it should be subsidized specifically on that basis, so as to ensure that sufficient funding is left for the district to serve remaining higher need students. New Jersey charter school financial data continue to be woefully inadequate for detailed analysis. More on that at a later point.

Cheers!

 

A not so modest proposal: My new fully research based school!

It’s about time we all suck it up and realize that the best of economic research on factors associated with test score gains not only can, but must absolutely drive the redesign of our obviously dreadful American public education system! [despite substantial evidence to the contrary!]

With that in mind, I have selectively mined some of my own favorite studies and summaries of studies in order to develop a framework for the absolutely awesomest school ever! I’ve chosen to focus on only economic studies of measurable stuff that is actually associated with measured test score gains. After all, that’s what matters – that’s all that matters!

Mind you that this school will be awesomest not merely in terms of overall effectiveness, but also in terms of bang for the buck, because I’m not messin’ around with expensive curriculum or elaborate facilities… or high priced consultants… or really expensive strategies like class size reduction.

I’ve chosen to avoid enrolling grades K-3 since the research is actually pretty strong that I should offer smaller class sizes in those grades. If I don’t have those grades, I guess I don’t have to worry about class size! Right?  In the absence of such clear research for grades 4 to 8 (or my choice to ignore that which really is relevant), I’ve decided that when it comes to class size, anything goes.

I’m goin’ for low hangin’ fruit here. Keepin’ it simple – with class sizes of 60 or so (since we know that doesn’t matter???), running my school in a vacant lot and with absolutely no administration and/or supervision – since I’ve negated the need for the principal role in guiding high quality teacher selection by using an alternative, necessarily cost effective strategy!

So, here goes… Here’s my Econometric Academy Middle School (Grades 4 to 8).

Hire and keep only those teachers who have exactly 4 years of experience

First and foremost, since the research on teacher experience and degree levels often shows that student value-added test scores tend to level off when teachers reach about the 4th year of experience, I see absolutely no need to have teachers on my staff with any more or less than 4 years of experience, or with a salary any greater than what a 4th year teacher with a bachelor’s degree might earn. Anything above and beyond this is simply inefficient. Paying a teacher more after the 4th year is simply inefficient. Boosting 4th year pay is also inefficient if I can simply, in perpetuity, employ only teachers with exactly 4 years of experience.

Here’s a graph from a Calder Center report summarizing the student test score gains in relation to teacher experience.

http://www.caldercenter.org/UploadedPDF/1001455-impact-teacher-experience.pdf

Now, I’ve reviewed the various economic simulations that suggest that dismissing teachers on the basis of student value added test scores is a reasonable approach to, over time, increasing teacher quality. For my nifty new school, I choose to believe in their assumption that there will always exist a normally distributed flow of new applicants whose average quality is the same as the current pool of teachers.

My approach allows me not to even worry about selecting out the bottom 5 or 10% and replacing them with average teachers. Instead, I’m going for cost-effectiveness! You see, if the average teacher has already achieved their likely best value-added outcomes by year 4, then (accepting the current experience based pay system) at year 4 I’ve got teachers who are at their maximum productivity and the lowest wage – and I don’t have to ever worry about paying them more! That’s totally freakin’ awesome! I just have to make sure that every year, when I let my entire staff go, I get out there and find a totally new crop of teachers who have just completed their third year of teaching elsewhere – and are at least “average” among soon-to-be 4th year teachers at producing outcomes. Thus, every year, I will have teachers who have the average production of 4th year teachers and the average wage of 4th year teachers.
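The (satirical) cost-effectiveness logic above reduces to simple arithmetic: if value-added plateaus after year 4 while salary keeps climbing with experience, then the productivity-per-dollar ratio peaks at exactly year 4. A toy illustration – every number below is invented purely to demonstrate the plateau logic, not drawn from any study:

```python
# Hypothetical plateau: value-added rises through year 4, then flattens,
# while salary keeps climbing with experience (all numbers invented).
value_added = {1: 0.60, 2: 0.80, 3: 0.92, 4: 1.00, 5: 1.00, 10: 1.00}
salary      = {1: 40000, 2: 42000, 3: 44000, 4: 46000, 5: 48000, 10: 60000}

# Productivity per salary dollar, by years of experience.
ratio = {yr: value_added[yr] / salary[yr] for yr in value_added}
best_year = max(ratio, key=ratio.get)
print(best_year)  # 4: plateau productivity at the lowest plateau wage
```

Which is exactly why the scheme "works" on paper: once the plateau is reached, every additional dollar of experience-based pay buys zero additional measured output.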

That is, they are necessarily better than average in terms of cost effectiveness.

This is a no brainer!

Implement carb loading on testing days, scaled up w/grade level (& in spring where fall-spring assessments are given)

Now, let’s shoot for some somewhat more obscure ideas… that have great potential to yield some nice marginal gains to tests scores on top of my already optimally staffed school. For my next few clever strategies, I turn to the work of David Figlio formerly of the University of Florida and currently at Northwestern (yes… this is a sarcastic post… but Figlio is a truly exceptional scholar… really clever guy… and one of the nicest people you could ever meet. Plus, he produces some really fun food-for-thought!).

Research from back in 2002 found that under Virginia’s accountability system, many school districts were adjusting their lunch menus to increase carb loading on SOL (uh… standards of learning) testing days. More importantly, David Figlio and colleagues found that it worked!

Using detailed daily school nutrition data from a random sample of Virginia school districts, we find that school districts having schools faced with potential sanctions under Virginia’s Standards of Learning (SOL) accountability system apparently respond by substantially increasing calories in their menus on testing days, while those without such immediate pressure do not change their menus. Suggestive evidence indicates that the school districts who do this the most experience the largest increases in pass rates.

Specifically, the authors note:

We observe that the estimated effect of calorie manipulation is positive across all five tests, and is statistically significant, despite the extremely small effective sample size, in the case of mathematics. (Figlio, food for thought)

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.159.8754&rep=rep1&type=pdf

Yeah… this is really low hanging fruit (perhaps quite literally) for my bang-for-the-buck econometric academy. All I have to do is carefully plan out my school menus to optimize their influence on student test scores. I might want to think carefully about how to play this right in a value-added structure. For example, if we have fall-spring assessments, do I carb load only in the spring?

(from the authors acknowledgement section)

The opinions expressed in this paper do not necessarily reflect those of their employers, funders, or young children, the latter of whom respectfully disagree with the authors’ derogatory characterization of “empty calories.” We, in turn, blame them for any remaining errors.

Rename all students prior to entry

Figlio has also produced an intriguing series of studies that consider how students’ names affect their behavior and performance in school. There’s some more low hanging fruit for making my school the best performing school ever with little additional cost! The policy implications are absolutely clear from the research – I must review the names of all incoming 4th graders for two potential performance inhibiting characteristics. First, are there boys who have names that sound like girls’ names? Second, are there kids with either names that sound “black” or names that sound like they were given by less educated and/or lower class parents. Next, I simply have to require that the parents of these kids change their names prior to starting 4th grade. To aid these parents in making good name choices, I will have available a list of gender appropriate, Asian sounding names, because that too is backed by the research!

Here’s the research behind my brilliant, cost-effective proposal… and it is both statistically significant, and compelling!

Racially identifiable names:

The persistence of the Black-White test score gap, and its widening over the course of the school cycle, is an issue of significant public policy concern. This paper presents evidence that a portion of these patterns could be due to the names given particularly prevalently to Black children. Children with names associated with low socio-economic status, and to a limited degree, with “Blackness” per se, tend to score lower on their reading and mathematics tests, relative to their siblings with less race or class-identifiable names.

This hypothesis is also bolstered by the finding that the opposite set of results are observed in the instance of Asian families, for whom a racially-identifiable name may signal attributes that are perceived to be associated with success. Asian children with racially-identifiable names apparently face higher teacher expectations and also tend to score higher on examinations.

http://faculty.smu.edu/Millimet/classes/eco7321/papers/figlio.pdf

Boys with female sounding names:

I find that, as suggested above, boys with female-sounding names tend to misbehave disproportionately in sixth grade, as compared to other boys and to their previous (relative) behavior patterns. In addition, I find that behavior problems, instrumented with the distribution of boys’ names in the class, are associated with increased peer disciplinary problems and reduced peer test scores, indicating that disruptive behavior of students has negative ramifications for their peers.

http://www.aeaweb.org/assa/2005/0107_1430_1102.pdf

Have salaries based entirely – not just partially – on loss aversion tied to test score gains

Finally, what kind of econometric academy would I have if I didn’t totally buy into the most recent study on loss aversion as a compensation strategy! Roland Fryer and colleagues have provided us a real gem here. They find:

In this paper, we demonstrate that exploiting the power of loss aversion—teachers are paid in advance and asked to give back the money if their students do not improve sufficiently—increases math test scores between 0.201 (0.076) and 0.398 (0.129) standard deviations.

http://scholar.harvard.edu/sites/scholar.iq.harvard.edu/files/fryer/files/enhancing_teacher_incentives.pdf

Now, I’m going all out with this one. Every teacher gets paid their full salary at the beginning of the year (I’ll have to use some kind of accounting trick to deal with the timing of my state aid and local transfer payments, or, once I’m up and running, rely on the money I took back from the previous year’s teachers to pay the up front salaries the next year). If your kids’ scores don’t increase more than the average, you lose your whole salary (see, it’s all relative – I get half the salaries back every year no matter what!).
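The "half the salaries back no matter what" point follows directly from the relative threshold. A toy simulation of this (satirical) all-or-nothing scheme – the salary figure and score gains are made-up numbers:

```python
# Toy illustration of the (satirical) all-or-nothing loss aversion scheme:
# every teacher is paid up front, and anyone whose class gain does not beat
# the staff average gives the whole salary back. All numbers are invented.
salary = 50000
gains = [0.12, 0.05, -0.02, 0.30, 0.18, 0.07]  # hypothetical class score gains

average_gain = sum(gains) / len(gains)
clawed_back = sum(salary for g in gains if g <= average_gain)

# Because the threshold is relative to the staff itself, roughly half the
# salaries come back every year no matter how well the school performs.
print(clawed_back)  # 3 of 6 teachers fall at or below average -> 150000
```

That relativity is the whole trick: raise every class's scores and the clawback barely changes, because someone is always below average.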

I can see how this strategy might create a divisive culture in some schools, or might create animosity between teachers and administrators when teachers repeatedly lose compensation – and lose it largely as a function of random error and/or omitted variables bias in the model designed to estimate their effectiveness. Yeah… I can see how taking teachers’ salaries away for factors that may be entirely outside their control could really piss them off, more than actually inspiring them to try to control these uncontrollables in a subsequent year.

But, my school is different. I really don’t have administration… because decisions are already made, by me, at a great distance. Besides, I don’t have to see any of my teachers the following year anyway because they all get dismissed every year, and a new crop of 4th year teachers – equal to the previous – comes in (at least I think they will…). I just have to inspire them (scare the crap out of them) to kick some butt for that one year!

And that my friends, is the Econometric Academy of Achievement Test Excellence!

 

Closing thoughts

But seriously, much of the past week or so seems to have been dominated by discussions of Roland Fryer’s new NBER working paper indicating that while typical merit pay incentives don’t seem to influence student outcomes (by increasing teacher effort), when those incentives are paid up front, and taken back in response to lower performance, gains can be noticed. The buzz phrase (and theoretical framework) for the analysis is “loss aversion,” and a common assumption is that people may have more incentive to work harder if they fear losing something they already have, as opposed to gaining something they never had.

This stuff is fun to ponder (in a warped, academic sort of way), and potentially interesting as a research topic. But it’s all highly questionable in terms of usefulness for improving school quality (note that I said school quality, not test score gains!).

And that’s true of a lot of educational, psychological and econometric research related to schools. It’s especially true of any one of these branches of research in isolation!

The real key with most of these studies and others like them is to avoid the leap that these studies have immediate decisive policy implications – that they can and should be used to inform school reform – school redesign and state and federal education policy more broadly. Yes, each bit of information can advance our understanding. But, we must avoid the urge to assume that each new tidbit provides a new silver bullet answer and also negates all that we’ve learned previously.

Policymakers (and newspapers) want research with immediate and obvious policy implications. They want the silver bullet. They want the breakthrough that negates all previous understanding – that tells us why everything we’ve done to date is wrong and paints a clear path forward. Unfortunately, too many researchers feel compelled to play along.

Consider the great Chetty, Friedman and Rockoff “one great teacher can earn a classroom of kids an extra quarter million dollars” study from this past winter. Many policymakers leaped to use that study as an immediate call to use value-added data for teacher de-selection policies. That call was endorsed by one of the authors’ own media quotes, in which he asserted that we should fire sooner rather than later! (And that assertion was built on an overly bold if not absurd extrapolation of the earnings effect based on the single age at which the earnings effect was largest.)

Similar overreaching for immediate policy implications appeared in the author-endorsed media spinning of Roland Fryer’s piece on “no excuses” charter schools in New York City, where, despite not even attempting to accurately measure school expenditures or the cost of “no excuses” strategies, Fryer fueled the media assertion that “no excuses” strategies and NOT money are the answer to improving urban school performance (partly in language embedded in the working paper itself).

If we are willing to accept these types of bold immediate policy recommendations, then we might actually be willing to accept the school I’ve laid out above as a reasonable proposition. My research based academy above might actually produce some marginally greater value-added estimates on student achievement data than it would for the same group of kids if I didn’t strategically carb load on testing days, change the kids names (to alter teachers’ expectations of them) and threaten their teachers with complete loss of salary.

And it might even be a really efficient approach to value-added gains if my (completely ridiculous) assumption holds that I can find a pool of average 4th year teachers willing to enter such a toxic environment for a single year, every year at an average 4th year salary. Yeah… that’s one $#!+load of assumptions (worthy of a few pages of appropriately formatted footnotes!).

But I’m pretty sure it would be a really sucky place to work as a teacher or to attend as a student. And call me a sappy, post-empiricist, sucker, but that matters too!

Learning from Really Bad Graphs & Ill-informed Conclusions: Thoughts on the New PEPG “Catching Up” Report

A new policy paper from Eric Hanushek, Paul Peterson & Ludger Woessmann has been receiving considerable attention. This despite numerous completely outlandish assertions drawn from junk charts that fill the pages of this reformy manifesto.

Look, I’ve said it before and will say it again: Eric Hanushek has contributed a great deal of high quality research to the fields of education policy and economics of education over the years, and I have in the past relied, and continue to this day to rely, heavily on much of it to inform my own analyses and thinking in education policy. But this kind of stuff is really just infuriating. Rather than spend too much time venting, let’s try to use this new report for instructive purposes – to instruct the casual reader in how to debunk and distill complete and utter BS when presented with pretty scatterplots and glossy formatting.

First, for your reading pleasure, the complete brief may be found here: http://www.hks.harvard.edu/pepg/PDF/Papers/PEPG12-03_CatchingUp.pdf

Before I go down this road, allow me to point out that it’s one thing to offer up this type of analysis as a conversation starter… or even as a provocation with all relevant caveats and disclaimers. It’s yet another to present information of this caliber (or lack thereof) as a serious attempt at immediate influence over policy. There’s a huge freakin’ difference there. And it is certainly my impression that this brief, by its framing, is indeed intended to shape the immediate policy conversation as much if not more so than to generate speculative, intellectual musings over the various possible meanings of the charts.

Further I’m particularly concerned with the way in which much of the information is presented and the way in which conclusions are drawn from that information. This is where this brief can be useful and illustrative – where we can turn this clumsy manifesto into a teaching moment.  I’ll tackle three specific issues here:

  1. measures matter, especially when we are dealing with money and test scores,
  2. the complexity of educational systems is difficult to untangle two measures at a time,
  3. always watch out for the ol’ bait and switch! (sometimes it’s really obvious!)

The report presents numerous international comparisons (that’s the focus) of similar rigor to the state level comparisons I critique here. I’m just a bit pressed for time, and had the state data more readily available.

Measures Matter!

Okay… so here’s the first graph that drove me up the freakin’ wall. This graph is a classic extension of what I refer to as the Hanushekian cloud of uncertainty.

Figure 1 – State Spending Increases & Test Score Gains (from report)

For decades, Hanushek has been presenting deceptively oversimplified scatter plots of school district, state level and international data on education spending and outcome measures. These scatterplots in and of themselves are invariably freakin’ meaningless.  I evaluate this body of literature by Hanushek as a whole in my policy brief Revisiting the Age Old Question: Does Money Matter in Education?  

This graph provides a new twist, comparing the dollar increases in spending to the NAEP average annual gain. Hanushek uses this graph to draw the following conclusions:

 According to another popular theory, additional spending on education will yield gains in test scores. To see whether expenditure theory can account for the interstate variation, we plotted test-score gains against increments in spending between 1990 and 2009. As can be seen from the scattering of states into all parts of Figure 9, the data offer precious little support for the theory.

On average, an additional $1,000 in per-pupil spending is associated with a trivial annual gain in achievement of one-tenth of 1 percent of a standard deviation.

Michigan, Indiana, Idaho, North Carolina, Colorado, and Florida made the most achievement gains for every incremental dollar spent over the past two decades.

(keep an eye on Michigan and Indiana – we’ll hear from them again later. Here, they are AWESOME – getting bang for the buck… Of course, one can look good on this indicator by simply not spending much more and showing commensurately paltry outcome gains!)

I love the sarcastic use of “precious” in this quote. But I digress.

But there are at least a few small – okay… pretty damn big … okay … huge… completely undermining – problems with using this scatterplot to draw these conclusions.

Let’s set aside the outcome measure for now and focus on two other not-so-trivial issues. First and foremost, a $1,000 increase in spending in Louisiana and a $1,000 increase in spending in New Jersey or Connecticut may… just may… not be worth the same. Does $1,000 more go as far toward improving the competitiveness of teacher salaries in New Jersey as it does in New Mexico? Uh… not so much. In fact, the National Center for Education Statistics Education Comparable Wage Index indicates that competitive wages in New Jersey are substantially greater than in Louisiana, significantly altering the value of the additional dollar. Second… it’s possible that other factors may actually play a role too?

Let’s shatter the spending measure & related conclusions first! Here’s an alternate view – taking the current expenditures per pupil for 2008-09 over the current expenditures for 1990-91 – that is, expressing them effectively as a percent increase over the base year (albeit not inflation adjusted – see this post for more on this topic).

Figure 2

Hmmm… as it turns out, New Jersey spending really didn’t increase much as a percent over the base year. Louisiana, however, did. In fact, Louisiana had among the highest growth rates among states. Well then, that would mean that New Jersey really kicked some butt! Not much spending increase at all… and some pretty damn good outcome gains!
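The re-expression behind Figure 2 is trivial: divide the change by the base-year spending instead of plotting the raw dollar change. The dollar figures below are invented purely to show how the ranking can flip when two states with very different base-year spending post the same dollar increase:

```python
# Two ways to express spending growth: dollar change vs. percent of base year.
# Figures are hypothetical, chosen only to show how the ranking can flip.
spending = {
    # state: (1990-91 per-pupil, 2008-09 per-pupil), not inflation adjusted
    "High-base state": (9000, 16000),
    "Low-base state":  (4000, 11000),
}

for state, (base, end) in spending.items():
    dollar_change = end - base
    pct_of_base = (end - base) / base * 100
    print(f"{state}: +${dollar_change}, +{pct_of_base:.0f}% over base year")
```

Both states add exactly $7,000 per pupil, but that is a 78% increase for the high-base state and a 175% increase for the low-base state – which is the same flip that makes New Jersey look frugal and Louisiana look like a big spender in Figure 2.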

The bottom line, however, is that either scatterplot is pretty meaningless – mine arguably slightly less meaningless than the original! But neither is really useful for making any bold statements about state aggregate spending and outcome gains. Again, in my Does Money Matter policy brief, I explore these issues in much greater detail. Referring to more rigorous studies attempting to link spending and outcome measures, I explain:

They [more recent studies] also, however, raised new, important issues about the complexities of attempting to identify a direct link between money and student outcomes. These difficulties include equating the value of the dollar across widely varied geographic and economic contexts, as well as in accurately separating the role of expenditures from that of students’ family backgrounds, which also play some role in determining local funding.

http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

I can’t pass up this seemingly tangential point.  I took particular enjoyment in this finding from Hanushek’s new report:

Maryland, Massachusetts, and New Jersey enjoyed substantial gains in student performance after committing substantial new fiscal resources.

Hanushek went to great lengths in an earlier book and in related policy papers to make the case that New Jersey was a classic example of failed massive spending increases, and he has repeatedly cited New Jersey’s failures (as recently as this spring – my rebuttal here!) as a reason why other states should not increase funding for schools. Kevin Welner and I discuss this Hanushekian claim extensively in a recent article in Teachers College Record.

Isn’t that precious?

Two Measures Generally Insufficient for anything but Playful Speculation & Exploration!

As I noted above, the second reason why we should NOT take the Hanushekian cloud – or the other graphs in the new report – too seriously is that they attempt to draw inappropriately bold conclusions from graphs involving only two variables at a time. This approach can be useful for exploring patterns and/or raising questions. We all should spend much time exploring visual representations of our data – getting to know our data – our measures and how they relate. But to take this information and assert that spending matters little, or to go even further and make claims that the South is rising again… and that accountability-driven policies of southern states are leading to disproportionate gains while curmudgeonly anti-reformy anti-accountability Midwest states are suffering, is just absurd.  I’ll dig into these conclusions a bit more in the next and final section.

What else might be going on here? Well, one likely issue requiring at least some more exploration is whether there are any substantive changes in the demography of these states. Yeah… it’s just possible that states that saw greater improvement saw less increase in poverty. Uh… and yeah… it’s possible that states that started lower gained more. Now, the authors acknowledge this latter point, but then brush it off. Instead, they assert that a likely alternative explanation is that Midwest states were riding high on their past successes and great universities, and simply got complacent.

Here are a few figures to chew on.

Figure 3 – Demographics and Outcome Change

Note that Hanushek, Peterson and Woessmann make a big deal about the great performance of Louisiana, Delaware, Maryland and Florida and the particularly sucky performance of Michigan, Indiana, Minnesota and Wisconsin. Uh… wait, weren’t Indiana and Michigan awesome above – for getting those paltry outcome gains for little or no additional investment? Yeah… but now they suck. Really… suck… because… they’re complacent… and not reformy.   As it turns out, the states referred to as generally awesome by the authors also had generally less increase in % low income students.

Figure 4 – Starting Performance Level and Outcome Change

While the authors acknowledge that starting performance levels are associated with outcome change, they go to great lengths to blow off this issue, arguing a) that it explains a relatively small share of the variation (uh… only about a quarter of it… which is actually quite large for this type of data/analysis) and b) that other plausible explanations involving the southern reformyness vs. midwestern complacency dichotomy may explain much of the rest of the difference (without any evidence to support this notion!).

Yes. Starting level does seem to matter! And that can’t be overlooked, or brushed aside.

Together, change in % free lunch and 1992 8th grade math score explain about 41% of the variation in annual gain across the 34 states for whom each measure is available.
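For readers curious what that kind of estimate looks like mechanically, here's a sketch of regressing annual gain on the two predictors. The data below are randomly generated stand-ins, not the actual 34-state values, so the R² here won't match the 41% figure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 34  # states with both measures available

# Hypothetical stand-ins for the two predictors and the outcome:
start_score = rng.normal(265, 8, n)       # 1992 8th grade math scale score
flunch_change = rng.normal(10, 4, n)      # change in % free lunch eligible
gain = 30 - 0.05 * start_score - 0.4 * flunch_change + rng.normal(0, 1.5, n)

# Ordinary least squares with an intercept:
X = np.column_stack([np.ones(n), start_score, flunch_change])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)

# Share of variation in gain explained by the two predictors together:
resid = gain - X @ beta
r2 = 1 - resid.var() / gain.var()
print(f"R^2 = {r2:.2f}")
```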

Ye Ol’ Bait & Switch

But there are bigger and more obvious problems with the conclusions drawn in this report… that don’t really even require much statistical digging. A classic deceptive strategy used in this type of reporting is ye ol’ bait and switch and/or conflating one group identification with another.

Ye ol’ bait and switch is often used in voucher debates where pundits will point to elite private schools as examples of the choices that all children/families should have and will then point to the average tuition of Catholic elementary schools (circa 1999) as an example of the cost of private education (see: http://nepc.colorado.edu/publication/private-schooling-US). Uh… 1999 national average Catholic elementary school tuition won’t cover much of the tuition at Sidwell Friends in 2012!

An entire subsection of the Hanushek, Peterson and Woessmann report is titled Is the South Rising Again? Much attention is paid in the report to the premise that southern states are staging an impressive comeback and that this impressive comeback is a function of their forward thinking in the 1990s and 2000s.

Specifically, the authors laud the achievement gains of Louisiana, Delaware, Maryland and Florida! All, of course, “southern.”

And specifically, the authors laud the early reformyness of Tennessee, North Carolina, Florida, Texas, and Arkansas – as providing possible explanations for the high performance of southern states!

Wait a second…. Those aren’t the same freakin’ states are they? What’s up with that? Did they really do that? Did they really frame it that way?

Here’s what the report says:

Five of the top-10 states were in the South, while no southern states were among the 18 with the slowest growth. The strong showing of the South may be related to energetic political efforts to enhance school quality in that region. During the 1990s, governors of several southern states—Tennessee, North Carolina, Florida, Texas, and Arkansas—provided much of the national leadership for the school accountability effort, as there was a widespread sentiment in the wake of the civil rights movement that steps had to be taken to equalize educational opportunity across racial groups. The results of our study suggest those efforts were at least partially successful.

Meanwhile, students in Wisconsin, Michigan, Minnesota, and Indiana were among those making the smallest average gains between 1992 and 2011. Once again, the larger political climate may have affected the progress on the ground. Unlike in the South, the reform movement has made little headway within midwestern states, at least until very recently. Many of the midwestern states had proud education histories symbolized by internationally acclaimed land-grant universities, which have become the pride of East Lansing, Michigan; Madison, Wisconsin; St. Paul, Minnesota; and Lafayette, Indiana. Satisfaction with past accomplishments may have dampened interest in the school reform agenda sweeping through southern, border, and some western states.

Keep in mind that Louisiana and Delaware didn’t get all reformy until the Race to the Top era. Further, as shown above, Louisiana actually had one of the largest proportionate increases in funding, and relatively low growth in low income students.

Here’s a look at the BAIT and at the SWITCH, where I consider the bait to be those precious high outliers – the over-performers in the analysis – and the switch to be the states that were lauded as implementing the policies that are supposedly behind this performance. As it turns out, while those early accountability/reform states also saw pretty good gains, their gains are more or less in line with the gains of other states that had a similar starting point – at least on 8th grade math (my apologies for simply not having the time to combine all NAEP scores, but the 8th grade math starting point explains 27% of the variation in gain, and along with free lunch change explains 41% of the variation in gain. Not bad, and more than Hanushek, Peterson and Woessmann suggest!).

Figure 5 – The BAIT… and the SWITCH!

Why is this relevant? The assertion being made in this report is essentially that the SWITCH group of states were implementing desired policies… policies that the sucky states like Michigan and Indiana should perhaps consider – or at least should have, instead of resting on their laurels. Then, perhaps they could have looked more like the precious bait. The problem is that the only overlap between the BAIT and the SWITCH is Florida – hardly a stereotypical “southern” state… and one whose reformyness and NAEP gains have been discussed & critiqued extensively by others in recent years (no time for that here). And then of course, we have the proclamation of the suckyness of Michigan and Indiana. Okay… which is it?

The bottom line in all of this is that this new report doesn’t tell us much. I don’t really have a problem with that. What I have a problem with is assuming that it does.

I do have a problem with particularly junky charts/analysis like the one asserting that spending increases have no relationship to outcome increases – with no consideration at all for the regional differences in the value of those increases – and all of the other variables that may… just may… play some role! That’s just lazy and sloppy and inexcusable.

But, at least I’ve got a new handout for discussion & critique for the first week of my fall semester class on data analysis and reporting!

Moneyball, Superman, Angry Royals Fans and Education Reform?

These past few days have been interesting, as I’ve followed, more than usual, the festivities around the Major League Baseball All Star Game. I’ve followed the festivities in part because the game was in Kansas City this year and I lived in the Kansas City ‘burbs for 11 years up until 2008. I’m an east coast guy – born & raised Vermonter, livin’ in Jersey – college in PA, masters in CT, Doc in NYC… also taught in NH. I love east coast cities, and I probably fit the typical east coast snob profile. But some of the events that went down this week at the ASG left me feeling a bit uneasy.  Now, even as a kid, I kind of liked the Royals. They were pretty damn good when I was growing up, and had that cool stadium with the fountains. While we lived in KC, we went to quite a few games… ‘cuz tickets were cheap and accessible.[1]

As I sat down to watch the Home Run Derby, I happened to be checking twitter – where I still follow some Kansas City media folks. I started seeing tweets with the hashtag #boocano… along with links to explanations as to why KC fans should boo when Yankee Robinson Cano comes to bat.  Even as the booing actually happened… and it was quite impressive… the story I was getting from ESPN was strangely disconnected from the story I was getting from my KC tweets.

In case you missed it here’s some video from the stands at the K:

http://www.youtube.com/watch?v=LZlQk861C5c&feature=plcp

http://www.youtube.com/watch?v=sPl9Ez8dE6w&feature=plcp

In fact, ESPN wasn’t sharing much of anything… rather, suggesting that the KC fans were being inappropriate and expressing sour grapes simply because their guy (who must suck, because he’s a Royal) didn’t get picked for the home run derby. Eventually, ESPN and Fox would post stories on their websites about how Kansas City fans were “classless” and rude, while never actually sharing the details behind why Royals fans booed Cano.  For my east coast peers, here’s a Kansas City rundown of what actually happened, since the national media found it far more convenient to demonize the rough and tumble, classless meanies in Kansas City rather than the upstanding and esteemed Yankee Cano.

As someone from the east, who headed to KC for 11 years after living in Yonkers, teaching and attending grad school in NYC… I found KC… and its sports fans to be frustratingly mild & passive, but still enthusiastic. Rough and tumble, rude, classless meanies? Nah… those are attributes of the fan base of my team – the Red Sox (remember, I’m a born/raised New Englander) – and we’re damn proud of it!

The national media spin was that KC fans were over-reacting because Billy Butler wasn’t picked for this inconsequential event. There was no mention of the fact that Cano said he would likely pick him – for this inconsequential event. That’s what fueled the whole #boocano movement in social media. So, the whole Boo Cano thing itself was about a lie and a broken promise [whether obnoxious and condescending or simply oblivious on Cano’s part] and was really directed at Cano himself. This wasn’t about some misguided, misplaced Yankee envy from a poor Midwestern team that just can’t get its own act together.

 What does this have to do with Education Reform?

The subsequent national media spin was both interesting and disturbing to me –  and I began to see all sorts of parallels between a) the national media coverage of this event and the national media coverage of (and spin on) “education reform” (such as NBC’s Education Nation & Waiting for Superman), and b) the real inequities of major league baseball that thwart any possibility that it will ever be a legitimate, fair competition, and the real inequities of American education that thwart any possibility that kids, regardless of where they grow up will ever have equal opportunity for social mobility.

I was particularly struck by how the national media constructed a storyline that allowed them to generate sympathy for Cano while demonizing Royals fans, blatantly suppressing the actual reasons why those Royals fans were so angry. It’s rather like the demonization of teachers in the ed reform debates (finding the right visuals of teachers as angry mobs protesting, carrying pickets decrying salary cuts & furloughs, etc.). It’s just bizarre. Teachers tend to be about as angry & aggressive and threatening…on average, as, well… Royals fans!

Why, then, are the Royals fans the preferred demons in this story line, and the Yankees and Cano the upstanding victims?  This one particular blog post seems to have nailed it best:

It’s perfectly fine for Phillies fans to be passionate for their team. It’s a crime for the Royals faithful to do the same. Why? Because we’re supposed to be the doormats. Doormats do not speak out about being walked out. They do not protest their role as a cleaner of the feet of the social elite. They do their jobs quietly.

http://kingsofkauffman.com/2012/07/10/we-will-remain-silent-no-longer/?utm_source=twitterfeed&utm_medium=twitter

Even worse, doormats are supposed to feel lucky they are allowed to be the doormats for the elite. Doormats are supposed to know their place, sit down, shut up and take it. Questioning one’s place, as a doormat, is certainly out of the question! [again… this isn’t what the Cano thing was about initially… it wasn’t about salary equity… Yankee envy… etc. It was about Cano. The media response – referred to by one Boston outlet as “yankee Jazeera”, however, was all too illustrative of the media interest in preserving the inequities of baseball – and the status of the Kansas City Royals as doormats!]

What Do Moneyball and Superman Have in Common?

There was a time when Royals fans were legitimately angry and outspoken about the financial inequities of Major League Baseball. They even had the gall to stage a protest against the Yankees when they came to town in 1999. Royals fans donned t-shirts which said “share the wealth” on their backs, and about 3,000 fans with the shirts turned their backs to the Yankees.

Arguments over making baseball more legitimately competitive by capping salaries and/or aggressively sharing revenue seem to have died down since that time. Much like arguments about school funding equity or adequacy that were more prominent a few decades ago. I guess this is because in both cases we have simply come to realize that money really doesn’t matter in either case. Low payroll teams have as much chance as anyone else of winning? And of course we all know about those charter schools serving low income kids that consistently beat the odds with so few resources?

Hmmm… that still doesn’t make a whole lot of sense. Why would public sentiment shift so sharply away from these glaring inequities? Clearly, even if other stuff in addition to money matters, having a level financial playing field is still relevant. As I explained in a recent post, there is certainly no evidence that more equitable student outcomes are attainable in a less financially equitable system. And there’s certainly no evidence that baseball is fairer by virtue of the huge salary inequities!

When did we become so distracted? How? Why?

Moneyball and Superman!

The American public has to a large degree been duped by clever media portrayals of statistical anomalies and superhero disinformation.

First, let’s take a look at some of the baseball evidence. Here’s the relationship for the current year between win/loss percent and team salaries up to the All Star Break, for the American League (where salary disparities are greatest).

FIGURE 1

Now, here is a look at cumulative salaries and cumulative won/loss percentages from 2009 to the all star break of 2012.

FIGURE 2

Yeah… there’s actually a pattern here. In fact, in the AL, salary variation alone explains nearly half of the variation in won/loss percentage, when taken over time. Money may not be “everything” but it’s clearly something!
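To illustrate what "explains nearly half of the variation" means, here's a toy version of that payroll/win-percentage relationship. The payrolls and win percentages below are invented, not actual team figures – and being tidy made-up numbers, they produce a stronger correlation than the roughly .7 that would correspond to an R² near one-half:

```python
import numpy as np

# Hypothetical team payrolls ($ millions) and win percentages:
payroll = np.array([200, 160, 120, 100, 90, 80, 70, 60])
win_pct = np.array([0.590, 0.560, 0.540, 0.500, 0.510, 0.470, 0.450, 0.430])

# Pearson correlation; its square is the share of win % variation
# "explained" by payroll in a simple two-variable sense.
r = np.corrcoef(payroll, win_pct)[0, 1]
print(f"r = {r:.2f}, R^2 = {r**2:.2f}")
```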

But… but… but… MONEYBALL! The concept of Moneyball and its popularity provide MLB an excuse to ignore that which makes the entire sport illegitimate. The idea is that if teams just got clever with their statistical analysis – thought about baseball differently – they could realize that this salary stuff is really completely meaningless. Who needs to pay big bucks? It’s about being smart! Yeah… exactly what the big dollar teams would like everyone else to think.

Those wishing to maintain the distraction will often use more anecdotal and less relevant characterizations of the numbers – such as pointing out that in most years the highest payroll team does not win the World Series – and/or that sometimes low payroll teams do really well – MONEYBALL!

Two important points are in order here. First, even if a team does come up with a clever strategy that works well in one season like finding the cheapest players who add value to the team, as other teams catch on and adopt similar strategies, the market adjusts and those with the big bucks still win.

Second, outliers and/or outlier seasons are not a basis for making judgments about what is better policy for achieving a legitimate competitive playing field for Major League Baseball.

This is much the same argument – and a similar distraction being used in the education reform debates. The argument is that parents and kids in low income districts need to shut up and sit down, not ask for a fair share of funding. Instead, they should play moneyball! Or… uh… no money… ball. And, since they are incapable of determining the rules for themselves, we shall impose upon them a statistical system of teacher reshuffling and deselection!  We’ll moneyball their schools for them – through ill-conceived reformy state mandates… with few or no additional resources attached!

Let’s take a look at two of our least equitable states, New York and Illinois. I’ve used these graphs before in posts, and they come from this recent paper: https://aefpweb.org/sites/default/files/webform/Baker.AEFP_.NY_IL.Unpacking.Jan_2012.pdf

FIGURE 3: ILLINOIS PUBLIC SCHOOL DISTRICTS 2008-09

FIGURE 4: NEW YORK PUBLIC SCHOOL DISTRICTS 2008-09

Each of these graphs (statistical analysis explained in the linked paper) shows that in each state there are districts that have very high resource levels – after adjusting for student needs and district cost factors – and there are districts that have lower resource levels.

In each case, higher need districts, serving very low income populations and lacking the resources to get the job done have systematically lower outcomes.  In really simple terms, there are winners and there are losers – there are Royals and there are Yankees – and there are resource disparities that match.

The whole idea behind Waiting for Superman, like Moneyball, is similarly to assert (read: deceive) that there are these clever, costless strategies out there being used by (mainly charter) schools that simply beat the odds, while serving the very same kids and while having no special, additional resources upon which to draw.

It’s got nothing at all to do with money! Instead, like the 2002 Oakland A’s, schools that beat the odds know how to buck the standard practices of the game, recruit exceptional team players, and callously – I mean efficiently – dump those who don’t immediately produce.

Unfortunately, many modern reform strategies, and much reform rhetoric, are little more than distractions from the root issues of inequity in the American Education System – just like Moneyball was a convenient distraction from the inequities that plague MLB. While there might be some legitimate lessons to be learned in each case (including lessons on using statistics in decision making, where relevant), neither Moneyball nor Superman validates the claim that money really doesn’t matter.  It does.

Again, it’s utterly foolish to assert that baseball is fairer by maintaining salary inequity, and similarly ridiculous to assert that equitable schooling can be more easily achieved with vastly inequitable funding.

How Education is Different from Baseball

Now, here’s the big difference between public schooling and Major League Baseball:

Educating future generations of children isn’t a freakin’ game!

Yeah – Major League Baseball will never have any credibility as a legitimate competitive sport as long as it permits some teams to spend more than 3.5 times what other teams do. Arguably, MLB has little interest in favoring such credibility over generating revenues. MLB likely benefits more as a commercial for-profit entity by maintaining the disparity than by quashing it. TV revenues are likely higher when the World Series includes big market teams. So it’s in the interest of MLB to increase the odds that big market teams make the series.  So, I accept that the revenue interests of the sport override any efforts to make it a legitimate competition. So be it.

One can make a similar case that it’s in the interest of those who have the resources in elementary and secondary education to suppress the odds of children from lower income families competing for admission to colleges and universities. But while it may be reasonable to overlook such interests in Baseball, I find it somewhat more offensive when it comes to kids and their schools.

So, yeah… I think the Royals fans were just fine when they booed Cano, and the media was simply wrong for demonizing them while selectively presenting facts.

But those Royals fans were even more right when they donned those t-shirts back in 1999.  Yeah… it is the money. Money matters. Equity matters.

And don’t let Moneyball or Superman convince you otherwise.

 


[1] funny tangent – being an east coast snob [having just finished my doc work at Columbia the previous year] and understanding how ticket access works back east, when I went to get our first Royals tickets, I called in a favor through a friend in the MLB central office, to get us some extra-special seats… they gave me the phone # of someone in the Royals front office… who seemed to think I was being a total ass by trying to get a favor… free tickets… from a team that could really use the ticket revenue! In retrospect, he was totally right!

Friday Finance 101: School Finance Formula & Money Matters Basics

Modern state school finance formulas – aid distribution formulas – typically strive (but fail) to achieve two simultaneous objectives: 1) accounting for differences in the costs of achieving equal educational opportunity across schools and districts, and 2) accounting for differences in the ability of local public school districts to cover those costs. Local district ability to raise revenues might be a function of either or both local taxable property wealth and the incomes of local property owners, thus their ability to pay taxes on their properties.

Figure 1 presents a hypothetical example of the distribution of state and local revenue per pupil across school districts, sorted by poverty concentration. The hypothetical relies on the simplified assumption that districts with weaker local revenue raising capacity also tend to be higher in poverty concentration. While that’s not uniformly true, there is often at least some correlation between the two [it serves to make this hypothetical a bit more straightforward]. Accepting this oversimplified characterization, Figure 1 shows that the typical low poverty and high local fiscal capacity district would likely raise the vast majority of the cost of providing its children with equal educational opportunity through local tax dollars. There may be some small share of state general aid assuming that the total cost of providing equal educational opportunity exceeds the local resources raised with a fair tax rate.

Figure 1

 

This pattern is usually arrived at (if it is arrived at) through some overly complicated formula requiring multiple inefficiently and illogically laid out spreadsheets of calculations and based on measures for which each state chooses its own, completely distinct and unrecognizable nomenclature. A short version might go as follows:

Step 1 – determine target funding level (need & cost adjusted foundation level) per pupil for each district

Target Funding per Pupil = Foundation Level x Student Need Adjustments x Geographic Cost Adjustments

Where the foundation level is some specified per pupil dollar amount. Where student need adjustments include adjustments for individual student educational needs, as for children with limited English language proficiency and children with one or more disabilities, and collective characteristics of the student population such as poverty, homelessness and/or mobility/transiency rates. Where geographic costs refer to geographic variations in competitive wages, and factors such as economies of scale and population sparsity.

Step 2 – determine the share of target funding to be raised by local communities

State Aid per Pupil = Target Funding per Pupil – Local Fair Share

Yep. That’s it. Student needs and costs are accommodated in Step 1, and differences in local wealth and/or capacity to pay are accommodated in Step 2! Now convert that into about 2,000+ separate calculations and create incomprehensible names for each measure (like calling a weight on “low income students” a “student success factor”) and you’ve got a state school finance formula.
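Stripped of the 2,000 calculations and the incomprehensible nomenclature, the two steps above can be sketched in a few lines. All dollar figures, weights, and district values here are hypothetical:

```python
def target_funding(foundation, need_adj, cost_adj):
    """Step 1: need- and cost-adjusted foundation level per pupil."""
    return foundation * need_adj * cost_adj

def state_aid(target, local_fair_share):
    """Step 2: state aid fills the gap between target funding and local capacity."""
    return max(target - local_fair_share, 0)  # aid doesn't go negative

# High-need, low-wealth district:
t1 = target_funding(10000, need_adj=1.4, cost_adj=1.1)   # target ≈ $15,400
print(state_aid(t1, local_fair_share=4000))              # state covers ≈ $11,400

# Low-need, high-wealth district:
t2 = target_funding(10000, need_adj=1.05, cost_adj=1.0)  # target ≈ $10,500
print(state_aid(t2, local_fair_share=12000))             # local capacity exceeds target: $0 aid
```

The first district draws substantial aid because its need- and cost-adjusted target far exceeds its local fair share; the second draws none because local capacity already covers its target.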

But I digress.

Implicit in the design of state school finance systems is that money may be leveraged for improving both the measured and unmeasured outcomes of children.  That is, that money matters to the quality of schooling that can be provided in general and that money matters toward the provision of special services for children with greater educational needs. That is, money can be an equalizer of educational opportunity.

In a typical foundation aid formula, it is implied that a foundation level of “X” should be sufficient for producing a given level of student outcomes in an average school district. It is then assumed that if one wishes to produce a higher level of outcomes, the foundation level should be increased. In short, it costs more to achieve higher outcomes[1] and the foundation level in a state school finance formula is the tool used for determining the overall level of support to be provided.

Further, it is assumed that resource levels may be adjusted in order to permit districts in different parts of the state to recruit and retain teachers of comparable quality. That is, the wages paid to teachers affect who will be willing to work in any given school. In other words, teacher wages affect teacher quality and in turn they affect school quality and student outcomes. This is plain common sense, and this teacher wage effect operates at two levels. First, in general, teacher wages must be sufficiently competitive with other career opportunities for similarly educated individuals. The overall competitiveness of teacher wages affects the overall academic quality of those who choose to enter teaching.[2] Second, the relative wages for teachers across local public school districts determine the distribution of teaching quality.[3] Districts with more favorable working conditions (more desirable facilities, fewer low income and minority students) can pay a lower wage and attract the same teacher. Wages matter, therefore, money matters.

Finally, those student need adjustments in state school finance formulas assume that the additional resources can be leveraged to improve outcomes for low income students, or students with limited English language proficiency. First, note that some share of the additional resources is needed in higher poverty settings simply to provide for “real resource” equity – or to pay the wage premium for doing the more complicated job. Second, resource intensive strategies such as reduced class sizes in the early grades, high quality (using qualified teaching staff)[4] early childhood programs, intensive tutoring and extended learning time programs may significantly improve outcomes of low income students. And these strategies all come with significant additional costs (even when adopted under the veil of “no excuses charterdom“).

But, because providing more money to support public schools often means raising more tax dollars, and because providing supplemental resources to children whose own communities may lack local revenue raising capacity often means more aggressive redistribution of state tax revenues, whether and how money matters in education is often hotly politically contested.

School finance is a political minefield, which is arguably why so many pundits have tried to distract from school finance issues by advancing ludicrous arguments that education equity and overall quality can be improved by altering teacher labor markets via statistical deselection without ever addressing funding deficiencies and wage disparities or by expanding charter schooling and ignoring the role of philanthropic contributions (while counting on them).  Unfortunately for those political pundits, school finance is a minefield they must eventually walk through if they ever expect to make real progress in resolving quality or equity concerns.

In a recent report titled Revisiting the Age Old Question: Does Money Matter in Education?[5] I review the controversy over whether, how and why money matters in education, evaluating the current political rhetoric in light of decades of empirical research.  I ask three questions, and summarize the response to those questions as follows:

Does money matter? Yes. On average, aggregate measures of per pupil spending are positively associated with improved or higher student outcomes. In some studies, the size of this effect is larger than in others and, in some cases, additional funding appears to matter more for some students than others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent – in other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.

Do schooling resources that cost money matter? Yes. Schooling resources which cost money, including class size reduction or higher teacher salaries, are positively associated with student outcomes. Again, in some cases, those effects are larger than others and there is also variation by student population and other contextual variables. On the whole, however, the things that cost money benefit students, and there is scarce evidence that there are more cost-effective alternatives.

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

While there may in fact be better and more efficient ways to leverage the education dollar toward improved student outcomes, we do know the following:

  • Many of the ways in which schools currently spend money do improve student outcomes.
  • When schools have more money, they have greater opportunity to spend productively. When they don’t, they can’t.
  • Arguments that across-the-board budget cuts will not hurt outcomes are completely unfounded.

In short, money matters, resources that cost money matter and more equitable distribution of school funding can improve outcomes. Policymakers would be well-advised to rely on high-quality research to guide the critical choices they make regarding school finance.

Regarding the politicized rhetoric around money and schools, which has become only more bombastic and less accurate in recent years, I explain the following:

Given the preponderance of evidence that resources do matter and that state school finance reforms can effect changes in student outcomes, it seems somewhat surprising that not only has doubt persisted, but the rhetoric of doubt seems to have escalated. In many cases, there is no longer just doubt, but rather direct assertions that schools can do more than they are currently doing with less than they presently spend; that money is not a necessary underlying condition for school improvement; and, in the most extreme cases, that cuts to funding might actually stimulate improvements that past funding increases have failed to accomplish.

To be blunt, money does matter. Schools and districts with more money clearly have greater ability to provide higher-quality, broader, and deeper educational opportunities to the children they serve. Furthermore, in the absence of money, or in the aftermath of deep cuts to existing funding, schools are unable to do many of the things they need to do in order to maintain quality educational opportunities. Without funding, the efficiency tradeoffs and innovations being broadly endorsed are suspect. One cannot trade off spending money on class size reduction against increasing teacher salaries to improve teacher quality if funding is not there for either – if class sizes are already large and teacher salaries non-competitive. While these are not the conditions faced by all districts, they are faced by many.

It is certainly reasonable to acknowledge that money, by itself, is not a comprehensive solution for improving school quality. Clearly, money can be spent poorly and have limited influence on school quality. Or, money can be spent well and have substantive positive influence. But money that’s not there can’t do either. The available evidence leaves little doubt: Sufficient financial resources are a necessary underlying condition for providing quality education.

There certainly exists no evidence that equitable and adequate outcomes are more easily attainable where funding is neither equitable nor adequate. There exists no evidence that more adequate outcomes will be attained with less adequate funding. Both of these contentions are unfounded and quite honestly, completely absurd.

 


[1] Duncombe, W. and Yinger, J.M. (1999). Performance Standards and Education Cost Indexes: You Can’t Have One Without the Other. In H.F. Ladd, R. Chalk, and J.S. Hansen (Eds.), Equity and Adequacy in Education Finance: Issues and Perspectives (pp.260-97). Washington, DC: National Academy Press.

[2] Allegretto, S.A., Corcoran, S.P., Mishel, L.R. (2008) The Teaching Penalty: Teacher Pay Losing Ground. Washington, DC: Economic Policy Institute. Murnane, R.J., Olsen, R. (1989) The Effects of Salaries and Opportunity Costs on Length of Stay in Teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352. Figlio, D.N. (2002) Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review 55, 686-699. Figlio, D.N. (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271. Ferguson, R. (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation 28 (2) 465-498. Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408. Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics, April, 49-71.

[3] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144. Lankford, H., Loeb., S., Wyckoff, J. (2002) Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis 24 (1) 37-62. Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy , Vol.6, No.3, Pages 399–438. Clotfelter, Charles T., Elizabeth Glennie, Helen F. Ladd, and Jacob L. Vigdor. 2008. Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92: 1352–70.

[5] Baker, B.D. (2012) Revisiting the Age Old Question: Does Money Matter in Education. Shanker Institute. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

More thoughts on Charter Punditry & Declarations of Certainty

I’m a little late in pouncing on this one. JerseyJazzMan beat me to the punch with some relevant points.  A short while back, the Wall Street Journal posted an op-ed by Deborah Kenny, CEO of the New York-based charter chain Harlem Village Academies. Kenny’s op-ed purported to explain why charter schools are successful.  Of course, we could spend all day on that contention alone, since it is relatively well understood that charter results have been mixed at best. Indeed, I have explained in my published work and in blog posts that the track record for certain charter chains and in certain settings seems stronger than in others.

Here is how Deborah Kenny explained why charters succeed (implicitly where traditional public schools do not):

Critics claim that charter schools are successful only because they cherry-pick students, because they have smaller class sizes, or because motivated parents apply for charter lotteries and non-motivated parents do not. And even if charters are successful, they argue, there is no way to scale that success to reform a large district.

None of that is true. Charters succeed because of their two defining characteristics—accountability and freedom. In exchange for being held accountable for student achievement results, charter schools are generally free from bureaucratic and union rules that prevent principals from hiring, firing or evaluating their own teams.

http://online.wsj.com/article_email/SB10001424052702303703004577472422188140892-lMyQjAxMTAyMDIwNDEyNDQyWj.html?mod

As is par for the course of late in such arguments, Kenny’s chartery punditry is completely devoid of any data or contextual information that might provide insights as to why, or even whether, charter schools “succeed.” Yet, while bafflingly devoid of substantiation, Kenny’s punditry is disturbingly decisive & hyper-confident.

It is yet another case of declaring to know absolutely what we absolutely don’t know!

For the moment, let’s accept Kenny’s proposition that at least in New York City, many charter schools affiliated with high profile management organizations have posted solid test scores (not entirely the case… but let’s accept that proposition…).

So then, let’s compare New York City charter schools from these CMO chains to traditional public schools in the city on a handful of key parameters – a) how much they spend and b) which kids they serve – each relative to the schools which they supposedly far outshine.  These are things that actually matter. Now… if they do spend the same as NYC traditional public schools and serve similar student populations, we might be able to make the case that their “success” is a function of something different that they are doing with the same dollar – more bang for the buck. A relevant question… but a hard one to distill. But, if they serve very different student populations, then it’s even harder to distill what the heck is really going on.[1]

Further, if they are outspending NYC public schools that do serve similar populations, their access to resources may be what allows them to do different stuff… which may then explain their supposed “success.”  It would certainly be hard to make the above claims without looking at any of this, wouldn’t it?

So, here’s the stat sheet:

For each of these comparisons I have used a three year panel of data on NYC charter schools and all NYC traditional public schools, from 2008 to 2010. To compare spending, I have used the estimates generated in our recent report on charter school spending:

  • Baker, B.D., Libby, K., & Wiley, K. (2012). Spending by the Major Charter Management Organizations: Comparing charter school and local public district financial resources in New York, Ohio, and Texas. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/spending-major-charter.

Further discussion of the spending comparisons for NYC can be found here: https://schoolfinance101.wordpress.com/2012/05/07/no-excuses-really-another-look-at-our-nepc-charter-spending-figures/

In short, each of these charter chains spends more per pupil than NYC public schools that serve similar student populations. Some, like KIPP and Uncommon Schools, spend a lot more!

Further, when compared against same grade level schools citywide, each of these charter chains serves fewer children with disabilities (and I lack data on the type of disabilities, which may also matter).

Finally, when compared against same grade level schools in the same zip code, each of these charter chains serves far fewer low income children and FAR fewer children with limited English language proficiency.
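To make the kind of comparison behind these figures concrete, here is a minimal sketch in Python – with entirely made-up school records, not the actual NYC data – of computing a charter’s demographic gap relative to same-grade district schools in its own zip code:

```python
# Hypothetical school records (made-up numbers, not the actual NYC data).
# Each record: school name, sector, zip code, grade level,
# % free/reduced lunch (frl), % English language learners (ell).
schools = [
    {"name": "Charter A",  "sector": "charter",  "zip": "10027", "level": "middle", "frl": 0.62, "ell": 0.03},
    {"name": "District 1", "sector": "district", "zip": "10027", "level": "middle", "frl": 0.84, "ell": 0.18},
    {"name": "District 2", "sector": "district", "zip": "10027", "level": "middle", "frl": 0.79, "ell": 0.14},
    {"name": "District 3", "sector": "district", "zip": "10026", "level": "middle", "frl": 0.88, "ell": 0.22},
]

def demographic_gap(charter, schools, measure):
    """Charter share minus the mean share among same-grade district
    schools in the same zip code."""
    peers = [s[measure] for s in schools
             if s["sector"] == "district"
             and s["zip"] == charter["zip"]
             and s["level"] == charter["level"]]
    return charter[measure] - sum(peers) / len(peers)

charter = schools[0]
print(f"FRL gap: {demographic_gap(charter, schools, 'frl'):+.3f}")
print(f"ELL gap: {demographic_gap(charter, schools, 'ell'):+.3f}")
```

A negative gap means the charter serves a smaller share of that population than the district schools immediately around it – which is the pattern described above.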

These substantive differences in resources and student populations make it difficult if not impossible to assert that these charter school chains operating in New York City have somehow identified a magic formula for success that is neither resource dependent nor dependent on serving very different student populations than city district schools.

There is certainly no basis whatsoever for asserting that accountability and freedom – specifically freedom from bureaucratic and union rules – are necessarily the determinants of charter success. In fact, these broad principles apply similarly to all independent charters, but while some are good, others suck – and many are allowed to persistently suck despite supposed heightened accountability. Indeed, the upper half is better than average! And the lower half… is not!

It’s hard to suggest that either of these factors – accountability or freedom – are the determinants of charter success when success varies so widely across charters. What does tend to vary across charters is a) access to philanthropic resources and b) student populations served. AND… it may also be the case that some charters have adopted unique strategies…… some of which may actually come with additional costs!

There may be some cool stuff going on in some of these schools, just as there may be some cool stuff going on in NYC district schools.  It may well be that freedom from bureaucratic rules permits schools to do cool stuff.  It would certainly seem advantageous in the context of New York State moving forward to be able to skip out on complying with new, ill-conceived teacher evaluation legislation.

We need to figure out what works and for whom, whether those ideas come from traditional public schools, charter schools or private schools.

We need to figure out the costs of doing these things. Ken Libby, Kathryn Wiley and I discuss these issues in our recent policy brief (read it! It’s not some anti-charter propaganda. It’s an actual study of spending data… with detailed documentation & extensive lit review).

Unfortunately, the tendency among charter “defenders” is to simply deny, deny, deny… ignore costs (make bizarre, unfounded excuses, present half-assed, back of the napkin estimates, or sidestep them)… ignore substantive contextual issues, etc., etc., etc. (certainly, the tendency among the attackers is to declare all charter operators/supporters to be union-busting privatizing profiteers – also an unhelpful characterization for a diverse array of institutions).

It’s time to start digging deeper into what makes schools tick and for whom and how to provide the mix of schooling that best serves the largest share of children.


[1] As I explained in a recent post, even in a lottery study – of students lotteried in/lotteried out – those lotteried out likely attend schools with substantively different classroom peers than those lotteried in, and it remains difficult if not impossible to distill school/teacher effect from peer effect since both operate at the classroom level.


How much does Federal Title I Funding Affect Fairness in State School Finance Systems?

About this much!

These funding profiles are based on the methodology used in our reports on school funding fairness. The reports and accompanying technical appendix can be found here: http://schoolfundingfairness.org/

This graph is based on an updated model which includes data from 2007-08, 2008-09 and 2009-10 (these are linear projections of otherwise messy distributions… hence the fact that some of the lines cross at/around 0% poverty).
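For readers curious about the mechanics: the funding-profile lines are essentially ordinary least squares fits of per-pupil revenue against district poverty. Here is a minimal sketch, with invented district data rather than the actual fiscal data used in the reports:

```python
# Fit linear projections of per-pupil revenue against district poverty --
# the same idea as the funding-profile lines, but with made-up district
# data rather than the actual fiscal data.
def linear_fit(x, y):
    """Ordinary least squares slope and intercept for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

poverty = [0.05, 0.10, 0.20, 0.30]          # district poverty rates
state_local = [14000, 13000, 11500, 10000]  # per-pupil state & local revenue
title1 = [0, 150, 450, 800]                 # per-pupil federal Title I aid

b1, a1 = linear_fit(poverty, state_local)
total = [s + t for s, t in zip(state_local, title1)]
b2, a2 = linear_fit(poverty, total)
print(f"slope without Title I: {b1:,.0f} per unit of poverty")
print(f"slope with Title I:    {b2:,.0f}")
```

In this toy example both slopes are negative (funding falls as poverty rises); adding Title I flattens the line somewhat but comes nowhere near reversing it – the “drop in the bucket” point.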

The bottom line is that while Federal Title I programs certainly provide much needed funds to many high poverty districts, in the big picture, they are a drop in the bucket. They are now, and have been for some time.

The states in this figure are among the least equitable in the nation. And Title I aid simply isn’t sufficient to fix that. Equitable and adequate financing of local public school districts remains the responsibility of the states, and these states have some work to do!

Friday Finance 101: What Can we Learn about Education Costs & Efficiency by Studying Existing Public Schools?

One pervasive reformy argument is that our entire education system may be instantly transformed to be more productive and efficient by adopting untested reformy policies and/or untested solutions from sectors other than education. Further, that we must take these bold leaps of faith because the public education system itself is too corrupt, too bloated, too inefficient to provide any useful lessons! Perhaps the whole system can be replaced with YouTube videos. Or perhaps we can just fire all of the teachers with more than 10 years’ experience and pay the rest based on the test scores they produce! Or perhaps some other lessons of industry can cure the (unsubstantiated) ills of American public schooling!

Kevin Welner and I addressed this issue in our critique of materials provided on the U.S. Department of Education’s website on improving educational productivity.  Specifically, Marguerite Roza and Paul Hill in one working paper titled Curing Baumol’s Disease argue that the entire public schooling system suffers from a disease of inefficiency and thus any lessons for improving educational productivity must be sought outside of the current system.

Similar arguments have been used by those who claim that state legislatures and state courts should never rely on cost analyses based on current practices of existing educational systems in order to either guide the design of state school finance systems through reform legislation, or to evaluate whether state school finance systems are equitable or adequate.

Researchers and policy analysts tend to use one of two general approaches to study education costs – that is, to identify spending levels that should generally be sufficient for achieving desired outcomes, and to identify how education costs vary from one location to another across districts within a state and by the needs of varied student populations. The first approach involves gathering focus groups of informed constituents to specify the inputs to schooling they believe are needed to get the job done. These professional judgment panels are essentially proposing a hypothesis of the programs and services needed, under varied conditions and for varied student populations, to achieve desired outcomes. The alternative is to construct statistical models which estimate the relationship between current district spending levels and current student outcomes, with consideration for various factors that affect the cost of achieving desired outcomes (student characteristics, district characteristics, labor market pressures) and for factors that influence whether districts are more or less likely to spend inefficiently.

This approach, called education cost function modeling has been used extensively in peer-reviewed studies of education costs and cost variation.[1]  As Tom Downes, an economist from Tufts University explained back in 2004: “Given the econometric advances of the last decade, the cost-function approach is the most likely to give accurate estimates of the within-state variation in the spending needed to attain the state’s chosen standard, if the data are available and of a high quality” (p. 9).[2]
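To illustrate what a fitted cost function delivers – without pretending these are real coefficients – here is a stylized sketch of the prediction step: a log-linear model that scales a base cost by student-need factors and a regional wage index, yielding a cost index relative to an average-need district. Every number here is invented for illustration; actual coefficients come from econometric estimation on real district data.

```python
import math

# Stylized cost-function prediction: per-pupil cost of reaching a common
# outcome target, given district traits. Coefficients are invented for
# illustration only.
def predicted_cost(base, poverty, ell, wage_index, b_pov=0.9, b_ell=0.4):
    """Log-linear cost prediction: student-need factors scale the base
    cost exponentially, and regional wage variation scales the total."""
    return base * math.exp(b_pov * poverty + b_ell * ell) * wage_index

avg = predicted_cost(10000, poverty=0.15, ell=0.05, wage_index=1.00)
high_need = predicted_cost(10000, poverty=0.40, ell=0.20, wage_index=1.10)

# Cost index: spending needed relative to the average-need district.
print(f"cost index for high-need district: {high_need / avg:.2f}")
```

The point of the exercise is the index, not the raw dollar figure: the model expresses how much more (or less) one district must spend than another to reach the same outcome standard.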

But, because these methods are sometimes used beyond academic journals, and in the highly political context of estimating not only how much money is possibly needed to achieve certain outcomes, but also how that money should be distributed across districts and children, they are not without controversy. These methods become the subject of more heated debate when they are introduced as evidence to assist judges in their evaluation of the constitutionality of state school finance systems. Heck, as explained below, a few authors have gone to great lengths to try to explain/argue how such information should never be used to either guide policy development or evaluate the rationality of current policies. Those assertions are completely unjustified.

The goal of education cost modeling – or any form of cost analysis – whether applied for evaluating equal educational opportunity or for producing adequacy cost estimates, is to establish “reasonable marks” to provide guidance in developing more rational state school finance systems. Only with reasonable marks in hand can one make informed judgments as to whether existing policies are wide of those reasonable marks.

Historically, funding levels for state school finance systems have largely been determined by taking the total revenue generated for schooling as a function of statewide tastes for taxation and dividing that funding by the number of students in the system. That is, the budget constraint – or total available revenue – and total student enrollment have been the key determinants of the foundation level, or basic allotment. To some degree, this will always be true. But reasonable estimates of the “cost” of producing desired outcomes, given current technologies of production (the range of practices actually used/tested), may influence the taste for additional taxes by revealing that the preferences regarding taxation and the preferences regarding desired quality of public education are misaligned, meaning that one or the other should be adjusted. That is, if we find out that higher outcomes are going to cost us more, we can then have a more reasonable discussion of whether we are willing to pay that amount more for the expected gain in quality, or whether to lower our expectations. Alternatively, we can simply fly blind!

It’s rather like the individual who wishes to buy a Cadillac Escalade but expects only to spend about $25,000. After a little research, he finds that he can either buy a Ford F-150 for $25,000 or an Escalade for $65,000. That’s where that little bit of research comes in handy – identifying the gap between uninformed assumptions and reasonably informed ones, albeit with greater precision (actual prices) in this example than in cost estimation in education. Heck, if one wants to get really crazy with this, one could fit a statistical model relating prices with various features of existing makes and models of “comparable” vehicles.
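That “really crazy” version is just a hedonic price regression. A minimal sketch with made-up vehicle data and a single feature:

```python
# Fit a simple hedonic price model to made-up data on comparable
# vehicles: regress price on one feature (horsepower), then use the
# fit to predict a price.
def fit_line(x, y):
    """Ordinary least squares slope and intercept, one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

horsepower = [290, 355, 420, 650]      # made-up comparable vehicles
price = [25000, 38000, 52000, 85000]   # made-up asking prices

slope, intercept = fit_line(horsepower, price)
predicted = slope * 420 + intercept    # hypothetical 420-hp spec
print(f"predicted price at 420 hp: ${predicted:,.0f}")
```

Cost modeling in education follows the same logic, except the “features” are student needs, district characteristics, and labor market conditions, and the “price” is the spending associated with reaching a given outcome level.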

Reasonable estimates of cost may also assist courts in determining whether current funding levels and distributions are wide of a reasonable mark, or substantially misaligned with constitutional standards. Cost model estimates are not meant to be exact predictions of what student outcomes will necessarily occur next year if we suddenly adopt a state school finance system based on the cost model estimates. Cost models provide guidance regarding the general levels (predictions with error ranges) of funding increases that would be required to produce measured outcomes at a certain level, assuming that districts are able to absorb the additional resources without efficiency loss.

Studies of state school finance reform also suggest that the key to successful school finance reforms is that they are both substantive and sustained. If additional dollars to high need districts are best leveraged toward high quality preschool programs and/or early grades class size reduction, we are unlikely to see changes to college readiness outcomes the following year (or following five years). If the additional dollars are best leveraged toward increasing teacher salaries for teachers in their optimal years of experience, allowing districts to recruit and retain “better” teachers over time, we are also unlikely to see immediate returns in student test scores.

Importantly, cost model estimates are estimates based on the actual production technologies of schooling. They are based on the outcomes schools and/or districts produce under different circumstances, for different children – the actual children they serve, based on the actual assessments given, and based on the real conditions under which children attend school.

Some critics of education cost analysis in general, and cost function modeling in particular assert that all local public school districts are simply inefficient, mainly because they pay their personnel based on parameters not associated with improved student outcomes.[3] Therefore, they assert that it is useless to consider the spending practices of current districts when trying to determine how much needs to be spent to achieve desired outcomes. A common version of this argument goes that if schools/districts paid teachers based on test scores they produce and if schools/districts systematically excessed ineffective teachers, productivity would increase dramatically and spending would decline. Thus, educational adequacy could be achieved at much lower cost, and therefore, estimating costs based on current conditions/practices is a meaningless endeavor.[4]

The most significant problem with this logic is that there exists absolutely no empirical evidence to support it. It is entirely speculative, frequently based on the assertions that teacher workforce quality can be improved with no increase to average wages, simply by firing the bottom 5% each year and paying the rest based on the student test scores they produce.  To return to the car purchasing analogy above, this is like assuming that somewhere out there is a car/truck with all the features of the Escalade, but the price of the F-150 – specifically, a version of the Escalade itself produced by a new, yet to be discovered technology with materials not yet invented that allow that vehicle to be sold at less than 1/3 its original price.

In fact, the logical way to test these very assertions would be to permit or encourage some schools/districts to experiment with alternative compensation strategies, and other “reforms,” and to include these schools and districts among those employing other strategies (production technologies) in a cost model, and see where they land along the curve. That is, do schools/districts that adopt these strategies land in a different location along the curve? In fact, some schools and districts do experiment with different strategies and those schools carry their relevant share of weight in any statewide cost model. Thus far, what we seem to be seeing is that the more productive experimental approaches being used a) aren’t that bold and b) cost quite a bit!

Pure speculation that some alternative educational delivery system would produce better outcomes at much lower expense is certainly no basis for making a judicial determination regarding constitutionality of existing funding, and is an unlikely (though not unheard of) basis for informing statewide mandates or legislation.  Cost model estimates, as well as recommendations of professional judgment and expert panels can serve to provide useful, meaningful information to guide the formulation of more rational, more equitable and more adequate state school finance systems.


[1] Duncombe, W., Yinger, J. (2008) Measurement of Cost Differentials In H.F. Ladd & E. Fiske (eds) pp. 203-221. Handbook of Research in Education Finance and Policy. New York: Routledge.  Duncombe, W., Yinger, J. (2005) How Much more Does a Disadvantaged Student Cost? Economics of Education Review 24 (5) 513-532. Duncombe, W.D. and Yinger, J.M. (2000).  Financing Higher Performance Standards: The Case of New York State. Economics of Education Review, 19 (3), 363-86. Duncombe, W., Yinger, J. (1999). Performance Standards and Education Cost Indexes: You Can’t Have One Without the Other. In H.F. Ladd, R. Chalk, and J.S. Hansen (Eds.), Equity and Adequacy in Education Finance: Issues and Perspectives (pp.260-97). Washington, DC: National Academy Press. Duncombe, W., Yinger, J. (1998) “School Finance Reforms: Aid Formulas and Equity Objectives.” National Tax Journal 51, (2): 239-63. Duncombe, W., Yinger, J. (1997). Why Is It So Hard to Help Central City Schools? Journal of Policy Analysis and Management, 16, (1), 85-113. Imazeki, J., Reschovsky, A. (2004b) Is No Child Left Beyond an Un (or under)funded Federal Mandate? Evidence from Texas. National Tax Journal 57 (3) 571-588.

[2] Downes (2004) What is Adequate? Operationalizing the Concept of Adequacy for New York State. http://www.albany.edu/edfin/Downes%20EFRC%20Symp%2004%20Single.pdf

[3]Hanushek, E. (2005, October). The alchemy of ‘costing out’ and adequate education. Paper presented at the Adequacy Lawsuits: Their Growing Impact on American Education conference, Cambridge, MA. Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223.

[4] For elaboration on this argument, see: Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223

Friday Finance 101: Equitable and Adequate Funding and Teacher Quality is Not an Either-Or choice!

In recent years, the casual observer of debates over public education policy might be led to believe that improving teacher quality and ensuring that low income and minority school children have access to high quality teachers has little or nothing to do with the equity or adequacy of financing of schools. The casual observer might be led to believe that there actually exists a sizable body of empirical research that confirms a) that high quality teachers matter, b) that money doesn’t matter and c) by extension, that money has nothing to do with recruiting, retaining or redistributing teacher quality. These arguments, while politically convenient for those hoping to avoid thorny questions of tax policy and state aid formulas, are not actually grounded in any body of decisive, empirical research. Rather, to the contrary, it is reasonably well understood that while teacher quality does indeed matter, teacher wages also matter and teacher working conditions matter, both in terms of the level of quality of the overall teacher workforce and in the distribution of quality teachers.

The modern debate over the role of teachers and teaching quality for improving student outcomes dates back to findings within the Coleman report in the 1960s. The Coleman report looked at a variety of specific schooling resource measures, most notably teacher characteristics, finding positive relationships between these traits and student outcomes. A multitude of studies on the relationship between teacher characteristics and student outcomes have followed, producing mixed messages as to which matter most and by how much.[1] Inconsistent findings on the relationship between teacher “effectiveness” and how teachers get paid – by experience and education – added fuel to the “money doesn’t matter” fire. Since a large proportion of school spending necessarily goes to teacher compensation, and (according to this argument) since we’re not paying teachers in a manner that reflects or incentivizes their productivity, spending more money won’t help.[2] In other words, the assertion is that money spent on the current system doesn’t matter, but it could if the system were to change.

Of course, in a sense, this is an argument that money does matter. But it also misses the important point about the role of experience and education in determining teachers’ salaries, and what that means for student outcomes.

While teacher salary schedules may determine pay differentials across teachers within districts, the simple fact is that where one teaches is also very important in determining how much he or she makes.[3] Arguing over attributes that drive the raises in salary schedules also ignores the bigger question of whether paying teachers more in general might improve the quality of the workforce and, ultimately, student outcomes. Teacher pay is increasingly uncompetitive with that offered by other professions, and the “penalty” teachers pay increases the longer they stay on the job.[4]

A substantial body of literature has accumulated to validate the conclusion that both teachers’ overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olson (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[5] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[6] In addition, more recent studies have tackled the specific issues of relative pay noted above. Loeb and Page showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[7]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.
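As a back-of-the-envelope illustration of the Loeb and Page estimate quoted above – with a hypothetical district, and the simplifying assumption that the relative effect scales linearly with the size of the raise:

```python
# Applying the quoted Loeb & Page estimate to a hypothetical district:
# a 10% teacher wage increase is associated with a 3-4% *relative*
# reduction in the high school dropout rate. All numbers illustrative;
# 3.5% is used as a midpoint of the quoted range.
def projected_dropout(rate, wage_increase, effect_per_10pct=0.035):
    """Scale the relative reduction linearly with the wage increase --
    a simplifying assumption, reasonable only for small changes."""
    reduction = effect_per_10pct * (wage_increase / 0.10)
    return rate * (1 - reduction)

baseline = 0.20  # hypothetical 20% dropout rate
print(f"after a 10% raise: {projected_dropout(baseline, 0.10):.4f}")
print(f"after a 20% raise: {projected_dropout(baseline, 0.20):.4f}")
```

Note that the estimated effect is relative, not an absolute percentage-point change – a distinction that matters when translating such findings into policy expectations.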

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[8]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[9]

With regard to teacher quality and school racial composition, Hanushek, Kain, and Rivkin (2004) note: “A school with 10 percent more black students would require about 10 percent higher salaries in order to neutralize the increased probability of leaving.”[10] Others, however, point to the limited capacity of salary differentials to counteract attrition by compensating for working conditions.[11]

Finally, it bears noting that those who criticize the use of experience and education in determining teachers’ salaries must of course produce a better alternative, and there is even less evidence behind increasingly popular ways to do so than there is to support the policies they intend to replace. In a perfect world, we could tie teacher pay directly to productivity, but contemporary efforts to do so, including performance bonuses based on student test results,[12] have thus far failed to produce concrete results in the U.S. More promising efforts to measure productivity, such as new teacher evaluations that incorporate heavily-weighted teacher productivity measures based on their students’ test scores, are still a work in progress, and there is not yet evidence that they will be any more effective (or cost-effective) in attracting, developing or retaining high-quality teachers.

To summarize, despite all the uproar about paying teachers based on experience and education, and its misinterpretations in the context of the “Does money matter?” debate, this line of argument misses the point. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, relative to other labor market opportunities, can substantively affect the quality of applicants to preparation programs, entrants to the teaching profession, and ultimately student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.


[1] Hanushek, E.A. (1971) Teacher Characteristics and Gains in Student Achievement: Estimation Using Micro Data. American Economic Review 61 (2) 280-288; Clotfelter, C.T., Ladd, H.F., Vigdor, J.L. (2007) Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review 26, 673-682; Goldhaber, D., Brewer, D. (1997) Why Don’t Schools and Teachers Seem to Matter? Assessing the Impact of Unobservables on Educational Productivity. The Journal of Human Resources 32 (3) 505-523; Ehrenberg, R.G., & Brewer, D.J. (1994) Do school and teacher characteristics matter? Evidence from High School and Beyond. Economics of Education Review 13 (1) 1-17; Ehrenberg, R.G., & Brewer, D.J. (1995) Did teachers’ verbal ability and race matter in the 1960s? Economics of Education Review 14 (1) 1-21; Jepsen, C. (2005) Teacher characteristics and student achievement: Evidence from teacher surveys. Journal of Urban Economics 57 (2) 302-319; Jacob, B.A., & Lefgren, L. (2004) The impact of teacher training on student achievement: Quasi-experimental evidence from school reform. Journal of Human Resources 39 (1) 50-79; Rivkin, S.G., Hanushek, E.A., & Kain, J.F. (2005) Teachers, schools, and academic achievement. Econometrica 73 (2) 417-458; Wayne, A.J., & Youngs, P. (2003) Teacher characteristics and student achievement gains. Review of Educational Research 73 (1) 89-122. For a recent review of studies on the returns to teacher experience, see: Rice, J.K. (2010) The Impact of Teacher Experience: Examining the Evidence and Policy Implications. National Center for Analysis of Longitudinal Data in Educational Research.

[2] Some go so far as to argue that half or more of teacher pay is allocated to “non-productive” teacher attributes, and so it follows that that entire amount of funding could be reallocated toward making schools more productive. See, for example, a recent presentation to the NY State Board of Regents from September 13, 2011 (page 32), slides by Stephen Frank of Education Resource Strategies: http://www.p12.nysed.gov/mgtserv/docs/SchoolFinanceForHighAchievement.pdf

[3] Lankford, H., Loeb., S., Wyckoff, J. (2002) Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis 24 (1) 37-62

[4] Allegretto, S.A., Corcoran, S.P., Mishel, L.R. (2008) The Teaching Penalty: Teacher Pay Losing Ground. Washington, D.C.: Economic Policy Institute.

[5] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of stay in teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[6] David N. Figlio (2002) Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review 55, 686-699; David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271; Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation 28 (2) 465-498.

[7] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[8] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics, April, 49-71. See also: Downes, T.A., Figlio, D.N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[9] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

[10] Hanushek, E.A., Kain, J.F., Rivkin, S.G. (2004) Why Public Schools Lose Teachers. Journal of Human Resources 39 (2), p. 350

[11] Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy 6 (3) 399-438; Clotfelter, C.T., Glennie, E., Ladd, H.F., Vigdor, J.L. (2008) Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92, 1352-70.

[12] For recent studies specifically on the topic of “merit pay,” each of which generally finds no positive effects of merit pay on student outcomes, see: Glazerman, S., Seifullah, A. (2010) An Evaluation of the Teacher Advancement Program in Chicago: Year Two Impact Report. Mathematica Policy Research, 6319-520; Springer, M.G., Ballou, D., Hamilton, L., Le, V., Lockwood, J.R., McCaffrey, D., Pepper, M., & Stecher, B. (2010) Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching. Nashville, TN: National Center on Performance Incentives at Vanderbilt University; Marsh, J.A., Springer, M.G., McCaffrey, D.F., Yuan, K., Epstein, S., Koppich, J., Kalra, N., DiMartino, C., & Peng, A. (2011) A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses. Final Evaluation Report. RAND Corporation & Vanderbilt University.

Which states screw the largest share of low income children? Another look at funding fairness

Here’s a little Friday afternoon fun with the updated Census Fiscal Survey data through 2009-2010. I’ve written recently about the national school funding fairness report card, which I work on with colleagues from the Education Law Center. The report card can be found here:

http://www.schoolfundingfairness.org/

I also recently wrote a blog post about America’s Most Screwed City School Districts. It was clear to some readers that the most screwed city school districts happen to be concentrated in certain states like Illinois and Pennsylvania, and also in Connecticut, which is often perceived as a reasonably well funded and fairer state (than the other two).

Par for the course, as soon as we release the School Funding Fairness report card using data from 06-07 to 08-09 (the most recent available at the time we put it together), the Census Bureau releases its 2009-10 district level finance figures… leading to the usual flurry of misinterpretations of data (which I’ll get to another day). Not being able to resist the temptation, despite a heavy backlog of other work to do, I decided I had to play with the updated fiscal data. I also decided, for fun, to take an alternative look at the data, bridging the idea I presented on my blog about screwed city schools with the general idea of state school funding systems. I decided to ask which states screw the most low income kids.

Here’s my operational definition of screwed for this post. A district is identified as screwed (a new technical term in school finance… as of a few posts ago) if a) the district has more than 50% higher census poverty than other districts in the same labor market, and b) has lower per pupil state and local revenues than other districts in the same labor market. As I’ve explained on numerous previous occasions, it is well understood that districts with higher poverty rates (among other factors) face higher costs of providing equal educational opportunity to their students.

I then tally the percent of statewide enrollments that are concentrated in these screwed districts to determine the share of kids screwed by their state. And here are the rankings… or at least the short list of states that screw the largest share of low income students:
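The two-part test and the statewide tally can be sketched roughly as follows. All district names and figures below are invented for illustration, and the peer comparison uses a simple mean of the other districts in the same labor market, which is only an approximation of the actual analysis (the real data come from the Census Fiscal Survey, with core based statistical areas as labor markets):

```python
from statistics import mean

# Hypothetical districts: (name, labor_market, poverty_rate,
# state+local revenue per pupil, enrollment). All numbers are invented.
districts = [
    ("Central City", "Metro A", 0.30, 11000, 40000),
    ("Suburb 1",     "Metro A", 0.08, 14000,  8000),
    ("Suburb 2",     "Metro A", 0.10, 13500,  9000),
    ("Mill Town",    "Metro B", 0.25,  9000,  6000),
    ("Exurb",        "Metro B", 0.12, 12000,  5000),
]

def is_screwed(d, all_districts):
    """A district is 'screwed' if its poverty rate is more than 50% higher
    than the average of the other districts in its labor market AND its
    per pupil state+local revenue is below that of those same peers."""
    others = [p for p in all_districts if p[0] != d[0] and p[1] == d[1]]
    if not others:
        return False
    peer_poverty = mean(p[2] for p in others)
    peer_revenue = mean(p[3] for p in others)
    return d[2] > 1.5 * peer_poverty and d[3] < peer_revenue

screwed = [d for d in districts if is_screwed(d, districts)]

# Share of statewide enrollment concentrated in screwed districts.
total_enrollment = sum(d[4] for d in districts)
screwed_share = sum(d[4] for d in screwed) / total_enrollment
print(f"Share of kids screwed: {screwed_share:.1%}")
```

With this toy data, the two high-poverty, low-revenue districts are flagged and their combined enrollment gives the statewide share; the actual rankings below use the same logic on real district-level data.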

Not much new here. The same culprits make up the list. Nebraska is elevated to its position of disgrace by its systematic underfunding of Omaha Public Schools, which seemed to improve for a fleeting few years, but recent data don’t look so good. Woonsocket and Pawtucket bring Rhode Island into the mix… and raise additional fun questions regarding placement of blame (another post, another day… but should city managers/local officials have the authority to deprive children in their jurisdiction of state constitutional rights? Under what circumstances and by what mechanism should the state step in? Can it?).

Here are a few graphs showing the distributions of individual districts in Illinois, Pennsylvania and Connecticut. On the horizontal axis is the relative poverty rate of districts compared to all other districts in the same core based statistical area. On the vertical axis is the state and local revenue per pupil relative to the average for all other districts in the core based statistical area.
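To make the axes concrete, here is a minimal sketch of the two ratios being plotted, for a single hypothetical district. The peer figures are invented; the real analysis compares each district to all other districts in its core based statistical area:

```python
from statistics import mean

# Invented figures for one hypothetical high-poverty district and its
# CBSA peers (all other districts in the same core based statistical area).
peer_poverty = [0.08, 0.10, 0.12]     # peers' census poverty rates
peer_revenue = [14000, 13500, 12800]  # peers' state+local revenue per pupil

district_poverty = 0.30
district_revenue = 11000

# Horizontal axis: district poverty relative to the peer average.
rel_poverty = district_poverty / mean(peer_poverty)

# Vertical axis: district revenue relative to the peer average
# (values below 1.0 mean the district raises less than its neighbors).
rel_revenue = district_revenue / mean(peer_revenue)
```

This hypothetical district sits at roughly 3x its peers’ poverty rate with below-average revenue, which is exactly the lower-right region of the scatterplots where the screwed districts cluster.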

Again, Allentown, Reading and Philadelphia are massively screwed (yep… a new school finance classification). Meanwhile… Lower Merion… in the Philly ‘burbs is not screwed at all. An intriguing contrast in Pennsylvania school finance is that Pittsburgh has long had far more adequate funding than Philadelphia for a variety of reasons. It is important to understand here that the highest poverty districts – those with 3x the average for their labor market – likely need FAR MORE revenue per pupil than their neighbors to get by – not just the same. So, while York and Harrisburg are decidedly less screwed than Allentown or Reading, they too are not in particularly good shape. They have about the same revenue per pupil as surrounding districts, and 3x the poverty rate.

Here’s Illinois:

Waukegan and Aurora East, along with Round Lake hold the coveted spots of “most screwed” but Chicago Public Schools isn’t far behind (with over 400k students). A multitude of smaller high poverty districts in the Chicago metro not shown here also have very low relative revenue per pupil.

Finally, here’s Connecticut once again:

Again, Bridgeport and New Britain, along with Waterbury (among others) remain substantially screwed. Recall from my previous post that Hartford and New Haven funding is somewhat distorted by magnet school aid.

So why does any of this matter anyway? Well, at face value it’s patently unfair to systematically deprive these districts of resources comparable to their less needy neighbors. If money doesn’t matter to New Britain or Bridgeport, then why does it matter to Greenwich or Westport? Really, if money is so damn trivial for improving schooling quality, then why don’t all those districts in the upper left hand corner of these graphs just give all that useless money to those in the lower right hand corner? Oh, wait… perhaps money does matter…???…!!!

One thing about school finance that’s really important to understand is that the relative position of districts matters a great deal. It matters because education is a labor intensive industry. It is about getting a sufficient quantity of sufficiently qualified teachers in front of the kids who need them. The spending behavior, negotiated agreements, and working conditions in districts like Westport and Greenwich matter for the teacher recruitment potential of Bridgeport. The distribution of quality teachers across districts in a labor market depends on numerous factors, many of which tie back to available resources. And in these states, large numbers of children attend high need districts that simply lack the resources to compete.

Notably, those districts sitting pretty in the upper left hand corner of these figures have also long had traditional teacher contracts, tenure, seniority preferences and likely other policies that would make “reformers” cringe. But most are doin’ just fine. So too are the even higher spending and lower poverty elite private schools in the same labor markets! Most don’t use test scores as the basis for providing merit pay, and I’m quite sure that few if any of them use test scores as the basis for firing the bottom 5% of their teachers every year. They haven’t been and aren’t being subjected to manipulative, heavy-handed takeovers, school closures and massive charter school expansion.

None of that reformy junk would likely do much good for the Westports, Greenwiches or Lower Merions of the US school system.  And none of that reformy junk is likely to be much good for the Bridgeports, New Britains, Allentowns, Readings, Philadelphias or Chicagos!

I find it particularly infuriating when I hear news of these “most screwed” districts being blamed for their own failure by the state officials who have deprived them systematically of resources for decades.

What these districts need as a baseline – a fair starting point – is equitable & adequate funding. Once that has been accomplished, then, and only then can we start having a reasonable conversation about how to best leverage that funding to improve student outcomes. But without the funding, there are no options for leveraging it.