What about those high income families that opted out long before the school year started?

Pro-Annual-Testing-of-Everyone pundits are all in a tizzy about Opt-Out. In their view, parents who opt out are severely compromising accountability for our public education system. They are eroding the public interest in the most selfish possible way. What seems to irk these pundits as much as anything is the possibility that the recent pattern of opting out appears (an empirical question for a later day) to be disproportionately occurring in upper-middle- to upper-income communities – a group over which pundits have little control or possible leverage [little opportunity for punitive policy – which drives them crazy].

So, the pundits say, the disproportionate opting out of upper-income white children from testing will severely compromise the ability of policymakers to accurately measure achievement gaps between those children and the poor and minority children who more compliantly sit down, shut up and fill in the bubbles (ok… point and click).

If the affluent families opt out, we really won’t know how far behind those who are less affluent really are?

Do we?

But do we anyway?

This whole line of reasoning is yet another example of the lack of demographic/contextual understanding and related number sense of those making these arguments. The edu-pundit-innumerati strike again!

These same innumerate pundits previously claimed that annual testing of everyone is absolutely necessary for accurately measuring within-school and within-district achievement gaps among student subgroups, totally failing to understand that few schools and districts – even when everyone is tested – actually have sufficient populations of subgroups for measuring gaps, and further, that the approach most often used for measuring gaps is total BS – statistically, that is. In fact, measuring within-school and within-district gaps and using those measures to penalize schools and districts ends up selectively penalizing only those schools for which the gaps can be measured – integrated schools.

So then, why is this new argument equally statistically and demographically bankrupt? Certainly, if those not taking the test were disproportionately of a certain race or of higher income, average scores would be biased – and likely biased downward for any data aggregation that would/should include these families. So then, of course it’s a problem, right?

Well, yes… and no.

What the edu-pundit-innumerati fail to realize is that nearly every state already has large shares of disproportionately higher-income kids opting out of these assessments – by opting into schools that generally don’t give them. Are these kids somehow not an issue of public policy concern merely because they attend private schools (or are homeschooled)?

If parents in Scarsdale, NY or Millburn, NJ opting out of state assessments matters toward our understanding of gaps in educational opportunity across children of the state by income and race, then so too do the unmeasured outcomes of children opting out of the public education system as a whole.

Here are the numbers for children between the ages of 8 and 17 (those who might fall in tested grades) for New Jersey and New York.

In New Jersey, over 110,000 children between the ages of 8 and 17 attend private schools (just under 150,000 when summing enrollments for k-12 private schools).

Slide1

In New York, over 300,000 children attend private schools (just under 350,000 when summing enrollments for k-12 private schools).

Slide2

In each state, over 10% of children in this age range do not attend the public schools.

Slide3

In New Jersey, the average Total Family Income of those in private schools is about $160k, compared to about $110k for those attending public schools.

Slide4

In New York, the average Total Family Income of those in private schools is about $140k, compared to $87k for those attending public schools.

Slide5

In other words, these states, among others, have relatively large shares of kids outside the system entirely, and the average income of their families is much higher than the average income of those inside the system.

That is, there already exists substantial bias – due to omitted data – in our measurement of gaps in educational outcomes!
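The mechanics of that omitted-data bias are straightforward. Here is a toy example (all numbers hypothetical, not drawn from any actual assessment) of how excluding a higher-scoring, higher-income slice of the population shifts the measured average:

```python
# Toy illustration of omitted-data bias (all numbers hypothetical).
# Suppose 10% of children are outside the tested system, and that
# omitted group scores higher on average than those who are tested.

share_omitted = 0.10          # e.g., private school / homeschool share
mean_tested = 300.0           # average score of tested (public) students
mean_omitted = 320.0          # average score of the omitted group

# True population mean is the weighted average over both groups
true_mean = (1 - share_omitted) * mean_tested + share_omitted * mean_omitted

# The "measured" mean sees only the tested group
measured_mean = mean_tested

bias = measured_mean - true_mean
print(f"true mean: {true_mean:.1f}, measured: {measured_mean:.1f}, bias: {bias:+.1f}")
```

The same logic applies to gap measures: any statistic built only on tested students inherits this bias, and it grows with both the size and the distinctiveness of the omitted group.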

Should we try to mitigate any additional bias? Perhaps. But can we pretend that if we do – if we reduce opt outs among affluent public school attendees, we’ve adequately measured outcome equity? Uh… no.

Here’s the breakout of those enrollments by primary affiliation of school, based on the most recent Private School Universe Survey from NCES.

Slide6

Slide7

So, is the National Catholic Education Association on board yet [w/CCSS perhaps, but the tests?]? Are they fully adopting/implementing annual testing of everyone?

How about the most (economically) elite schools in this mix, most of whom are members of the National Association of Independent Schools?

The reason our School Funding Fairness report includes measures of “coverage” and income gaps by coverage is to make clear that even our measures of fiscal disparity across children attending public schools in any state suffer from bias: we cannot capture the resources available to the relatively large shares of children not in the public system at all – shares that, for 5- to 17-year-olds, exceed 20% in states like Delaware and Louisiana.

So, to those in a tizzy about opt out.

Chill.

Annual testing of everyone really isn’t annually testing everyone anyway, and as a result, really isn’t serving the public interest as well as you might think!

Innumerati: Blatantly, belligerently mathematically and statistically inept political hacks who really like to use numbers and statistical arguments to make their case. Almost always out of context and/or simply wrong.

Friday Graphs: Bad Teachers? or Bad Policy & Crappy Measures in New York?

A while back, I wrote this post explaining the problems of using measures of student achievement growth to try to sort out “where the bad teachers go.”

The gist of the post was to explain that when we have estimates of student achievement growth linked to teachers, and when those estimates show that average growth is lower in schools serving more low income children, or schools with more children with disabilities, we really can’t tell the extent to which these patterns indicate that weaker teachers are sorting into higher need settings, or that teachers are receiving lower growth ratings because they are in high need settings. The reformy line of argument is that it’s 100% the former. That bad teachers are in high poverty schools, and that it’s because of bad teachers that these schools underperform. Fire those bad teachers. Hire all of the average ones waiting in line.

Even the best measures of student growth, linked to teachers, addressing as thoroughly as possible numerous contextual factors beyond teachers’ control, can’t totally get the job done – isolating only the teacher… well… classroom-level… effect. And, as I’ve noted in previous posts, many if not most state- and district-adopted measures are far from the best… or even respectable attempts.

I explain in this policy brief that in New Jersey, factors including student population characteristics, average resource levels available in schools, competitive wages of teachers (relative to surrounding districts) and other factors are significant predictors of differences in school average “growth” ratings. In New Jersey, schools with more resources, less needy students, and higher average scores to begin with get significantly higher growth ratings.

I also showed in this post that in both Massachusetts and New Jersey, teachers in schools with larger shares of female students are less likely to receive bad ratings (Mass) or, conversely, are in schools receiving higher growth scores (New Jersey). The implication, accepting reformy dogma about what these measures mean, is that our best teachers are teaching the girls.

So then, what about those New York teacher ratings I addressed in the previous post? We saw, for example, that teachers rated “Ineffective” on the growth measure tend to be in high poverty schools:

Slide6

Tend to be in schools with larger classes:

Slide5

And those really effective teachers tend to be in schools with lower poverty and smaller classes.

So, does that mean that the “great” teachers are just getting the cushy jobs? Or is the rating system simply labeling them as such?  While there may indeed be some sorting, especially in a state with one of the least equitable funding systems in the nation, it certainly seems likely that the estimates of teacher effect on student achievement growth… well… simply suck! They don’t measure what they purport to measure.

They measure, to a large extent, the conditions into which teachers are placed, and NOT the effect of teachers on student outcomes.

Combining the above factors into a logistic regression analysis to predict how a handful of conditions affect the likelihood that a teacher is rated either “ineffective” (you really suck) or “developing” (you kinda suck, and we’ll tell you you really suck next year), we get the following:

NY Ratings Bias

So, even when these factors are considered together (holding the others “constant”), teachers in schools with larger classes (at constant low-income share and funding gap) have a greater likelihood of being rated “bad.” Teachers in schools with higher low-income concentrations, even if class sizes and funding gaps are the same, are much more likely to be rated “bad.” But teachers in schools in districts that have smaller state aid gaps are less likely to be rated “bad.”
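The post’s actual model is fit to New York ratings data; the sketch below shows the same kind of logistic regression on synthetic data, with hypothetical variable names and made-up coefficients. It demonstrates the mechanic, not the actual New York estimates:

```python
# Sketch of the kind of model described above: a logistic regression
# predicting whether a teacher is rated "bad" (ineffective/developing)
# from school and district conditions. All data here are synthetic and
# the variable names are hypothetical stand-ins, not the actual NY data.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
class_size = rng.normal(24, 4, n)      # school average class size
low_income = rng.uniform(0, 1, n)      # low-income enrollment share (0-1)
aid_gap = rng.normal(1.5, 0.8, n)      # state aid gap ($1000s per pupil)

# Simulate ratings driven entirely by conditions, not by the teacher
true_beta = np.array([-4.0, 0.08, 2.0, 0.6])
X = np.column_stack([np.ones(n), class_size, low_income, aid_gap])
rated_bad = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

# Fit by Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))          # predicted P(rated "bad")
    weights = mu * (1 - mu)
    hessian = X.T @ (X * weights[:, None])
    beta += np.linalg.solve(hessian, X.T @ (rated_bad - mu))

# Odds ratios: multiplicative change in odds of a "bad" rating per unit change
names = ["intercept", "class_size", "low_income_share", "aid_gap"]
print({k: round(v, 3) for k, v in zip(names, np.exp(beta))})
```

With this setup, the fitted odds ratios for class size, low-income share, and aid gap all come out above 1 – mirroring the pattern in the table above – even though no teacher-quality differences were simulated at all.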

So, on the one hand, we can stick to the King’s grand plan….

  • Step 1 – Disproportionately label as “bad” those teachers in schools serving more low income kids, and doing so with fewer resources, including larger class sizes, and dump those lazy failing teachers out on the street…
  • Step 2 – Wait for that long line of “average” teachers to sign up to take their place… stepping into the very same working conditions of their predecessors, which likely led, at least in part, to those bad ratings….
  • Step 3 – Repeat

And the cycle continues, until a) those conditions are improved and b) the measures for rating teacher effect are also improved (if they even can be).

Alternatively, maybe the actual policy implication here is to a) reduce aid gaps and b) use that funding to improve class sizes?

UPDATE –

I figured I’d go check out that gender bias issue I found in NJ and MA. And wow – there it is again. I’ve rescaled the low-income-concentration and female-concentration effects to relate odds changes (of being labeled bad) to a 10-percentage-point shift in enrollment (e.g., from 50% to 60% low income, or female). Here are the updated model results:

NY Ratings Bias

So once again – is it that all of the “bad” teachers are teaching in schools with higher percentages of boys? Or is something else going on here? Are teachers really sorting this way? Are they being assigned by central office this way? Or is there something about a class with a larger share of boys that makes it harder to generate comparable gains on fill-in-the-bubble standardized tests? Why do the girls get all the good teachers? Or do they?
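The rescaling mentioned above is simple arithmetic: when an enrollment share is coded 0–1, the raw logit coefficient gives the log-odds change for a 0-to-100% shift, so the odds ratio for a 10-point shift is exp(beta × 0.10). The coefficient below is made up for illustration, not the actual estimate:

```python
# How the rescaling works: with female share coded 0-1, a raw logit
# coefficient beta implies odds ratio exp(beta) per 100-percentage-point
# change. For a 10-point shift (e.g., 50% -> 60%), use exp(beta * 0.10).
# The beta value below is hypothetical, for illustration only.
import math

beta_female_share = -2.3   # hypothetical logit coefficient, share coded 0-1

or_full_range = math.exp(beta_female_share)    # odds ratio, 0% -> 100% shift
or_10pp = math.exp(beta_female_share * 0.10)   # odds ratio, 10-point shift

print(f"odds ratio per 10pp increase in female share: {or_10pp:.3f}")
```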

Relinquishing Efficiency: NOLA Data Snapshots

There’s always plenty of bluster about the post-Katrina NOLA miracle. I’ve done a few posts on the topic, but none recently.

See:

The NOLA model of “relinquishment” continues to be pitched as a handy-dandy reformy solution for dismantling the dysfunctional urban school district and achieving miraculous gains in overall student outcomes (like those reported by CREDO) – of course, at little or no increased expense. Indeed, this latter piece is merely implied by the complete and utter silence on the question of just how much money is being thrown at this alternative model in order to prove it “works.”

The purpose of this post is merely to put some of this NOLA bluster into context, using readily available data sources, including the NCES Common Core of Data Public School Universe and NCES Fiscal Survey of Local Governments, along with CRDC/Ed Facts data released for states to conduct equity analyses to support their “teaching equity” plans.

First off, here are the pre, to post Katrina enrollment patterns for district and charter schools identified as within city boundaries of New Orleans:

Slide1

City enrollments remain far lower than they were pre-Katrina, and any comparisons of the present to that era – or even to the immediate post-Katrina era, when nearly all students remained displaced – are not useful. Most students are now in charter schools, meaning that establishing a “counterfactual” comparison of charter students against “non-charter” students, as in the typical charter pissing-match studies, is, well, rather difficult if not implausible.

As one might expect, once you’ve got most kids in charter schools, the charters must somewhat mirror the population that had been in district schools – and that remains in the few non-charters as of the final year of these data.

Slide2

Really, no surprises here. Of course, we might find a different story if I had readily available data on children with disabilities, by the severity of those disabilities.

This next graph shows the per pupil current spending over time.

Slide3

Now, that spike in 2006 is NOT because NOLA schools all of a sudden spent a whole lot more, but rather because the denominator – pupils – nearly disappeared. Per pupil spending goes up when pupils decline, if spending does not decline commensurately.

It’s a simple math thing. But, even after the system stabilized at its new level, the state of Louisiana has seen fit to boost spending for the Recovery School District to 55% higher than state average spending. Prior to Katrina, NOLA schools were merely at parity with state averages. That’s a substantive boost. And one I’m certainly not complaining about, given the needs of these children. But certainly any claims of NOLA miracles, if they do exist, must include conversation about the “massive infusion of funding” in relative terms associated with this “relinquishment” experiment.
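The denominator arithmetic above is easy to illustrate. The figures below are invented to mimic the pattern of the 2006 spike, not actual NOLA data:

```python
# Per-pupil spending is total spending / pupils, so a collapse in the
# denominator inflates the ratio even if total spending falls. Numbers
# below are made up to mimic the post-Katrina pattern, not actual data.
spending_before, pupils_before = 600_000_000, 65_000
spending_after, pupils_after = 400_000_000, 25_000   # spending down, pupils way down

pp_before = spending_before / pupils_before
pp_after = spending_after / pupils_after
print(f"${pp_before:,.0f} -> ${pp_after:,.0f} per pupil")
```

Total spending falls by a third here, yet per-pupil spending rises sharply, because pupils fell much faster.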

This increase (relative to surroundings) is greater than the boost received by Newark, NJ at any point during school funding litigation in NJ.

And where has some of that money gone? Well, this graph shows transportation expenditures per pupil over time.

Slide4

While a bit volatile from year to year, the NOLA experiment seems to be leading to at least DOUBLE the state average (non-rural) transportation spending per pupil – AND this is occurring in the most population-dense part of the state, where one would expect average transportation costs to be lower. To put these figures into context, taking the margin of difference in transportation spending as about $600 per pupil in the most recent year, that figure is about 6% of the state average $10k per pupil operational expense (that is, consuming the first 6 points of the 55% elevated spending on RSD, for a non-transportation RSD margin of 49% – still a healthy boost).
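Spelling out that back-of-the-envelope calculation (all figures approximate, as in the paragraph above):

```python
# Back-of-the-envelope: a ~$600 per-pupil transportation margin against
# ~$10,000 average operating spending consumes about 6 points of the
# 55% RSD spending advantage. All inputs are approximations from above.
transport_margin = 600          # $ per pupil above state average (approx.)
state_avg_spending = 10_000     # $ per pupil operating expense (approx.)
rsd_boost = 0.55                # RSD spending margin vs. state average

transport_share = transport_margin / state_avg_spending   # ~0.06
non_transport_margin = rsd_boost - transport_share        # ~0.49
print(f"{transport_share:.0%} of the boost is transportation; "
      f"{non_transport_margin:.0%} remains")
```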

But what had been going on at ground level – within the “district” across schools – when there still existed district and charter schools? Here are some snapshots of total staffing expenditures per pupil by school organized first by low income concentration and then by special education.

Slide5

Slide6

Visually, it would certainly appear that the edge was being given to charter schools in terms of resources. In which case, any policy inferences based on assertions that charter schools yielded better outcomes should certainly consider the influence of the additional resources. To clarify, the following table shows the output of a regression comparing per pupil staffing expenditure across charter and “other” schools in New Orleans, for schools serving similar shares of low income children and children with disabilities, and serving similar grade range distributions.

Slide7

On average, the CRDC/Ed Facts data indicate that charter schools in New Orleans were spending $1,604 per pupil more than were “other” schools serving similar student populations. And that’s a hefty boost given that spending ranged from about $4,000 to $8,000 for most schools. That’s 40% of the $4,000 figure.

Again, any interpretation of differential effectiveness of charters versus other schools in New Orleans should consider the potential relevance of a 40% differential in staffing expenditure per pupil.

Setting aside the HUGE ACCOUNTABILITY concerns associated with this model (which no one should ever set aside), and significant concerns over the legal rights of children and taxpayers (again, which should never be set aside), there are some potential lessons for pundits and policymakers here. If there is even a success story to be told in NOLA (of which I’m unconvinced), that success isn’t free, and it isn’t cheap.

Many pundits over time have ridiculed the Kansas City desegregation plan of the 1990s as the most inefficient experiment in social engineering of all time. Now, there’s much misguided bluster – urban legend – in those characterizations, as I’ve written in the past. Perhaps one of my greatest fears about the NOLA experiment is that it will provide more fodder for the assertion that money doesn’t matter. Heck, they’ve thrown a lot of money at this so far. They’re just not talking about it. It’s being spent on exorbitant transportation costs, among other things.

Strangely, for now, all I hear is silence from the anti-spending, efficiency warriors of the ed policy world when it comes to NOLA.  Does that mean that money really matters (accepting the NOLA miracle characterization), or, alternatively, is NOLA proving (by not substantively improving outcomes with a 55% boost in funding) that the inefficiencies of a 100% charter/choice/unified enrollment system are equal to or greater than those of the urban school district of the past?

Data notes:

The original data sources for the above analysis are:

  1. enrollment data: http://nces.ed.gov/ccd/pubschuniv.asp
  2. fiscal data (PPCSTOT – or current operating expenditure per pupil) http://www.census.gov/govs/school/
  3. CRDC/Ed Facts School Site staffing expenditure data: http://www2.ed.gov/programs/titleiparta/equitable/laepd.xlsx

For current operating expenditure comparisons, the State of Louisiana reports different per pupil spending figures, combining RSD-operated schools and Type 5 charters [whereas NCES reports RSD-operated schools, with students shifting over time from one – RSD-operated schools – to the other – charters – both under RSD governance]. Both, as far as I can tell from the relevant notations, exclude short-term emergency funds. And both are current spending (excluding capital investment) figures. State data are reported below. Notably, the margin of difference is smaller than in the operating expenditure figure above. But, interestingly, as more students shift to Type 5 charters, the margin of spending difference increases.

This is a trend worth watching over time. This margin, which is still substantial (and growing), might be consumed almost entirely by increased transportation expense, but may also continue to rise (or not?).

Note that these differences are unrelated to the school level CRDC/Ed Facts analyses above, which include independently reported staffing expenditure data on individual school sites where charter schools have sufficient additional resources  to substantially outspend (+40%) non-charters. These large differentials (huge for some schools) are likely a function of privately contributed resources which may not be showing up in either the State or NCES data.

Finally, there’s rarely need to speculate or make anecdotal claims about data being “wrong” or “different,” or whatever, when one can simply look up the relevant data and make the relevant comparison. Tables w/relevant URL citations can even be conveyed via twitter!

District(s) 2011-12 2012-13 2013-14
Other Parish Schools $     10,543 $     10,368 $     10,611
Orleans Parish School Board $     14,273 $     14,601 $     13,527
Recovery School District (Operated & Type 5 Charters)* $     11,420 $     11,665 $     11,998
RSD Margin over “Other” 8.3% 12.5% 13.1%
https://www.louisianabelieves.com/resources/library/fiscal-data
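The margin row in the table above can be recomputed directly from the reported per-pupil figures:

```python
# Recomputing the "RSD Margin over Other" row from the table above:
# margin = RSD per-pupil spending / other-parish spending - 1.
other = {"2011-12": 10_543, "2012-13": 10_368, "2013-14": 10_611}
rsd = {"2011-12": 11_420, "2012-13": 11_665, "2013-14": 11_998}

for year in other:
    margin = rsd[year] / other[year] - 1
    print(f"{year}: {margin:.1%}")
```

Run as written, this reproduces the 8.3%, 12.5% and 13.1% margins in the table.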

NYSED Recommends “Teacher Effectiveness Gnomes” to Fix Persistent Inequities

I guess I knew, when ED released its “teacher equity” regs in late fall of 2014, that we were in for a whole lot of stupid.

You see, there was some good in those regulations and the data released to accompany them. There was discussion of teacher salary and qualifications parity, and some financial measures provided that would allow states to do cursory analyses, based on 2011-12 data, of the extent to which there existed objectionable inequities in either cumulative salary expenditures per child across schools, or average salary expenditures. The idea was that states would set out plans to evaluate these disparities, using data provided and using their own data sources. And then, states would provide plans of action for mitigating the disparities. This is where I knew it could get silly.

But state officials in New York have far surpassed my wildest expectations. Here’s their first cut at this issue: http://www.regents.nysed.gov/meetings/2015Meetings/April/415p12hed2.pdf

In this memo, NYSED officials identify the following inequities:

According to the USED published equity profile, the average teacher in a highest poverty quartile school in New York earns $66,138 a year, compared to $87,161 for the average teacher in the lowest poverty quartile schools. (These numbers are adjusted to account for regional differences in the cost of living.) Information in the New York profile also suggests that students in high poverty schools are nearly three times more likely to have a first-year teacher, 22 times more likely to have an unlicensed teacher, and 11 times more likely to have a teacher who is not highly qualified.

& you know what? They’re right. Here’s the full continuum of average salaries and low income concentrations across NY state schools, first with, and then without NYC included.

Slide1

Slide2

As I’ve pointed out over, and over and over again on this blog, NY State maintains one of the least equitable educational systems in the nation. See, for example:

  1. On how New York State crafted a low-ball estimate of what districts needed to achieve adequate outcomes and then still completely failed to fund it.
  2. On how New York State maintains one of the least equitable state school finance systems in the nation.
  3. On how New York State’s systemic, persistent underfunding of high need districts has led to significant increases of numbers of children attending school with excessively large class sizes.
  4. On how New York State officials crafted a completely bogus, racially and economically disparate school classification scheme in order to justify intervening in the very schools they have most deprived over time.

Ah, but I’m just blowin’ hot air again, about that funding stuff, and the fact that NY State continues to severely underfund the highest need districts in the state, like this:

Slide2

But I digress. Who needs all of this silly talk (and actual data) about funding disparities anyway? And what do funding disparities possibly have to do with teacher equity problems, or salary disparities like those identified above by NYSED using USED data?

Well: https://www.youtube.com/watch?feature=player_detailpage&v=wfgnNI9-ImY&list=PLuzsMod17tiHrlaBvDcm2us_k68uxZcSy#t=801

Of course, NYSED officials know better – much better – what’s behind those ugly salary and, ultimately, teacher qualification disparities plaguing NY State schools. The ED regs require that states first identify problems/disparities. Then, ROOT CAUSES, thus leading to logical policy interventions – Strategery at its finest!

PROBLEM –> ROOT CAUSE –> STRATEGERY

So what then are the root causes of the disparities identified above by NYSED?

Through the collaborative sharing of lessons learned through the STLE program and research, the Department has determined that the following five common talent management struggles contribute significantly to equitable access:

  1. Preparation
  2. Hiring and recruitment
  3. Professional development and growth
  4. Selective retention
  5. Extending the reach of top talent to the most high-need students

Although the Department believes the challenges described here are reflective of broad “root causes” for the statewide equity gaps, it is still important for each LEA to examine their unique equity issues and potential root causes. In talking with superintendents, principals, and teachers involved in STLE, the Department was able to see that equity gaps that appear similar across contexts may in fact stem from different root causes in various LEAs. For example, one district struggling with inequitable access for low-performing students may find that inequities stem from a pool of low quality applicants, whereas a second district may find that they have a large pool of high quality applicants but tend to lose top talent early in their careers to neighboring districts who offer more leadership opportunities for teachers.

Ah… okay… I thought equitable funding to actually pay equitable salaries might have had something to do with it. How silly of me! It’s about bad teacher preparation programs which somehow produce bad teachers who ask for lower salaries in high poverty districts? And high poverty districts selectively retaining only their bad teachers, intentionally, by just not paying well? It’s a conspiracy that can be fixed by clever talent development strategies. No money needed, except some chump change in competitive grants.

And thus, if we know that bad teacher prep and crappy local management of talent is the root cause, the solutions are really easy?

The Department believes the overall quality of teaching and learning can be raised through the implementation of comprehensive systems of talent management, including sound implementation of the teacher and principal evaluation system.

Key Component 1 (Educator Preparation): The Department will continue to support and monitor improvements to access and entry into the profession, such as the redesign of teacher and principal preparation programs through performance-based assessments, clinically grounded instruction, and innovative new educator certification pathways.

Key Component 2 (Educator Evaluation): With the foundation laid by Education Law §3012-c, the Department will continue to provide support and monitoring to LEAs as they implement enhanced teacher and principal evaluation systems that meaningfully differentiate the effectiveness of educators and inform employment decisions.

Key Component 3 (The TLE Continuum): The Department will provide resources and support to LEAs utilizing evaluation results in the design and implementation of robust career ladder

All that’s missing from this brilliant plan are the teacher effectiveness gnomes.

So yeah… it all comes down to the state’s brilliant model for rating, ranking and dumping “bad” teachers to open the door to all the really good teachers who are currently waiting in line to work in schools that …

serve high concentrations of low income and minority students,

Slide6

have larger class sizes,

Slide5

and still (and moving forward) have the largest state aid shortfalls!

Slide4

What’s really great about all of this is that these teachers – all champing at the bit to work in these schools for low pay – can have it all! Funding gaps and greater needs. Note that the majority of “ineffective” teachers (as so declared by growth rating alone) are clustered in schools with high low-income concentrations and big aid gaps. Interestingly, even those in districts with fewer low income children are also in districts with big aid gaps.

CRDC Ed Facts Data – NY State 2011-12

To summarize – the framework laid out by ED, was:

PROBLEM –> ROOT CAUSE –> STRATEGERY

The brilliant application of that framework by NYSED was:

Problem=Huge salary & teacher qualification disparities by school poverty

Root Cause=Bad teachers, Teacher Prep & Administration

Strategery=Talent Development (fire bad teachers)

Are you kidding me? Really? In my wildest dreams…

To clarify – if it wasn’t already sufficiently clear – I do not at all accept that the patterns above represent the actual distribution of teacher effectiveness. Rather, the crappy measures adopted by NYSED for rating teacher effects on growth systematically disadvantage those teachers serving needier students, in larger classes, and in schools with scarcer resources.

Yeah… I get it… NYSED and the Regents don’t pull the budget strings. The Gov has done that damage. But that doesn’t make the logic of the NYSED brief any less ridiculous!

Head… desk…

Angry Andy’s not so generous state aid deal: A look at the 2015-16 Aid Runs in NY

Not much time to write about this, but I finally got my hands on the state aid runs for NY State school districts, which were, in an unprecedented and utterly obnoxious move by the Gov, held hostage throughout the budget “negotiations” (if we can call it that).

Quick review – NY operates a state aid calculation formula built on the premise that each district, given its geographic location (labor costs) and pupil needs, requires a certain target level of funding to achieve desired outcomes.

Target = Base x Pupil Needs x Regional Cost

The state then determines what share of that target shall be paid by local districts, the rest to be allocated in state aid.

State Aid = Target – Local Contribution
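The two formulas above can be sketched as follows. The base, need weight, regional cost index, and local contribution below are illustrative placeholders, not the state’s actual parameter values:

```python
# A minimal sketch of the foundation-aid arithmetic described above.
# All parameter values are hypothetical, chosen only for illustration.
def funding_target(base: float, need_weight: float, regional_cost: float) -> float:
    """Target = Base x Pupil Needs x Regional Cost (per pupil)."""
    return base * need_weight * regional_cost

def state_aid(target: float, local_contribution: float) -> float:
    """State Aid = Target - Local Contribution (floored at zero)."""
    return max(target - local_contribution, 0.0)

# Hypothetical high-need district: above-average needs and labor costs
target = funding_target(base=6_500, need_weight=1.4, regional_cost=1.2)
aid = state_aid(target, local_contribution=4_000)
print(f"target ${target:,.0f}/pupil, state aid ${aid:,.0f}/pupil")

# Lowering the base "levels down" every district's target, all else equal
lower = funding_target(base=6_000, need_weight=1.4, regional_cost=1.2)
print(f"base cut lowers the target by ${target - lower:,.0f}/pupil")
```

Note how a cut to the single base figure propagates through every district’s target, with the largest dollar effect in exactly the high-need, high-cost districts whose weights multiply it the most.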

A few really important points are in order before I move forward with the updated estimates. First, those targets are supposed to be aligned with costs of achieving desired outcomes. Higher outcomes cost more to achieve, with greater marginal cost effects where student needs are higher. As I’ve explained previously, the state has continued to increase those outcome targets, but has continued to lower the funding target. This is a formula for failure!

And, in 2015-16, they’ve done it again. The “base cost” figure which drives the formula has again been decreased, thus leveling down target funding across the board, all else equal.

Slide1

So, with this in mind, any/all funding gaps I discuss below should be considered only funding gaps with respect to what the state would like to pretend is its full funding obligation – in reality, a low-balled, manipulated figure that substantially downplays the true obligation with respect to current outcome goals. The actual full funding obligation, given increased standards over time, is likely much higher… much higher. There’s no excuse for lowering the target – and continuing, year after year, to push the date for hitting that target out further. None.

However, from the state perspective, this manipulative game of lowering the outcome target can make it appear that they are getting closer to hitting it. Separately, as I explained on another recent post, one can make the state aid shortfalls look less bad if one requires a higher local contribution, another game used in previous budget years.

Let’s start with the positive. Yes, the adopted state budget does, on average, increase per pupil state aid and does so in higher amounts in districts serving needier pupils:

Slide6

Not bad. We’ve got districts getting what would appear to be hundreds of dollars per pupil in increased state aid. But, remember, this is only a small dent in the funding gaps. Let’s first look at the funding gaps for 2015-16 for those districts Angry Andy called miserable failures who should be subjected to the death penalty.

[Slide 2]

Here we’ve got districts that, in the best case, are still being shorted around $1,500 per pupil in state aid. Every one of Angry Andy’s failing districts will continue to be substantially underfunded – against the state’s own low-ball estimates – for yet another year. All in the name of Angry Andy’s Awesome Austerity Experiment. Regarding a similar “experiment” in Kansas, a three-judge panel noted that the state was “experimenting with our children which have no recourse from a failure of the experiment.”

And what about the small city school districts, which recently had their case heard in Albany? Well, first off, some of them are among Angry Andy’s failures.

[Slide 3]

And generally, their state aid gaps remain large – really large. And again, these are gaps with respect to low-balled targets – and after jacking up the supposed local responsibility to fund those targets.

So, who’s to blame here? Well, obviously, it’s not the funding gaps – it’s those lazy teachers and the complicit administrators who give those teachers good ratings even when they can’t produce test score gains.

I close with an update of the 50 districts with the largest funding gaps going into 2015-16. And here they are:

[Slide 4]

[Slide 5]

  For previous reports/lists, see:

  1. Statewide Policy Brief with NYC Supplement: BBaker.NYPolicyBrief_NYC
  2. 50 Biggest Funding Gaps Supplement: 50 Biggest Aid Gaps 2013-14_15_FINAL

On School Finance Equity & Money Matters: A Primer

Conceptions of Equity, Equal Opportunity and Adequacy

Reforms to state school finance systems across the nation have focused on simultaneously achieving equal educational opportunity and educational adequacy. Achieving and maintaining educational adequacy requires a school finance system that consistently and equitably supports a desired level of educational outcomes; equal educational opportunity must be maintained even where funding falls below adequacy thresholds. That is, whatever the outcome currently attained across the system, that outcome should be equally attainable regardless of where a child resides or attends school and regardless of his or her background.

Conceptions of school finance equity and adequacy have evolved over the years. Presently, the central assumption is that state finance systems should be designed to provide children, regardless of where they live and attend school, with equal opportunity to achieve some constitutionally adequate level of outcomes.[i] Much is embedded in this statement and it is helpful to unpack it, one layer at a time.

The main concerns of advocates, policymakers, academics and state courts from the 1960s through the 1980s were to a) reduce the overall variation in per-pupil spending across local public school districts; and b) disrupt the extent to which that spending variation was related to differences in taxable property wealth across districts. That is, the goal was to achieve more equal dollar inputs – or nominal spending equity – coupled with fiscal neutrality – or reducing the correlation between local school resources and local property wealth. While modern goals of providing equal opportunity and achieving educational adequacy are more complex and loftier than mere spending equity or fiscal neutrality, achieving the more basic goals remains relevant and still elusive in many states.
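Fiscal neutrality, as described above, is often assessed by examining the relationship between local resources and local property wealth. A minimal sketch, using entirely hypothetical district data, of the simplest such check – the Pearson correlation between per-pupil spending and property wealth per pupil:

```python
# A minimal fiscal neutrality check (hypothetical data): the correlation
# between per-pupil spending and local property wealth per pupil.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical districts: property wealth per pupil, spending per pupil
wealth   = [250_000, 400_000, 550_000, 700_000, 900_000]
spending = [9_800, 10_500, 11_900, 12_400, 14_100]

r = pearson(wealth, spending)
# An r near 0 would indicate fiscal neutrality; a strong positive r
# indicates that spending tracks property wealth.
print(round(r, 2))
```

In this toy example the correlation is strongly positive – spending rises nearly in lockstep with property wealth – which is precisely the pattern the fiscal neutrality goal seeks to disrupt. Real equity analyses use more refined measures (wealth elasticities, for instance), but the intuition is the same.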

An alternative to nominal spending equity is to look at the real resources provided across children and school districts: the programs and services, staffing, materials, supplies and equipment, and educational facilities provided. (Still, the emphasis is on equal provision of these inputs.)[ii] Providing real resource equity may, in fact, require that per-pupil spending not be perfectly equal if, for example, resources such as similarly qualified teachers come at a higher price (competitive wage) in one region than in another. Real resource parity is more meaningful than mere dollar equity. Further, if one knows how the prices of real resources differ, one can better compare the value of the school dollar from one location to the next.

Modern conceptions of equal educational opportunity and educational adequacy shift emphasis away from schooling inputs and onto schooling outcomes and more specifically equal opportunity to achieve some level of educational outcomes. References to broad outcome standards in the school finance context often emanate from the seven standards[iii] articulated in Rose v. Council for Better Education,[iv] a school funding adequacy case in 1989 in Kentucky argued by scholars to be the turning point from equity toward adequacy in school finance legal theory.[v] There are two separable but often integrated goals here – equal opportunity and educational adequacy. The first goal is achieved where all students are provided the real resources to have equal opportunities to achieve some common level of educational outcomes. Because children come to school with varied backgrounds and needs, striving for common goals requires moving beyond mere equitable provision of real resources. For example, children with disabilities and children with limited English language proficiency may require specialized resources (personnel), programs, materials, supplies, and equipment. Schools and districts serving larger shares of these children may require substantively more funding to provide these resources. Further, where poverty is highly concentrated, smaller class sizes and other resource-intensive interventions may be required to strive for those outcomes commonly achieved by the state’s average child.

Meanwhile, conceptions of educational adequacy require that policymakers determine the desired level of outcome to be achieved. Essentially, adequacy conceptions attach a “level” of outcome expectation to the equal educational opportunity concept. Broad adequacy goals are often framed by judicial interpretation of state constitutions. It may well be that the outcomes achieved by the average child are deemed to be sufficient. But it may also be the case that the preferences of policymakers or a specific legal mandate are somewhat higher (or lower) than the outcomes achieved by the average child. The current buzz phrase is that schools should ensure that children are “college ready.” [vi]

One final distinction, pertaining to both equal educational opportunity and adequacy goals, is between striving to achieve equal or adequate outcomes and providing the resources that give children equal opportunity to achieve those outcomes, regardless of their backgrounds or where they live. Achieving equal outcomes is statistically unlikely at best, and of suspect policy relevance, given that perfect equality of outcomes requires leveling down (actual outcomes) as much as leveling up. The goal of school finance policy in particular is to provide the resources to offset pre-existing inequalities, so that no child has a systematically greater chance than any other of achieving the desired outcome levels.

[i] Baker, B. D., Green, P. C. (2009) Conceptions, Measurement and Application of Educational Adequacy Standards. In D.N. Plank (Ed.) AERA Handbook on Education Policy. New York: Routledge.

Baker, B., & Green, P. (2008). Conceptions of equity and adequacy in school finance. In Handbook of Research in Education Finance and Policy (pp. 203-221). New York: Routledge.

[ii] While often treated as a newer approach than analysis of pure fiscal inputs, equity evaluations of real resources pre-date modern school finance equity analysis; they were used, for example, to evaluate the uniformity of segregated black and white schools operating in the pre-Brown, “separate but equal” era.

Baker, B. D., & Green, P. C. (2009). Does increased state involvement in public schooling necessarily increase equality of educational opportunity? The Rising State: How State Power is Transforming Our Nation’s Schools, 133.

[iii] As per the court’s declaration: “an efficient system of education must have as its goal to provide each and every child with at least the seven following capacities: (i) sufficient oral and written communication skills to enable students to function in a complex and rapidly changing civilization; (ii) sufficient knowledge of economic, social, and political systems to enable the student to make informed choices; (iii) sufficient understanding of governmental processes to enable the student to understand the issues that affect his or her community, state, and nation; (iv) sufficient self-knowledge and knowledge of his or her mental and physical wellness; (v) sufficient grounding in the arts to enable each student to appreciate his or her cultural and historical heritage; (vi) sufficient training or preparation for advanced training in either academic or vocational fields so as to enable each child to choose and pursue life work intelligently; and (vii) sufficient levels of academic or vocational skills to enable public school students to compete favorably with their counterparts in surrounding states, in academics or in the job market.”

Rose v. Council for Better Educ., Inc., 790 S.W.2d 186, 212(Ky. 1989).

http://law-apache.uky.edu/wordpress/wp-content/uploads/2012/06/Thro-II.pdf

[iv] Rose v. Council for Better Educ., Inc., 790 S.W.2d 186 (Ky. 1989).

[v] Clune, W. H. (1994). The shift from equity to adequacy in school finance. Educational Policy, 8(4), 376-394.

[vi] http://www.parcconline.org/pennsylvania

School Finance Reforms & Student Outcomes

There exists an increasing body of evidence that substantive and sustained state school finance reforms matter for improving both the level and distribution of short-term and long-run student outcomes. A few studies have attempted to tackle school finance reforms broadly applying multi-state analyses over time. Card and Payne (2002) found “evidence that equalization of spending levels leads to a narrowing of test score outcomes across family background groups.”[i] (p. 49) Most recently, Jackson, Johnson & Persico (2015) evaluated long-term outcomes of children exposed to court-ordered school finance reforms, finding that “a 10 percent increase in per-pupil spending each year for all twelve years of public school leads to 0.27 more completed years of education, 7.25 percent higher wages, and a 3.67 percentage-point reduction in the annual incidence of adult poverty; effects are much more pronounced for children from low-income families.”(p. 1) [ii]

Numerous other researchers have explored the effects of specific state school finance reforms over time, applying a variety of statistical methods to evaluate how changes in the level and targeting of funding affect changes in outcomes achieved by students directly affected by those funding changes. Figlio (2004) explains that the influence of state school finance reforms on student outcomes is perhaps better measured within states over time, explaining that national studies of the type attempted by Card and Payne confront problems of a) the enormous diversity in the nature of state aid reform plans, and b) the paucity of national level student performance data.[iii]

Several such studies provide compelling evidence of the potential positive effects of school finance reforms. Studies of Michigan school finance reforms in the 1990s have shown positive effects on student performance in both the previously lowest spending districts, [iv] and previously lower performing districts. [v] Similarly, a study of Kansas school finance reforms in the 1990s, which also involved primarily a leveling up of low-spending districts, found that a 20 percent increase in spending was associated with a 5 percent increase in the likelihood of students going on to postsecondary education.[vi]

Three studies of Massachusetts school finance reforms from the 1990s find similar results. The first, by Thomas Downes and colleagues, found that the combination of funding and accountability reforms “has been successful in raising the achievement of students in the previously low-spending districts.”(p. 5)[vii]The second found that “increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students.”[viii] The most recent of the three, published in 2014 in the Journal of Education Finance, found that “changes in the state education aid following the education reform resulted in significantly higher student performance.”(p. 297)[ix] Such findings have been replicated in other states, including Vermont. [x]

Nonetheless, the role of money in improving student outcomes is often contested. Baker (2012) explains the evolution of assertions regarding the unimportance of money for improving student outcomes, pointing out that these assertions emanate in part from misrepresentations of the work of Coleman and colleagues in the 1960s, which found that school factors seemed less associated with student outcome differences than did family factors. This was not to suggest, however, that school factors were entirely unimportant, and more recent re-analyses of the Coleman data, using more advanced statistical techniques than were available at the time, clarify the relevance of schooling resources.[xi]

Hanushek (1986) ushered in the modern era of the “money doesn’t matter” argument with a study in which he tallied studies reporting positive and negative correlations between spending measures and student outcome measures, proclaiming as his major finding:

“There appears to be no strong or systematic relationship between school expenditures and student performance.” (p. 1162)[xii]

Baker (2012) summarized re-analyses of the studies tallied by Hanushek, wherein authors applied quality standards to determine study inclusion, finding that more of the higher quality studies yielded positive findings with respect to the relationship between schooling resources and student outcomes.[xiii] While Hanushek’s above characterization continues to permeate policy discourse over school funding, often used as evidence that “money doesn’t matter,” it is critically important to understand that this statement is merely one of uncertainty about the direct correlation between spending measures and outcome measures, based on studies prior to 1986. Neither this statement, nor the crude tally behind it ever provided any basis for assuming with certainty that money doesn’t matter.

A separate body of literature challenges the assertion of positive influence of state school finance reforms in general and court ordered reforms in particular. Baker and Welner (2011) explain that much of this literature relies on anecdotal characterizations of lagging student outcome growth following court ordered infusions of new funding. Hanushek and Lindseth (2009) provide one example of this anecdote-driven approach in a book chapter which seeks to prove that court-ordered school funding reforms in New Jersey, Wyoming, Kentucky, and Massachusetts resulted in few or no measurable improvements. However, these conclusions are based on little more than a series of descriptive graphs of student achievement on the National Assessment of Educational Progress in 1992 and 2007 and an undocumented assertion that, during that period, each of the four states infused substantial additional funds into public education in response to judicial orders. That is, the authors merely assert that these states experienced large infusions of funding, focused on low income and minority students, within the time period identified. They necessarily assume that, in all other states which serve as a comparison basis, similar changes did not occur. Yet they validate neither assertion.

Baker and Welner (2011) explain that Hanushek and Lindseth failed to measure whether substantive changes had occurred to the level or distribution of school funding, as well as when and for how long. In New Jersey, for example, the infusion of funding occurred from 1998 to 2003 (or 2005), so Hanushek and Lindseth’s window includes six years on the front end where little change occurred. Kentucky reforms had largely faded by the mid to late 1990s, yet Hanushek and Lindseth measure post-reform effects in 2007. Further, in New Jersey, funding was infused into approximately 30 specific districts, but Hanushek and Lindseth explore overall changes to outcomes among low-income and minority children using NAEP data, where some of these children attended the districts receiving additional support but many did not.[xiv] Finally, the authors concede that Massachusetts did, in fact, experience substantive achievement gains, but attribute those gains to changes in accountability policies rather than funding.

In an equally problematic analysis, Neymotin (2010) set out to show that court-ordered infusions of funding in Kansas following Montoy v. Kansas led to no substantive improvements in student outcomes. However, Neymotin evaluated changes in school funding from 1997 to 2006, while the first additional funding following the January 2005 Supreme Court decision arrived in the 2005-06 school year – the end point of Neymotin’s outcome data.[xv] Finally, Greene and Trivitt (2008) present a study in which they claim to show that court-ordered school finance reforms led to no substantive improvements in student outcomes. However, the authors test only whether the presence of a court order is associated with changes in outcomes; they never measure whether substantive school finance reforms followed the court order, yet still conclude that court-ordered funding increases had no effect.[xvi]

To summarize, there exist no methodologically competent analyses yielding convincing evidence that significant and sustained funding increases provide no educational benefits, and a relative few which do not show decisively positive effects.[xvii] On balance, it is safe to say that a sizeable and growing body of rigorous empirical literature validates that state school finance reforms can have substantive, positive effects on student outcomes, including reductions in outcome disparities or increases in overall outcome levels.[xviii]

Schooling Resources & Student Outcomes

The premise that money matters for improving school quality is grounded in the assumption that having more money provides schools and districts the opportunity to improve the qualities and quantities of real resources. The primary resources involved in the production of schooling outcomes are human resources – or quantities and qualities of teachers, administrators, support and other staff in schools. Quantities of school staff are reflected in pupil to teacher ratios and average class sizes. Reduction of class sizes or reductions of overall pupil to staff ratios require additional staff, thus additional money, assuming the wages and benefits for additional staff remain constant. Qualities of school staff depend in part on the compensation available to recruit and retain them – specifically salaries and benefits, in addition to working conditions. Notably, working conditions may be reflected in part through measures of workload, like average class sizes, as well as the composition of the student population.
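The arithmetic behind “additional staff, thus additional money” is straightforward. A back-of-the-envelope sketch, using hypothetical enrollment and compensation figures, of what a modest class size reduction costs when wages and benefits are held constant:

```python
# Back-of-the-envelope sketch (hypothetical figures) of why class size
# reduction requires additional money, holding compensation constant.
import math

enrollment = 2_000          # students in a hypothetical district
avg_compensation = 75_000   # hypothetical salary + benefits per teacher

def teachers_needed(students, class_size):
    """Classroom teachers required to serve all students at a given class size."""
    return math.ceil(students / class_size)

before = teachers_needed(enrollment, 25)   # 80 teachers at class size 25
after  = teachers_needed(enrollment, 20)   # 100 teachers at class size 20

added_cost = (after - before) * avg_compensation
print(after - before, added_cost)   # 20 additional teachers, $1,500,000
```

And this counts only compensation: a real district would also need 20 additional classrooms, a constraint the discussion of class size tradeoffs returns to below.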

A substantial body of literature has accumulated to validate the conclusion that both teachers’ overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olson (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[xix] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[xx] Loeb and Page (2000) tackled the specific issues of relative pay noted above. They showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[xxi]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[xxii]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[xxiii]

Others have argued that the dominant structure of teacher compensation which ties salary growth to years of experience and degrees obtained, despite weak correlations between those measures and student achievement gains, creates inefficiencies that negate the overall relationship between school spending and school quality. [xxiv] This argument is built on the assertion that existing funds could instead be used to compensate teachers according to (measures of) their effectiveness, while dismissing high cost “ineffective” teachers, replacing them with better ones with existing resources, thus achieving better outcomes with the same or less money.[xxv]

This argument depends on three assumptions. First, that adopting a pay-for-performance, rather than step-and-lane salary model would dramatically improve performance at the same or less expense. Second, that shedding the “bottom 5% of teachers” according to statistical estimates of their “effectiveness” can lead to dramatic improvements at equal or lower expense. Third and finally, both the incentive pay argument and deselecting the bottom 5% argument depend on sufficiently accurate and precise measures of teaching effectiveness, across settings and children.

Existing studies of pay-for-performance compensation models fail to provide empirical support for this argument – either that these alternatives can substantially boost outcomes, or that they can do so at equal or lower total salary expense.[xxvi] Simulations purporting to validate the long-run benefits of deselecting “bad” teachers depend on the assumption that the average pool of replacements lining up to take those jobs is substantively better than those who were let go (average replacing “bad”). Simulations promoting the benefits of “bad teacher” deselection assume this to be true, without empirical basis, and without consideration of the potential labor market consequences of the deselection policy itself.[xxvii] Finally, existing measures of teacher “effectiveness” fall well short of these demands.[xxviii]

Most importantly, arguments about the structure of teacher compensation miss the bigger point – the average level of compensation matters with respect to the average quality of the teacher labor force. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.

Ample research indicates that children in smaller classes achieve better outcomes, both academic and otherwise, and that class size reduction can be an effective strategy for closing racial or socio-economic achievement gaps. [xxix] While it’s certainly plausible that other uses of the same money might be equally or even more effective, there is little evidence to support this. For example, while we are quite confident that higher teacher salaries may lead to increases in the quality of applicants to the teaching profession and increases in student outcomes, we do not know whether the same money spent toward salary increases would achieve better or worse outcomes than if it were spent toward class size reduction. Some have raised concerns that large-scale class size reductions can lead to unintended labor market consequences that offset some of the gains attributable to class size reduction (such as the inability to recruit enough fully qualified teachers). For example, studies of California’s statewide class size reduction initiative suggest that as districts across the socioeconomic spectrum reduced class sizes, fewer high quality teachers were available in high poverty settings.[xxx]

Many over time have argued the need for more precise cost/benefit analysis regarding the tradeoffs between applying funding to class size reduction versus increased compensation.[xxxi] Still, the preponderance of existing evidence suggests that the additional resources expended on class size reductions do result in positive effects. Both reductions to class sizes and improvements to competitive wages can yield improved outcomes, but the efficiency gains of choosing one strategy over the other are unclear, and local public school districts rarely have complete flexibility to make tradeoffs.[xxxii] Class size reduction may be constrained by available classrooms. Smaller class sizes and reduced total student loads are a relevant working condition simultaneously influencing teacher recruitment and retention.[xxxiii] That is, providing smaller classes may partly offset the need for higher wages for recruiting or retaining teachers. High poverty schools require a both/and rather than either/or strategy when it comes to smaller classes and competitive wages.

As discussed above, achieving equal educational opportunity requires leveraging additional real resources, lower class sizes and more intensive support services, in high need settings. Merely achieving equal qualities of real resources, including equally qualified teachers, likely requires higher competitive wages, not merely equal pay in a given labor market. As such, higher need settings may require substantially greater financial inputs than lower need settings. Lacking sufficient financial inputs to do both, districts must choose one or the other. In some cases, higher need districts may lack sufficient resources to do either.

Notes

[i] Card, D., and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

[ii] Jackson, C. K., Johnson, R., & Persico, C. (2014). The Effect of School Finance Reforms on the Distribution of Spending, Academic Achievement, and Adult Outcomes (No. w20118). National Bureau of Economic Research.

Jackson, C. K., Johnson, R., & Persico, C. (2015). The Effects of School Spending on Educational and Economic Outcomes: Evidence from School Finance Reforms (No. w 20847) National Bureau of Economic Research.

[iii] Figlio, D. N. (2004) Funding and Accountability: Some Conceptual and Technical Issues in State Aid Reform. In Yinger, J. (Ed.) p. 87-111 Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. MIT Press.

[iv] Roy, J. (2011). Impact of school finance reform on resource equalization and academic performance: Evidence from Michigan. Education Finance and Policy, 6(2), 137-167.

Roy (2011) published an analysis of the effects of Michigan’s 1990s school finance reforms which led to a significant leveling up for previously low-spending districts. Roy, whose analyses measure both whether the policy resulted in changes in funding and who was affected, found that “Proposal A was quite successful in reducing interdistrict spending disparities. There was also a significant positive effect on student performance in the lowest-spending districts as measured in state tests.” (p. 137)

[v] Papke, L. (2005). The effects of spending on test pass rates: evidence from Michigan. Journal of Public Economics, 89(5-6). 821-839.

Hyman, J. (2013). Does Money Matter in the Long Run? Effects of School Spending on Educational Attainment. http://www-personal.umich.edu/~jmhyman/Hyman_JMP.pdf.

Papke (2005), also evaluating Michigan school finance reforms from the 1990s, found that “increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance.” (p. 821)

Most recently, Hyman (2013) also found positive effects of Michigan school finance reforms in the 1990s, but raised some concerns regarding the distribution of those effects. Hyman found that much of the increase was targeted to schools serving fewer low-income children. But the study did find that students exposed to “an additional 12% more spending per year during grades four through seven experienced a 3.9 percentage point increase in the probability of enrolling in college, and a 2.5 percentage point increase in the probability of earning a degree.” (p. 1)

[vi] Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284. (p. 275)

[vii] Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA. MassINC.

[viii] Guryan, J. (2001). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

“The magnitudes imply a $1,000 increase in per-pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures.” (p. 1)

[ix] Nguyen-Hoang, P., & Yinger, J. (2014). Education Finance Reform, Local Behavior, and Student Performance in Massachusetts. Journal of Education Finance, 39(4), 297-322.

[x] Downes had conducted earlier studies of Vermont school finance reforms in the late 1990s (Act 60). In a 2004 book chapter, Downes noted “All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post-Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance.” (p. 312)

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (Ed.), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

[xi] Konstantopoulos, S., Borman, G. (2011) Family Background and School Effects on Student Achievement: A Multilevel Analysis of the Coleman Data. Teachers College Record. 113 (1) 97-132

Borman, G.D., Dowling, M. (2010) Schools and Inequality: A Multilevel Analysis of Coleman’s Equality of Educational Opportunity Data. Teachers College Record. 112 (5) 1201-1246

[xii] Hanushek, E.A. (1986) Economics of Schooling: Production and Efficiency in Public Schools. Journal of Economic Literature 24 (3) 1141-1177. A few years later, Hanushek paraphrased this conclusion in another widely cited article as “Variations in school expenditures are not systematically related to variations in student performance.”

Hanushek, E.A. (1989) The impact of differential expenditures on school performance. Educational Researcher. 18 (4) 45-62

Hanushek describes the collection of studies relating spending and outcomes as follows:

“The studies are almost evenly divided between studies of individual student performance and aggregate performance in schools or districts. Ninety-six of the 147 studies measure output by score on some standardized test. Approximately 40 percent are based upon variations in performance within single districts while the remainder look across districts. Three-fifths look at secondary performance (grades 7-12) with the rest concentrating on elementary student performance.” (fn #25)

[xiii] Baker, B. D. (2012). Revisiting the Age-Old Question: Does Money Matter in Education?. Albert Shanker Institute.

Relevant re-analyses include:

Greenwald, R., Hedges, L., Laine, R. (1996) The Effect of School Resources on Student Achievement. Review of Educational Research 66 (3) 361-396

Wenglinsky, H. (1997) How Money Matters: The effect of school district spending on academic achievement. Sociology of Education 70 (3) 221-237

[xiv] Hanushek (2006) goes so far as to title a concurrently produced volume on the same topic “How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm Our Children.” [emphasis added] The premise that additional funding for schools – often leveraged toward class size reduction, additional course offerings or increased teacher salaries – causes harm to children is, on its face, absurd. The book whose title implies as much never once validates that such reforms cause observable harm. Rather, the title is little more than a manipulative attempt to instill fear of pending harm in the mind of the uncritical spectator. The book also includes two examples of a type of analysis that occurred with some frequency in the mid-2000s, which also had the intent of showing that school funding doesn’t matter. These studies would cherry-pick anecdotal information on a) poorly funded schools that have high outcomes and/or b) well-funded schools that have low outcomes (see Evers & Clopton, 2006; Walberg, 2006).

[xv] Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Hanushek, E. A., and Lindseth, A. (2009). Schoolhouses, Courthouses and Statehouses. Princeton, N.J.: Princeton University Press., See also: http://edpro.stanford.edu/Hanushek/admin/pages/files/uploads/06_EduO_Hanushek_g.pdf

Hanushek, E. A. (ed.). (2006). Courting failure: How school finance lawsuits exploit judges’ good intentions and harm our children (No. 551). Hoover Press.

Evers, W. M., and Clopton, P. (2006). “High-Spending, Low-Performing School Districts,” in Courting Failure: How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm our Children (Eric A. Hanushek, ed.) (pp. 103-194). Palo Alto, CA: Hoover Press.

Walberg, H. (2006) High Poverty, High Performance Schools, Districts and States. in Courting Failure: How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm our Children (Eric A. Hanushek, ed.) (pp. 79-102). Palo Alto, CA: Hoover Press.

[xvi] Greene, J. P., & Trivitt, J. R. (2008). Can Judges Improve Academic Achievement? Peabody Journal of Education, 83(2), 224-237.

Neymotin, F. (2010) The Relationship between School Funding and Student Achievement in Kansas Public Schools. Journal of Education Finance 36 (1) 88-108.

[xvii] Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

[xviii] Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Two reports from Cato Institute are illustrative (Ciotti, 1998, Coate & VanDerHoff, 1999).

Ciotti, P. (1998). Money and School Performance: Lessons from the Kansas City Desegregation Experience. Cato Policy Analysis #298.

Coate, D. & VanDerHoff, J. (1999). Public School Spending and Student Achievement: The Case of New Jersey. Cato Journal, 19(1), 85-99.

[xix] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of stay in teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[xx] David N. Figlio (2002) “Can Public Schools Buy Better-Qualified Teachers?” Industrial and Labor Relations Review 55, 686-699. David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271. Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation. 28 (2) 465-498.

[xxi] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[xxii] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics. April, 49-71

See also:

Downes, T. A., Figlio, D. N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[xxiii] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

[xxiv] Hanushek, E. A. (2011). The economic value of higher teacher quality. Economics of Education Review, 30(3), 466-479.

[xxv] Hanushek, E. A. (2009). Teacher deselection. Creating a new teaching profession, 168, 172-173.

[xxvi] Springer, M. G., Ballou, D., Hamilton, L., Le, V. N., Lockwood, J. R., McCaffrey, D. F., … & Stecher, B. M. (2011). Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching (POINT). Society for Research on Educational Effectiveness.

Yuan, K., Le, V. N., McCaffrey, D. F., Marsh, J. A., Hamilton, L. S., Stecher, B. M., & Springer, M. G. (2012). Incentive Pay Programs Do Not Affect Teacher Motivation or Reported Practices Results From Three Randomized Studies. Educational Evaluation and Policy Analysis, 0162373712462625.

Goodman, S. F., & Turner, L. J. (2013). The design of teacher incentive pay and educational outcomes: Evidence from the New York City bonus program. Journal of Labor Economics, 31(2), 409-420.

Goodman, S., & Turner, L. (2011). Does Whole-School Performance Pay Improve Student Learning? Evidence from the New York City Schools. Education Next, 11(2), 67-71.

[xxvii] Baker, B. D., Oluwole, J. O., & Green III, P. C. (2013). The Legal Consequences of Mandating High Stakes Decisions Based on Low Quality Information: Teacher Evaluation in the Race-to-the-Top Era. education policy analysis archives, 21(5), n5.

[xxviii] Baker, B. D., Oluwole, J. O., & Green III, P. C. (2013). The Legal Consequences of Mandating High Stakes Decisions Based on Low Quality Information: Teacher Evaluation in the Race-to-the-Top Era. education policy analysis archives, 21(5), n5.

[xxix] See http://www2.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf;

Jeremy D. Finn and Charles M. Achilles, “Tennessee’s Class Size Study: Findings, Implications, Misconceptions,” Educational Evaluation and Policy Analysis, 21, no. 2 (Summer 1999): 97-109;

Jeremy Finn et al., “The Enduring Effects of Small Classes,” Teachers College Record, 103, no. 2, (April 2001): 145–183; http://www.tcrecord.org/pdf/10725.pdf;

Alan Krueger, “Would Smaller Class Sizes Help Close the Black-White Achievement Gap.” Working Paper #451 (Princeton, NJ: Industrial Relations Section, Department of Economics, Princeton University, 2001) http://www.irs.princeton.edu/pubs/working_papers.html;

Henry M. Levin, “The Public Returns to Public Educational Investments in African American Males,” Dijon Conference, University of Bourgogne, France. May 2006. http://www.u-bourgogne.fr/colloque-iredu/posterscom/communications/LEVIN.pdf;

Spyros Konstantopoulos and Vicki Chun, “What Are the Long-Term Effects of Small Classes on the Achievement Gap? Evidence from the Lasting Benefits Study,” American Journal of Education 116, no. 1 (November 2009): 125-154.

[xxx] Jepsen, C., Rivkin, S. (2002) What is the Tradeoff Between Smaller Classes and Teacher Quality? NBER Working Paper # 9205, Cambridge, MA. http://www.nber.org/papers/w9205

“The results show that, all else equal, smaller classes raise third-grade mathematics and reading achievement, particularly for lower-income students. However, the expansion of the teaching force required to staff the additional classrooms appears to have led to a deterioration in average teacher quality in schools serving a predominantly black student body. This deterioration partially or, in some cases, fully offset the benefits of smaller classes, demonstrating the importance of considering all implications of any policy change.” p. 1

For further discussion of the complexities of evaluating class size reduction in a dynamic policy context, see:

David Sims, “A Strategic Response to Class Size Reduction: Combination Classes and Student Achievement in California,” Journal of Policy Analysis and Management, 27(3) (2008): 457–478

David Sims, “Crowding Peter to Educate Paul: Lessons from a Class Size Reduction Externality,” Economics of Education Review, 28 (2009): 465–473.

Matthew M. Chingos, “The Impact of a Universal Class-Size Reduction Policy: Evidence from Florida’s Statewide Mandate,” Program on Education Policy and Governance Working Paper 10-03 (2010).

[xxxi] Ehrenberg, R.G., Brewer, D., Gamoran, A., Willms, J.D. (2001) Class Size and Student Achievement. Psychological Science in the Public Interest 2 (1) 1-30

[xxxii] Baker, B., & Welner, K. G. (2012). Evidence and rigor scrutinizing the rhetorical embrace of evidence-based decision making. Educational Researcher, 41(3), 98-101.

[xxxiii] Loeb, S., Darling-Hammond, L., & Luczak, J. (2005). How teaching conditions predict teacher turnover in California schools. Peabody Journal of Education, 80(3), 44-70.

Isenberg, E. P. (2010). The Effect of Class Size on Teacher Attrition: Evidence from Class Size Reduction Policies in New York State. US Census Bureau Center for Economic Studies Paper No. CES-WP-10-05.

Angry Andy’s Failing Schools & the Finger of Blame

NY Governor Andrew Cuomo’s office has released a report in which it identifies what it refers to in bold type on the cover as “Failing Schools.”

Report here: https://www.governor.ny.gov/sites/governor.ny.gov/files/atoms/files/NYSFailingSchoolsReport.pdf

Presumably, these are the very schools on which Angry Andy would like to impose death penalties – or so he has opined in the past.

The report identifies 17 districts in particular that are home to failing schools. The point of the report is to assert that the incompetent bureaucrats, high-paid administrators and lazy teachers in these schools simply aren’t getting the job done and must be punished/relieved of their duties. Angry Andy has repeatedly and vociferously asserted that he and his less rabid predecessors have poured obscene sums of funding into these districts for decades. Thus – it’s their fault, certainly not his, that they stink!

Slide3

Slide4

I have addressed over and over again on this blog the plight of high need, specifically small city school districts under Governor Cuomo.

  1. On how New York State crafted a low-ball estimate of what districts needed to achieve adequate outcomes and then still completely failed to fund it.
  2. On how New York State maintains one of the least equitable state school finance systems in the nation.
  3. On how New York State’s systemic, persistent underfunding of high need districts has led to significant increases in the number of children attending schools with excessively large class sizes.
  4. On how New York State officials crafted a completely bogus, racially and economically disparate school classification scheme in order to justify intervening in the very schools they have most deprived over time.

I have also written reports on New York State’s underfunding of the school finance formula – a formula adopted to comply with prior court order in CFE v. State.

  1. Statewide Policy Brief with NYC Supplement: BBaker.NYPolicyBrief_NYC
  2. 50 Biggest Funding Gaps Supplement: 50 Biggest Aid Gaps 2013-14_15_FINAL

Among my reports is one in which I identified the 50 districts with the biggest state aid shortfalls with respect to what the state itself says these districts require for providing a sound basic (constitutional standard) education.  Districts across NY state have funding gaps for a variety of reasons, but I have shown in the past that it is generally districts with greater needs – high poverty concentrations & more children with limited English language proficiency, as well as more minority children – that tend to have larger funding gaps.

I have also pointed out very recently on this blog that some high need upstate cities in NY have had persistently inequitable/inadequate funding for decades, including this one from Angry Andy’s hit list.

Slide4

Personally, even I was shocked to see the relationship between my 50 most underfunded districts list and Angry Andy’s 17 districts that suck.

NY State has over 650 school districts, many of which may be showing relatively low test scores for a variety of reasons, including & especially due to serving high concentrations of needy students.

Based on my updated 2015 runs (final adopted budget), 12 of Angry Andy’s sucky 17 were among the 50 districts with the largest state aid shortfalls.

Yeah… that’s right… 12 of 17 had really big funding shortfalls.

5 of the top 10 biggest funding shortfall districts are on Angry Andy’s list. Yeah… the list of schools that have supposedly been subjected to obscene amounts of support and additional funding, but due only to their own ineptitude, have failed.

So how big are those funding shortfalls? How much state aid is supposed to be allocated to these districts to provide a sound basic education? Here are a few cuts at the numbers. First, here are the failing 17, by their state aid gap rank for 2014 and 2015. Included also are their state aid gaps per Total Aidable Foundation Pupil Unit (TAFPU). Note that their gaps per actual warm body – enrolled pupil – are larger (TAFPU includes some additional “weighted” pupils).

But even with this conservative figure, Hempstead’s gap – the amount of state aid they are not getting with respect to their calculated target – is over $6,000 per pupil. Yes – OVER $6,000 PER PUPIL!  (where’s that NY lottery guy when you need him?). Note that the apparent reduction in gaps from 2014 to 2015 occurs due to a manipulation by the state of funding targets and required local contributions – with a smaller share of that reduction actually coming from new state aid.
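For readers who want to check the arithmetic, here is a quick sketch of how a per-pupil gap works out. The numbers below are made-up round figures for a hypothetical district – not actual Hempstead or NY data – just to show why the per-TAFPU version is the conservative one:

```python
# Hypothetical district – round numbers for illustration, not actual NY data
target_aid = 120_000_000   # state-calculated foundation aid target ($)
actual_aid = 80_000_000    # state aid actually received ($)
enrollment = 6_000         # enrolled pupils ("warm bodies")
tafpu = 7_500              # Total Aidable Foundation Pupil Units (weighted)

gap = target_aid - actual_aid

# Per-TAFPU divides by a bigger (weighted) denominator, so it understates
# the shortfall per actual enrolled child
print(f"gap per TAFPU: ${gap / tafpu:,.0f}")                # $5,333
print(f"gap per enrolled pupil: ${gap / enrollment:,.0f}")  # $6,667
```

Same subtraction either way; only the denominator changes.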

Slide2

All of these are high need districts, having Pupil Need Index values well above 1.5.

Here’s what it looks like in graph form, with local contribution, actual state aid and the gap identified.

Slide1

In some cases, the actual state aid received is not a whole lot more than the gap. All of Angry Andy’s failing districts have substantial shortfalls from the funding targets.  Funding targets that were specifically identified as funding needed to achieve desired outcome levels.

Notably, as I’ve explained in the past – the outcome levels used for determining those funding targets were much lower than the outcome levels expected under the state’s current testing and accountability system.

Even then, the state’s approach to estimating the cost of achieving those (much lower) outcomes results in a low-ball manipulated number. (I actually have a book chapter that explains this as an exemplar of classic school finance manipulation)

So, where should that finger of blame point here? 

Or is this just how things work these days – slash the funding of the highest need districts – call them failing – close their schools – give their property and their teacher’s jobs to someone else – and claim victory – leaving others, years down the line to clean up your mess?

Angry Andy – this is your mess. Now do the right thing and fix it!

 

 

Disclaimer: Yes, I spent all day Monday this week testifying at trial about the funding shortfalls for New York State districts, specifically Small City districts with a pending lawsuit against the state. My opinions are the same here as they were there, and have been for several years as reflected in numerous published sources. That’s because my opinions here merely reflect the factual status of the state school finance system in New York, as represented by the state’s own formula calculations and data.

 

 

 

Families for Excellent Schools Totally Bogus Analysis of NYC Schools

Families for Excellent Schools of New York – the Don’t Steal Possible folks – has just released an impossibly stupid analysis in which they claim that New York City is simply throwing money at failure, spending on failing schools double what it spends on totally awesome ones (if they really have any awesome ones). A link to their press release is here:

http://www.familiesforexcellentschools.org/news/press-release-cost-failure

And what is their astounding new evidence that validates that NYC is stealing possible by throwing money at failing schools? Well, they ever so carefully identified the 50 worst and 50 totally awesomest schools in the city, and then took the average of their per pupil budgets to show that the worst schools are substantially outspending the awesomest ones. Thus – money doesn’t matter- especially when in the hands of schools under the governance of their nemesis Mayor BDB and his possible-thieving lackeys.

Oh, where to even begin on this analysis. Let’s peel it all back a little, one layer at a time. Let’s begin with the fact that New York City a while back, under their favored Mayor Bloomberg, adopted something called Fair Student Funding. That formula was designed to drive systematically more money to schools serving higher need populations, including schools with higher shares of children with disabilities and higher shares of low income children. In other words, we would expect that the schools with the largest per pupil budgets in the city would be the ones serving the highest need student populations.

At the same time, we might expect that schools serving the neediest pupils might have lower average performance, even with the additional resources to leverage. The Families for Excellent Schools analysis fails to take into account any of this, using the worst possible outcome measures and failing entirely to consider differences in needs and related costs when evaluating the spending measures. Thus, they draw bold conclusions like the following:

At the middle school level, the bottom 50 schools received an average $30,256 per pupil, compared with $16,277 at the top 50 middle schools.

I happen to have a rich data set on NYC schools from 2008 to 2010, including charter schools, which has been used in this report and in a forthcoming peer reviewed article in Education Finance and Policy. The spending measure in particular is documented in detail in this report.

While these data are a few years old now, they are still useful for illustrating the utter ridiculousness of the FES analysis and for illustrating how far more appropriate analysis yields far more logical findings.

First, let’s look at how per pupil spending is distributed across NYC Middle Schools back in 2010

Figure 1 – Spending and Special Education (NYC Middle Schools 2010)

Slide1

Hmmm… so schools that spend up near that $30k mark tend to have 30% or so special ed, compared to, oh, about 10% in schools spending under $15k.  That might explain some of their findings (uh… or nearly all!)

Figure 2 – Spending and % Free Lunch (NYC Middle Schools 2010)

Slide2

Hmm… so, schools that are low spending tend to have relatively low shares of low income children, as well as very few special education children.

Indeed, it’s true that schools with more low income children and more children with disabilities actually have larger per pupil budgets. Reports by the City’s Independent Budget Office have found similarly, but have pointed out that the Fair Student Funding formula never really reached full implementation.

And what about unadjusted performance level measures and student outcomes? Well, as one might expect, proficiency rates do tend to be lower in schools with higher rates of low income children and children with disabilities.

Figure 3 – % Special Education and Math Grade 8 Proficiency

Slide3

Figure 4 – % Low Income and Math Grade 8 Proficiency (NYC Middle Schools 2010)

Slide4

And thus, by logical extension, the higher spending schools which serve needier populations also have the lower outcomes. Which tells us freakin nothing!

So what’s the next step here? Well, first of all, what we really want to know is not what the average performance LEVEL (proficiency rate, or mean scale scores) of a school is, but whether the school is producing achievement gains for its students. So my data set also includes a measure of school value-added, constructed from the teacher value added reports released a few years back (wherein a school’s value added is the mean of its teachers’ value added). New York City’s value added model accounts (at least partly) for student characteristics, making for fairer comparisons of what schools themselves contribute to student outcomes. For example:

Figure 5 – % Special Education and School Value Added (NYC Middle Schools 2010)

Slide5

Figure 6 – % Low Income and School Value Added (NYC Middle Schools 2010)

Slide6

As we can see, value added outcomes of students are much less biased by the student population characteristics of the school.
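As an aside, the aggregation step described above – taking a school’s value added as the mean of its teachers’ estimates – is simple enough to sketch in a few lines. The ratings below are made up for illustration, not actual NYC teacher value-added figures:

```python
from collections import defaultdict

def school_value_added(teacher_va):
    """Mean of teacher value-added estimates within each school.

    teacher_va: iterable of (school_id, teacher_estimate) pairs.
    Returns {school_id: mean estimate}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for school, va in teacher_va:
        sums[school] += va
        counts[school] += 1
    return {s: sums[s] / counts[s] for s in sums}

# Made-up teacher estimates for two hypothetical schools
ratings = [("A", 0.10), ("A", -0.05), ("A", 0.25),
           ("B", -0.20), ("B", 0.00), ("B", -0.10)]
print(school_value_added(ratings))  # A averages +0.10, B averages -0.10
```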

But, there’s one more step we have to take here. If we really want to know whether there’s any relationship between school spending and student outcomes – or reveal the god-awful finding of FES that more spending actually harms students (when in the district schools), lowering their outcomes, making them into failures – we need to run a somewhat more complicated model to tease out that relationship.

We can run that model in either of two directions – asking:

  • Given their student populations and scale of operation, do schools with higher per pupil budgets produce higher (or lower) student achievement gains? In other words, in the current system, is higher spending associated with greater achievement gains? This is a Production Function.

Value Added Outcomes = f(Students, Scale, Spending)

  • Given the outcomes currently achieved across schools, given their student populations, what is a school expected to spend (or need to spend) to achieve those outcomes? In other words, in the current system, does it cost more to achieve higher outcomes? This is a Cost Function.

Spending = f(Value Added Outcomes, Students, Scale, Inefficiency)
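To see why the conditioning matters, here is a toy simulation – fabricated data and plain OLS standing in for the stochastic frontier models I actually run below. Spending genuinely helps conditional on student needs in this made-up world, yet the raw spending-outcome correlation still comes out negative, because needier schools both spend more and post lower outcomes. That is exactly the trap the FES tabulation falls into:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 260  # roughly the number of regular middle schools per year

# Simulated school traits: % special education, % free lunch, log enrollment
sped = rng.uniform(0.05, 0.35, n)
frl = rng.uniform(0.30, 1.00, n)
log_enroll = rng.normal(6.0, 0.4, n)

# Spending rises with student need, as under Fair Student Funding
log_spend = 9.3 + 1.2 * sped + 0.3 * frl + rng.normal(0, 0.05, n)

# True data-generating process: spending helps, conditional on needs
value_added = 0.5 * log_spend - 0.8 * sped - 0.4 * frl + rng.normal(0, 0.05, n)

# Production function: Value Added = f(Students, Scale, Spending)
X = np.column_stack([np.ones(n), sped, frl, log_enroll, log_spend])
beta, *_ = np.linalg.lstsq(X, value_added, rcond=None)
print(f"conditional spending coefficient: {beta[-1]:.3f}")  # positive

# The raw correlation is negative anyway, because needier schools
# both spend more and post lower outcomes
print(f"raw corr(spending, value added): {np.corrcoef(log_spend, value_added)[0, 1]:.2f}")
```

Average the raw numbers in 50-school bins, as FES did, and you get the negative relationship; condition on who the schools serve, and the sign flips.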

So, with my approximately 260 regular middle schools per year from 2008 to 2010, I run a few models, where the goal here is simply to determine whether there really exists a massive negative relationship between spending and outcomes, as Families for Crappy Analysis would imply, or whether, in fact, that relationship works in the opposite (and more likely) direction. I use a modeling approach similar to that used here.

Figure 7 – Stochastic Frontier Production Model (NYC Middle Schools 2008-2010)

Slide7

In plain language, higher spending is associated with higher value-added!

Figure 8 – Stochastic Frontier Cost Model (NYC Middle Schools 2008-2010)

Slide8

In plain language, achieving higher value added comes at a higher per pupil cost (spending, less inefficiency).

Shockingly, well, not really, I find that in each case there exists a statistically significant positive relationship between per pupil spending and the value added a school’s teachers contribute. In other words, when you do a more reasonable analysis of these data, rather than some total BS tabulation like that of Families for Agenda Driven schlock, you get a reasonable result.

Now, I have one more piece of this puzzle to add. The hacks behind this joke of an analysis propose as their policy solution, further expansion of charter schooling, shifting more resources over to charters and away from these over funded failing district schools.

The irony here is that New York City charter schools have achieved the “successes” they have, at least in part, a) while spending far more per pupil than otherwise similar district schools and b) while serving less needy student populations.

Let’s look at where those charters lie when placed into the graphs above on spending and student needs.

Figure 9 – Special Education Shares and per Pupil Spending (NYC Middle Schools 2010)

Slide9

Figure 10 – Low Income Shares and per Pupil Spending (NYC Middle Schools 2010)

boe v charter

So, what they are really saying with this junk analysis is that city leaders should take money away from children with special needs in district run schools – the schools that actually serve such children – because those children aren’t “proficient” on state tests. And that money should instead be diverted to schools that serve even fewer of those children than the average district school, while already significantly outspending them. Well, that makes sense.

So the story line goes:

Money matters for us, but not for you.

Money spent on us, by us, is good.

On you, by you is bad!

Really? Really? I just can’t take this anymore.

For a more thorough summary of real research on this topic, see: http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Note: All spending and student population measures thoroughly documented in this report.

 

Ed Writers – Try looking beyond propaganda & press releases for success stories

UPDATED 2/5/2015

I enter into this blog post knowing full well that this is a lose-lose deal.  Rating and comparing school quality, effectiveness or efficiency with existing publicly available data is, well, difficult if not impossible. But I’m going there in this post.

Why? Well, one reason I’m going there is that I’m sick of getting e-mail and phone inquiry after inquiry about the same charter schools – and only charter schools – asking how/why are they creating miracle outcomes. I try to explain that there may be more to the story. The reporter then says that the charter school’s data person says I’m wrong – validating their miracle outcomes (despite their own data not being publicly available/replicable, etc. and often with reference to awesome outcomes reported in popularly cited studies of totally different charter schools).

But we may be having our conversation about the wrong schools to begin with.  The whole conversation starts perhaps with a call from the school’s own PR lackey to the local paper, along with a self-congratulatory press release, or alternatively, from the local news outlet itself following up on preconceived notions of which schools are doing miracle work (for a slow news day).  It’s not just that it seems always to be about charter schools, but that it seems to be about the same charter schools every time.

If I wanted my graduate students to figure out what makes successful schools  tick, I’d want them to use a more thoughtful and rigorous selection strategy to identify those schools – rather than merely responding to press releases or preconceived notions.

What if instead, we started with a statistical analysis of all schools, from there, figuring out which schools actually do beat expectations? Which schools achieve greater gains than would be expected, given the students they serve and the resources they have available? There may indeed be some charter schools in this mix. I’d be surprised if there weren’t. They may or may not be the usual suspects. But also, there may be some traditional district schools in this mix. They (under the radar charters and district schools) just may not be puttin’ out those press releases or have PR lackeys hooked in with local media.

To begin with, let me clarify these terms – quality, effectiveness and efficiency – and explain how they have different meanings for different constituents – specifically for parent consumers versus policy makers.

First and foremost when we think of schools we must think of all of the stuff that goes into them and the community which surrounds them – which includes the qualities of the employees who work there, the children who attend and families who interact with the school, the facilities, the local taxpayer support, or not, for the schools. It’s a package deal. When a family chooses where to live or where to send their child to school, they are choosing not only the teachers, but also the building, and the peer group.

  • Quality (unconditional) – we might broadly think of quality as the full package of what a school has to offer – including all of that stuff listed above, and how that stuff ultimately relates to how many kids go to college and where, what kinds of test scores kids get along the way (to the extent that they have any predictive value), what kinds of programs and services are offered and so on. But, as we know, quality in this broad sense, is highly related to community wealth, income and education levels and support provided for local schools. This is quality in an “unconditional” sense. “Best High School” Ratings like those in our popular monthly magazines found in dentists offices in the ‘burbs – those are classic unconditional rankings.  Numbers of kids taking AP courses – average SAT scores, numbers of kids attending selective colleges are common measures and whether these outcomes are a function of the families and communities, or anything special the school might do is of minimal consequence.
  • Effectiveness – One might consider “effectiveness” to be a conditional measure of quality – or at least I will frame it that way here.  Effectiveness measures attempt to sort out whether and to what extent actual differences in schools contribute to those outcomes listed above. That is, if two schools served similar student populations, do they achieve different measured results? These are “conditional” comparisons – estimates of the “effectiveness” of a school take into consideration those differences in children who attend the school. These measures are of greater interest to policymakers. We want to know not only if a school has high test scores, or shows strong growth, but also whether it does so while serving student populations similar to other schools. We want to know this in part so that we can draw inferences about whether the methods used by the school might be transferable. But, these measures are still only partly conditional. It may be that one school is more effective with certain children because it has access to more resources – has smaller class sizes, more specialized teachers, or has been able to recruit and retain a stronger team of teachers and administrators by paying more competitive wages. The school may be more “effective” because it has the resources to be more effective.
  • Efficiency – Efficiency measures take the effectiveness measures one step further – considering not only if schools are able to produce comparable outcomes for comparable children, but also if they are able to do so with comparable resources. These measures are conditional on both student characteristics AND resources, and should provide us with a better picture of whether schools, given who they serve and the aggregate resources they have, are generating greater or lesser growth in student outcomes (based on the limited available measures).

At best – at best – at best – much like estimating teacher/classroom influences on student achievement growth – estimating school relative efficiency is imprecise and as much art as science. (see: http://cepa.stanford.edu/sites/default/files/2002316.pdf#page=19) As I often say, the art of working with existing data (publicly available or not) is the art of doing the “least bad analysis” possible.

So, all of that said, I’ve taken it upon myself here to gather up data on school characteristics from 2010 to 2014 in New Jersey, along with the state’s student growth percentile data, and to use statewide staffing file data to construct measures of school aggregate resources. These are updated versions of the models I use in this post: https://njedpolicy.wordpress.com/2014/10/31/research-note-on-student-growth-the-productivity-of-new-jersey-charter-schools/ Code is provided below.

Because there are geographic differences in economic, demographic and other environmental conditions I compare schools to all schools serving similar grade ranges in the same county, along with similar demographics and resource levels. Yes, the data are less precise than I’d like. But they are equally imprecise for everyone and publicly available (no one got to submit their own super secret version of data).

First, here’s a quick look at the models (each also contains dummy variables for each county and each year of background data):

[Figures: Slide 1 and Slide 2 – model specifications]

As I’ve shown previously, these various factors explain a lot of the variation in school level growth measures, even as state officials continue to live in denial (and construct consequential policies on that denial). Student population characteristics and resources are both associated with overall growth, explaining nearly 50% of the variation in some cases.

Across all 8 models, I can calculate which schools most consistently showed greater, or lesser, “growth” on state assessments than predicted, given their students and resource levels. While I could have applied trickier statistical models – stochastic frontier analysis, etc. – these really don’t change the rankings much. That’s the approach I’ve used to generate the following list of the “Top 50 productively efficient schools in New Jersey.” Notably absent here are any schools that serve only upper grades and thus have no growth percentile measure to model.
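For readers who want the mechanics, a hypothetical sketch of this residual-based approach follows. The data here are fabricated for illustration (the real models use NJ growth percentiles, demographics and staffing-based resource measures in Stata); the logic is simply: regress growth on student characteristics and resources with county dummies, then treat each school’s residual – actual minus predicted growth – as its relative “efficiency,” averaging across models and years before ranking.

```python
# Illustrative sketch only: fabricated data, single model (the post averages
# residuals across 8 model runs before ranking).
import numpy as np

rng = np.random.default_rng(0)
n = 200
county = rng.integers(0, 5, n)           # 5 hypothetical counties
poverty = rng.uniform(0, 1, n)           # share of low-income students
spending = rng.normal(12000, 2000, n)    # per-pupil resources ($)
growth = 60 - 20 * poverty + 0.001 * spending + rng.normal(0, 3, n)

# Design matrix: intercept, poverty, spending, county dummies (county 0 omitted)
X = np.column_stack([np.ones(n), poverty, spending] +
                    [(county == c).astype(float) for c in range(1, 5)])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)

# Residual = growth beyond (or below) what students + resources predict
residuals = growth - X @ beta

# Rank schools from most to least "efficient"
ranking = np.argsort(residuals)[::-1]
top50 = ranking[:50]
```

With multiple outcome measures (math, language arts) and years, one would average each school’s residuals across runs before sorting, which is what produces the consistency-based Top 50 list below.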

UPDATED LIST OF SCHOOLS! Updated NJ Rankings

Updated, updated Top 50 (not much change)

NJ Top 50

Now… is this list really all that meaningful? I’m not sure I’d go that far. Ratings are certainly somewhat sensitive to model specification, and seem to shift from math to language arts and from year to year. There may indeed be some totally screwy results in these runs – as often happens when we try to take a model that relies on patterns across thousands of schools to characterize the position of any one point.

However, it’s at least more defensible than relying on press releases and preconceived notions.  AND, at the very least it’s a whole lot more interesting than hearing the same old story.

At least according to this list, if you’re looking for an interesting charter school to visit, check out Discovery. I know nothing about it… but its numbers POP here. If you’re looking for another school in Newark, how about Hawthorne Ave., which seems to beat most Newark charter and district schools?

And to those out in the schools? Please don’t make too much of this list. It is what it is – based on narrowly defined outcome measures and excessively crude population measures.

Bonus Charts!

Schools within the City of Newark

[Figure: Slide 1]

Charter Schools Statewide

[Figure: Slide 2]

=====================

Geek stuff:

1) Model output: Stata output

2) Data set building, part I: Step 1-Staffing Files

3) Data set building, part II: Step 2-School Level Variables

4) Data set building, part III: Step 3-School Resource Aggregation

More Money, More Money, More Money? Have we really ever tried sustained, targeted school funding for America’s neediest children?

I’m no longer surprised these days by the belligerent wrongness of rhetoric around school funding equity and adequacy. Arguably, much of the supporting rationale for the current (and other recent) education reforms is built on the house-of-cards claim that, when it comes to equitably and adequately financing our public school systems – especially those serving our neediest children – we’ve been there and done that. In fact, we’ve been there and done that for decades. It just never ends. For example, as recently summarized in a regional New York State news outlet:

Cuomo said more money isn’t necessarily the answer.

“We’ve been putting more money into failing schools for decades,” he said. “Over the last 10 years, 250,000 children went through those failing schools, and New York government did nothing.

http://poststar.com/news/local/teacher-evaluations-are-baloney-cuomo-says/article_f51aefda-a1c0-11e4-8d43-1fe6f62ba3e6.html

Yes – that’s it – for decades New York State has simply been pouring more and more money into all of its “failing” schools, and all for naught.

Similarly, data-free ideology tells us that the Commonwealth of Pennsylvania has made a fiscal experiment of Philadelphia, pouring massive sums of funding into that city’s schools, well, again, nearly forever! And at higher amounts than, well, really any other district in or near the Commonwealth! Or so said edu-pundit Andrew Smarick on Twitter just over a year ago. Here are a few gems from his Twitter rant on Philly schools:

Philly’s district = terrible for decades, families left, as a result it’s bankrupt. Gotten huge state funding for yrs to prop it up.

I know Philly gets among (if not THE) highest levels of funding from the state. I also know it’s been losing thousands of students.

As per the usual course, we are now also told that the exorbitant – highest in the country, in fact – spending of Camden public schools and their persistent failure are yet another (anecdotal) proof positive that throwing money down the rat hole of government schools serving high-poverty neighborhoods is, well, pointless. Or so says a new Reason Foundation documentary using Camden as the anecdote du jour:

 In Camden, per pupil spending was more than $25,000 in 2013, making it one of the highest spending districts in the nation. http://reason.com/reasontv/2015/01/26/money-didnt-fix-camdens-failing-schools

Well, actually, the only national database of per-pupil spending available even by this date goes up to 2011-12. So I’m not quite sure what they’re talking about. In 2012, taking Census fiscal survey data (http://www.census.gov/govs/school/) divided by the regional cost index (http://bush.tamu.edu/research/faculty/taylor_CWI/), Camden comes in ranked 1,128th of about 13,376 districts with complete data. So yes, it’s relatively high – top 10%. But the quote above is totally made up (since the data aren’t even available to support it) and, well, exaggerated at best.
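The adjustment behind that ranking is simple division. Here is a minimal sketch with entirely made-up district names and numbers (the real calculation uses the Census F-33 fiscal survey and the Taylor comparable wage index linked above): divide each district’s per-pupil spending by its regional cost index, then rank the adjusted values.

```python
# Fabricated figures for illustration only – not actual district data.
spending = {"Camden": 23000, "A": 11000, "B": 15000, "C": 9000, "D": 19000}
cost_index = {"Camden": 1.15, "A": 1.00, "B": 1.20, "C": 0.90, "D": 1.10}

# Cost-adjusted per-pupil spending: nominal dollars / regional cost index
adjusted = {d: spending[d] / cost_index[d] for d in spending}

# Rank districts from highest to lowest adjusted spending
ranked = sorted(adjusted, key=adjusted.get, reverse=True)
camden_rank = ranked.index("Camden") + 1   # 1 = highest adjusted spending
```

The point of the adjustment is that $23,000 in a high-wage labor market buys less schooling than the same amount in a low-wage one, so nominal rankings overstate the position of high-cost urban districts.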

Nor is it necessarily useful to try to use such bombastic (and made up) anecdotes to “prove” a point that’s been repeatedly rebutted by actual research!

In addition to the studies noted at the link above, I’ve addressed in two lengthy academic articles this propensity to drive policy rhetoric with half-true (at the very best) anecdotes (hoping no one will ever actually fact-check). For years, Newark (the state of New Jersey) and Kansas City, Missouri (as a result of desegregation litigation from the 1980s through early 1990s) were frequently used as proof that money doesn’t matter. See these two articles for the thorough debunking:

  • Green III, P. C., & Baker, B. D. (2006). Urban Legends, Desegregation and School Finance: Did Kansas City Really Prove That Money Doesn’t Matter. Mich. J. Race & L., 12, 57.
  • Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Something I find particularly baffling is the tendency of those crafting these anecdotes not to even bother to check whether, to what extent and for how long funding was actually poured into these districts – or any high-need districts. Have we really been there and done that? Have state governments across this great land already done all they can to provide fiscal sustenance to those with the greatest educational needs? And all for naught? Thus, bring on the cost-cutting reforms?

Well, even the most fractionally-witted reader knows that deep disparities still exist in state school finance systems.

What is less well known – or less frequently illustrated – is that in many cases, not much has changed in as much as twenty years!

And what is also less well known is that even in those cases where funding was targeted to areas of greater need, that funding never reached the levels that would have been needed to make substantive progress on closing achievement gaps – even with surrounding districts’ children – and that the funding, while coming in the occasional burst in select locations, was never really sustained even as long as it would take a single cohort of children to pass through a K-12 system.

We do have some evidence from peer reviewed literature on what it might actually take to provide children in higher poverty settings with equal opportunity to achieve comparable outcomes to their more economically advantaged peers. William Duncombe and John Yinger (2005) find that as we move from a school or district with 0% to one with 100% low-income children, the cost per pupil of achieving a common outcome target doubles, or more than doubles when using Census poverty rates as the measure.[1] In other words, it would take a substantial infusion of funding to have any real shot at closing achievement gaps.
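That finding can be expressed as a simple cost-function adjustment. The linear form and the weight of 1.0 below are illustrative assumptions consistent with “doubles, or more than doubles” – they are not Duncombe and Yinger’s estimated parameters:

```python
# Illustrative linear poverty cost adjustment (assumed functional form).
def adequate_spending(base_cost, low_income_share, poverty_weight=1.0):
    """Per-pupil cost to reach a common outcome target, given the share of
    low-income students; poverty_weight=1.0 implies cost doubles at 100%."""
    return base_cost * (1 + poverty_weight * low_income_share)
```

So, at a hypothetical $10,000 base cost, a 100% low-income district would need on the order of $20,000 per pupil just to have a shot at common outcomes – which is the benchmark against which the funding “bursts” discussed below should be judged.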

Let’s take a look at a) some persistently financially deprived settings over time and b) some settings where we have supposedly been there and done that (with money). For the following figures, I use the strategy that I’ve used previously – comparing the relative poverty and relative per pupil revenues (and spending) of school districts – compared to all other districts in the same labor market. Here’s my explanation for using this approach, from a recent Center for American Progress report:

It is important to understand that the value of any given level of education funding, in any given location, is relative. That is, it matters less whether a district spends $10,000 per pupil or $20,000 per pupil than how that funding compares to other districts operating in the same regional labor market—and, for that matter, how that money relates to other conditions in the regional labor market.

The first reason relative funding matters is that schooling is labor intensive. The quality of schooling depends largely on the ability of schools or districts to recruit and retain quality employees. The largest share of school districts’ annual operating budgets is tied up in the salaries and wages of teachers and other school workers. The ability to recruit and retain teachers in a school district in any given labor market depends on the wage a district can pay to teachers relative to other surrounding schools or districts and relative to nonteaching alternatives in the same labor market.[2] The second reason is that graduates’ access to opportunities beyond high school is largely relative and regional. The ability of graduates of one school district to gain access to higher education or the labor force depends on the regional pool in which the graduate must compete.[3]
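The relative comparison described in that excerpt can be sketched as follows: express each district’s per-pupil revenue and poverty rate as ratios to the average across all districts in its labor market. The district names and numbers here are fabricated for illustration:

```python
# Fabricated example data – one labor market, three districts.
districts = [
    {"name": "CityDistrict", "market": "Metro A", "rev_pp": 14000, "poverty": 0.40},
    {"name": "Suburb1",      "market": "Metro A", "rev_pp": 16000, "poverty": 0.10},
    {"name": "Suburb2",      "market": "Metro A", "rev_pp": 18000, "poverty": 0.10},
]

def relative_measures(districts):
    """Map district name -> (revenue ratio, poverty ratio) vs. its labor
    market average. A revenue ratio of 1.0 means average funding."""
    out = {}
    for m in {d["market"] for d in districts}:
        group = [d for d in districts if d["market"] == m]
        avg_rev = sum(d["rev_pp"] for d in group) / len(group)
        avg_pov = sum(d["poverty"] for d in group) / len(group)
        for d in group:
            out[d["name"]] = (d["rev_pp"] / avg_rev, d["poverty"] / avg_pov)
    return out

rel = relative_measures(districts)
```

In this fabricated case the city district carries double the labor market’s average poverty while receiving below-average revenue – precisely the pattern the figures below document for real districts.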

First let’s look at longitudinal trends for some of the very districts I identified as suffering severe financial disadvantage in my Center for American Progress Report:

[Figure: Slide 1]

[poured money in for decades, really?]

[Figure: Slide 2]

[Figure: Slide 3]

[note that this time period starts over a decade after the economic decline chronicled in the song]

[Figure: Slide 4]

[Figure: Slide 6]

[Figure: Slide 7]

[Figure: Slide 8]

[Figure: Slide 9]

Note that in some cases, relative poverty declines as poverty rates of surrounding districts increase.

Next, let’s take a look at some of those districts that supposedly provide proof positive – irrefutable evidence – that decades worth of highest in the nation spending – was all for naught. One need not even address the “naught” part if the “all for” part was, well, untrue.

[Figure: Slide 12]

First, here’s Kansas City – where funding was indeed elevated in the early 1990s, scaling up beginning around 1989. We discuss this case extensively in this article.

  • Green III, P. C., & Baker, B. D. (2006). Urban Legends, Desegregation and School Finance: Did Kansas City Really Prove That Money Doesn’t Matter. Mich. J. Race & L., 12, 57.

However, it is important to note that this funding tapered off rapidly after 1995 and never really rebounded, except for a temporary spike around 2009, when the district’s boundaries were reduced (the remaining majority-white corner of the district annexed itself to the neighboring Independence school district). Throughout the 2000s, but for this temporary blip, Kansas City’s state and local revenues per pupil have hovered marginally above labor market averages, while the district’s poverty rate has hovered at more than 2.5 times the labor market average rate.

How about Newark and, now, Camden? While these cities are clearly better off than Philly, Bridgeport or Utica, it’s not like there’s really been a sustained massive infusion of funding over time. Both start the period with funding near labor market average levels and with double (Newark) to more than triple (Camden) the labor market poverty rates. Funding escalates from the late 1990s in Newark, and the mid-2000s in Camden, to about a 34% (Newark) and temporarily over 50% (Camden) margin over labor market averages, coming to rest around 1/3 above labor market averages for both. But given their relative poverty and estimates of costs associated with poverty, even these margins fall well short. Further, they are only sustained for about 4 years (Camden) to almost ten years (at smaller margins, in Newark).

[Figure: Slide 10]

[Figure: Slide 11]

Should we expect some bang for this buck? Yes. And the broader literature on the topic certainly suggests that such bang exists.

But the point of this post is that we have to first look at how hard we really have tried thus far, and to acknowledge those places where we haven’t tried at all.

Yeah, we’ve put some effort – though really not all that much, given the extreme needs – into places like Newark and Camden.

But other places? Like:

Philadelphia

Allentown

Reading

Utica, NY

Chicago

Waukegan, IL

Bridgeport, CT

New Britain, CT

We’ve never really tried. These districts and the children they serve have never – in the past 20 years – been given a fair shot. And even more depressing is that while policymakers and the fabricati (hack pundits who simply make shit up if it serves their purpose) that (dis)inform them have perpetuated these myths and lies, things have only gotten worse.

=======

NOTES

[1] Duncombe, W., & Yinger, J. (2005). How much more does a disadvantaged student cost?. Economics of Education Review, 24(5), 513-532.

[2] Bruce D. Baker, “Revisiting the Age-Old Question: Does Money Matter in Education?” (Washington: Albert Shanker Institute, 2012), available at http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf.

[3] Bruce D. Baker and Preston C. Green III as well as William Koski and Rob Reich explain that to a large extent, education operates as a positional good, whereby the advantages obtained by some necessarily translate to disadvantages for others. For example, Baker and Green explain that, “In a system where children are guaranteed only minimally adequate K–12 education, but where many receive far superior opportunities, those with only minimally adequate education will have limited opportunities in higher education or the workplace.” Bruce D. Baker and Preston C. Green, “Conceptions of Equity and Adequacy in School Finance.” In Helen F. Ladd and Edward B. Fiske, eds., Handbook of Research in Education Finance and Policy (New York: Routledge, 2008), p. 203–221; Koski and Rob Reich, “When “Adequate” Isn’t: The Retreat From Equity in Educational Law and Policy and Why It Matters,” Emory Law Review 56 (3) (2006): 545–618, available at http://www.law.emory.edu/fileadmin/journals/elj/56/3/Koski___Reich.pdf.