Racing Where? (the DFER list)

A few quick comments on the Democrats for Ed Reform list of states in line for RttT funds.

  • The list includes two of the few states that, year after year, maintain a pattern of higher-poverty school districts receiving systematically fewer resources than lower-poverty school districts – New York and Illinois. Colorado is also on the systematically regressive funding list, but is not a year-after-year standout like the other two.
  • The list includes the two states that allocate the smallest share of their gross state product to public education – Louisiana and Delaware. To add insult to injury, based on American Community Survey data from 2007, neither Delaware nor Louisiana even serves 80% of its 6- to 16-year-olds in the public school system (a “coverage” metric).
  • The list includes the state with the lowest cost- and need-adjusted per-pupil state and local revenue among all states – Tennessee.

The DFER post notes that Illinois’ chances aren’t lookin’ as good as earlier this year. But, TN, DE, CO and LA sound like strong contenders! (?)

Details on the methods and analysis behind these findings will be available on request in the near future. Cheers!

Pondering the Usefulness of Value-Added Assessment of Teachers

Value-added teacher assessment has been a mantra for education “reformers” throughout the debate over Race to the Top. We’ve got to evaluate teachers and make hiring and firing decisions on the basis of real student performance measures – you know, like businesses – like the real world does! (A highly questionable assumption indeed – AIG bonuses anyone?).

I address the technical issues with value-added assessment of teachers here, indicating just how premature these assertions are from a technical standpoint.

https://schoolfinance101.wordpress.com/2009/11/07/teacher-evaluation-with-value-added-measures/

At present, good value-added measures are little more than a really cool (if not totally awesome) research tool, and most of the best analyses of value-added as a tool for teacher evaluation suggest that even in the best of cases there still exist potentially problematic biases.

Let’s set these technical issues aside for now and explore some practical issues. For example, just how many teachers in a public education system could even be evaluated with value-added assessment? Consider these constraints.

  1. Most states, like New Jersey, implement yearly assessments in grades 3 through 8, plus perhaps end-of-course or some HS exit exam. (I’ll set aside concerns over the fact that annual, rather than fall–spring, assessment captures vast differences in summer learning, which play out by student economic status – advantaging some teachers and disadvantaging others, depending on which kids they have.)
  2. In most cases, the established and more reliable tests exist only in language arts and math, though some states have implemented science and/or social studies tests which are arguably less cumulative.
  3. The most reliable VA assessment of teachers occurs where there exist multiple historical scores on students prior to the observed teacher (a smaller technical point). This really casts doubt on the usefulness of VA assessment for evaluating teachers whose students are in their first few years of being assessed (grades 3 and 4 in NJ and many states).
  4. By the time students hit middle school, they typically interact with multiple teachers who may have simultaneous influences on each other’s content-area success. Even if we ignore this, at best we can look at the language arts and math teachers in the middle school setting.
  5. You have to jump over those untested grade 9 and 10 students and their teachers. If we have end-of-course exams, we don’t necessarily know what the beginning-of-course status was – at least in a VA modeling sense.

So, here is a listing of the certified staffing in New Jersey (below) in 2008, based on grade levels and areas of teaching. The list does not include everyone, but does capture the main assignment (Job Code 1) for the vast majority of school-assigned teaching (and principal) personnel.

What this list shows us is that in the best possible case, in a state with annual grades 3 to 8 assessment and shifting to end-of-course exams, we might be able to generate VA estimates of effectiveness for about 10% of teachers – perhaps 20% if that “ungraded elementary” group turns out to be feasible. That is, 10% (up to 20%) would be subject to a different evaluation system than the rest, while nearly 50% of teachers would be infeasible to evaluate this way at all. The covered teachers are an important 10% (or perhaps 20%), but a small slice nonetheless.

Okay, so maybe this would create an incentive for the real gunners in the mix of potential teachers to dive into those areas evaluated by VA. But there exists an equal if not stronger possibility that those same gunners will avoid the classrooms, schools or districts where – in the evaluated content areas and grade levels – they face an uphill battle to improve outcomes (hopefully, some will welcome the challenge).

There are some obvious solutions to this dilemma –

  1. Test everything, every year, with cumulative measures, fall and spring. Okay, that seems a bit absurd, but it might be a good economic stimulus for the testing industry. I still struggle with how we would evaluate teachers in supporting roles, such as many of those listed below, or teachers in the arts and music (perhaps applause meters… but only if we measure applause gain from concert to concert, rather than applause level?). And what about vocational education?
  2. Just dump all of those teachers and all of that frivolous stuff kids don’t really need, and assign each group of kids a 12-year sequence of reading and math teachers. Some have actually argued that this should be done, especially in higher-poverty and/or underperforming schools. Why, for example, should a school with inadequate math and reading scores offer instrumental music or advanced math or journalism courses? (Put down that saxophone and pick up that basic math book, Mr. Parker!) The reality is that high-poverty and underperforming schools in New Jersey and elsewhere have already concentrated their teaching staff on core activities, to the extent that kids in poor urban schools have much less access to arts and athletics.

I personally have significant concerns over the idea that poor urban kids should have access only to a string of remedial reading and math teachers over time, while kids in affluent neighboring suburbs are the ones with additional access to foreign languages, tennis and lacrosse teams, and elite jazz ensembles (this one really irks me) and orchestras. Quite honestly, successful participation in these activities is highly relevant to college admission – at least at competitive schools. And certainly, the affluent communities are not going to go along with dumping all of these things.

So, if we can’t test everything every year and if it is offensive to argue for dumping all areas that aren’t or can’t reasonably be evaluated, then we have a significant gap in the usefulness of VA teacher assessment.

I did this tally very quickly using 2007-08 NJ staffing files. Feel free to tally and re-tally and post alternative counts below. Note that most of the special education teachers are missing from the tally below because I’ve not yet fully recoded them for 2008. While I have done so for earlier years, those years of the staffing files don’t break out content area for MS teachers or grade level for elementary teachers. About 14% of teachers in the 2005 and 2006 data were special education. At a maximum, I get about 20% of teachers as ungraded elementary, and about another 5% or so potentially relevant for VA assessment in 2005 and 2006 (without the ability to remove untested grades).

| Main Assignment | Number of Teachers | % of Teachers | Potentially Reliable VA Assessment | No Assessment at All |
|---|---|---|---|---|
| Art | 3,106 | 2.84 | | X |
| Basic Skills | 1,779 | 1.63 | | X |
| Bilingual | 697 | 0.64 | | X |
| Computer | 917 | 0.84 | | X |
| Coord/Director | 1,263 | 1.15 | | X |
| Counselors | 29 | 0.03 | | X |
| Elem English | 522 | 0.48 | | |
| Elem Math | 535 | 0.49 | | |
| Elem Science | 381 | 0.35 | | |
| Ungraded Elem | 11,308 | 10.33 | ? | |
| ESL | 1,700 | 1.55 | | X |
| FCS | 837 | 0.76 | | X |
| Grades 1 to 3 | 12,006 | 10.97 | | |
| Grades 4 to 6 | 7,012 | 6.41 | X | |
| Grades 6 to 8 | 1,305 | 1.19 | ? | |
| HS English | 13 | 0.01 | | |
| HS English | 5,041 | 4.61 | | |
| HS Math | 4,727 | 4.32 | | |
| HS Science | 4,391 | 4.01 | | |
| HS Soc Studies | 3,968 | 3.63 | | X |
| HS World Language | 4,460 | 4.08 | | X |
| Indus Arts | 1,217 | 1.11 | | X |
| Kindergarten | 321 | 0.29 | | X |
| Kindergarten | 3,565 | 3.26 | | X |
| MS Lang Arts | 2,844 | 2.60 | X | |
| MS Math | 2,439 | 2.23 | X | |
| MS Science | 1,669 | 1.53 | ? | |
| MS Soc Studies | 1,629 | 1.49 | ? | |
| MS World Language | 440 | 0.40 | | X |
| Music | 3,665 | 3.35 | | X |
| PE | 6,963 | 6.36 | | X |
| Perf Arts | 222 | 0.20 | | X |
| Preschool | 1,052 | 0.96 | | X |
| Preschool | 557 | 0.51 | | X |
| Principal | 2,172 | 1.98 | ? | |
| Psychologist | 1,545 | 1.41 | | X |
| SC Spec Educ | 163 | 0.15 | | X |
| SC Spec Educ | 6,747 | 6.17 | | X |
| SE RR/Inclusion | 963 | 0.88 | | X |
| Supervisor | 2,360 | 2.16 | | X |
| Vice Principal | 1,828 | 1.67 | | X |
| Voc Ed | 1,067 | 0.98 | | X |
| Total | 109,433 (of about 142,000 recoded) | | 11.24 | 47.01 |

Okay – So New Jersey is just probably a wacky inefficient example that has way too many of those extra teachers in trivial and wasteful assignments. Well, here’s the breakout of Illinois teachers for 2008.


I could go on and do this for Missouri, Minnesota, Wisconsin, Iowa, Washington and many others, showing generally the same pattern. I chose New Jersey above because the most recent years of NJ data actually break out the grade-level assignment of most elementary teachers, so we can see how many grades 1 through 3 teachers would fall outside the evaluation system.

My point here is not to trash VA evaluation of teachers, but rather to point out just how little thought – even in a practical sense – the pundits pitching immediate action on using VA for hiring, firing and incentive pay have given to even the most basic issues. Not the technical and statistical issues, but really simple stuff, like just how many teachers would even be evaluated under such a system. And more importantly, since this is supposedly about “incentives” – just what kind of incentives this selective evaluation might create.

Teacher Evaluation with Value Added Measures

This month, the special issue of the journal Education Finance and Policy on value-added measurement of student outcomes was published. The table of contents is here:

http://www.mitpressjournals.org/toc/edfp/4/4

This is good stuff, authored by leading educational measurement and statistics researchers and economists. These articles provide some important cautionary tales regarding the application of value-added measures of student outcomes to teacher evaluation. Here is a policy brief with a more user-friendly summary of some of the content of the special issue:

http://www.wcer.wisc.edu/publications/highlights/v19n3.pdf

Here’s a recent working paper by Jesse Rothstein, a Princeton economist who also has an article in the special issue:

http://gsppi.berkeley.edu/faculty/jrothstein/published/rothstein_vam2.pdf

Here’s the concluding sentence of the abstract of Rothstein’s paper:

Results indicate that even the best feasible value added models may be substantially biased, with the magnitude of the bias depending on the amount of information available for use in classroom assignments.

On average, the articles in the special issue do show some promise for using value-added assessment in teacher evaluation, with a number of really important caveats and technical stipulations.
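
For readers who want a feel for what these models actually estimate, here is a minimal sketch of a generic covariate-adjustment value-added specification – a simplified illustration, not the specification used in any article in the issue – with hypothetical file and column names:

```python
# A minimal covariate-adjustment value-added sketch: regress current
# scores on prior scores and student characteristics, with teacher
# fixed effects. Data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_teacher_links.csv")

# C(teacher_id) adds a dummy per teacher; those coefficients are the
# raw "value-added" estimates, net of prior score and demographics.
fit = smf.ols(
    "score ~ prior_score + frl + ell + C(teacher_id)", data=df
).fit()

teacher_effects = fit.params.filter(like="C(teacher_id)")
print(teacher_effects.sort_values().tail())  # highest estimated effects

# Caveats from the literature apply: estimates are noisy for teachers
# with few students, and nonrandom classroom assignment can bias them.
```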

Yes, we need access to more student assessment data with linkages to specific teachers – including the range of teachers with whom middle and secondary students interact (it’s not as simple as linking a single teacher to a group of children). We need access to such data across multiple states and their assessment systems. Scaling properties of data and test noise play a major role in the precision with which one can isolate teacher or classroom effects. We have little or no idea, for example, of the extent to which analyses using North Carolina or Texas assessment data relate to New Jersey assessment data – or of the statistical properties of those data and their usefulness, or lack thereof, for estimating teacher or classroom effects (unless there are technical papers out there on NJ tests of which I am unaware).

So, these are the main reasons we need to tear down firewalls – to advance the art, science and statistics of value-added modeling and of school and teacher evaluation, and to uncover potential shortcomings where they exist.

Policymakers and pundits diving in head first on these issues need, quite simply, to chill out – perhaps read the special issue above, heed the advice offered earlier this year by the National Academy of Sciences, and figure out how to do this right if we’re going to do it at all.

Diving in too quickly and doing it wrong will make it that much harder to do it right in the long run and will provide that much more ammunition for resistance.

Dear DOE – Wrong Again!

After starting my day with this NPR brief:

http://www.npr.org/templates/story/story.php?storyId=113533704

I am again perplexed by what Department of Ed officials are thinking, who is advising them, and what analyses are actually being done before certain states are identified as “good” and others as “evil.” In this story, DOE officials chastise the states of Pennsylvania, Massachusetts and Connecticut for playing a shell game with ARRA funds – filling budget holes with those funds rather than using them to prop up or increase public education support.

Here’s the link to the report from DOE:

http://media.npr.org/assets/news/2009/10/06/stimulus.pdf

The problem here is that the DOE’s metrics for evaluating whether a state is “good” or “evil” are, well, entirely screwed up and meaningless. I can’t think of a softer way to phrase that. As such, the DOE continues to criticize states like MA and PA, which are doing reasonably well (now that the PA budget is nearing adoption), while missing entirely those states which have done particularly “evil” things with ARRA funds.

For example, the DOE’s primary concern regarding Massachusetts is that the state percent of total education funding will not be the same as it was in 2006.

In order to meet the requirements for the MOE waiver, a State must show that it is spending at least as much State money on education, as a percentage of total revenues, as it did in the previous year.

Given the DOE’s phrasing, it appears that they mean the percent of total state revenues allocated to education. This is hardly a meaningful metric, because it has little to do with the availability of resources to children in school districts and little to do with measuring a state’s “effort” for public education. A state could simply have slashed taxes and dramatically cut its total budget – slashing all public services left and right, including public schools – all the while still spending the same share on public schools. Silly.
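
A toy example (numbers entirely hypothetical) makes the problem obvious:

```python
# Hypothetical numbers: a 20% across-the-board cut leaves the education
# "share" untouched while schools lose billions.
years = {
    "before": {"total_state_revenue": 30e9, "k12_spending": 12e9},
    "after":  {"total_state_revenue": 24e9, "k12_spending": 9.6e9},
}
for year, b in years.items():
    share = b["k12_spending"] / b["total_state_revenue"]
    print(f"{year}: K-12 = ${b['k12_spending'] / 1e9:.1f}B, share = {share:.0%}")
# Both years print a 40% share -- the DOE metric is satisfied even
# though schools lost $2.4B.
```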

A more reasonable perspective would be to look at whether cumulative state, local and ARRA resources are actually assisting districts in maintaining and/or expanding services over the prior year. Looking only at the state aid apportionment tells us very little. Based on district-by-district runs of the 2008-09 and 2009-10 Massachusetts Chapter 70 aid program, it would appear that for 2010 districts will receive modest per-pupil increases in the sum of state and local (with ARRA) funds. The increases are partially funded by expected increases in minimum local contributions (which might easily be considered state resources).

http://finance1.doe.mass.edu/chapter70/chapter_10.xls

Pennsylvania is a unique case, since the state was at a budget impasse until very recently. While at impasse, the high end of the debate included a plan to continue substantial increases to support the new school finance formula, which begins to resolve substantial disparities among PA school districts. The low end was to merely hold districts at prior-year Basic Education Funding levels. To the best of my understanding, the final resolution is nearer the high end and does support some significant increases toward the phase-in of the new formula – though I may be wrong, since I’ve yet to see the district-by-district run of state and local BEF resources. But even if PA had landed on the low-end scenario, it would have been the same as New York for 2010 and 2011. New York, also phasing in a new formula, stopped its phase-in entirely and froze foundation funding for 2010 and 2011. So how is PA worse than NY? Should NY be on the DOE hit list?

Many states did far worse things with their stabilization funds than NY, PA or MA (or perhaps even CT), which used them to… well… stabilize! For example, Kansas actually implemented per-pupil cuts in foundation budgets – actual reductions from prior-year cumulative per-pupil resources, not just a failure to meet an increase target. Even worse, the per-pupil cuts are systematically larger in higher-poverty than in lower-poverty districts. http://www.ksde.org/LinkClick.aspx?fileticket=J%2bZiki0vnrc%3d&tabid=119&mid=8049

The poorer the district, the larger the per pupil cut.

How is that better than PA and MA? Alabama also cut districts substantially, though not necessarily systematically by poverty.

Nebraska made a really fun move. Nebraska altered the primary aid formula through which ARRA funds were to flow, and then used the modified formula to provide per-pupil increases to the affluent and middle-class suburban districts around Omaha while holding Omaha roughly constant at prior-year funding. So, Nebraska used ARRA funds to restore inequities that had persisted before Omaha fought back in recent years. http://ess.nde.state.ne.us/SchoolFinance/StateAid/Default.htm

Guess what, DOE – you can maintain the same state share of funding if you just cut everyone’s budget! Cut everyone’s state aid and their local contribution toward foundation aid, and the state share stays constant. Even more fun: you can actually use additional state resources to drive more funds to districts with less need and create even greater inequities. And you can prop up those inequities with ARRA funds. No harm, no foul under current DOE metrics.

DOE, am I missing something here? I’ll gladly help out for a nominal fee. But this is just getting absurd!

Cordially,

SchoolFinance101

=========

A quick lesson for DOE. What matters for the operation of local public school districts is the sum of the resources available. In many if not most states, foundation aid formulas are the formulas that identify the “sum” of state and local resources to be provided for annual operating budgets. The state share of that sum is backed out after determining the funds that would be raised by applying a specific local property tax or required local effort rate. That local minimum requirement toward the sum may as well be considered state funding (to the extent that it actually is required). What matters to districts and the children they serve is the SUM! The foundation budget (and other add-ons), adjusted for various needs and costs.
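
Here is a toy version of that arithmetic (all figures hypothetical):

```python
# Toy foundation-aid arithmetic: the state backs out its share after
# applying a required local effort rate to the district tax base.
foundation_per_pupil = 11_000        # the "SUM" the formula guarantees
required_local_rate = 0.010          # required local property tax rate
property_value_per_pupil = 500_000   # district tax base per pupil

local_share = required_local_rate * property_value_per_pupil  # $5,000
state_aid = foundation_per_pupil - local_share                # $6,000
print(f"local: ${local_share:,.0f}  state aid: ${state_aid:,.0f}")
# What matters to the district is the $11,000 sum, not the split --
# and the required $5,000 may as well be considered state funding.
```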

On a related note, DOE should also recognize that some states actually determine that SUM in an inequitable way (see: https://schoolfinance101.wordpress.com/2009/01/27/the-fine-art-of-inequitable-school-finance-policy/). For states with foundation formulas that promote inequity, running ARRA funds through those formulas means using ARRA funds to advance inequity (Nebraska pulled a bait-and-switch for 2010).

Ed Trust, DFER and Center for American Progress misguided

Let me start by saying that these are three groups for which I have a good deal of appreciation. But these groups have allowed much of their education reform agenda to be misguided by bad analyses, and the time has come to clear up some major problems with the assumptions that drive many of their policy recommendations.

Issue 1 – Teacher Quality Distribution: Yes, the uneven distribution of teacher quality is a major factor – perhaps the greatest inequity in education that must be resolved.

Hanushek and Rivkin conclude: “The substantial contribution of changes in achievement gaps between schools is consistent with an important role for schools, and we find that the imbalanced racial distribution of specific characteristics of teachers and peers—ones previously found to have significant effects on achievement—can account for all of the growth in the achievement gap following third grade.” (p. 29) Hanushek, E., Rivkin, S. (2007) School Quality and the Black-White Achievement Gap. Education Working Paper Archive. University of Arkansas, Department of Education Reform.

There are undoubtedly inequities in the distribution of quality teachers across public schools within public school districts and some of the causes of these inequities may be traced back to district leadership and teacher contract structure.

But without a doubt (and as validated by the most rigorous analyses of teacher labor markets), most of the disparities in the distribution of quality teaching occur BETWEEN, NOT WITHIN school districts – just as most of the differences in student populations occur between, not within, districts. Most of the disparities have little to do with school district HR offices succumbing to seniority privileges and contractual bumping provisions, and much more to do with racial and socioeconomic differences in students between districts and persistent disparities in school funding, infrastructure, etc.

Ed Trust and CAP in particular have been off base, driven there by empirically bad, conceptually weak, largely non-peer-reviewed “policy” research. They have been led to believe that teacher quality distribution is primarily a district problem, one that can be fixed by altering the “comparability” regulations of Title I – that is, by using federal pressure to make districts fix their own problems. While districts should be required to do so, these problems are a small piece of a much bigger puzzle. By obsessing over these issues, these organizations have taken their eye off the ball on the largest and most persistent inequities that plague our public schooling systems.

Issue 2 – The Role of Federal Title I Programs. These organizations are excessively if not obsessively focused on the role of federal Title I funding. Because they believe that most teacher quality disparities exist within districts – mainly districts having Title I schools – they also seem to believe that these disparities can be largely resolved by changing the “comparability” regulations of Title I to require districts receiving Title I funds to make greater assurances that their teachers are equitably distributed. Great! Let’s do that. I’m fine with that, but again, it’s a trivial piece of the puzzle when districts with large numbers of Title I schools, or even 100% Title I schools, can’t compete with their neighboring districts for teachers to begin with – neighboring districts that may have few or no Title I schools.

These organizations also appear somewhat obsessed with the idea that Title I money itself is being allocated in ways that make rich districts and rich states richer while depriving poor districts and poor states. This too is largely a conclusion drawn from very weak analysis, which fails to account sufficiently for regional variations in the cost of providing services and for regional variations in the fit of poverty thresholds to income distributions. I’ll happily elaborate for anyone who truly gives a damn about the technical details, but suffice it to say that – but for the small-state minimum allocations to places like Vermont or Wyoming – the cross-state and within-state distribution of Title I funds is much less awful than I ever expected, and actually not so bad. Driving more Title I funds to southern and rural districts and away from poor urban-core northern districts would likely be a very bad policy choice, and one based on deeply problematic analyses.

Finally, on this point, most issues of funding inequity are STATE POLICY ISSUES. The federal role remains relatively small. Some states do much better than others, and we need to focus our attention on that. Further, while disparities do exist across schools within school districts, the larger disparities are still STATE POLICY CONCERNS and exist BETWEEN, NOT WITHIN, DISTRICTS. As a side note, it is also the case that districts adopting these hip-and-cool weighted student formulas as within-district allocation mechanisms do no better than districts in the same state using other allocation methods at improving either fiscal equity or teacher quality equity across schools.

Issue 3 – Measuring Equity in School Funding. Here I have more appreciation and less to gripe about, but I wish to point out some critical flaws in the approach used by The Education Trust in its Funding Gap analyses. I bring this topic up because the language used by the above-mentioned organizations leans on the Education Trust framework for evaluating whether states are doing the right thing on school finance. The Ed Trust approach is to look at the average spending of the highest- and lowest-poverty school districts in a state, with a few arbitrarily selected weights to adjust for “costs” associated with poverty. There’s a whole lot missing here, which ultimately leads to some bad conclusions about some states.

To begin with, I agree that what we need to be looking for is a progressive distribution of fiscal inputs – systematically higher in higher-poverty settings than in lower-poverty settings. Unfortunately, taking the average of the top and bottom group tells us NOTHING about how SYSTEMATIC the patterns are! Instead, one must evaluate the overall relationship – ACROSS ALL DISTRICTS, EVEN THOSE IN THE MIDDLE – between district fiscal inputs and poverty. On inputs, if we are truly interested in measuring the state’s own policies, we should look at the sum of state and local revenues per pupil. Second, because of the mis-measurement of poverty across rural versus urban settings (something noted in a few Ed Trust reports) and because of economies-of-scale-related cost differences, we should account for differences in the location and size of school districts. We should also account for differences in regional wage variation, which Ed Trust does. But when all of these are thrown together into a rigorous analysis of funding progressiveness across districts within states, one gets a much different picture for some states than the picture provided by the oversimplified Funding Gap analysis. See Connecticut.
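
To make this concrete, here is a minimal sketch of such an analysis – hypothetical file and column names, and a deliberately simplified specification, not our forthcoming index – regressing per-pupil state and local revenue on poverty across all districts, with controls for scale and regional wages:

```python
# Sketch of a "systematic progressiveness" test across ALL districts.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("state_districts.csv")
d["log_rev"] = np.log(d["state_local_rev_pp"])  # per-pupil state+local revenue
d["log_enroll"] = np.log(d["enrollment"])

fit = smf.ols(
    "log_rev ~ poverty_rate + log_enroll + I(log_enroll**2) + wage_index",
    data=d,
).fit()

# A positive, significant poverty coefficient = systematically
# progressive funding; negative = regressive; insignificant = neither.
print(fit.params["poverty_rate"], fit.pvalues["poverty_rate"])
```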

Conclusions – Okay, so this is just Baker, a school finance techie geek, bitching and moaning about trivial statistical problems with research largely conducted by Marguerite Roza and colleagues at the Center on Reinventing Public Education, and about the reliance of CAP, DFER and Ed Trust on that work. Perhaps – BUT – we are talking about billions of dollars here, and the distribution of billions of dollars should be backed by reasonably rigorous analysis and good assumptions. So, here are the take-home points:

1) Teacher quality distribution is critically important, and the main problem exists between school districts.

2) State school finance systems – not Title I and not district allocation policies – are the primary underlying cause of resource disparity across children in public schools, and the primary resource disparities are those that exist between districts.

  • Funding one or two high-poverty districts well in a state is by no means “systematic” progressiveness.
  • FUNDING EQUITY – FUNDING PROGRESSIVENESS – IS A NECESSARY (though perhaps not in-and-of-itself sufficient) UNDERLYING CONDITION FOR ACHIEVING TEACHER QUALITY EQUITY.

As such, any legitimate requirements for states to qualify for additional fiscal stabilization funds or for Race to the Top funding should include precise indicators of state responsibility to improve school funding equity and adequacy. Ed Trust, CAP and DFER have done a huge disservice by missing this point entirely.

Most recent presentation on Title 1:

Baker.AERA.Title1

Most recent presentation on Within/Between Funding & Teachers:

AEFA 2009b_color

HERE IS A MUCH MORE PRECISE SET OF COMMENTS REGARDING SCHOOL FUNDING, FROM THE EDUCATION LAW CENTER OF NJ:

ELCRTTFcoments.Aug28

Who should qualify for Race to the Top?

I’ve been asked at least a few times this past week what types of requirements should be included for states to qualify for Race to the Top federal stimulus funding. Interestingly, there seems thus far to be little focus on whether states are actually financing their schools equitably and adequately, and putting up reasonable effort to do so, as a requirement for accessing stimulus funds. More disconcerting is that there also seems to be little emphasis on whether stimulus stabilization funds are being used to advance equity and adequacy of funding. In some cases, which I will elaborate on at a later date, stabilization funds have actually been allocated in ways that erode equity and reduce state effort. That being water under the bridge, what might be some reasonable requirements for Race to the Top and second-year stimulus funds, and which states might or might not qualify?

Category 1: Fiscal Effort

A state’s effort in school finance is often measured as aggregate state and local PK-12 public education spending as a percentage of Gross State Product (now labeled Gross Domestic Product by State). Some have suggested that states which maintain current effort levels should qualify for stimulus funds. This seems a low bar for states that put up very little effort, like Delaware and Louisiana. It seems to me that low-effort states – those below the average – should have to show that they’ve increased effort significantly, while states above the average should perhaps be held to the maintenance standard. I discuss Louisiana’s effort here.
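
In code, the metric is a one-liner (figures below are hypothetical placeholders, not any state’s actual numbers):

```python
# Fiscal effort: aggregate state + local PK-12 spending / Gross State Product.
k12_state_local_spending = 7.0e9   # hypothetical state + local PK-12 total
gross_state_product = 200.0e9      # hypothetical GSP (GDP by state)

effort = k12_state_local_spending / gross_state_product
print(f"fiscal effort: {effort:.1%}")  # 3.5% of GSP in this toy case
```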

Category 2: Fiscal Adequacy

Effort and adequacy are somewhat linked, as one can see in my rant about Louisiana and Mississippi. Louisiana is low-effort and low-adequacy in funding, whereas Mississippi is average-effort and low-adequacy. That is, Louisiana is perhaps more to blame for its own inadequacy than Mississippi, which simply lacks the economic base.

I would argue that any state which has (a) below-average effort and (b) below-average per-pupil spending adjusted for regional variation in wages (using the NCES Comparable Wage Index) should be low on the list for additional stimulus funds. States with below-average regionally adjusted spending and below-average effort should be required to increase both in order to qualify. Sadly, however, I suspect that states like Louisiana would gladly further deprive the less than 85% of children who actually attend their public schools of these additional resources (LA has the highest share of students in private schools). Indeed, these requirements are a double-edged sword.

Category 3: Fiscal Equity

This one is a little more complicated, but the general idea is that states should have to show that they’ve made an effort to target additional resources – state and local district revenues – to higher-poverty school districts. In a forthcoming indexing system, we control for a variety of school district characteristics to evaluate whether, on average, a state school finance system results in systematically more state and local revenue per pupil in higher-poverty school districts than in lower-poverty ones. Unfortunately, the Education Trust approach of looking at the highest- and lowest-poverty 25% of districts misses the boat, because it fails to capture whether the pattern is systematic across all districts. A good example is Connecticut, which shows a positive differential in state and local revenue between high- and low-poverty districts; but when measured statistically across all districts, the relationship is not statistically significant – not systematic. That’s because Connecticut district revenues are all over the map. The average spending for high-poverty districts is skewed by only two (Hartford and New Haven), which are relatively high state and local revenue districts. Meanwhile, districts like Bridgeport, Waterbury, New Britain and others are pretty much left out.
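
A small synthetic example (entirely hypothetical districts, loosely mimicking the pattern just described) shows how the high-minus-low comparison and the across-all-districts test can disagree:

```python
# Ten hypothetical districts ordered by poverty rate. Two high-poverty
# outliers (think Hartford / New Haven) get unusually high revenue.
import numpy as np
from scipy import stats

poverty = np.array([0.05, 0.08, 0.10, 0.12, 0.15,
                    0.18, 0.22, 0.25, 0.30, 0.34])
revenue = np.array([13000, 11000, 14000, 10500, 12500,
                    11500, 10000, 10800, 17000, 16500])

# Ed Trust-style comparison: mean of highest- vs lowest-poverty groups.
gap = revenue[-3:].mean() - revenue[:3].mean()

# Systematic test: the relationship across ALL districts.
res = stats.linregress(poverty, revenue)

print(f"high-low gap: ${gap:,.0f}")        # about +$2,100: looks progressive
print(f"slope p-value: {res.pvalue:.2f}")  # about 0.15: not systematic
```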

So, with that in mind, what needs to be measured here? Well, to qualify for Race to the Top funds, I believe that the first states in line should be those where there exists a systematic positive relationship between state and local revenues per pupil and either (a) US Census poverty estimates or (b) NCES Common Core free/reduced lunch rates. This includes only a handful of states, such as New Jersey and Minnesota (the latter also driven by Minneapolis and St. Paul, but better than CT). States with either no relationship between state and local revenue and poverty, or a negative one, should have to show that they have significantly improved the relationship between state and local revenue per pupil and poverty. For example, New York State, one of the nation’s most “regressively” funded states, could reduce its negative relationship significantly by following through with planned increases in funding to New York City schools and to many other poor, small-city districts around the state which remain in the hole, so to speak. Similarly, Pennsylvania, which until recent reforms was the most regressively funded state in the nation, could really put a dent in its negative funding relationship by following through with the Governor’s plan to continue phasing in the new funding formula. This, in my mind, would make PA an ideal candidate for Race to the Top funding.