
Addendum (and a catchy tune): Ethics, Social Science Research and VAMing Teachers

A few days ago, I posted my concerns regarding the contorted logic of the Brookings report on evaluating teacher evaluation systems. More recently, NEPC posted a slightly revised version of that blog post here: http://nepc.colorado.edu/files/Passing%20muster%20fails%20muster.pdf

Below is an addition to the NEPC version which was not in my original post; it began as a reply I made to a comment on that post.

The awkward issue here is that this brief and calculator are prepared by a truly exceptional group of scholars, and not just reform-minded pundits. It strikes me that we technocrats have started to fall for our own contorted logic – that the available metric is the true measure – and the quality of all else can only be evaluated against that measure. We’ve become myopic in our analysis, and we’ve forgotten all of the technical caveats of our own work, simply assuming the technical caveats of any/all alternatives to be far greater.

Beyond all of that, I fear that technicians working within the political arena are deferring judgment on important technical concerns that have real ethical implications. When a technician knows that one choice is better (or worse) than another, one measure or model better than another, and that these technical choices affect real lives, the technician should – MUST – be up front/honest about these preferences.

Of course, this all got me thinking about our responsibilities as social science researchers and especially as social science researchers attempting to use complex statistical models to affect public policy in ways that in turn have real consequences for real people.

Now, I’m no expert in ethics, so I’ll not opine much further on the topic. However, I believe that I’ve become somewhat sensitized to ethical concerns and dilemmas that occur in such contexts, perhaps by various interactions with some pretty good ethical thinkers over time and perhaps even by my time working at the Ethical Culture Schools in NYC. Interestingly, one noted alum of ECS was J. Robert Oppenheimer (“father of the atomic bomb”), for whom a physics lab at the school is named.

This all reminds me of a song…

Demystifying today’s Abbott Decision

First, let’s identify the players:

  1. New Jersey Legislature & Governor, or THE STATE
  2. Children attending Abbott school districts and their legal representation, or THE PLAINTIFFS
  3. THE COURT (NJ Supreme Court)
  4. Other school districts and the children they serve

Now, let’s not go too far back in history; instead, let’s account for the last few years, which really define where we are today and how this decision makes sense: http://www.judiciary.state.nj.us/opinions/supreme/M129309AbbottvBurke.pdf

Until a few years ago, the State of New Jersey was operating its school funding formula under a series of court orders specifically intended to ensure that children attending school districts known as Abbott districts received sufficient resources to provide them with a constitutionally adequate education (history here: http://edlawcenter.org/ELCPublic/AbbottvBurke/AbbottHistory.htm). The original Abbott v. Burke lawsuit was brought on behalf of PLAINTIFF children who resided in specific school districts.

A few years ago (2008-09), the New Jersey Legislature – THE STATE – adopted the School Funding Reform Act of 2008 in a legislative, proactive attempt to move into a new era in New Jersey school funding: an era not driven by judicial mandates but rather by a legislatively adopted formula, an era where a unified state school finance formula would drive “adequate” (their words, not mine) funding to local public school districts, whether those districts were among those that had previously sued the state over funding or not.

PLAINTIFFS CHALLENGED that formula, saying it would not provide them with adequate resources and should not be considered constitutional.

THE STATE argued that the formula, SFRA, was essentially THE OPERATIONAL DEFINITION OF THEIR CONSTITUTIONAL MANDATE – that SFRA, by its design and according to its planned implementation, was necessarily constitutional.

THE COURT cut THE STATE a break, and indicated that while it wasn’t entirely sure that SFRA really was the operational definition of the constitutional mandate, it was a reasonable attempt and should be allowed to move forward. That is, the COURT was anything but activist, giving THE STATE an opportunity to move forward with their new school finance plan, but holding the STATE to their promise on a 3 year time frame. THE STATE WON, and THE PLAINTIFFS LOST.

Then, all hell breaks loose in the economy and THE STATE (which is now a different set of individuals/Governor/legislators, but that’s not relevant to the legal question at hand) pulls about $1.7 billion out of SFRA, relative to where it would have been if implemented as promised. Again, THE STATE had argued that SFRA implemented as promised was effectively THE OPERATIONAL DEFINITION OF THEIR CONSTITUTIONAL MANDATE.

So today, THE COURT had a really narrow, arguably boring question to answer. They didn’t have to answer the big question of whether SFRA in its current form meets the constitutional standard or what that constitutional standard really meant. They had decided in 2009 that SFRA as planned would meet the constitutional standard, and had accepted THE STATE’s argument to that effect. Today, THE COURT merely had to decide whether SFRA, in its current form – less $1.7 billion – was still implemented as planned. That’s a pretty simple NO. Right or wrong in any broader sense, whether SFRA is a good formula or a sucky one, the legal question before this court was simply whether SFRA was implemented as planned. And it wasn’t.

Judicial activism? Let’s review. First a definition. Judicial activism is when the judicial branch applies the constitution to invalidate statutes passed by the legislature. While having negative connotations, judicial activism is clearly appropriate under some circumstances. Legislatures do adopt policies that violate individual rights and checks and balances are critical. I guess you could say that this decision invalidates recent budgetary decisions. BUT, and this is a big BUT, all that the court has done here is to uphold the state school finance formula that THE STATE asked them to uphold a few years ago.

The court is merely upholding a legislative action that it already upheld a few years ago (while granting significant deference to the legislature on how that formula would work).  That’s pretty mundane, if you ask me.

Are Abbott districts and Ed Law Center the big winners here? It’s really important to understand here that SFRA was considered to be a reasonable operational definition of the state constitutional obligation because THE PLAINTIFFS LOST in 2009. ELC and Abbotts did not want SFRA and felt that it didn’t provide sufficient additional resources to meet the needs of children in Abbott districts. They lost in 2009. THE STATE won, and SFRA was accepted. So, this time around, ELC and Abbotts had to suck it up and accept that SFRA was the standard, and argue that at least SFRA should be funded as planned and as accepted by the court – BECAUSE IT ALREADY WAS. This new decision today merely affirms the PLAINTIFF’s previous loss.

What about that whole bit about THE STATE only having to reinstate the cuts to Abbott districts – THE PLAINTIFFS? Perhaps this is a technicality, but children in Abbott districts are the original plaintiffs and the ones who continue to be represented in this case – THE PLAINTIFFS. So, it is technically correct in a legal sense that THE STATE would be obligated only to close those funding gaps.

BUT… and this is another BIG BUT… this does leave the door wide open to the possibility that all of those other districts whose current funding levels fall “below adequacy” under SFRA can bring separate lawsuits against the state to have their cuts restored as well (if THE STATE were to choose to only restore cuts to Abbott districts). After all, THE STATE has said and THE COURT has accepted that SFRA as planned was constitutional. THE COURT has now said that funding below that level is a constitutional violation, seemingly making for a pretty straightforward argument for non-Abbotts below their target funding levels – adequacy funding – under SFRA. Let the games begin!

Does New Jersey really need more small, segregated schools?

Political pundits and the media frequently point out two major concerns regarding the organization of public school districts in New Jersey.

  • First, that New Jersey, being the most population dense state in the nation, simply has far too many small schools and school districts (largely an artifact of municipal reorganization and alignment that occurred in the late 1890s and first decade of the 1900s).
  • Second, that New Jersey is among the most racially and socioeconomically segregated states in the nation, or more specifically, that many urban communities in New Jersey suffer extreme racial isolation (high concentration of a single race/ethnicity).

I blogged about this topic way back when I first started this blog!

Here’s a snapshot:

So then, one should ask how expansion of charter schools intersects with these two major policy concerns. It would be one thing if New Jersey Charter Schools simply had a track record of a) serving similar student populations and b) consistently outperforming traditional public schools in the same location. That is, one might argue that we can deal with a marginal increase in segregation and additional segmentation of our school system if it’s producing better results (therefore not compromising efficiency). But that’s not the case. New Jersey charter schools, on average, are average.  In particular, there are few if any high performing, high poverty charters. The figure below is from a recent post.

In fact, the NJ charters frequently cited as high flyers also tend to a) serve far lower shares of children qualifying for free lunch, b) serve far fewer LEP/ELL children, and c) in some cases show disproportionately high attrition rates in the middle grades.

I’ve shown on many occasions on this blog that NJ charters serve far fewer children with greater educational needs.

But do NJ Charter schools contribute to racial and ethnic segregation in New Jersey? Given the break-even performance of NJ charters, it would make little sense to advance a policy agenda that has the tendency to increase segregation and racial isolation in a state already segregated and racially isolated.

Here are the figures, based on the 2009-10 NCES Common Core of Data, Public School Universe Survey, based on the zip code of school location (LZIP).

I’ve included only elementary and middle schools in the following graphs.
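For readers who want to reproduce this kind of comparison, here is a minimal Stata sketch. The file name and variables (ccd_psu_0910, elem_or_middle, the percent variables) are hypothetical placeholders, not actual CCD field names – only LZIP and the charter flag correspond to concepts named above.

```stata
* Minimal sketch (hypothetical file and variable names): compare charter vs.
* non-charter demographic shares by zip code of school location
use ccd_psu_0910, clear
keep if elem_or_middle == 1                    // elementary & middle schools only
collapse (mean) pct_freelunch pct_black pct_hisp pct_asian ///
    [aweight = enrollment], by(lzip charter)
* reshape so each zip code has non-charter (0) and charter (1) shares side by side
reshape wide pct_freelunch pct_black pct_hisp pct_asian, i(lzip) j(charter)
list lzip pct_freelunch0 pct_freelunch1 in 1/20
```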

First, here are the charter and non-charter averages for % Free Lunch by zip code:

While statewide averages are relatively comparable, as I’ve discussed numerous times, there are big differences in specific locations. Note the number of zip codes where charters serve far fewer children qualifying for free lunch (light blue bars way below dark blue bars). In a few cases, charters serve higher rates.

Second, here are the charter and non-charter % black populations by zip code:

In many cases, charters serve far higher concentrations of black students than surrounding schools. This figure provides an intriguing contrast with the previous one, suggesting that in fact, in many neighborhoods, charters are serving the less poor among black populations specifically, and are serving black populations almost exclusively in some otherwise mixed-race neighborhoods.

Third, here is the distribution of Hispanic enrollments by zip code:

Charter schools seem to be largely underserving Hispanic populations. This may be consistent with their underserving of LEP/ELL children to the extent that there is overlap between LEP/ELL concentrations and Hispanic enrollments within Zip Codes. A few zip codes have higher concentrations of Hispanic children in charter schools but most have far fewer.

Finally, here is the concentration of Asian students by zip code:

A handful of NJ charter schools have highly disproportionate shares of Asian students.

These figures raise important questions about the contribution of charter schools in the broader education policy and public policy context in a state already grappling with significant segregation and racial isolation (and consolidation, or lack thereof). These concerns may be particularly relevant as increasing numbers of culture- (ethnicity-) specific charter schools are proposed, dispersed throughout the state.

Raw Stata output of tabulations: Charter Segregation Raw Output

Graphs of the Day: Texas Private School Enrollments & Expenditures

Below are a series of graphs of the distribution of enrollments and average total expenditures for Texas private schools. I figure these are particularly relevant as the Texas legislature entertains the idea of providing vouchers for private schools in Texas. These data, unfortunately, are from a few years back – based on 2008 IRS tax filings of private schools. Further, because I used IRS filings to determine expenditures, certain groups of schools – most notably Catholic schools – are noticeably underrepresented in the financial analysis. That said, I was able to compile sufficient data on relatively large numbers of Independent Schools (about 75% of all nationally) and Christian Schools (nearly 1/3… not great, but reasonable numbers). Those two groups of schools represent a significant share of Texas private school enrollments.
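As a rough illustration of how such a file might be assembled, here is a minimal Stata sketch; the file and variable names (irs990_2008, pss_enrollment_tx, total_expend, affiliation) are hypothetical placeholders, not the actual source layouts.

```stata
* Minimal sketch (hypothetical files and variables): merge IRS 990 expenditure
* data onto Texas private school enrollments, compute per-pupil spending
use irs990_2008, clear
merge 1:1 school_id using pss_enrollment_tx
keep if _merge == 3                      // keep schools matched in both files
gen pp_exp = total_expend / enrollment
tabstat pp_exp enrollment, by(affiliation) stat(mean p50 n)
```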

Here’s the punchline from these graphs. If we have any expectation that a voucher program is going to provide religious neutrality in access to private schooling or to provide sufficient opportunity to attend high quality non-religious, private independent schools, then voucher levels likely need to be much higher than commonly recommended. This then raises the key policy question – if the vouchers would have to be much higher than the average current public school expenditure – and the outcomes unknown – why would we adopt such a policy?

As the larger study (linked below) shows, private schools are not uniformly/systematically “cheaper” and/or “better” than public schools. Rather, they vary widely, and there are substantive differences in the programs (class size, etc.) and teacher characteristics in low spending versus high spending private schools.

Further, it is important to consider NOT the TUITION, but the actual per pupil expenditures of schools that are expected to enroll voucher students. Schools will (and can) only absorb so much loss per child, just as they do when setting tuition & financial aid policy while cognizant of their program cost structures. And, as voucher enrollment shares of total enrollments increase, shares of enrollments of families likely (and able) to contribute significantly to annual funds (to offset operating gaps) decrease (a potentially vicious cycle of financial decline).

That out of the way… here are the Texas numbers:

Far more information on the data used here and their policy implications can be found here: http://nepc.colorado.edu/publication/private-schooling-US

Passing Muster Fails Muster? (An Evaluation of Evaluating Evaluation Systems)

The Brookings Institution has now released their web-based version of Passing Muster, including a nifty calculation tool for rating teacher evaluation systems. Unfortunately, in my view, this rating system fails muster in at least two major ways.

First, the authors explain their (lack of) preferences for specific types of evaluation systems as follows:

“Our proposal for a system to identify highly-effective teachers is agnostic about the relative weight of test-based measures vs. other components in a teacher evaluation system.  It requires only that the system include a spread of verifiable and comparable teacher evaluations, be sufficiently reliable and valid to identify persistently superior teachers, and incorporate student achievement on standardized assessments as at least some portion of the evaluation system for teachers in those grades and subjects in which all students are tested.”

That is, a district’s evaluation system can consider student test scores to whatever extent the district wants, in balance with other approaches to teacher evaluation. The logic here is a bit contorted from the start. The authors explain what they believe are necessary components of the system, but then claim to be agnostic on how those components are weighted.

But, if you’re not agnostic on the components, then saying you’re agnostic on the weights is not particularly soothing.

Clearly, they are not agnostic on the components or their weight, because the system goes on to evaluate the validity of each and every component based on the extent to which that component correlates with the subsequent year’s value-added measure. This is rather like saying: we remain agnostic on whether you focus on reading or math this year, but we are going to evaluate your effectiveness by testing you on math. Or, more precisely: we remain agnostic on whether you emphasize conceptual understanding and creative thinking this year, but we are going to evaluate your effectiveness with a pencil-and-paper bubble test of specific mathematics competencies, vocabulary, and grammar.

Second, while hanging ratings of evaluation systems entirely on their correlation with “next year’s value added,” the authors choose to again remain agnostic on the specifics for estimating the value-added effectiveness measures. That is, as I’ve blogged in the past, the authors express a strong preference that the value-added measures be highly correlated from year to year, but remain agnostic as to whether those measures are actually valid, or instead are highly correlated mainly because the measures contain significant consistent bias – bias which disadvantages specific teachers in specific schools – and does so year after year after year!

Here are the steps for evaluating a teacher evaluation system as laid out in Passing Muster:

Step 1: Target Percentile of True Value Added

Step 2: Constant factor (tolerance)

Step 3: Correlation of teacher level total evaluation score in current year, with next year value added

Step 4: Correlation of non-value added components with next year’s value added

Step 5: Correlation of this year’s value added with next year’s value added

Step 6: Number of teachers subject to the same evaluation system used to calculate correlation in step 3 (a correlation with next year’s value added!)

Step 7: Number of current teachers subject to only the non-value added system

In researchy terms, their system is all reliability and no validity (or, at least, inferring the latter from the former).
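To make the circularity concrete, Steps 3 through 5 all reduce to pairwise correlations against the following year’s value-added estimate. A minimal Stata sketch, assuming a hypothetical teacher-level file (teacher_evals) with one row per teacher and hypothetical variable names:

```stata
* Minimal sketch (hypothetical teacher-level file): Steps 3-5 are all
* correlations with NEXT year's value-added estimate
use teacher_evals, clear
corr eval_total_t va_t1    // Step 3: total evaluation score vs. next-year VA
corr eval_nonva_t va_t1    // Step 4: non-VA components vs. next-year VA
corr va_t va_t1            // Step 5: this year's VA vs. next-year VA
```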

But, rather than simply having each district evaluate its own evaluation system by correlating its current year ratings with next year’s value-added, the Brookings report suggests that states should evaluate district teacher evaluation systems by measuring the extent to which district teacher evaluations correlate with a state-standardized value-added metric for the following year.

But again, the authors remain agnostic on how that model should/might be estimated, favoring that the state level model be “consistent” year to year, rather than accurate. After all, how could districts consistently measure the quality of their evaluation systems if the state external benchmark against which they are evaluated was not consistent?

As a result, where a state chooses to adopt a consistently biased statewide standardized value-added model, and use that model to evaluate district teacher evaluation systems, the state in effect backs districts into adopting consistently biased year-to-year teacher evaluations… that have the same consistent biases as the state model.

The report does suggest that in the future, there might be other appropriate external benchmarks, but that:

“Currently value-added measures are, in most states, the only one of these measures that is available across districts and standardized.  As discussed above, value-added scores based on state administered end-of-year or end-of-course assessments are not perfect measures of teaching effectiveness, but they do have some face validity and are widely available.”

That is, value-added measures – however well or poorly estimated – should be the benchmark for whether a teacher evaluation system is a good one, simply because they are available and we think, in some cases, that they may provide meaningful information (though even that remains disputable – to quote Jesse Rothstein’s review of the Gates/Kane Measures of Effective Teaching study: “In particular, the correlations between value-added scores on state and alternative assessments are so small that they cast serious doubt on the entire value-added enterprise.” See: http://nepc.colorado.edu/files/TTR-MET-Rothstein.pdf).

I might find some humor in all of this strange logic and circular reasoning if the policy implications weren’t so serious.

(RE)Ranking New Jersey’s Achievement Gap

New Jersey’s current commissioner of education seems to stake much of his argument for the urgency of implementing reform strategies on the claim that while New Jersey ranks high on average performance, it ranks 47th in the achievement gap between low-income and non-low-income children (video here: http://livestre.am/M3YZ). To be fair, this is classic political rhetoric with few or no partisan boundaries.

As I have been discussing on this blog, comparisons of achievement gaps across states between children in families above the arbitrary 185% income level and below that level are very problematic. In my last post on this topic, I showed that in states where there is a larger gap in income between these two groups (the above- and below-the-line groups), there is also a larger gap in achievement. That is, the size of the achievement gap is largely a function of the income distribution in each state.

Let’s take this all one last step and ask: if we correct for the differences in income between low- and higher-income families, how do the achievement gap rankings change? And let’s do this with an average achievement gap for 2009 across NAEP Reading and Math for Grades 4 and 8.

First, here are the differences in income for lower and higher income children, with states ranked by the income gap between these groups:

Massachusetts, Connecticut and New Jersey have the largest income gaps between families above and below the arbitrary Free or Reduced Price Lunch income cut off.

Now, let’s take a look at the raw achievement gaps averaged across the four tests:

New Jersey has a pretty large gap, coming in 5th among the lower 48 states (note there are other difficulties in comparing the income distributions in Alaska and Hawaii, in relation to free/reduced lunch cut points). Connecticut and Massachusetts also have very large achievement gaps.

One can see here, anecdotally, that states with larger income gaps in the first figure are generally those with larger achievement gaps.

Here’s the relationship between the two:

In this graph, a state that falls ON THE LINE is a state where the achievement gap is right on target for the expected achievement gap, given the difference in income for those above and below the arbitrary free or reduced price lunch cut-off. New Jersey falls right on that line. States falling on the line have relatively “average” (or expected) achievement gaps.

One can take this the next step to rank the “adjusted” achievement gaps based on how far above or below the line a state falls. States below the line have achievement gaps smaller than expected and above the line have achievement gaps larger than expected. At this point, I’m not totally convinced that this adjustment is capturing enough about the differences in income distributions and their effects on achievement gaps. But it makes for some fun adjustments/comparisons nonetheless. In any case, the raw achievement gap comparisons typically used in political debate are pretty meaningless.
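Mechanically, the adjustment amounts to regressing the achievement gap on the income gap and ranking states by the residual. A minimal Stata sketch, with hypothetical file and variable names (state_gaps, naep_gap, income_gap):

```stata
* Minimal sketch (hypothetical state-level file): rank states by how far
* their achievement gap sits above or below the fitted line
use state_gaps, clear
reg naep_gap income_gap
predict adj_gap, resid        // positive = gap larger than expected
gsort -adj_gap                // sort from largest adjusted gap to smallest
gen adj_rank = _n
list state naep_gap adj_gap adj_rank
```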

Here are adjusted achievement gap rankings:

Here, if I counted my bars right, NJ comes in 27th in achievement gap. That is 27th from largest. That is, New Jersey’s adjusted achievement gap between higher and lower-income students, when correcting for the size of the income gap between those students, is smaller than the gap in the average state.

More on NAEP Poverty Gaps & Why State Comparisons Don’t Work

This post is a follow-up to a recent post on how income distributions differ across states and how those income distributions thwart our ability to make reasonable comparisons across states in the size of achievement gaps in relation to low-income status. This series of posts on NAEP poverty gaps comes in response to a tweet on May 4 from Lisa Fleisher of the WSJ.  Lisa was quoting NJ Education Commissioner Cerf on NJ school performance.

  • @lisafleisher Lisa Fleisher
  • Cerf on performance of NJ schools compared w/nation: 5th best in country. But gap btwn rich/poor = 47th highest gap. An “astounding figure”

Cerf has had some difficulties in the past making reasonable (honest) presentations of achievement data – specifically with respect to the influence of poverty measurement.

To review (so you don’t have to necessarily go back and read the other post, which is here):

Here’s the basic framing adopted by most who report on this stuff:

Non-Poor Child Test Score – Poor Child Test Score = Poverty Achievement Gap

Non-Poor Child in State A = Non-Poor Child in State B

Poor Child in State A = Poor Child in State B

These conditions have to be met for there to be any validity to rankings of achievement gaps.

Now, here’s the problem.

Poor = child from a family with income below 185% of the federal poverty income threshold

Therefore, the measurement of an achievement gap between “poor” and “non-poor” is:

Average NAEP of children above 185% poverty threshold – Average NAEP of children below 185% poverty threshold = “Poverty” achievement Gap

But the income level defining poverty does not vary by state or region. See: https://schoolfinance101.com/wp-content/uploads/2011/03/slide1.jpg

As a result, the distribution of children and their families above and below the specified threshold varies widely from state to state, and comparing the average performance of the groups of children above that threshold and below it is not particularly meaningful.  Comparing those gaps across states is really problematic.

While I showed how different the poverty and income distributions were in Texas and New Jersey as an example, I didn’t necessarily go far enough in that post to explain how/why these distributional differences thwart comparisons of low-income vs. non-low-income achievement gaps. Yes, it should be clear enough that the above-the-line and below-the-line groups just aren’t similar across these two states, or across nearly any other pair.

A logical extension of the analysis in that previous post would be to look at the relationship between:

Gap in average family total income between those above and below the free or reduced price lunch cut-off

AND

Gap in average NAEP scores between children from families above and below the free or reduced price lunch cut-off

If there is much of a relationship between the income gaps and the NAEP gaps – that is, states with larger income gaps between the poor and non-poor groups also have larger achievement gaps – such a finding would call into question the usefulness of state comparisons of these gaps.

So, let’s walk through this step by step.

First, here is the relationship across states between the NAEP Math Grade 8 scores and family total income levels for children in families ABOVE the free or reduced cutoff:

There is a modest relationship between income levels of non-low income children and NAEP scores. Higher income states generally have higher NAEP scores. No adjustments are applied in this analysis to the value of income from one location to another, mainly because no adjustments are applied in the setting of the poverty thresholds. Therein lies at least some of the problem. The rest lies in using a simple ABOVE vs. BELOW a single cut point approach.

Second, here’s the relationship between the average income of families below the free or reduced lunch cut point and the average NAEP scores on 8th Grade Math (2009).

This relationship is somewhat looser than the previous one, and for logical reasons – mainly that we have applied a single low-income threshold to every state, and the average income of individuals below that threshold does not vary as widely across states as the average income of individuals above it. Further, the income threshold is arbitrary and not sensitive to the differences in the value of any given income level across states. But still, there is some variation, with some states having much larger clusters of very low-income families below the free or reduced price lunch threshold (Mississippi).

BUT, HERE’S THE PUNCHLINE:

This graph shows the relationship between income gaps estimated using the American Community Survey data (www.ipums.org) from 2005 to 2009 and NAEP Gaps. This graph addresses directly the question posed above – whether states with larger gaps in income between families above and below the arbitrary low-income threshold also have larger gaps in NAEP scores between children from families above and below the arbitrary threshold.

In fact, they do. And this relationship is stronger than either of the two previous relationships. As a result, it is somewhat foolish to try to make any comparisons between achievement gaps in states like Connecticut, New Jersey and Massachusetts versus states like South Dakota, Idaho or Wyoming. It is, for example, more reasonable to compare New Jersey and Massachusetts to Connecticut, but even then, other factors may complicate the analysis.
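For anyone who wants to redraw the punchline graph, it is just a scatter of state NAEP gaps against ACS income gaps with a fitted line. A minimal Stata sketch, again with hypothetical variable names (naep_gap, income_gap, stabbr):

```stata
* Minimal sketch (hypothetical variables): NAEP gap vs. ACS income gap,
* with state labels and a fitted line
twoway (scatter naep_gap income_gap, mlabel(stabbr)) ///
    (lfit naep_gap income_gap)
```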

Grading the Governors’ Cuts: Cuomo vs. Kasich vs. Corbett (revised AGAIN!)

Here’s a quick data-driven post on governors’ state aid cuts – or aid changes. So far, I’ve been able to compile data from a few states that make it relatively easy to access and download district-by-district runs of state aid (and one state that does not, but I have good sources of assistance). Here, I compare changes in state aid to K-12 public school districts in Ohio, Pennsylvania and New York.

Let’s start with a review of types of cuts or distributions of cuts that might be applied:

First, cuts might be implemented as a percent of state aid, but might be implemented across different aid programs. States typically have different clumps of state aid that go out to school districts, some of which are progressively allocated with respect to need and wealth, and others which may be allocated flat across districts regardless of local capacity or wealth. And some states, like New York, actually still maintain very large aid programs that are distributed in greater amounts to wealthier districts (STAR aid). If one makes proportionate cuts to need-based aid, or equalized aid, that generally means making larger cuts to needier districts (on a per pupil basis). Such cuts are regressive on their face, and because the cuts are larger for districts with less capacity to replace the lost aid locally, the effect tends to be highly regressive. Smaller cuts on wealthier districts are easily replaced with local source funds.
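A pair of hypothetical districts makes the arithmetic plain: the same percentage cut to equalized aid translates into a far larger per-pupil dollar cut in the aid-dependent, high-need district. The dollar amounts below are purely illustrative.

```stata
* Illustrative arithmetic (hypothetical districts): a 10% cut to equalized aid
display "High-need district: 10% of $8,000/pupil in aid = $" 0.10*8000
display "Low-need district:  10% of $1,000/pupil in aid = $" 0.10*1000
```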

Alternatively, a state might cut a flat percent of flatly allocated aid, or a state might distribute aid cuts as a flat percent of per pupil budgets. The distributional effects – at face value – of these cuts do depend on the distribution of existing per pupil budgets. If the overall system is progressive to begin with (higher need districts having larger per pupil budgets), then the cuts are larger on a per pupil basis in higher need districts. If the overall system is flat, or neutral, the proportionate cuts will be flat or neutral on their face. If applied to flatly allocated aid, the cuts are flat on their face. However, because wealthier districts can more easily replace the same size cut, the distributional effect will likely remain regressive – though not as absurdly regressive as the first option.

Most cuts fall into these two above categories (first three in table), but the possibility exists that a state would actually cut state aid in greater amounts to those districts that either have less need to begin with or districts that can most easily replace that aid with local resources. These would, on their face, be progressively distributed cuts. But, because those districts receiving the largest cuts would be the ones with greatest capacity to bounce back on their own, the distribution effect would likely be flat.

The baseline conditions in a state matter!

This table draws on the School Funding Fairness report I worked on and released last year, which characterizes the baseline conditions for states. It would be particularly problematic, for example, to make the first type of cuts on a state school finance system that is regressive to begin with. It would arguably also be quite offensive to make flat cuts on a regressive system. For more explanation regarding these baseline conditions, see http://www.schoolfundingfairness.org.

New York, while having high average spending per pupil, IS AMONG THE MOST REGRESSIVELY FUNDED STATE EDUCATION SYSTEMS IN THE NATION. In fact, funding in New York State is only as high as it is because of the very high spending of very affluent suburban districts – suburban districts that, by the way, continue to receive substantial state aid for property tax relief. New Jersey and Ohio are two of the only states which, in our report, showed systematic positive relationships between funding (state and local) and poverty, although Ohio’s funding was much less systematic than New Jersey’s and less progressive overall. Still, Ohio was far more progressive on funding distribution than many other states. Pennsylvania was right down there with New York, among the most regressive in the nation – but PA had begun to phase in a new basic education funding formula which would, if implemented, lead to improvements.

How do the governors’ cuts play out? Who’s “best” and who’s “worst”?

Below are the district by district distributions of per pupil aid changes with respect to student need measures, for Ohio, NY and PA.
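Each of the figures below is essentially a scatter of per-pupil aid changes against a need measure. A minimal Stata sketch, assuming a district-level file with hypothetical variable names:

```stata
* Minimal sketch (hypothetical district-level variables): per-pupil aid change
* plotted against district poverty, with a fitted line
gen aid_change_pp = (state_aid_2012 - state_aid_2011) / enrollment
twoway (scatter aid_change_pp pct_poverty) ///
    (lfit aid_change_pp pct_poverty)
```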

In New York, the aid cuts per pupil ARE REGRESSIVE ON THEIR FACE, and fall into the first and worst category above. Higher need districts will have their aid cut nearly $500 per pupil, while many very low need districts see negligible cuts per pupil.

AND NOW FOR THE REAL KASICH CUTS. IF THE CORBETT CUTS WERE SUSPECT AS REPORTED, IT ONLY MADE SENSE TO TAKE A SECOND LOOK AT THE KASICH GAME. AND THE PLAYBOOK IS THE SAME!

The playbook is to ignore that federal stabilization money that was intended to be replaced with state aid as it disappeared. Well, here are Kasich’s REAL regressive cuts when comparing 2012 to 2011, with 2011 including the stabilization money:


Ohio’s cuts, as first reported, are particularly interesting. On a per pupil basis, the cuts are systematically smaller in higher poverty districts. The cuts are actually larger in lower need and higher wealth districts (but for a few outliers). These cuts are, on their face, progressive, and will likely lead to a relatively flat distribution of overall per pupil budget changes. I’ve not yet run the second year of aid changes, though.

As reported on the PA state portal web site, Basic Education Funding is set to increase by about 2% across PA districts. I’ve certainly heard news of cuts, but the data and official documentation at this point do not show those cuts. The overall state budget data do show huge cuts to other areas of the budget. But BEF funding receives a small boost and SEF (special education funding) is frozen. Because the boost is proportionate to 2010-11 BEF funding, which is equalized, the boost is larger in higher need districts. Nonetheless, the boost is quite small.

NOW FOR THE REAL PENNSYLVANIA CORBETT CUTS, COURTESY OF THE ED LAW CENTER OF PA:

Why the big difference? Well, I should have caught this one. Indeed, the first graph above, which shows a 2% increase over the prior year, in fact shows a 2% increase over only the STATE + FEDERAL JOBS portions of BEF. What they failed to mention is that they chose not to replace the FEDERAL STABILIZATION FUNDING. In 2010-11:

BEF = STATE AID + SFSF + JOBS

The idea was that, as SFSF disappeared, state aid would be raised to replace that money, or else districts would face substantial budget holes. Corbett’s 2012 funding is:

Corbett BEF Aid = 1.02 x (STATE AID 2010-11 + JOBS 2010-11)

That leaves out the other $650 million or so that was also in BEF (from SFSF) in the prior year, and was distributed through the equalized formula.
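Plugging in illustrative numbers shows how a nominal 2% “increase” coexists with a large real cut. Only the roughly $650 million SFSF figure comes from the discussion above; the other amounts below are hypothetical.

```stata
* Illustrative arithmetic (hypothetical statewide amounts, $ millions;
* only the ~$650M SFSF figure is from the discussion above)
local state2011 = 5000                      // 2010-11 state aid portion of BEF
local jobs2011  = 390                       // federal Ed Jobs portion of BEF
local sfsf2011  = 650                       // federal SFSF portion of BEF
local bef2011   = `state2011' + `jobs2011' + `sfsf2011'
local bef2012   = 1.02 * (`state2011' + `jobs2011')
display "2010-11 BEF: $" `bef2011' "M;  2011-12 BEF: $" `bef2012' "M"
display "Change: $" `bef2012' - `bef2011' "M"
```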

PA ELC Spreadsheet here!

So, the winner of the worst cuts award in ROUND 1 – the battle of Corbett, Kasich, Cuomo – is Cuomo. Cuomo’s cuts are large and Cuomo’s cuts are regressive on their face! That’s one heck of an accomplishment!

SO, AS IT TURNS OUT, BOTH KASICH AND CORBETT ACTUALLY DO MARGINALLY WORSE THAN CUOMO.


BONUS GRAPH – CHRISTIE’s Prior Year New Jersey Cuts


Resource Deprivation in High Need Districts? (& CAP’s goofy ROI)

This post provides a follow-up on two seemingly unrelated topics, both of which can be traced back to the Center for American Progress.

First, there was that wonderful little Return on Investment indicator series that CAP did a while back.

Second, there’s the frequent, anecdotal argument that creeps into CAP/Ed Trust and AEI conversations that high need districts all have enough resources anyway and just have to stop wasting them on things like Cheerleading and Ceramics.

In this post, I provide an abbreviated version of some of the findings from one of my recent conference papers.

The goal of the research study was, first, to identify those districts which fell into various regions or quadrants, applying a framework similar to that used by CAP in their ROI analysis, and second, to explore the differences in personnel allocation in each group of districts, looking for insights into what makes them tick (or not). It’s not a very good framework to begin with, but it at least provides a common starting point:

The idea is that districts may fall into four groups. Some are high spending high performers and some are low spending low performers. Others are high spending low performers and still others are low spending high performers. What would be interesting from a policy perspective is whether we really could identify those in Q1 above and those in Q3 above and determine what makes them tick (Q1), or not tick (Q3).
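One simple way to operationalize the four groups is to split districts at the medians of cost-adjusted spending and cost-adjusted outcomes. A minimal Stata sketch with hypothetical variable names (this is not necessarily how CAP drew its cut points):

```stata
* Minimal sketch (hypothetical district-level variables): classify districts
* into quadrants around medians of cost-adjusted spending and outcomes
egen med_spend = median(adj_spending)
egen med_out   = median(adj_outcome)
gen hi_spend = adj_spending >= med_spend
gen hi_out   = adj_outcome  >= med_out
gen quadrant = cond(hi_spend & hi_out, "high spend / high outcome",   ///
               cond(!hi_spend & !hi_out, "low spend / low outcome",   ///
               cond(hi_spend, "high spend / low outcome",             ///
                              "low spend / high outcome")))
tab quadrant
```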

As I discussed in a previous post, CAP took an egregiously flawed approach to correcting/adjusting for various factors and laying out districts across these four quadrants. Here’s a snapshot of their Illinois findings:

The CAP IL snapshot shows plenty of districts in those green and red quadrants. Of course, the CAP snapshot a) fails to fully correct for poverty-related costs or ELL-related costs and b) doesn’t correct at all for economies of scale or population density. If one were to believe the CAP findings, one would assume that there are similar proportions of districts in each group – both the expected groups (upper right and lower left) and the less likely groups (upper left and lower right). Of course, CAP also blew it in their interpretation of what’s going on in the lower left. They seemed to chastise these low spending, low performing districts for their low performance, rather than acknowledge that these are actually the districts that have been screwed on funding, and are producing exactly what is expected of them in terms of outcomes.

Of course, if one more fully corrects for differences in costs across IL school districts, the actual distribution by quadrant comes out more like this (see conference paper for details on cost adjustment model):

The reality is that there aren’t a whole lot of districts – at least in the Chicago metro area – that fall in the upper left and lower right quadrants. In fact, districts are largely where they are expected to be: some have plenty of resources and do quite well, and others have limited resources and are doing poorly. Now, there is plenty of variance in the lower left and upper right which could be explored for interesting patterns.

Note that Illinois (along with PA and NY) is among the most regressively funded and racially disparately funded systems in the country!

How do resource constraints relate to curricular offerings?

Much of the conversation of the past few days/weeks by pundits on twitter and in blogs has been on the question of what’s good for the “rich” and what’s good for the “poor.” Let me reframe that issue in this post in terms of what kids have access to in districts in the upper right quadrant of the above figure versus what kids have access to in the lower left quadrant. Of course, the anecdotal assumption laid out above is that there are actually a whole bunch of districts in the lower right that have elaborate cheerleading and ceramics programs. Say it ain’t so! Okay… it ain’t!

What is so is that students attending districts in the lower left hand quadrant tend to have much less access to advanced curricular opportunities and boutique elective courses than children attending districts in the upper right hand quadrant. Here are a few figures, based on individual staffing assignment data:


Children attending districts in the upper right hand quadrant are nearly 3 times as likely to have access to a teacher assigned primarily to advanced math courses, nearly twice as likely to have access to a teacher primarily assigned to advanced literature or advanced science, and significantly more likely to have access to a teacher assigned primarily to advanced social sciences or even seemingly more basic offerings like Algebra and Geometry. Moving deeper into the extremes of the upper right and lower left quadrants magnifies these disparities. Further, while these distributions are expressed as a percent of total staffing, high spending, high outcome districts tend to have significantly more staff per pupil.

Students in the lower left hand quadrant do have more of some stuff. They have a greater density (as a share of total staffing, but NOT on a per pupil basis) of elementary classroom teachers, and teachers in bilingual, alternative and at risk education. They also seem to have marginally more school site administrators. They have only comparable shares of staff allocated to basic level courses.
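These staffing shares can be computed directly from assignment-level records. Here is a minimal Stata sketch; the assignment codes and variable names are hypothetical placeholders, not the actual state staffing file layouts.

```stata
* Minimal sketch (hypothetical staffing assignment file): share of staff
* assigned primarily to advanced courses, compared across quadrants
gen adv_assign = inlist(assignment, "adv math", "adv science", "adv lit")
collapse (mean) adv_share = adv_assign, by(district_id quadrant)
tabstat adv_share, by(quadrant) stat(mean n)
```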

Implications?

Analyses in the full paper provided little evidence in Illinois or Missouri that high need and low performing districts were squandering their resources on things like cheerleading or ceramics, or, for that matter, that there were large numbers of high need, low performing districts that really had enough resources to begin with but weren’t using them productively. The classic emergent profile of a high need, low performing district in Missouri and Illinois was of a district with highly constrained resources after adjustment for costs – a district that had largely forgone assigning teachers to advanced content areas and elective courses for which it perhaps expected few students to enroll. Lack of a rich curriculum in high need settings is a significant policy concern, and one that cannot likely be remedied by reshuffling deck chairs. These districts in fact need more total resources than high spending, high outcome districts because they must be able both to offer the basic course work that prepares students to gain access to higher level courses, and to offer the higher level courses themselves. Under present circumstances in many states, those resources just aren’t there, and it is very counterproductive to pretend either that they are or that it’s the districts’ fault they aren’t!