Blog

From Portfolios to Parasites: The Unfortunate Path(ology) of U.S. Charter School Policy

I recall several years ago attending an initial organizing meeting for a special interest group on Charter Schools at the American Educational Research Association. Note to outsiders – AERA has several special interest groups, some research oriented, some advocacy oriented… many somewhere in between. These are member organized groups and many are very small. If I recall correctly, there were a handful of us at that meeting, including Gary Miron, Katy Bulkley and a few others. If memory serves me, I think Rick Hess may have paid a visit to the meeting to argue that this new group should really just be a part of the school choice special interest group. All of that aside, I and others attended this meeting out of our interest in studying this relatively new concept of charter schools. Most of us were intrigued by the possibilities of alternative governance structures that might provide opportunity for innovation (what might now be referred to as disruptive innovation).

I didn’t spend a whole lot of time researching charters in my first few years after that, but eventually I did start to explore charter schooling and teacher labor markets – specifically the recruitment/retention of teachers with different academic backgrounds, as measured by college selectivity. My perspective was that the creative, energetic leadership (which might now be referred to as Cage-busting leadership) associated with a mission-driven start-up school, coupled with an ounce or two of deregulation and applied in the right context, might provide opportunities to recruit an academically talented pool of teachers. Our research largely supported these assertions.

  • Baker, B. D., & Dickerson, J. L. (2006). Charter Schools, Teacher Labor Market Deregulation, and Teacher Quality: Evidence From the Schools and Staffing Survey. Educational Policy, 20(5), 752-778.

In recent years, however, my perception is that this whole movement has gotten way out of control – it has morphed dramatically – especially the punditry and resultant public policy surrounding charter schooling. Sadly, I’m reaching a point where I now believe that the end result is causing more harm than good. In my view, many charter schools, and certainly the political movement of charter schooling, are no longer operating in the public interest. In fact, they have all the incentive in the world to do just the opposite, and there is little or no sign of this turning around any time soon.

We’ve shifted dramatically, and rather quickly from what some might refer to as a portfolio model, to what I would now characterize as a parasitic one.

Overarching Incentives & Chartery Miracles

Since the early phases of significant national charter expansion which coincided (somewhat) with early implementation of NCLB, chartery success has been reduced to a definition reminiscent of the cult of efficiency. Chartery success (accompanied by headlines, news magazine segments and visits from politicians) is largely defined as A) getting higher test scores or greater test score growth, B) for less money, and C) with the “same” kids. Because this is the supposed definition of success, punditry around charter schooling – and research designed to endorse this punditry – makes every effort to validate A, while obfuscating or completely misrepresenting B and/or C.

Figure 1. Chartery Miracle Success Framework


The central objective in Chartery Miracle Punditry is to use average scores, and otherwise methodologically weak policy analyses, to show that charter students outperform their traditional public school counterparts.

These studies rarely if ever include any accurate measure of the resources used by charters, more often than not citing bogus, irrelevant studies or providing flimsy back-of-the-napkin analysis.

These studies often use entirely insufficient measures for declaring students “matched” between district and charter schools, and fail to consider fully the role of peer effects as one of the largest school factors, or the intersection of selective attrition and peer effects.

In part, because it is increasingly well understood that this is the way the game is played, charter school operators have all the incentive in the world to play the game this way (even if they were otherwise predisposed not to). And apparently far too many charter operators are responsive to these incentives.

Competition for Demographic Advantage

This recent Reuters article by Stephanie Simon explains practices actually used by many charter operators, arguably in response to current incentives.

http://www.reuters.com/article/2013/02/15/us-usa-charters-admissions-idUSBRE91E0HF20130215

In short, charter schools are applying a variety of creative strategies to screen out those students they feel won’t help their numbers. In some/many cases, children will be screened out on the basis of otherwise unobservable characteristics. Two low-income children wish to apply… but only one is sufficiently motivated to complete the 15-page entry essay. The two are labeled as similar when one returns to the district school and the other matriculates to the charter, but clearly there is at least some difference between them which may influence their future performance. These mechanisms also serve to sort out poorer children from more disrupted households, more mobile families, and non-English-speaking families. And clearly they send a signal to parents of children with disabilities that this may not be the school for them.

In many parts of the country, especially in areas where charter schools serve a larger share of total enrollment, charter schools do seem to serve more lower income students. And in states where there exists an incentive to serve children with disabilities, charters often do so (boutique special education charters). But these incentives get out of hand as well.

In affluent, economically diverse states like New Jersey, New York and Connecticut, as I commented in the Reuters article, my research (& related posts) shows substantial cream-skimming among charters.  Many of these findings are validated by others, as I explain in my reports/publications.  Here are a few figures on demographics of New York and Connecticut charter schools.

This figure on New York City Charter schools draws on data from a forthcoming article (related to a recent report). In this analysis, I use three years of data from 2008-10, and I estimate a regression equation for each demographic measure, comparing schools that serve the same grade level in the same borough of the city. The graph shows how much lower (or higher) the population share is in each charter school chain, relative to NYC district schools.
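The comparison described above can be sketched in code. Everything below is invented for illustration – the column names, magnitudes, and data are hypothetical, and a single charter indicator stands in for the chain-by-chain comparison in the actual analysis – but it shows the basic idea: regress a demographic share on a charter dummy plus borough and grade-level dummies, so each charter is compared to district schools serving the same grades in the same borough.

```python
import numpy as np
import pandas as pd

# Hypothetical data: 300 schools with a borough, grade level, and sector.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "borough": rng.choice(["Bronx", "Brooklyn", "Manhattan"], n),
    "grade_level": rng.choice(["elementary", "middle"], n),
    "charter": rng.choice([0, 1], n),
})
# Baseline free-lunch share varies by borough; in this invented example,
# charters are 10 points lower than district schools in the same borough.
base = df["borough"].map({"Bronx": 0.85, "Brooklyn": 0.75, "Manhattan": 0.60})
df["pct_free_lunch"] = base - 0.10 * df["charter"] + rng.normal(0, 0.02, n)

# Design matrix: intercept, charter dummy, borough and grade fixed effects.
X = pd.get_dummies(df[["borough", "grade_level"]], drop_first=True).astype(float)
X.insert(0, "charter", df["charter"].astype(float))
X.insert(0, "const", 1.0)

coef, *_ = np.linalg.lstsq(X.to_numpy(), df["pct_free_lunch"].to_numpy(), rcond=None)
charter_gap = coef[list(X.columns).index("charter")]
print(f"Estimated charter gap in free-lunch share: {charter_gap:+.3f}")
```

The coefficient on the charter dummy is the "how much lower (or higher)" quantity plotted in the figure, holding borough and grade level constant.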

Figure 2. New York City Relative Demographics


This next figure shows the demographics of Connecticut charter schools that are in high-poverty cities. To construct this comparison, I combine CTDOE data with data from the NCES Common Core. I sum the total number of public & charter school enrolled children by city (school location in CCD) and the total numbers of free lunch, ELL and special education enrolled children. Note that the special education concentrations are only for regular district (& charter) schools. Overall district rates of children with disabilities are marginally higher (because some children are in special &/or private placements).
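The aggregation step described above can be sketched with a toy example. The rows, counts, and column names below are invented; the point is simply that school-level counts are summed to city-by-sector totals, and shares are computed from the totals rather than averaged across schools.

```python
import pandas as pd

# Invented school-level records for one city: each row is a school, with
# enrollment and counts of children in each need category.
schools = pd.DataFrame({
    "city": ["Bridgeport"] * 4,
    "sector": ["district", "district", "charter", "charter"],
    "enrollment": [500, 400, 300, 250],
    "free_lunch": [450, 340, 150, 110],
    "ell": [100, 80, 15, 10],
    "special_ed": [75, 60, 20, 15],
})

# Sum counts by city and sector, then divide by total enrollment to get
# population shares for each sector within the city.
totals = schools.groupby(["city", "sector"]).sum(numeric_only=True)
shares = totals[["free_lunch", "ell", "special_ed"]].div(totals["enrollment"], axis=0)
print(shares.round(3))
```

With these made-up numbers, the district sector's free-lunch share is noticeably higher than the charter sector's, which is the pattern the table is probing.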

Table 1. Connecticut Charter Schools in High Poverty (>50% Free Lunch) Cities


Let me make this absolutely clear. In a heterogeneous urban schooling environment, the more individual schools or groups of schools engage in behavior that cream-skims children who are less poor, less likely to face language barriers, far less likely to have a disability to begin with, and unlikely to have a severe disability at all, the higher the concentration of higher-need children left behind in district schools (see, for example: https://schoolfinance101.wordpress.com/2012/08/06/effects-of-charter-enrollment-on-newark-district-enrollment/).

Indeed, as I’ve pointed out previously, districts create some similar (or even more extreme) segregation on their own through magnet schools, but under these circumstances, districts can (and should) regulate the extent of segregation – and specifically the extent to which high need children are left behind clustered in certain district schools. Certainly some urban districts do a very poor job at managing this balance.

But with independent charter expansion, districts lose the ability to even try to manage the balance. Sadly, what may initially have been conceived of as a symbiotic relationship between charter and district schools is increasingly becoming parasitic!

In a “competitive marketplace” of schooling within a geographic space, under this incentive structure, the goal is to be that school which most effectively cream skims – without regard for who you are leaving behind for district schools or other charters to serve – while best concealing the cream-skimming – and while ensuring lack of financial transparency for making legitimate resource comparisons.

This is precisely why the idea of replacing entirely urban public school systems with a portfolio of charters competing against one another with minimal centralized oversight, is a massively stupid [from a public policy perspective] idea.  That is, unless the overarching incentive structure were to change entirely. But I have little hope of that happening, and there seems to be little incentive for advocates of the extreme extension of charter madness to support altering the incentives.

There does seem to be some increased media and public awareness that many charter schools are indeed attempting to game their enrollments. Some charter (and chartering) advocates, including Mike Petrilli, have conceded this point, but have suggested that this isn’t necessarily a bad thing. I might agree that, in moderation and under the right controls – which require some centralized governance/management – this may be partly true. But under current circumstances, it’s not.

[sidebar – one need only look at the geographic distribution of charters in New Orleans or Kansas City with respect to neighborhood income to see how such a system, under the current incentive structure, will fail to serve the neediest children]

Competition for Resources

The last frontier of deception in the charter debates seems to be over comparability of resources.  Few if any studies which praise charter successes make any legitimate attempt to measure resources. Ken Libby, Katy Wiley and I did our best to tease out resource comparability in NYC, Texas and Ohio. The fact that our report has so darn many pages (over 20) of appendices, footnotes, caveats and explanations regarding those comparisons is testament to the fact that policymakers (and the charter industry influencing them) seem to have little interest in improving transparency or comparability of charter school finances.

Lack of clear reporting, transparency and comparability permits the most vocal charter pundits to continue advancing utterly ridiculous arguments about their supposed massive, persistent resource disadvantage.

Thus, they (charter pundits) perpetuate the myth that charters everywhere and always are disadvantaged in terms of resource access – and specifically by the design of state funding systems. Some indeed are, but others clearly are not. In turn, they position themselves to lobby fiercely for their supposed “fair share” of public resources. These arguments are most often anchored to the completely bogus Ball State/Public Impact study of charter school funding (see explanation of Bogosity here![1])

My recent report, and forthcoming article with expanded analyses, on New York City charter schools shows that most charters substantially outspend NYC BOE schools serving similar student populations and the same grade levels. Figure 3 shows the scatterplot of middle schools by special education population share (where special education population is the strongest predictor of school-site spending differences for NYC BOE schools).

Figure 3. Site Based Spending and % Special Education in NYC Middle Schools


Figure 4 shows the elementary schools.

Figure 4. Site Based Spending and % Special Education in NYC Elementary Schools


Figure 5 shows the total expenditures per pupil for Connecticut district and charter schools. It would appear from Figure 5 that charter schools are getting the short end of the stick, right? Especially those high-flying charters like Amistad and Achievement First in Bridgeport? The problem with this comparison is that the host districts are responsible for financing transportation costs and are ultimately responsible for serving children with disabilities (including/especially severe disabilities), and the expenditures for transportation and special education (including transportation of charter students) are reported as district expenditures.

Figure 5. Total Expenditures per Pupil for Connecticut District & Charter Schools


When we pull out transportation and special education spending, the picture changes quite substantially, as shown in Figure 6. The charter schools are doing reasonably well in comparable expenditures per pupil – setting aside lengthy discussion of chartery misrepresentations of facilities cost comparisons. The classic charter reactionary argument is that charters in a state like CT spend about $1,700 per pupil on facilities, whereas district facilities are supposedly “free.” Even if that were the case, many CT charters would still be ahead. But district facilities also come with maintenance costs and long-term debt payments (which, yes, are expenditures) that, while not equaling charter lease payments as a share of operating expense, close the supposed gap quite substantially – see the lengthy note below.

Figure 6. Comparable Expenditures per Pupil for Connecticut District & Charter Schools


Collateral Damage of the Parasitic Chartering Model

In previous posts I showed how the population cream-skimming effect necessarily leads to an increasingly disadvantaged student population left behind in district schools. High need, urban districts that are hosts to increasing shares of cream-skimming charters become increasingly disadvantaged over time in terms of the students they must serve.

It would be one thing if state policies were in some way trying to intervene to scale up district resources to mitigate this damage. It would be one thing if we could count on charter advocates/pundits to support public policy that would help local districts deal with these (intended) consequences.

But again, the overarching incentives do not favor such advocacy. Resources are finite, and in the never ending quest to “win” the chartery success wars, it is in the interest of charter advocates to do whatever they can to get the largest share of the resources, and not care so much whether district schools get anything. In fact, it’s easier to win if they don’t.

I was not initially so cynical as to believe that charter advocates would seemingly endorse persistent deprivation of needy traditional districts in their own effort to garner more resources, and “win”. But, increasingly, it seems they are. At the very least, they want what they perceive to be their share, regardless of consequences for district schools. We see this in the persistent drive for access to facilities in New York City, subtle shifts in charter vs. district subsidy rates that appear to advantage the charters (see IBO reports) and the continued flood of philanthropy.

Meanwhile, what is the status of funding for high need districts in New York State? Well, Table 2 summarizes the current degrees of underfunding of New York State’s school finance formula.

Several high need districts are “underfunded” on the state’s own formula by thousands per pupil, including New York City. And where is the outcry from charter advocates that their hosts are being underfunded?

Table 2. Underfunding of New York State’s foundation formula


Districts are starting to get fed up. But they still seem to lack the sex appeal (or bank accounts) and media access of leading charter advocates.

Yet we don’t hear the cry from charter advocates to support the formula. Doing so might actually increase the pass-through funds to charters. But well-endowed charters can offset whatever losses they might face from an underfunded formula… and be that much more likely to “win!” Is that really in the public interest? When is the last time you heard a charter advocate argue for fully funding the state aid formula (as opposed to mandating specifically an increase to their allotment of it)?

Connecticut provides a similar case of collateral damage. Figure 7 shows the per pupil increases in the Education Cost Sharing formula adopted for the current year, over prior year spending levels. In short, it ain’t much! Okay… it’s actually next to nothing. Persistent inequities exist between higher and lower need districts, and for that matter, among higher need districts (notably, Hartford and New Haven spending in this graph are distorted by magnet school aid, some of which is spent on kids from other districts).

In the same year, the CT legislature did manage to more significantly increase charter school funding (on the order of $2k per pupil), despite the fact that many charter schools were both serving lower need student populations and already spending more per pupil on a comparative basis than their host districts. Why? Well, first of all, it’s a lot cheaper – takes much less total funding increase – to increase funding for just charter kids. Second, that’s where the current punditry is – with charter advocates successfully conveying their (false) message of severe fiscal disadvantage. Pauvre, Pauvre Charter Schools?

Meanwhile, charters like Achievement First in Bridgeport seem more than happy to take their windfall and allow their “competition” (Bridgeport Public Schools) to languish.  It is indeed easier to win that way. And that seems to be what it’s all about.

Figure 7. 2012-13 increases to District Funding in Connecticut


Closing Thoughts

It’s quite sad that we’ve reached this stage. As I envisioned it from the outset (or early on, around the late 1990s), it wasn’t supposed to turn out this way. It would, in theory, be possible to establish an avenue for creative experimentation and increased flexibility – for appropriately moderated disruptive innovation and cage-busting leadership. It might even all fit into a portfolio model. Yeah… we could use all of the reformy language to describe what might have been a far more reasonable, thoughtful extension of chartering.

But alas, the potential for charters to contribute positively to the public good has, in my view, been severely compromised, in part by the ill-conceived incentive framework policymakers and pundits have wrapped around the concept of chartering. Unfortunately, for the foreseeable future it is all too convenient for them to perpetuate this faulty incentive system. Yeah… the public is catching on, and eventually this too shall pass. The only question is just how much damage will have been done before we turn the corner.

[final side bar: Among the damages not discussed herein, but discussed in a previous post, are the increasing shares of students, primarily in urban districts serving low-income children and minorities, who will be forced to forgo constitutional rights and statutory protections that would be available to them in true public schools, in order to gain access to the only available charter schools. Sadly, many charters have chosen, as one method to improve their chance of winning, discipline policies & requirements that would be impermissible in “public” schools (in legalese, “state actors”)].

Notes:

[1] Footnote #22  from: http://nepc.colorado.edu/files/rb-charterspending_0.pdf

A study frequently cited by charter advocates, authored by researchers from Ball State University and Public Impact, compared the charter versus traditional public school funding deficits across states, rating states by the extent to which they under-subsidize charter schools. The authors identify no state or city where charter schools are fully, equitably funded.

But simple direct comparisons between subsidies for charter schools and public districts can be misleading because public districts may still retain some responsibility for expenditures associated with charters that fall within their district boundaries or that serve students from their district. For example, under many state charter laws, host districts or sending districts retain responsibility for providing transportation services, subsidizing food services, or providing funding for special education services. Revenues provided to host districts to provide these services may show up on host district financial reports, and if the service is financed directly by the host district, the expenditure will also be incurred by the host, not the charter, even though the services are received by charter students.

Drawing simple direct comparisons thus can result in a compounded error: Host districts are credited with an expense on children attending charter schools, but children attending charter schools are not credited to the district enrollment. In a per-pupil spending calculation for the host districts, this may lead to inflating the numerator (district expenditures) while deflating the denominator (pupils served), thus significantly inflating the district’s per pupil spending. Concurrently, the charter expenditure is deflated.

Correct budgeting would reverse those two entries, essentially subtracting the expense from the budget calculated for the district, while adding the in-kind funding to the charter school calculation. Further, in districts like New York City, the city Department of Education incurs the expense for providing facilities to several charters. That is, the City’s budget, not the charter budgets, incur another expense that serves only charter students. The Ball State/Public Impact study errs egregiously on all fronts, assuming in each and every case that the revenue reported by charter schools versus traditional public schools provides the same range of services and provides those services exclusively for the students in that sector (district or charter).
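The compounded error and its correction can be shown with invented numbers. Suppose, purely for illustration, a host district reports $120 million in spending, $5 million of which pays for transportation and special education services delivered to charter students, while the 1,000 charter pupils receiving those services are excluded from its 9,000-pupil enrollment count.

```python
# Invented figures for illustration only.
district_spending = 120_000_000        # includes $5M spent on charter pupils
district_pupils = 9_000                # charter pupils excluded from the count
charter_services = 5_000_000           # district spending on charter students
charter_pupils = 1_000

# Naive comparison: the charter-serving expense stays in the numerator
# while the charter pupils are missing from the denominator.
naive_district_pp = district_spending / district_pupils

# Corrected: subtract the charter-serving expense from the district side
# and credit it as in-kind funding on the charter side.
corrected_district_pp = (district_spending - charter_services) / district_pupils
charter_in_kind_pp = charter_services / charter_pupils

print(f"Naive district per pupil:       ${naive_district_pp:,.0f}")
print(f"Corrected district per pupil:   ${corrected_district_pp:,.0f}")
print(f"In-kind credit to charters:     ${charter_in_kind_pp:,.0f} per pupil")
```

The naive calculation overstates district per-pupil spending and, symmetrically, understates charter per-pupil resources by the in-kind amount – exactly the reversal of entries the paragraph above describes.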

Charter advocates often argue that charters are most disadvantaged in financial comparisons because charters must often cover, from their annual operating expenses, the expenses associated with leasing facilities space. Indeed it is true that charters are not afforded the ability to levy taxes to carry public debt to finance construction of facilities. But it is incorrect to assume when comparing expenditures that for traditional public schools, facilities are already paid for and have no associated costs, while charter schools must bear the burden of leasing at market rates – essentially an “all versus nothing” comparison. First, public districts do have ongoing maintenance and operations costs of facilities, as well as payments on debt incurred for capital investment, including new construction and renovation. Second, charter schools finance their facilities by a variety of mechanisms, with many in New York City operating in space provided by the city, many charters nationwide operating in space fully financed with private philanthropy, and many holding lease agreements for privately or publicly owned facilities.

New York City is not alone in its choice to provide full facilities support for some charter school operators (http://www.thenotebook.org/blog/124517/district-cant-say-how-many-millions-its-spending-renaissance-charters). Thus, the common characterization that charter schools front 100% of facilities costs from operating budgets, with no public subsidy, while traditional public school facilities are “free” of any costs, is wrong in nearly every case, and in some cases there exists no facilities cost disadvantage whatsoever for charter operators. Baker and Ferris (2011) point out that while the Ball State/Public Impact study claims that charter schools in New York State are severely underfunded, the New York City Independent Budget Office (IBO), in more refined analysis focusing only on New York City charters (the majority of charters in the state), points out that charter schools housed within Board of Education facilities are comparably subsidized when compared with traditional public schools (2008-09). In revised analyses, the IBO found that co-located charters (in 2009-10) actually received more than city public schools, while charters housed in private space continued to receive less (after discounting occupancy costs). That is, the funding picture around facilities is more nuanced than is often suggested.

Batdorff, M., Maloney, L., May, J., Doyle, D., & Hassel, B. (2010). Charter School Funding: Inequity Persists. Muncie, IN: Ball State University.

NYC Independent Budget Office (2010, February). Comparing the Level of Public Support: Charter Schools versus Traditional Public Schools. New York: Author, 1.

NYC Independent Budget Office (2011). Charter Schools Housed in the City’s School Buildings get More Public Funding per Student than Traditional Public Schools. New York: Author. Retrieved April 24, 2012, from http://ibo.nyc.ny.us/cgi-park/?p=272.

NYC Independent Budget Office (2011). Comparison of Funding Traditional Schools vs. Charter Schools: Supplement. New York: Author. Retrieved April 24, 2012, from http://www.ibo.nyc.ny.us/iboreports/chartersupplement.pdf.

Note: The average “capital outlay” expenditure of public school districts in 2008-09 was over $2,000 per pupil in New York State, nearly $2,000 per pupil in Texas and about $1,400 per pupil in Ohio. Based on enrollment weighted averages generated from the U.S. Census Bureau’s Fiscal Survey of Local Governments, Elementary and Secondary School Finances 2008-09 (variable tcapout): http://www2.census.gov/govs/school/elsec09t.xls

Dismantling Public Accountability & Transparency in the Name of Accountability & Transparency?

This post comes about as a follow up to a previous post where I critiqued the rationale of the Students First policy agenda.  It should be noted that the Students First policy agenda is anything but unique. Like DFER, SFER, ALEC or any policy advocacy organization, the SF policy agenda is little more than an aggregation of largely non-original, template policy prescriptions.

Now, I’m not one who goes all in for the lingo of “corporate reform” or one who perceives “privatization” or “market” mechanisms to be inherently evil and contrary to the public good. However, I am someone who believes we should consider carefully the multitude of tradeoffs involved in shifting between publicness and privateness in the governance and provision of schooling.

What I have found most intriguing over time is that the central messaging of these reformy template policy prescriptions is that they will necessarily improve accountability and transparency of education systems, and that they will do so largely by improving the responsiveness of those intractable systems through altered governance and finance, including but not limited to “market” based choice mechanisms.

The standard list of strategies that are supposedly designed to increase accountability and transparency of our education system include, among other things:

  1. Expansion of charter schools, coupled with multiple charter authorizers (including private entities) and minimized charter regulation
  2. Adoption of tuition tax credit programs providing individuals and corporations the option to forgo paying a portion of taxes by contributing that amount to a privately governed entity (or entities) that manages tuition scholarships to privately governed/managed schools.
  3. Parent trigger policies that permit a simple majority of parents of children currently attending any school within a district to mandate that the local board of education displace the entire staff of the school and potentially turn over governance and management of school’s operations (and physical/capital assets?) to a private management company to be operated as a charter school.

It is argued that current large bureaucratic public education systems are simply intractable, non-responsive and can’t be improved – that they are simply not accountable to anyone because they are run by corrupt, self-interested public officials elected by less than 2% of eligible voters (turnout for board elections), and that they have no incentive to be responsive because they are guaranteed a constantly growing pot of revenue regardless of performance/quality/responsiveness.

Whatever problems do exist with the design of our public bureaucracies, I would argue that we should exercise extreme caution in accepting uncritically the belief that we could not possibly do worse, and that large scale privatization and contracting of private entities to provide the public good is necessarily a better and more responsive, more efficient, transparent and accountable option.

Let’s take a walk-through of some of the key aspects of current preferred reforms by comparison to traditional public governance of our education systems.

Privately Governed/Managed Charter Schools vs. Local Education Agencies

Let’s begin with the push for less-regulated expansion of charter schooling, with particular emphasis on expansion of privately governed and managed charter schools, and perhaps even charter schools authorized by independent private authorizers (granted authority to operate by a private entity given that authority by the state). To be absolutely clear, no matter how many reformy pundits proclaim from their soapbox that Charter Schools are PUBLIC Schools… it just isn’t that simple. In many critically important ways, under many critically important conditions, Charter Schools SIMPLY ARE NOT PUBLIC in every important traditional or legal sense! See this post for further elaboration!

Note – this varies widely from state to state, depending on whether state charter statutes specifically spell out requirements of privately governed charter schools. 

Let’s explore how/why this might be important when it comes to evaluating whether and how expanded, less regulated chartering either increases or decreases public accountability.

Table 1. Chartering vs. Traditional District Schooling

Governance
  • Local Education Agency: Governed by public officials (with all rights & immunities); elected or appointed; necessarily subject to open public records & open meetings laws; necessarily required to comply with public bidding requirements; necessarily required to disclose employee contracts publicly.
  • Privately Governed Charter (Non-State Actor): Governed by an appointed (self-appointed) board of private citizens; may not be subject to open records or meetings laws; may not be required to engage in public contract/bidding requirements; the private appointed board may hire a private management firm.

Finance
  • Local Education Agency: Required to disclose finances, reported relatively consistently in most state data systems, including detailed AFRs (annual financial reports) & public posting of budgets.
  • Privately Governed Charter: Usually required to report expenditure of public funding; state data systems spotty and inconsistent on charter school revenue/spending data (may be required to disclose IRS filings [Form 990]).

Disclosure
  • Local Education Agency: Public officials subject to open meetings laws; all documents, employee contracts, financial documents & communications between officials subject to open records laws.
  • Privately Governed Charter: Board members & managers may not be subject to open meetings; many documents and contracts with the private manager, etc., considered private/proprietary.

Employees
  • Local Education Agency: Public employees with key constitutional and statutory protections.
  • Privately Governed Charter: Private employees, forgoing certain rights to bring legal challenges against their employer.

Students
  • Local Education Agency: Students retain rights against having their government (school) infringe on various constitutional and statutory rights, and to have key statutory obligations upheld.
  • Privately Governed Charter: Students may forgo numerous rights under privately governed discipline codes.

These differences are not trivial, yet few are discussing them as critical factors for shaping future education policy. Rather, day after day, week after week, we are subjected to more and more vacuous punditry by self-proclaimed “expert” pundits displaying an astounding ignorance of education law and callous disregard for our system of government and the U.S. Constitution.

For example, it would appear that charter schools that are not “state actors” (which may include most that are governed by boards of private citizens and especially those managed by private companies/EMOs or CMOs) may require students to abide by disciplinary/conduct codes which involve compelling those students to recite belief statements about the school (mottos, pledges, loyalty oaths), obligatory participation in indoctrination activities and imposition of financial penalties for disciplinary infractions, none of which would be permissible in traditional public schools. Government entities – state actors – may not compel speech and especially may not compel statements of belief.

So then, what is a family to do when no traditional public schools are available to them (as is practically the case in many areas of New Orleans and increasingly the case in other higher charter market share cities)? Should parents have to choose which rights to forgo? [picking the school with the financial penalties over the one requiring daily recitation of a loyalty oath?]

Can (as some belligerent, civically illiterate pundits believe) entire urban school systems be replaced with charter schools – or can traditional public schools adopt the lessons of “chartering,” which involve infringement of constitutional rights? Is it reasonable to assume that the entire student population of a city would be placed in a position of necessarily forgoing their rights to free expression and free exercise?

I hear those reformy pundits cry… “but who cares about a little constitutional protection here and there if we can squeeze out an extra point or two on state assessments [via selective attrition of low performing peers]? They’ll be better for it in the long run!”

Yeah… sure… that’s all well and good for someone else’s kids. I for one believe the constitution continues to have a purpose and that constitutional rights should be equally available to all people’s children. I believe that constitutional protections are a key element of an accountable education system available to all – not just some.

This is a big freakin’ deal. It is a critically important policy tradeoff to consider when adopting policies that expand non-state-actor charter schooling, even if some marginal academic gain can be achieved.

Indeed, under our current public schooling system constitutional battles over free exercise, free speech, discrimination, etc. persist (as any good pro-school-choice libertarian will frequently argue – and I was a card-carrying member in my NH days!). It’s a never-ending tension between the preferences of the majority vs. the rights and interests of the minority. Such arguments are often used as the basis for saying that all students/families should simply have the right/option to choose where to attend school – where they can each be their own majority.  The value of our current public (gov’m’nt) system is that the minority does have the right to challenge their mistreatment, and that collective participation in the public system forces public debate over these issues (even if/when they end up being handled poorly). It ain’t perfect, but I’m not willing to replace it with a system that requires large numbers of children to forgo these rights in order to participate in schooling.

Poor and minority children should not be disproportionately required to forgo constitutional protections (and a variety of statutory protections) to gain access to those few additional test score points. Further, no one is telling them that they even have rights to begin with – especially those pitching the charter expansion policies (constantly spewing the rhetoric of the “publicness” of charter schooling).

Charter Schooling & The Market for Lemons

In theory, the accountability and efficiency advantage of charter schooling is driven by the market for choice of one school over another. Increasingly, state education agencies have moved from being impartial technical assistance agencies and accountability reporting agencies to strongly promoting the charter sector. This advocacy behavior corrupts the state agency role and creates what economists refer to as an “asymmetry of information” – in the extreme case a “market for lemons.”

Markets fail when the consumer is misled to believe that the product they are being sold is a miracle product (without counterbalancing information available to the “customer”). Asymmetry of information occurs where the seller has more information on the quality of the product than the buyer and is able to extract from the buyer a higher price than is warranted given the product’s true quality. In this case, we are talking about the parents’ choice to apply their child’s government-subsidized education credit, so to speak, at a charter versus the traditional public school.  They’ve got one credit to spend for each child, and the SEA-endorsed spin these days is that that credit is nearly always best spent in a charter school (even when it clearly is not).

Taken to the extreme, State Education Agency and public media flaunting of chartery miracles has created a distorted market favoring those charters that are least proven on the market (perhaps, in some cases, lemons), while those charters that are most proven are already over-subscribed and need not compete openly. So, those most available on the market are those whose actual performance/quality is far lower than that which is capturing the headlines and receiving accolades from state officials. [Not quite a true market for lemons, since the price – the education “credit” – is fixed… though perhaps I can expand on this at a later point.]

It is the absurd punditry, intentional obfuscation and complete disregard for legitimate data/analysis on charter schooling that have perhaps soured my taste for the movement more than anything else (bearing in mind that I was a founding member of the AERA special interest group on Charter School research and, at the time, was largely an advocate myself).

Tuition Tax Credits & Vouchers vs. Conventional LEA Governance

Next up, let’s talk about tuition tax credits and vouchers. Now, I would argue that in many ways, tuition tax credits and vouchers – which provide the option for children to attend schools that are well understood to be private, that are not state actors – are at least more honest with respect to student and employee rights.

It is understood (or should be more clearly understood) that when choosing a private school or choosing to be employed by a private employer that one’s rights may differ. On very few occasions have I actually heard the rather absurd argument that private schools receiving students on publicly financed scholarships are “public” (yes, they did, without understanding the implications, make this claim in Louisiana when their voucher model was overturned by the state courts).

Now, let’s parse the governance and accountability differences between traditional public LEAs, Vouchers and Tuition Tax Credits.

Table 2. Vouchers & Tuition Tax Credits vs. Traditional District Schooling

Revenue Raising
• LEA: Raises local tax revenue (subject to local voter approval) & receives state aid (through legislation/formula adopted by state elected officials).
• Voucher: Permits/requires the transfer of a set per-pupil amount of funding from state and/or state/local sources to pay for private school tuition of students.
• Tuition Tax Credit: Permits corporations to pay funds to a privately governed, state approved/created/appointed entity (school tuition organization) in lieu of paying taxes.

Governance (records/meetings)
• LEA: Required to disclose minutes of meetings and related documents pertaining to budget, financial report and any/all contractual agreements.
• Voucher: Assuming the voucher program is governed by a local or state board/public officials, related requirements apply.
• Tuition Tax Credit: Entity governed by appointed private citizens, not public officials (thus may not be required to disclose records or hold open meetings).

Disclosure
• LEA: Required to report/disclose annual budget (for approval by either/both local elected officials and/or local voters); required to report/disclose annual financial report (usually with independent external audits).
• Voucher: Financial disclosure of funds expended (from the public agency) on vouchers subject to all public expenditure laws [that is, the total allocated to vouchers from the budget]; voucher-receiving schools not likely required to provide detailed disclosure (non-religious non-profit privates file with the IRS; religious privates not required).
• Tuition Tax Credit: May or may not be subject to disclosure requirements of public officials; if non-religious and organized as a non-profit, may be required to report limited finances to the IRS.

Use of Funds
• LEA: Expended directly by publicly governed entities (public officials).
• Voucher: Comingled with all other operating funds of the private school entity.
• Tuition Tax Credit: Comingled with all other operating funds of the private school entity.

Governance of Schools
• LEA: Publicly governed.
• Voucher: Private once the money reaches the school.
• Tuition Tax Credit: Private once the money is collected by the tuition organization.

Student/Employee Rights
• LEA: Public.
• Voucher: Private, not a state actor.
• Tuition Tax Credit: Private, not a state actor.

Taxpayer/Public Rights
• LEA: Right to political participation (electing officials, etc.); right to request disclosure; right to bring limited legal challenges regarding use of funds.
• Voucher: Right to bring limited legal challenges regarding use of funds.
• Tuition Tax Credit: Limited state legislative options (can try to vote in new legislators) [taxpayers lose the right to challenge objectionable use of funds because the funds are not considered tax dollars].

The simple part here is that under either the tuition tax credit or voucher program, the schools that children attend are clearly private. It is (or at least should be) understood that students and employees forgo certain rights. As such, it would be plainly illogical to use such a model as the model for an entire city or state, meaning that children would not even have the option of attending a school where they are protected from discrimination and other forms of oppression. [Notably, while children/families may be oppressed and/or discriminated against by the ruling “majority” in a public school setting, they have a constitutional right to challenge their mistreatment – a right that ceases to exist where only private providers are available.]

Other, more nuanced delineations here are between the voucher and the tuition tax credit model. The more popular TTC approach is far more convoluted, and in being so, creates additional layers of opaque-to-nonexistent accountability, ultimately negating taxpayer legal rights altogether.

Under a voucher model, like the Cleveland voucher model, taxpayers do have the right to challenge that their tax dollars are being allocated to religious education. Indeed, when such a challenge was brought, the U.S. Supreme Court decided that the voucher mechanism in place was sufficiently neutral (reliant on parental choice) that it did not violate the establishment clause of the U.S. Constitution. But, taxpayers at least had the right to bring this challenge even if they did lose.

What I find most objectionable (in terms of public accountability) about the TTC approach is that when a similar challenge was brought against the Arizona tuition tax credit model, the U.S. Supreme Court determined that the dollars being expended effectively weren’t the taxpayers’ dollars and thus the taxpayer had no right to bring a legal challenge to the policy (no taxpayer “standing”). Quite simply no taxpayer standing means NO taxpayer legal accountability. No taxpayer legal recourse. Arguing that TTC models increase public accountability is absurd.

Further, the fact that these systems rely on creating non-public, non-publicly-accountable entities to manage funds diverted from the public coffers reduces public accountability yet again.

Parent Trigger vs. Conventional Local Education Agency Governance

Parent trigger is quite possibly the most ludicrous corruption of public governance and accountability on the reformy education policy table – the most ill-conceived subversion of governance in the reformy playbook. Let’s give it a walk-through.

Table 3. Parent Trigger vs. Traditional District Schooling Governance

Primary Control
• Traditional LEA: Elected or appointed board of public officials; public disclosure requirements as addressed above.
• Parent Trigger: Permits a simple majority of parents of children currently attending any school within an LEA to require that the LEA change the management/operations of that school, including transfer of governance to a private entity.

Financial Governance
• Traditional LEA: Public officials govern the annual budget and accumulated assets of the LEA in accordance with public budgeting and finance statutes; expenditure of funds and/or transfer of assets subject to public approval & required public disclosure.
• Parent Trigger: A small minority of the district voter population may obligate the district to allocate funds to/contract with a private provider/charter manager against the preferences of elected officials.

Public Control/Accountability
• Traditional LEA: “Public approval” applies to all eligible voters whose primary residence lies within the geographic boundaries of the LEA (whose tax dollars support annual operations and contributed to the purchase and/or maintenance of assets); board elections held on regular cycles; budget approval may also require a public vote on a regular election cycle; specific requirements apply for incurring municipal bond debt for capital investment.
• Parent Trigger: Provides no recourse for property owners/taxpayers who have no children currently attending the school; provides no recourse for parents of children who would attend the school in future years, until the point at which they attend; may or may not occur on a defined timeline or specific election cycle.

Student/Teacher Rights
• Traditional LEA: Student and teacher constitutional and statutory rights as addressed above.
• Parent Trigger: Students and employees forgo constitutional/statutory rights if the school is converted to a privately governed/managed school.

The most substantive reductions of public accountability, transparency and governance occur when a simple majority of parents of children in one school decide that their school must be converted to a privately managed charter school, which may in turn adopt policies that deprive both children and employees of constitutional and statutory rights. Indeed, the district would likely be required to find a school for the displaced minority of students who don’t wish to forgo these rights. But the simple majority of parents in that school at that point in time should not be granted the authority to displace a minority of students in their school. Further, a simple majority of parents in one school in a district should not be granted the authority to dictate local board funding or contracting policies without input from the broader eligible voter population.

Among other things, Parent Trigger policies assume that the public at large who reside and own property within a school district have no stake in the accountability of that school system. School closures, school quality, school location, etc. affect the value of residential properties by affecting quality of neighborhood life. Quite likely (an open empirical question) conversion to exclusive and/or specifically themed charter schools creates unique effects on property values and neighborhood quality of life, and not necessarily always positive effects.

Finally, schools, school buildings and property are public assets with lifespans far exceeding that brief moment in time when the trigger-pulling simple majority has children attending the school; the public that has invested in those schools over time should thus have some say in their operation, maintenance and management.

The idea that this particular subversion of traditional governance somehow heightens public accountability is simply ridiculous.

Closing Thoughts

Love it or hate it, we’ve got a pretty well defined, reasonably functional system of public governance in this country, with the overarching rule of the land being our U.S. Constitution. I’m not trying to oversell here. I’m not saying it’s perfect, always responsive to all and never intractable, opaque or corrupt. But I am saying that we could certainly do worse and many proposals on the table are likely to do just that.

Importantly, state laws could be written to close many of the gaping holes identified above in student and employee rights and in public disclosure requirements, and to clarify the delineation between publicness and privateness. But the current trend is not necessarily in that direction!

Our current system defines the roles and responsibilities of public officials, holding them to public accountability standards vetted by our federal and state judicial branches for over two centuries. Yeah, I know, many of these reformy pundits would also simply do away with that meddling judicial branch. I, for one, think that our courts continue to play a critical role in protecting rights.

Modern education reform efforts, in the name of supposedly increased accountability and transparency, largely seek to subvert our system of government as we know it, and in many cases seek to strip large shares of poor and minority children – and the employees in schools serving poor and minority children – of constitutional protections. And we’re all supposed to be okay with that?

The Efficiency Smokescreen, “Cuts Cause no Harm” Argument & The 3 Kansas Judges who Saw Right Through It!

State school finance litigation is a tedious, often annoying, politically charged process.  Often, school finance litigation involves extensive debate over tedious statistical and other details underlying estimates of how much it should cost for states to meet their constitutional obligations. Too often, it seems, these debates over tedious statistical details serve to distract the conversation from broader principles of plainly logical fair treatment of kids.

In these cases, states continue to vigorously defend their right to fund – or not – schools as they see fit… when they see fit… whether or not they see fit.  A relatively consistent pool of experts continues to advise states on strategies for their defense. These strategies have evolved somewhat over time.  For many years, the central “expert” strategy was simply to argue that there’s no proof that adding more money would matter anyway, because there is no systematic relationship between funding and outcomes. Of course, this argument fails to excuse the facial inequity of permitting some children in some districts to have twice or more the resources of others. But defense experts have certainly extended the money-doesn’t-matter argument to support the contention that because money is inconsequential, so too are these inequities.

More nuanced versions of these arguments have emerged in recent years.  Bringing “efficiency” arguments into the debate, defense experts have taken to helping states build their central theory on the argument that all districts have more than enough money, even those with the least, and that if they simply used that money in the most efficient way, we could see that it is more than adequate. The extension of this argument is that therefore, even cutting funding to these schools would not cause harm and does not compromise the adequacy of their funding, if they take advantage of these cuts to improve efficiency.  This argument is then coupled with challenges to any and all attempts to estimate the “cost” of producing adequate outcomes based on existing practices of school districts – because existing practices are inefficient practices.  This is a nuanced and complex argument and one that I’ve addressed previously in academic writing and in this blog.

As I’ve stated previously:

Importantly, cost model estimates are estimates based on the actual production technologies of schooling. They are based on the outcomes schools and/or districts produce under different circumstances, for different children – the actual children they serve, based on the actual assessments given, and based on the real conditions under which children attend school. Some critics of education cost analysis in general, and cost function modeling in particular assert that all local public school districts are simply inefficient, mainly because they pay their personnel based on parameters not associated with improved student outcomes.[1] Therefore, they assert that it is useless to consider the spending practices of current districts when trying to determine how much needs to be spent to achieve desired outcomes. A common version of this argument goes that if schools/districts paid teachers based on test scores they produce and if schools/districts systematically dismissed ineffective teachers, productivity would increase dramatically and spending would decline. Thus, educational adequacy could be achieved at much lower cost, and therefore, estimating costs based on current conditions/practices is a meaningless endeavor.[2],[3]

The most significant problem with this logic is that there exists absolutely no empirical evidence to support it! It is entirely speculative, frequently based on the assertions that teacher workforce quality can be improved with no increase to average wages, simply by firing the bottom 5% each year and paying the rest based on the student test scores they produce.  To return to the car purchasing analogy above, this is like assuming that somewhere out there is a car/truck with all the features of the Escalade, but the price of the F-150 – specifically, a version of the Escalade itself produced by a new, yet to be discovered technology with materials not yet invented that allow that vehicle to be sold at less than 1/2 its original price.

In fact, the logical way to test these very assertions would be to permit or encourage some schools/districts to experiment with alternative compensation strategies, and other “reforms,” and to include these schools and districts among those employing other strategies (production technologies) in a cost function model, and see where they land along the curve. That is, do schools/districts that adopt these strategies land in a different location along the curve? Do they get the same outcomes with the same kids at much lower spending? In fact, some schools and districts do experiment with different strategies and those schools carry their relevant share of weight in any statewide cost model.
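The test proposed in the paragraph above can be sketched in code. Everything below is synthetic and illustrative – invented districts, a hypothetical “reform” flag, and a deliberately built-in efficiency effect – intended only to show how one would look at where reform-adopting districts land relative to an estimated cost curve; a real cost function would use panel data, richer controls and efficiency corrections.

```python
# Sketch: fit a simple OLS cost function (log spending ~ outcomes + need)
# over all districts, then inspect residuals of "reform" adopters.
import numpy as np

rng = np.random.default_rng(0)
n = 200
outcomes = rng.uniform(-1, 1, n)   # standardized outcome measure
need = rng.uniform(0.0, 0.8, n)    # share of high-need pupils
reform = rng.random(n) < 0.1       # hypothetical reform adopters (~10%)

# Synthetic "production technology": spending rises with outcomes and need;
# we build in a 12% efficiency gain for reform districts so the test has
# something to detect.
log_spend = (9.2 + 0.15 * outcomes + 0.5 * need
             - 0.12 * reform
             + rng.normal(0, 0.05, n))

# Estimate the cost curve without a reform regressor
X = np.column_stack([np.ones(n), outcomes, need])
beta, *_ = np.linalg.lstsq(X, log_spend, rcond=None)
resid = log_spend - X @ beta

# Reform districts should sit below the curve: same outcomes, lower spending
print(round(resid[reform].mean(), 3), round(resid[~reform].mean(), 3))
```

If reform adopters really did produce the same outcomes with much less money, they would show up exactly this way – as districts sitting systematically below the estimated curve – and they would pull the statewide cost estimates down accordingly, which is the point of including them in the model rather than speculating about them.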

Pure speculation that some alternative educational delivery system would produce better outcomes at much lower expense is certainly no basis for making a judicial determination regarding constitutionality of existing funding, and is an unlikely (though not unheard of) basis for informing statewide mandates or legislation.  Cost model estimates, as well as recommendations of professional judgment and expert panels can serve to provide useful, meaningful information to guide the formulation of more rational, more equitable and more adequate state school finance systems.

In a Shawnee County District Court decision released on Friday, a three judge panel thoroughly impressed me with their understanding, and eloquent takedown of the efficiency & funding cuts smokescreen.

As the Kansas 3-Judge Panel Framed It: (p. 188)

If “value” is to be a determinative consideration in the evaluation of the costs of providing suitable education, which we concur it must be, then, nevertheless, we would have to believe the State would have some obligation in this proceeding to advance alternative measures that cost less, but which, at least, produce the same sustained effect in producing the “improvement in performance that reflects high academic standards” which now epitomizes the end measure for a “suitable education.” Here, the record is wholly devoid of such alternative approaches, by cost or otherwise, to that goal. Rather, here, the State has effectively asserted that all Kansas K-12 students have reached their apparent maximum and will continue to do so with less money. Here, it is clearly apparent, and, actually, not arguably subject to dispute, that the state’s assertion of a benign consequence of cutting school funding without a factual basis, either quantitatively or qualitatively , to justify the cuts is, but, at best, only based on an inference derived from defendant’s experts that such costs may possibly not produce the best value that can be achieved from the level of spending provided. This is simply not only a weak and factually tenuous premise, but one that seems likely to produce, if accepted, what could not be otherwise than characterized as sanctioning an unconscionable result within the context of the education system. Simply, school opportunities do not repeat themselves and when the opportunity for a formal education passes, then for most, it is most likely gone. We all know that the struggle for an income very often – too often – overcomes the time needed to prepare intellectually for a better one.

If the position advanced here is the State’s full position, it is experimenting with our children which have no recourse from a failure of the experiment.  Here, the legislative experiment with cutting funding has impacted Kansas children’s K-12 opportunity to learn for almost one-third of their k-12 educational experience (2009-10 through 2012-13). Further, given the increased performance results that have accrued after passage of the No Child Left Behind Act and the more focused attention to the increase in standards in the future, the failure to provide full opportunity for learning experiences in our Kansas K-12 school system in the past due to a shortfall in funding is truly sad, however, a continuation of the status quo would only deepen the reflection of opportunities lost. For past students and future students, “all that they can be” was, is currently, and will be, compromised.

See also:

Baker, B. D. (2012). Revisiting the Age-Old Question: Does Money Matter in Education?. Albert Shanker Institute.

Baker, B. D. (2011). Exploring the Sensitivity of Education Costs to Racial Composition of Schools and Race-Neutral Alternative Measures: A Cost Function Application to Missouri. Peabody Journal of Education, 86(1), 58-83.


[1]Hanushek, E. (2005, October). The alchemy of ‘costing out’ and adequate education. Paper presented at the Adequacy Lawsuits: Their Growing Impact on American Education conference, Cambridge, MA. Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223.

[2] For elaboration on this argument, see: Costrell, R., Hanushek, E., & Loeb, S. (2008). What do cost functions tell us about the cost of an adequate education? Peabody Journal of Education, 83, 198–223.

[3] An alternative version of this argument is presented by the “efficiency” intervenors in this case. Intervenors’ brief explains: “Therefore, it is literally impossible for the legislature or other current managers of the school system in Texas to take the position, in cost-effective economic terms, that any particular level of funding is necessary for efficiency. Even the question of allocation of funding among districts cannot be determined in an efficient manner without a more substantive and comprehensive system of financial accountability.” http://eduefficiency.org/wp-content/uploads/2012/02/2012-02-22-Plea-in-Intervention.pdf (p. 9) This comment would appear to be a backhanded attempt to undermine any use of analysis of existing spending data for addressing either the overall adequacy of funding to Texas school districts or the equitable distribution of that funding. But this argument suffers the same lack of substantiation that there actually exists some hypothetically more efficient system out there somewhere, and that the current system is necessarily so inefficient as to be irrelevant. The only reasonable basis for  the court to determine education costs in Texas, and how they vary across children and settings is to evaluate those costs in the context of policies as they currently exist, given the actual production of outcomes and average efficiency of schools and districts in producing those outcomes.  Reducing regulations may be a rational alternative, and re-estimating costs after such policy change is also reasonable. If costs of desired outcomes go down after such policy change, then great! But one cannot simply assume that regulatory change (or charter expansion as an approach to regulatory reduction – see Section 5.0) will result in dramatic efficiency gains.

Friday Ratings Madness: Quality Counts, Students First & Funding Fairness

It’s been a fun week for grading the states. First we had the wacky ratings from Students First, which graded states largely on the extent to which they had adopted that organization’s preferred policies. Then we had the old standard, Education Week’s Quality Counts; when it comes to its finance rating system, little has changed in recent years. These two reports, of course, produced substantially conflicting results.

One might argue that both reports and ranking systems, like our School Funding Fairness report, include several indicators intended to identify policy conditions for success. This has been the standard response of Students First when they have been criticized on the basis that the states that they have applauded most tend to have pretty low average outcomes.  But, the Students First report, Quality Counts and our Funding Fairness report differ quite substantially on what we consider to be policy conditions for success. 

Students First has put policy conditions into three categories: 1) elevating the teaching profession, 2) parent empowerment and 3) finance and governance.  Students First gives no consideration across any of these categories to whether teacher wages, for example, are sufficient to recruit/retain high-quality candidates into teaching, or whether wages are specifically competitive in high-need schools. Students First gives no consideration to whether funding, overall, is sufficient to provide either/both competitive wages or reasonable class sizes, generally or specifically in high-need schools.  It would appear to be their opinion (as was rather clearly expressed by Eric Lerum in a video conference) that the overall level or distribution of funding isn’t the issue – that their preferred policies are what matters, regardless of funding (indeed, the only funding/resource equity consideration in their rankings pertained to whether charter schools received what they consider equal funding – no validation provided!).

Education Week goes old school especially on their school finance rankings. I don’t have time/space to address all of their rankings. As I will show below, some of their old-school measures seem to capture relatively useful information, but others do not. Let’s quickly summarize the measures they use.

  • Fiscal Neutrality: Fiscal neutrality measures the relationship between district spending and district wealth. State school finance formulas are partly intended to disrupt this relationship – to reduce the likelihood that wealthier districts spend systematically more. This measure is often still useful, but may be complicated by the fact that school finance formulas also try to address differences in student needs and costs. To the extent that higher-need kids live in poorer districts (it is not always the case that taxable property wealth and student need are tightly associated), this indicator may partly capture both.
  • McLoone Index: Named for school finance legend Gene McLoone! This index tells us how close, on average, the per pupil spending of districts in the lower half (serving the lower half of kids) is to the median. That is, to what extent does the state formula succeed in “leveling up” the bottom half to the middle? A McLoone of 100 would mean that the lower half is equal to the middle. But this index in particular can produce some screwy results. Say, for example, a state has one or a few very large districts with high need populations, and those districts constitute both the lower half and the middle (they contain nearly all or all of the bottom half of kids). A state with one or a handful of large, high need districts spending less than everyone else (the upper half) might still get a McLoone of 100. But it would be a really crappy school finance system! (with all due respect to Gene!)
  • Coefficient of Variation: The coefficient of variation simply measures the extent of variation in per pupil spending as a percent of the mean per pupil spending. A CV of 10% indicates that roughly 2/3 of children attend districts with per pupil spending within 10% of the mean. The problem with the CV is that, while it measures variation, it doesn’t capture the difference between GOOD variation and BAD variation. Modern state school finance formulas try to create variation in funding to accommodate differences in student needs. Education Week uses nominal weights to “adjust” for differences in student needs, but some state school finance systems actually adjust more aggressively for needs than do those weights. Those states are penalized on the CV.
  • Spending Index & Percent at/Above National Mean: A few reports back, Education Week wanted to construct a form of “spending adequacy” figure to compare spending levels across states and the shares of kids with access to what they considered more “adequate” spending. So they adopted this measure, an index based on the percent of children in each state who attend districts that spend at least as much as the national average district (spending adjusted for regional wage variation). This figure does generally capture spending level differences across states – adjusted for wage variation – but doesn’t, for example, capture spending differences corrected for student population differences, or for the shares of students attending very small, remote rural districts.

Ed Week includes a few additional indicators, like the restricted range – the difference in spending between the 95th and 5th percentile districts – but these are largely redundant with the CV and McLoone and suffer the same problem of not accounting for other cost factors, or for state aid formulas that aggressively adjust for needs and costs.
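To make the mechanics (and the McLoone pathology described above) concrete, here is a minimal sketch with invented district figures. The function names and numbers are mine, not Ed Week's official computations – just the textbook definitions applied to a toy state where one huge, low-spending, high-need district contains the entire bottom half of pupils.

```python
import numpy as np

def mcloone_index(spending, enrollment):
    """Ratio of actual below-median spending to the cost of bringing every
    below-median pupil up to the pupil-weighted median (1.0 = fully leveled up)."""
    order = np.argsort(spending)
    spending, enrollment = spending[order], enrollment[order]
    cum = np.cumsum(enrollment)
    median = spending[np.searchsorted(cum, cum[-1] / 2)]  # pupil-weighted median
    below = spending < median
    target = median * enrollment[below].sum()
    return spending[below] @ enrollment[below] / target if target else 1.0

def weighted_cv(spending, enrollment):
    """Pupil-weighted coefficient of variation of per pupil spending."""
    mean = np.average(spending, weights=enrollment)
    var = np.average((spending - mean) ** 2, weights=enrollment)
    return np.sqrt(var) / mean

# Pathological case from the text: one huge, low-spending, high-need district
# IS the bottom half of pupils -- so it also contains the weighted median.
spending = np.array([9000.0, 12000.0, 12500.0, 13000.0])  # per pupil $
pupils   = np.array([300000, 50000, 50000, 50000])

print(mcloone_index(spending, pupils))          # 1.0 -- "perfect," despite the inequity
print(round(weighted_cv(spending, pupils), 3))  # ~0.164 -- the CV does flag it
```

Note how the two indicators disagree on the same toy state: the McLoone sees nothing wrong because the big district defines the median, while the CV flags the variation without knowing whether it is good (need-based) or bad.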

We set out to correct many of the problems in the Ed Week approach when we started work on our Funding Fairness report. Specifically, we wanted to make comparisons that better accounted for differences in needs and costs across districts and states, and that could characterize state school finance policies consistently without suffering the problems of old-school indicators like the CV or McLoone Index. We also look at spending level – using a statistical model based on three years of data to project the per pupil state and local revenue of a district with a) an average poverty rate, b) in an average wage labor market and c) with 2,000 or more students and average population density. That is, our projected state and local revenue figures are adjusted for poverty, competitive wages, size and population density. We use the same model to then evaluate whether, on average – and in a predictable pattern – state and local revenues are systematically higher (progressive) or lower (regressive) in higher poverty districts relative to lower poverty districts. That is, does the system overall target resources to higher poverty districts, controlling for the other factors?
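The modeling idea can be sketched as follows. This is not the actual Funding Fairness model (which runs on real district panel data); it is a toy regression on invented numbers showing how "predicted revenue at average characteristics" and the high- vs. low-poverty progressiveness comparison work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
poverty = rng.uniform(0.02, 0.40, n)         # district census poverty rate
wage    = rng.normal(1.00, 0.10, n)          # regional labor market wage index
log_enr = rng.normal(np.log(2000), 1.0, n)   # log of district enrollment
density = rng.normal(1.00, 0.30, n)          # population density index
# An invented "progressive" state: revenue rises with poverty and wages.
revenue = 10000 + 8000 * poverty + 3000 * (wage - 1) + rng.normal(0, 300, n)

X = np.column_stack([np.ones(n), poverty, wage, log_enr, density])
beta, *_ = np.linalg.lstsq(X, revenue, rcond=None)

def predicted_revenue(pov):
    """Projected per pupil revenue, holding wage, size and density at average."""
    x = np.array([1.0, pov, wage.mean(), log_enr.mean(), density.mean()])
    return float(x @ beta)

# Progressiveness comparison: otherwise-identical high- vs. low-poverty districts.
# A ratio above 1 indicates a progressive system; below 1, a regressive one.
ratio = predicted_revenue(0.30) / predicted_revenue(0.10)
print(ratio > 1.0)   # True for this invented progressive state
```

The point of regression-adjusting first is that the resulting ratio reflects systematic targeting of resources to poverty, rather than differences driven by wages, scale or sparsity.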

That prerequisite discussion aside, let’s take a look at how all of this stuff lines up – how the Ed Week indicators line up with the Funding Fairness indicators, and how both line up with the Students First indicators. Finally, I look at how all of them line up with various outcome measures.

Again… all of these funding related indicators are about policy conditions for success, rather than success itself.

First up – and here’s a relative no-brainer – both our Funding Fairness report and Ed Week’s Quality Counts include an indicator of state funding effort – the share of state fiscal capacity allocated to elementary and secondary education. I can’t speak for Ed Week, but we include ours to acknowledge that some states spend more than others (do better on our spending level measure) because they can, and that we should grade them at least partly on their effort. Figure 1 shows that our effort measure and Ed Week’s effort measure are pretty highly correlated.

Figure 1

Slide1

Figure 2, by contrast, shows that the Students First funding GPA isn’t related at all to the Ed Week effort indicator, and by extension to ours. Ed Week (and we) consider funding effort an underlying policy condition for success; apparently, Students First doesn’t.

Figure 2

Slide2

Figure 3 compares our funding level indicator and Education Week’s spending index – their relative adequacy indicator. Clearly the two are highly related… but the Ed Week indicator caps out at 100% – the point where 100% of children attend districts spending above the national average. Personally, I prefer indicators that capture the full range of variation. But again, our spending level measure and Ed Week’s spending index pick up much of the same information – relative spending differences across states.

Figure 3

Slide3

But Figure 4 shows that the Students First finance rating scheme really doesn’t relate at all to Education Week’s spending index, suggesting that overall availability of resources – like the effort to raise them – is inconsequential in the eyes of Students First.

Figure 4

Slide4

Figure 5 shows the relationship between our funding progressiveness indicator and Ed Week’s fiscal neutrality indicator. For many states, the two pick up similar things. In states like New Jersey or Utah, where higher poverty districts have more resources than lower poverty ones, the systems have also achieved fiscal neutrality (disrupted the relationship between wealth and spending). By contrast, in states like Illinois or North Carolina, the wealth-spending relationship remains strong and positive (higher wealth, higher spending), and higher poverty districts receive systematically fewer resources!

Figure 5

Slide5

Recall that Illinois received one of the best finance grades from Students First. Apparently, in addition to effort and spending level, fiscal neutrality and need-based funding are also inconsequential to Students First when it comes to funding issues. Figure 6 shows the relationship between funding progressiveness and Students First’s funding-related GPA. Note that all of Students First’s funding superstars (Illinois, New York, Rhode Island and Michigan) are less than stellar on funding fairness.

Figure 6

Slide6

Figure 7 relates the Ed Week CV to our funding fairness measure, showing that states with progressive funding distributions, including New Jersey, Ohio and Massachusetts, are actually penalized by this measure. The CV does not differentiate between need-based variation, as occurs in these states, and wealth-driven variation, as occurs in New Hampshire. We all seem to agree – Ed Week, Students First and us – that New Hampshire’s funding is… well… not so good!

Figure 7

Slide7

Moving on, Figure 8 shows the relationship between our funding fairness measure and the McLoone Index! Not much going on here… and but for a few specific examples, it’s actually hard to tell what the McLoone really captures these days in complex state school finance systems. At least it captures that New Hampshire school funding… well… sucks! But other than that, the McLoone really doesn’t capture much valuable additional information regarding equity.

Figure 8

Slide8

So then, how do these various policy conditions for success relate to various outcome measures? In the table and graphs that follow, I explore that question using the following outcomes:

  1. Reduction in % below proficient (from http://www.hks.harvard.edu/pepg/PDF/Papers/PEPG12-03_CatchingUp.pdf)
  2. Annual Standardized Gain (NAEP, from http://www.hks.harvard.edu/pepg/PDF/Papers/PEPG12-03_CatchingUp.pdf)
  3. Adjusted (for initial level) Annual Standardized Gain
  4. Reading and Math NAEP 8th Grade 2011
  5. Reading and Math NAEP 8th Grade for Lowest Income Group (Free Lunch) 2011

Table 1 shows the correlations between each of the indicators addressed above and the outcome measures listed above. Note that each of these correlations a) is relatively modest to non-existent and b) merely represents a relationship whereby when X is higher, so too is Y. Underlying causal relationships involve a complex web of factors, including socio-economic and demographic conditions.
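Mechanically, the exercise behind Table 1 and the ranked figures below boils down to computing state-level correlations and sorting them. Here is a sketch with invented indicator and outcome values (the names and the relationships baked into them are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_states = 50
outcome = rng.normal(0, 1, n_states)   # e.g., a standardized NAEP gain measure

# Hypothetical indicators -- "spending level" and "funding effort" are built
# to relate to the outcome; "reformy GPA" is unrelated by construction.
indicators = {
    "spending level": 0.5 * outcome + rng.normal(0, 1, n_states),
    "funding effort": 0.4 * outcome + rng.normal(0, 1, n_states),
    "reformy GPA":    rng.normal(0, 1, n_states),
}

corrs = {name: float(np.corrcoef(vals, outcome)[0, 1])
         for name, vals in indicators.items()}
for name, r in sorted(corrs.items(), key=lambda kv: -kv[1]):
    print(f"{name:>15}: r = {r:+.2f}")
```

As the text cautions, nothing in this computation is causal – it only tells you which indicators move together with the outcome across states.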

Table 1

Slide9

Figure 9 ranks the correlations between policy conditions and the reduction in the percent below proficient at the 8th grade level. Interestingly, variation (inequity – good and bad) in spending is most positively associated with reduction in percent below proficient. Beyond that, our funding level indicator and the two funding level indicators from Ed Week are next in line. Students First’s teaching profession indicator is next… but their funding indicator is further down. The figure seems to suggest that higher spending states, even where that spending is unequal, are doing okay on reducing the percent below proficient – but this is a pattern that can clearly be influenced by regional variation.

Figure 9

Slide10

Figure 10 shows the correlations – ranked high to low – between each policy condition and adjusted standardized gain. In this case, adjusted standardized gains are most highly correlated with our spending level indicator, the Ed Week spending adequacy indicator, our progressiveness indicator and Ed Week’s neutrality indicator. One might infer from this that more equitable and adequate funding is associated with greater long term average gains on NAEP… but again, regional differences may drive this to an extent. To get an idea of which states have better “adjusted annual gains,” see the figure in Appendix A. States above the trendline have higher adjusted gains; states below it have lower adjusted gains. Not all states are included (in the graph or correlations), for lack of a baseline data year. (I may work on updating this with multiple baseline years and tests. This is just a start.)
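“Adjusted for initial level” here means, roughly, taking the residual from a regression of gains on baseline scores – states above the trendline get positive adjusted gains, states below get negative ones. A sketch with invented numbers (not the actual NAEP data):

```python
import numpy as np

rng = np.random.default_rng(2)
baseline = rng.normal(255, 8, 40)   # invented 1990-ish NAEP scale scores, 40 states
# Invented gains: lower-scoring states tend to gain more, plus noise.
gain = 1.5 - 0.03 * (baseline - 255) + rng.normal(0, 0.3, 40)

# Fit the trendline of gain on baseline; the residual is the adjusted gain.
X = np.column_stack([np.ones_like(baseline), baseline])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
adjusted_gain = gain - X @ beta     # positive = above the trendline

print(abs(adjusted_gain.mean()) < 1e-9)   # True: OLS residuals average to ~zero
```

Because the adjustment removes the expected catch-up effect of starting low, the residual is a fairer basis for comparing gains across states with very different starting points.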


Figure 10

Slide12

Finally, we have Figure 11, which compares correlations with the NAEP scores of the lowest income children (scores which, across states, were not associated with the average income of those children’s families). These are children in families below 130% of the poverty income level. As in Figure 9, states with the greatest spending variation seemed to have higher low income NAEP scores. Beyond that, however, funding level, effort and wage competitiveness (Teaching Penalty data) seem to be positively correlated with low income NAEP scores. That is, states with higher funding levels, that put up more funding effort, and that have more competitive teacher wages (weekly, relative to non-teachers) have higher low income NAEP scores.

In Figure 11, all of the Students First indicators (GPAs) are negatively associated with low income students’ NAEP scores. That is, low income children are doing much worse in states that got good grades from Students First. That said, many of the conditions Students First included as setting the stage for future success are policies only recently implemented in these states.

Figure 11

Slide13

So that’s it… my rundown of this week’s state rankings data, how they relate (or not) to our School Funding Fairness report, and how both relate to various outcome measures. I’ll let the rest of you run with it from here! Cheers!

Data Sources:

Students First Report Card

School Funding Fairness

Ed Week Quality Counts (Finance)

Teaching Penalty

Relevant Additional Readings

Appendix A: NAEP Standardized Gains and 1990 Scores

Slide11

Gates Still Doesn’t Get It! Trapped in a World of Circular Reasoning & Flawed Frameworks

Not much time for a thorough review of the most recent release of the Gates MET project, but here are my first cut comments on the major problems with the report. The take-home argument seems to be that their proposed teacher evaluation models are sufficiently reliable for prime-time use, and that the preferred model should weight test-score-based statistical estimates of teacher effectiveness at about 33 to 50%, coupled with at least two observations of every teacher. They come to this conclusion by analyzing data on 3,000 or so teachers across multiple cities. They arrive at the 33 to 50% figure, coupled with two observations, by playing a tradeoff game. They find – as one might expect – that a teacher’s prior value-added is still the best predictor of itself a year later… but that when the weight on observations is increased, the year-to-year correlation of the overall rating increases (well, sort of). They still find relatively low correlations between teachers’ value-added ratings on state tests and ratings for the same teachers with the same kids on higher order tests.

So, what’s wrong with all of this? Here’s my quick run-down:

1. Self-validating Circular Reasoning

I’ve written several previous posts explaining the absurdity of the general framework of this research, which assumes that the “true indicator of teacher effectiveness” is the following year’s value-added score. That is, the validity of all other indicators of teacher effectiveness is measured by their correlation with the following year’s value-added (as well as with value-added estimated on alternative tests – with less emphasis on this). Thus the researchers find – to no freakin’ surprise – that prior year value-added is, among all measures, the best predictor of itself a year later. Wow – that’s a revelation!

As a result, any weighting scheme must include a healthy dose of value-added. But because their “strongest predictor of itself” analysis put too much weight on VAM to be politically palatable, they decided to balance the weighting by considering year-to-year reliability (regardless of validity).
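The logical problem can be shown with a toy simulation (every quantity below is invented). Give teachers a “true” effectiveness with a tested and an untested component; then VAM “wins” against a next-year-VAM criterion by construction, even when another measure better tracks true effectiveness:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3000
tested  = rng.normal(0, 1, n)            # teacher effect on the state test
broader = rng.normal(0, 1, n)            # effect on untested, higher-order skills
overall = 0.5 * tested + 0.5 * broader   # "true" effectiveness (unobservable)

vam_y1 = tested + rng.normal(0, 1, n)    # noisy VAM, year 1
vam_y2 = tested + rng.normal(0, 1, n)    # the validity "criterion": year-2 VAM
obs    = overall + rng.normal(0, 0.5, n) # a measure of overall effectiveness

r = lambda a, b: float(np.corrcoef(a, b)[0, 1])
# VAM "wins" against the circular criterion...
print(r(vam_y1, vam_y2) > r(obs, vam_y2))      # True
# ...even though the other measure better tracks true effectiveness.
print(r(obs, overall) > r(vam_y1, overall))    # True
```

The criterion shares the VAM’s test-specific component, so nothing about this ranking tells you which measure actually captures effectiveness – which is the circularity critique in a nutshell.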

The hypocrisy of their circular validity test is best revealed in this quote from the study:

Teaching is too complex for any single measure of performance to capture it accurately.

But apparently the validity of any and all other measures can be assessed by their correlation with a single measure (VAM itself)!?

See also:

Evaluating Evaluation Systems

Weak Arguments for Using Weak Indicators

2. Assuming Data Models Used in Practice are of Comparable Quality/Usefulness

I would go so far as to say that it is reckless to assert that the new Gates findings on this relatively select sub-sample of teachers (for whom high quality data were available on all measures over multiple years) have much if any implication for the usefulness of the types of measures and models being implemented across states and districts.

I have discussed the reliability and bias issues in New York City’s relatively rich value-added model on several previous occasions. The NYC model (likely among the “better” VAMs) produces results that are sufficiently noisy from year to year to raise serious questions about their usefulness. Certainly, one should not be making high stakes decisions based heavily on the results of that model. Further, averaging over multiple years means, in many cases, averaging scores that jump from the 30th to the 70th percentile and back again. In such cases, averaging doesn’t clarify; it masks. And what the averaging masks is largely noise. Averaging noise is unlikely to reveal a true signal!
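A quick simulation of the averaging point (parameters invented, assuming noise dominates the signal): averaging two noisy annual scores improves the correlation with true effectiveness only modestly, because most of what gets averaged is noise.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
true = rng.normal(0, 1, n)        # true teacher effect (sd 1)
noise_sd = 3.0                    # assumed: noise swamps the signal
year1 = true + rng.normal(0, noise_sd, n)
year2 = true + rng.normal(0, noise_sd, n)
two_year_avg = (year1 + year2) / 2

r = lambda a, b: float(np.corrcoef(a, b)[0, 1])
# Theory: single year r ~ 1/sqrt(10) ~ 0.32; two-year average r ~ 1/sqrt(5.5) ~ 0.43.
print(round(r(year1, true), 2), round(r(two_year_avg, true), 2))
```

Even the two-year average remains a weak proxy for the true effect under these assumptions – the gain from averaging is real but small, which is the “averaging noise” point above.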

Further, as I’ve discussed several times on this blog, many states and districts are implementing methods far more limited than a “high quality” VAM, and in some cases states are adopting growth models that don’t attempt – or only marginally attempt – to account for any other factors that may affect student achievement over time. Even when those models do make some attempts to account for differences in students served, in many cases – as in the recent technical report on the model recommended for use in New York State – those models fail. And they fail miserably. But despite the fact that these models fail so miserably at their central, narrowly specified task (parsing teacher influence on test score gains), policymakers continue to push for their use in making high stakes personnel decisions.

The new Gates findings – while not explicitly endorsing use of “bad” models – arguably embolden this arrogant, wrongheaded behavior! The report’s authors have a responsibility to be clearer about what constitutes a better, more appropriate model versus an entirely inappropriate one.

See also:

Reliability of NYC Value-added

On the stability of being Irreplaceable (NYC data)

Seeking Practical uses of the NYC VAM data

Comments on the NY State Model

If it’s not valid, reliability doesn’t matter so much (SGP & VAM)

3. Continued Preference for the Weighted Components Model

Finally, my biggest issue is that this report, like others, continues to think about this all wrong. Yes, the information might be useful – but not if it is forced into a decision matrix or weighting system that requires the data to be used and interpreted with a level of precision or accuracy that simply isn’t there – or worse, where we can’t know whether it is.

Allow me to copy and paste one more time the conclusion section of an article I have coming out in late January:

As we have explained herein, value-added measures have severe limitations when attempting even to answer the narrow question of the extent to which a given teacher influences tested student outcomes. Those limitations are sufficiently severe that it would be foolish to impose on these measures rigid, overly precise high stakes decision frameworks. One simply cannot parse point estimates to place teachers into one category versus another, and one cannot assume that any individual teacher’s estimate is necessarily valid (unbiased). Further, we have explained how student growth percentile measures being adopted by states for use in teacher evaluation are, on their face, invalid for this particular purpose. Overly prescriptive, overly rigid teacher evaluation mandates, in our view, are likely to open the floodgates to new litigation over teacher due process rights, despite much of the policy impetus behind these new systems supposedly being reduction of the legal hassles involved in terminating ineffective teachers.

This is not to suggest that any and all forms of student assessment data should be considered moot in thoughtful management decision making by school leaders and leadership teams. Rather, that incorrect, inappropriate use of this information is simply wrong – ethically and legally (a lower standard) wrong. We accept the proposition that assessments of student knowledge and skills can provide useful insights both regarding what students know and potentially regarding what they have learned while attending a particular school or class. We are increasingly skeptical regarding the ability of value-added statistical models to parse any specific teacher’s effect on those outcomes. Further, the relative weight in management decision-making placed on any one measure depends on the quality of that measure and likely fluctuates over time and across settings. That is, in some cases, with some teachers and in some years, assessment data may provide leaders and/or peers with more useful insights.  In other cases, it may be quite obvious to informed professionals that the signal provided by the data is simply wrong – not a valid representation of the teacher’s effectiveness.

Arguably, a more reasonable and efficient use of these quantifiable metrics in human resource management might be to use them as a knowingly noisy pre-screening tool to identify where problems might exist across hundreds of classrooms in a large district. Value-added estimates might serve as a first step toward planning which classrooms to observe more frequently. Under such a model, when observations are completed, one might decide that the initial signal provided by the value-added estimate was simply wrong. One might also find that it produced useful insights regarding a teacher’s (or group of teachers’) effectiveness at helping students develop certain tested algebra skills.

School leaders or leadership teams should clearly have the authority to make the case that a teacher is ineffective and that the teacher even if tenured should be dismissed on that basis. It may also be the case that the evidence would actually include data on student outcomes – growth, etc. The key, in our view, is that the leaders making the decision – indicated by their presentation of the evidence – would show that they have used information reasonably to make an informed management decision. Their reasonable interpretation of relevant information would constitute due process, as would their attempts to guide the teacher’s improvement on measures over which the teacher actually had control.

By contrast, due process is violated where administrators/decision makers place blind faith in the quantitative measures, assuming them to be causal and valid (attributable to the teacher) and applying arbitrary and capricious cutoff-points to those measures (performance categories leading to dismissal).   The problem, as we see it, is that some of these new state statutes require these due process violations, even where the informed, thoughtful professional understands full well that she is being forced to make a wrong decision. They require the use of arbitrary and capricious cutoff-scores. They require that decision makers take action based on these measures even against their own informed professional judgment.

See also:

The Toxic Trifecta: Bad Measurement & Evolving Teacher Evaluation Policies

Thoughts on Data, Assessment & Informed Decision Making in Schools

RheeFormy Logic & Goofball Rating Schemes: Comments & Analysis on the Students First State Policy Grades

On Monday, the organization Students First came out with their state policy rankings, just in time to promote their policy agenda in state legislatures across the country. Let’s be clear, Students First’s state policy rankings are based on a list of what Students First thinks states should do. It’s entirely about their political preferences – largely reformy policies – template stuff that has been sweeping the reformiest states over the past few years. I’ll have more to say about these preferred policies at the end of this post.

Others have already pointed out that Students First gave good grades to states like Louisiana and Florida, and crummy grades to states like New Jersey and Massachusetts – even though Louisiana notoriously has among the worst school systems (lowest test scores) in the nation, whereas New Jersey and Massachusetts have pretty darn good test scores and well respected school systems. I’ll go there as well, but not as my primary focus. Clearly there’s more behind the test score differences than policy context. New Jersey and Massachusetts certainly have more educated, more affluent adult and parent populations than Louisiana, and that makes a difference.

I’ll anxiously await the day good reformers like Mike Petrilli pack their bags and leave their suburban Washington DC districts to move their kids to the amazing future schools of Louisiana! Heck, given these new Students First ratings, any Louisiana school has to be better – or at least have far more potential – than any school in Montgomery County, Maryland, run by that curmudgeonly anti-reformer Superintendent Joshua Starr! In fact, in my ideal world, Louisiana would become a reformy wonderland… a magnet for all the reformy types… where they could go live in peace – rate their teachers by value-added models, fire 10 to 20% each year, pay them nothing, get rid of any retirement benefits, make every school a charter school (operating primarily with imported Turkish and/or Filipino labor), engage in at least 50% online learning (sitting at a computer doing test prep modules), and provide tuition tax credits to all of the reformies who prefer a purely religious perspective interwoven across subjects. Of course, this must all be done with Louisiana’s current level of financial commitment to public schooling.

But I digress… Now, back to the Students First ratings. Students First created three broad categories of preferred policies for their ratings – policies that it believes:

  1. Elevate teaching
  2. Empower parents
  3. Spend wisely and govern well

By elevate teaching, Students First means the usual basket of reformy options including elimination of traditional salary schedules, teacher evaluations based heavily on student test scores, reduction of retirement benefits and reduction or elimination of due process rights, and pay based primarily on test-score driven evaluation systems. They also prefer to expand alternative routes into the teaching profession. Of course, there’s not a whole lot of transparency into how these various elements are factored into the final grades. But there is a rubric!

By empowering parents, Students First essentially means increasing use of school report cards (more school grades & ratings – yes, a report card that endorses use of … report cards!), reporting to parents when their child is assigned to a teacher with a low rating (driven by test scores), and adoption of policies such as Parent Trigger. And of course, charters should be provided everywhere and anywhere… so everyone has the choice to attend one.

Finally, by spend wisely and govern well… I’m quite honestly not even sure what the hell they mean. They include a broad statement about all kids receiving equitable funding, but seem to imply that this means charter kids getting the same funding as district kids – a pretty narrow interpretation of fairness (and an incalculable one in states with no charters). No actual data seem to be used to rate state funding systems. Fairness is also determined by the provision of publicly financed facilities space to charter schools. Somehow, their ratings of funding totally ignore the vast majority of schools and children across which funds are distributed. In their view, mayors, not local school boards, should govern schools, and schools should report their expenditures uniformly – in a way that shows how spending affects achievement (good luck with that as a reporting requirement/mechanism).

Every item on their list is somehow mysteriously scored on a scale from “0” (you suck) to “4” (wow… you are REFORMERIFIC!) without (apparently) using any actual data to inform the ordinal rating. Then, in a wonderful leap of number abuse, these ordinal-scale data are averaged to create a grade point average for each broad category – on a 0-4, GPA-like scale, where most values of course lie in the imaginary spaces between the original ordinal ratings (like kinda-semi-almost-reformerific = 3.49).
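The arithmetic being mocked here is just this – item names and ratings below are invented, since Students First doesn't publish a worked example:

```python
# Ordinal 0-4 policy scores averaged into a false-precision "GPA."
scores = {
    "eliminate salary schedule": 4,   # hypothetical items and ratings
    "parent trigger law": 3,
    "mayoral control": 3,
    "charter facilities funding": 4,
}
gpa = sum(scores.values()) / len(scores)
print(gpa)   # 3.5 -- a value that sits between the original ordinal categories
```

Averaging ordinal categories treats the distance between a “3” and a “4” as a meaningful, equal interval – which is exactly the number abuse the paragraph above describes.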

That said, let’s dig into those grades, and other stuff that may or may not correlate with them.

RheeFormy Funding Indicators vs. Real Funding Indicators

Let’s start with the funding grades, which I will relate to two of the primary indicators in our annual report on school funding fairness. First, let’s look at the relationship between the RheeFormy GPA for funding/governance and our measure of the percent of gross state product allocated to K-12 education. Top scorers on RheeFormy funding are Michigan, Rhode Island, New York, Alaska and Illinois. Right away this is rather absurd, since New York and Illinois are quite well known to have among the least equitable school funding systems in the nation… but that’s the next graph. RheeFormy winners Florida and Louisiana are not particular standouts on funding effort, whereas New Jersey and Massachusetts, despite being much richer to begin with, allocate a much larger share of their economic productivity to elementary and secondary education. But hey, why would anyone want to count how much effort a state actually puts into funding its schools? Right? How could that matter?

Figure 1.

Slide2

Figure 2 compares the RheeFormy finance GPA to our indicator of funding fairness, which compares the projected state and local revenue of high poverty school districts to that of low poverty school districts. A value greater than 1.0 indicates a progressive system, where more resources are allocated to higher poverty districts; a value less than 1.0 indicates a regressive system. Not surprisingly, the RheeFormy standouts Florida and Louisiana, and especially New York and Illinois (which are among the top in RheeFormy finance), are all highly regressive states. Uh… that’s bad… not good. High poverty districts get the shaft in these states. But that’s apparently just fine with Students First. In fact, it seems preferable! Way to go!

Meanwhile, New Jersey and Massachusetts, along with Ohio, are relatively progressive on finance. Yes, Utah is too, but that’s because Utah spends next to nothing on most schools and slightly more than next to nothing on lower income schools.

So, apparently, RheeFormy logic dictates that to really make progress on achievement in low income communities, money really doesn’t matter. State school finance systems don’t matter, and in fact, spending less on districts with more poor children is the way to go. Good for New York and Illinois! NOT!

Figure 2.

Slide3

What we know about school finance reforms, funding level & distribution & student outcomes

…sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, adequate and equitable distributions of financial inputs to schooling provide a necessary underlying condition for improving adequacy and equity of outcomes. That is, if the money isn’t there, schools and districts simply don’t have a “leverage option” that can support strategies that might improve student outcomes. If the money is there, they can use it productively; if it’s not, they can’t. But, even if they have the money, there’s no guarantee that they will. Evidence from Massachusetts, in particular, suggests that appropriate combinations of more funding with more accountability may be most promising.

See: http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Put bluntly – equitable and adequate financing for the education of all children is a prerequisite condition for achieving equitable and adequate outcomes. The Students First rating system misses this point entirely – measuring neither equity nor adequacy, nor the effort required to raise these prerequisite resources.

No, the Students First ratings don’t pretend to measure these things. But, they seem to argue that their ratings measure the prerequisite conditions for reforming education systems – where I must assume they mean making those systems better. But the reality is that the ratings focus on trivial and tangential reformy preferences, leading them to praise states that are among the worst in the nation on school funding and chastise states that are among the best.

RheeFormy Teacher Quality Indicators vs. Competitive Compensation

Next up are the RheeFormy ratings on elevating the teaching profession, where quite clearly, having competitive wages for teachers (relative to the workforce of similarly educated workers) is a non-issue. Figure 3 shows the relationship between the relative weekly wage of teachers – compared to non-teachers with the same education level – and the elevating teaching GPA. Teachers in Louisiana have among the largest “teaching penalties,” earning only about 70% of the weekly wage of non-teachers. That’ll certainly elevate the profession! They are right up there with other stellar reformy states including Colorado and Tennessee. Teachers in Florida do a bit better. Teachers in Massachusetts don’t do that well either, but Massachusetts is a state where non-teacher wages are quite high. So being relatively low paid in Massachusetts might not be as bad as being relatively low paid in Louisiana or Tennessee! (Massachusetts teachers also have the benefit of being able to send their own kids to pretty good public schools.) Teacher wages in New Jersey are more competitive, even in that state’s higher non-teacher wage context.

Figure 3.

Slide5

In really simple terms – competitive wages are the first step toward elevating the teaching profession. Given what we actually know from research on “elevating the teaching profession,” I don’t expect to see America’s best teachers flocking to Louisiana, Colorado and Tennessee anytime soon. But, I certainly hope to see those reformers in their caravan, moving their families to those reformy promised lands!

What we know about policy conditions for a strong teacher workforce

A substantial body of literature has accumulated to validate the conclusion that both teachers’ overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olson (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[i] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[ii] In addition, more recent studies have tackled the specific issues of relative pay noted above. Loeb and Page showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[iii]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.
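To put the Loeb and Page estimate quoted above in concrete terms, here is a hedged back-of-envelope calculation. The baseline dropout rate is a hypothetical placeholder; only the 3-4 percent effect range comes from the quoted study:

```python
# Back-of-envelope illustration of the Loeb & Page finding quoted above:
# a 10% increase in teacher wages reduces high school dropout rates by 3-4%.
# The baseline dropout rate is a made-up illustrative number.

baseline_dropout = 0.12                 # hypothetical baseline dropout rate
effect_low, effect_high = 0.03, 0.04    # 3% to 4% relative reduction

new_high = baseline_dropout * (1 - effect_low)    # smaller reduction
new_low = baseline_dropout * (1 - effect_high)    # larger reduction
print(f"A 10% wage raise: dropout rate falls from {baseline_dropout:.4f} "
      f"to between {new_low:.4f} and {new_high:.4f}")
```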

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[iv]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[v]

And what do we know about the effectiveness of the preferred Students First policies of providing performance based pay? For recent studies specifically on the topic of “merit pay,” each of which generally finds no positive effects of merit pay on student outcomes, see:

  • Glazerman, S., Seifullah, A. (2010) An Evaluation of the Teacher Advancement Program in Chicago: Year Two Impact Report. Mathematica Policy Research Institute. 6319-520
  • Springer, M.G., Ballou, D., Hamilton, L., Le, V., Lockwood, J.R., McCaffrey, D., Pepper, M., and Stecher, B. (2010). Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching. Nashville, TN: National Center on Performance Incentives at Vanderbilt University.
  • Marsh, J. A., Springer, M. G., McCaffrey, D. F., Yuan, K., Epstein, S., Koppich, J., Kalra, N., DiMartino, C., & Peng, A. (2011). A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses. Final Evaluation Report. RAND Corporation & Vanderbilt University.
RheeFormy States are NOT a Model for our Nation!
Now for that discussion of outcomes I mentioned previously. Well, here’s what it looks like. The new RheeFormy ratings applaud the likes of Louisiana, Florida… Tennessee and even Washington DC. These are anything but stellar performers on national assessments, as shown in Figure 4 and Figure 5. But indeed, these are also states (and a city) with relatively high child poverty rates.
Figure 4.
Slide7
Figure 5.
Slide8
Others have argued that states like Louisiana and Florida in particular – while being low performers – have posted impressive gains on NAEP over the past 20 years (most of which predates adoption of these new RheeFormy policies). Figure 6 uses the standardized annual NAEP gains reported in THIS REPORT. It would appear here that overall winners on the Students First ratings do have pretty good NAEP gains over time. But Massachusetts and New Jersey – RheeFormy losers – actually posted gains on NAEP similar to those of Louisiana and Florida!
Figure 6.
Slide9
Even more impressive, New Jersey (and also Massachusetts, but 1990 scores were not available) posted strong NAEP gains despite being relatively high to begin with. As it turns out, Louisiana had nowhere to go but up. And it appears that starting out with very low NAEP scores is a pretty strong determinant of how much a state gained over time. The lower your starting point, the more you gained. But that Non-reformy New Jersey – curmudgeonly high spending, fair spending state of New Jersey – posted gains similar to Louisiana even though it started out already among the highest performing states!
Figure 7.
Slide10
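The starting-point/gain relationship shown in Figure 7 can be illustrated with a quick simulation. The data below are synthetic (generated to mimic the pattern, not actual NAEP scores); the point is simply that when low starters gain more, a regression of gains on starting scores yields a negative slope:

```python
# Sketch of the Figure 7 logic: regress states' NAEP gains on their
# starting scores. All data here are synthetic illustrations, not NAEP.
import numpy as np

rng = np.random.default_rng(0)
start = rng.uniform(240, 280, size=50)                  # hypothetical 1990 scores
# Build in the pattern described above: lower start -> larger gain.
gain = 40 - 0.5 * (start - 240) + rng.normal(0, 3, size=50)

slope, intercept = np.polyfit(start, gain, 1)
print(f"Estimated slope: {slope:.2f}")
# A negative slope: states starting near the bottom had "nowhere to go but up."
```

Against that backdrop, a state like New Jersey – high starting point *and* gains comparable to the low starters – sits well above the fitted line.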
And what else about those outcomes in those stick-in-the-mud union dominated states of Massachusetts and New Jersey? As it turns out, while they were working on spending their money fairly on lower income kids they were also making significant gains in reducing the percent of children scoring below proficiency on NAEP (again using data from the PEPG Catching Up report!)  Yep, there they are, flying pretty high on funding fairness and on improving the outcomes of the lowest performing students. Louisiana and Florida… well… not so much!
Figure 8.
Slide11

A Comment on Accountability, Empowerment, Transparency & Students First Preferred Policies

Finally, I close with a topic that deserves another blog post altogether, and will likely get one at some point. I’ve been struck by the logic that the preferred policies in the Students First report are intended – by their framing – to increase accountability, empowerment and transparency. Yet, in all likelihood, most of these proposals accomplish precisely the opposite – substantially eroding public accountability and oversight and compromising statutory and constitutional rights of children, employees and local taxpayers.

Now, we may have our differing perspectives on the structure of our American government and operation of government entities. But, our government has a defined structure with reasonably well conceived overarching laws.

The U.S. Constitution, state constitutions and various federal and state statutes provide important protections to students, to employees, and to the taxpayers who finance public institutions. Importantly, our constitution protects individuals from certain treatment by our government and agents of our government. Public schools – government schools, to borrow from libertarian rhetoric – fall under that umbrella, and must, for example, provide children due process before depriving them of the right to attend, and must respect – to a limited extent – students’ rights to free expression, etc. Government schools also cannot promote/endorse a particular religious viewpoint (proselytize). Many other protections of both children and employees of government institutions are invoked through Section 1983 of Title 42 of the U.S. Code, which applies to entities that are ‘state actors’ (uh… government entities). Further, many state laws apply to government entities and to ‘public officials.’

We have a representative system of government with multiple levels, where public officials are elected by (and accountable to) voters (albeit often a small share of those eligible), and where additional layers of public officials may be appointed by elected public officials. And, as noted above, many laws, especially those pertaining to public disclosure, public meetings and public records apply to ‘public officials.’

The Students First state policy rating system – like many other reformy manifestos – implies that the road to ACCOUNTABILITY and TRANSPARENCY is necessarily (perhaps exclusively) paved through shifting larger numbers of students and teachers and larger shares of public funding over to the management of non-government entities and non-public officials, as well as creating entirely new layers of ‘public decision making’ by referendum/petition (Parent Trigger).  Whatever gripes we may have regarding the efficiency or responsiveness of government operated services, we must think this one through carefully.

Unless detailed accountability requirements are explicitly spelled out in a whole new layer of state and federal laws, the preferred policies laid out in the Students First and by other reformy institutions are more likely to lead to less public accountability and transparency rather than more. For example:

  1. Shifting substantial numbers of students into private schools or privately managed charter schools means that larger shares of students will have limited constitutional and statutory protections. When students are educated in privately managed schools – including charters – they do not (unless explicitly laid out in state charter laws) have the same constitutional protections with respect to discipline policies, and they lack other important statutory protections that apply only to “state actors” (government institutions). Indeed, parents have a choice of whether to forgo these rights for their children. BUT… advocates of these policies are deceitfully selling charter schooling as ‘public’ (with all the rights, privileges, etc.) to an unknowing public.
  2. Shifting substantial shares of public financing to entities governed by appointed boards of private citizens (not public officials), private management firms and private subcontractors reduces financial transparency, because these institutions and individuals may – and most often do when challenged – assert that they are not subject to the open meetings, open records and other disclosure laws that apply to government entities.

Further, specific policies including parent trigger policies and ‘opportunity scholarship’ tuition tax credits may also substantially erode public accountability.

  1. Parent trigger attempts to subvert traditional elected representative local government by granting disproportionate power to a temporary class of citizens – parents of children attending a school at a given moment in time – to make relatively permanent decisions, by simple majority rule, regarding the operation of public assets, programs and services, including the option to make them no longer public. But these assets, programs and services belong to, and serve directly and indirectly, the larger community of eligible voters who put in place their elected school board (or the city government officials who appointed a portion or all of that board).
  2. Establishment of privately governed  entities to manage funds collected through a tax credit program [Opportunity Scholarship Vouchers] is comparable to simply handing over an equivalent sum of funds collected as tax dollars to that entity, except that taxpayers lose any/all rights/accountability over the use of those funds! As with private management firms and schools, private entities of this type are not governed directly by public officials and therefore may not be similarly legally accountable. They may not be subject to the same level of public records or meetings disclosure, etc. This mechanism [Tuition Tax Credit] has been used as a means to get around constitutional concerns over allocation of public tax revenues to religious institutions. That is, this mechanism was created specifically to negate taxpayer standing to mount legal challenges over the use of funds. The U.S. Supreme Court has determined that when tax credit programs are structured in this way, taxpayers have no right [no standing] to bring constitutional (or likely any) challenges over the use of these funds. [http://www.supremecourt.gov/opinions/10pdf/09-987.pdf]

So yes – Students First has its policy preferences – and it’s certainly entitled to them. It has built its entire rating system on its idea of what’s good policy. But it has made no attempt to justify those preferences with research on their effectiveness or efficiency, nor could it. There simply is no research basis to support the vast majority of its preferences. Even where charter school policy is concerned, findings of successful charters occur most often where authorizers are few and tightly regulated, and where charter market share is low (as in NYC or Boston). This is in direct contrast with the Students First preference for further deregulating and expanding the sector (as in states with relatively poor charter performance). So, in short, there’s simply no research-based reason to follow the policy agenda of Students First. And the reasons they provide – accountability, transparency, blah… blah… blah… – are also not consistent with their policy agenda.

As a school finance researcher in particular, I’ve been increasingly frustrated by the lack of detailed, consistent financial reporting on charter schools, and I’ve written much on this topic. I’ve also written on private school financing, which is even more sparsely reported. The more kids who are shifted into charters, the fewer kids on whom we have reliable, comprehensive information on finances, teacher contracts, compensation packages, etc. (as charter management companies repeatedly assert that their employee contracts are private, not public documents). Similarly, financial arrangements involving land deals and capital financing are more opaque than ever – far more opaque and inaccessible than the public financing world of municipal bond financed infrastructure. And I don’t see any legitimate effort to make these institutions more transparent – NONE!


[i] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of stay in teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[ii] David N. Figlio (2002) “Can Public Schools Buy Better-Qualified Teachers?” Industrial and Labor Relations Review 55, 686-699. David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55 267-271. Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation. 28 (2) 465-498.

[iii] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[iv] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics. April, 49-71

See also:

Downes, T. A., Figlio, D. N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[v] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

Thoughts on “Randomized” vs. Randomized Charter School Studies

There’s much talk in education research about Randomized Control Trials and truly “experimental” research being the “gold standard” for determining whether a specific intervention “works” or not. This is the basis for the Institute of Education Sciences’ What Works Clearinghouse. It is often argued that randomized, or experimental, studies are “good” and decisive, and that other approaches simply don’t match up. Therefore, if someone really wants to know what works or doesn’t with regard to a specific intervention or set of interventions, one need only review those randomized, experimental studies to identify the consensus finding.

There’s so much to discuss on these issues, including the extent to which truly randomized experiments can actually shed light on how interventions might play out in other settings or at scale. But I’ll stick to a much narrower focus in this post, and that is: just how randomized is randomized? Most recently, this question came to mind after reading this post addressing “experimental” vs. “non-experimental” studies of charter schools by Matt Di Carlo at Shanker Blog, and this post over at Jay P. Greene’s blog on RIGOROUS charter research (meaning experimental, or randomized).

There tend to be two types of studies done to determine the relative effectiveness of “charter schools” versus traditional “district schools.” The basic idea of either type of study is to determine the effect that “charter schooling” or some specific set of policies/practices and instructional models and strategies about “charter schooling”, has on students’ outcomes, when compared to kids who don’t receive those strategies. That is, exposure to “charter schooling” is assumed to be a treatment, and non-exposure, whatever that constitutes, is the control.

One type of study tries to identify after the fact, otherwise similar kids (matched pairs) attending a set of charter schools and a set of district schools in the same city, and then compares their achievement growth over time. These studies often fall short in two important ways. First, the variables available for matching are often too blunt to capture unobserved differences – motivation, parental engagement – between families who sought out charters and those who did not. Second, these studies rarely account adequately for differences in peer composition across settings, or for non-random student attrition over time.

The other type of study is often referred to as meeting the gold standard – as being a randomized study – or lottery-based study. It is assumed, since these studies are declared golden, that they therefore necessarily resolve both above concerns. And it is possible, that if these studies truly were randomized (or even could be) that they could resolve the above concerns. But they don’t (resolve these concerns), because they aren’t (really randomized).

First, what would a randomized study look like? Well, it would have to look something like this – where we randomly take a group of kids – with consent or even against their will – and assign them to either the charter or traditional school option. The mix of kids in each group is truly random and checked to ensure that the two groups are statistically representative (using better than the usual measures) of the population.  Then, we have to make sure that all other “non-treatment” factors are equivalent, including access to facilities, resources, etc. That is, anything that we don’t consider to be a feature of the treatment itself. This is especially important if we want to know whether expanding elements of the treatment are likely to work for a representative population.  This is a randomized, controlled trial.

Slide1
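The random-assignment logic sketched above can be expressed in a few lines. This is a minimal sketch with synthetic students (no actual records), showing the balance check that genuine randomization makes possible:

```python
# Minimal sketch of true randomized assignment with a balance check.
# Students and their prior scores are synthetic, for illustration only.
import random
import statistics

random.seed(42)
students = [{"id": i, "prior_score": random.gauss(250, 20)} for i in range(2000)]

# Random assignment: shuffle the full population, split in half.
random.shuffle(students)
treatment, control = students[:1000], students[1000:]

t_mean = statistics.mean(s["prior_score"] for s in treatment)
c_mean = statistics.mean(s["prior_score"] for s in control)
print(f"Treatment mean: {t_mean:.1f} | Control mean: {c_mean:.1f}")
# With genuine randomization, group means on pre-treatment covariates
# should differ only by chance - and both groups represent the population.
```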

So then, what’s randomized in a randomized charter school study? Or lottery-based study?  One might sketch out a lottery-based study as follows:

Slide2

Here, the study is really only randomized at one point in a long, complicated sequence – the lottery itself. Students and families first have to decide they want to enter the lottery – that they are interested in attending a charter school – which will ultimately affect the composition of the charter school enrollments. Then, among those selecting into the pool, some students are randomly chosen to attend the charters, alongside others randomly chosen from that same non-random pool of lottery participants, while the randomly un-chosen go, well, somewhere else… with a group of peers non-randomly chosen to end up in that same somewhere else.

So, while the studies compare the achievement of kids randomly chosen to those randomly un-chosen (thus comparing only those who tried to get a charter slot), the kids are shuffled into settings that are anything but randomly assigned, containing potentially vastly different peer groups and a variety of other differences in setting. Add to this the likelihood of non-random student attrition, further altering peer group over time.
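The selection dynamic described above can be sketched in code. Everything here is synthetic; the assumed link between family motivation and lottery entry is a stylized assumption for illustration, not an empirical claim about any actual lottery:

```python
# Sketch of why a lottery-based study is randomized at only one step:
# families self-select into the lottery (non-random), then winners are
# drawn at random from that self-selected pool. Synthetic data only.
import random
import statistics

random.seed(1)
population = [{"motivation": random.gauss(0, 1)} for _ in range(10000)]

# Stylized assumption: more motivated families are more likely to apply.
applicants = [s for s in population
              if random.random() < 0.2 + 0.2 * (s["motivation"] > 0)]

# The only randomized step: the lottery draw among applicants.
random.shuffle(applicants)
half = len(applicants) // 2
winners, losers = applicants[:half], applicants[half:]

pop_mean = statistics.mean(s["motivation"] for s in population)
app_mean = statistics.mean(s["motivation"] for s in applicants)
w_mean = statistics.mean(s["motivation"] for s in winners)
l_mean = statistics.mean(s["motivation"] for s in losers)
print(f"Population mean motivation: {pop_mean:.2f}")
print(f"Applicant-pool mean:        {app_mean:.2f}")
print(f"Winners: {w_mean:.2f} | Losers: {l_mean:.2f}")
# Winners and losers are comparable to EACH OTHER, but neither group
# is representative of the population as a whole.
```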

As such, I very much prefer that these studies be referred to as “lottery-based” rather than randomized or experimental. These studies are randomized at only one step in the process, potentially conflating setting/peer effects with treatment effects, and thus substantially compromising their policy implications.

As with those matching studies, the types of variables used to check and/or correct for peer composition and non-randomness of attrition are often too imprecise to be useful.

One fun alternative would be to pull a switch, whereby the charter teachers, their model, instructional strategies etc. would be traded with the district schools’ teachers, model and strategies, as a confirmatory test to see whether the charter model effects are actually transferable (assuming there were effects to begin with).

Slide5

Clearly, I’m asking way too much to assume that charter school, or most other program/intervention research in education be based on real RCTs. That’s not going to happen. And I’m not convinced it would be that useful for informing policy anyway. But, my point in this post is to make it clear that the difference between the types of matched student studies done by CREDO, for example, and the studies being (mis)characterized as “gold standard” randomized studies is far more subtle than many are willing to admit and NEITHER ARE WHAT THEY’RE REALLY CRACKED UP TO BE!

Dumbest “School Finance” Tweet Ever?

Critics say only public systems can focus 100% on the children, but vast majority of K-12 $$ goes to employees not kids bit.ly/SLrNUn

— AEI Education (@AEIeducation) December 18, 2012

Twisted Truths & Dubious Policies: Comments on the NJDOE/Cerf School Funding Report

Yesterday, we were blessed with the release of yet another manifesto (as reported here on NJ Spotlight) from what has become the New Jersey Department of Reformy Propaganda. To be fair, it has become increasingly clear of late that this is simply the new model for State Education Agencies (see NYSED Propaganda Here), with the current US Dept of Education often leading the way.

Notably, there’s little change in this report from a) the last one or b) the Commissioner’s state of the schools address last spring.

The core logic of the original report remains intact:

  1. That NJ has a problem – and that problem is  the achievement gap between low income and non-low income kids;
  2. That spending money on these kids doesn’t help – in fact it might just hurt – but it’s certainly a waste;
  3. Therefore, the logical solution to improving the achievement gap is to reduce funding to districts serving low income and non-English speaking kids and shift that funding to others.

Here’s a quick walk-through…

The Crisis?

The new report, like the previous, zeros in on the problem of New Jersey’s achievement gap between low income and non-low income kids. Now, the reason that the recent reports have focused so heavily on the achievement gap is that in the early days of this administration, the rhetoric was focused on the system as a whole being academically bankrupt. The simple response was to point out that NJ schools, by nearly any outcome measure stack up quite favorably against nearly any other state. So, they had to back off that rhetoric, and move to the achievement gap thing. Here’s one of the justifying statements in the current report.

“Likewise, on the 2011 administration of the National Assessment of Educational Progress, New Jersey ranked 50th out of 51 states (including Washington, D.C.) in the size of the achievement gap between high- and low-income students in eighth grade reading.”

Of course, as I’ve pointed out again and again, and will reiterate below, this is an entirely bogus comparison.

The Proposed Solution?

Like the previous funding report from last winter, the primary recommendations in this new manifesto are to reduce funding adjustments for low income and non-English speaking kids, because we know they don’t need that funding and certainly couldn’t and obviously haven’t used it well. The report did back off from proposing one of the oldest tricks in the book for cutting aid to the poor – funding on average daily attendance – but likely backed off because they simply lack the legal authority to propose this change in this context and not out of any moral/ethical principle.

The Rationale?

The most bizarre section of the new report appears at the bottom of the second page. Here, the report’s author makes several bold, outlandish and mostly factually incorrect statements, with little or no justification provided for any of them. It’s nearly as ridiculous as The Cartel.

Here are two of my favorite paragraphs:  

“The conclusion is inescapable: forty years and tens of billions of dollars later, New Jersey’s economically disadvantaged students continue to struggle mightily. There are undoubtedly many reasons for this policy failure, but chief among them is the historically dubious view that all we need to do is design an education funding formula that would “dollarize” a “thorough and efficient system of free public school” and educational achievement for every New Jersey student would, automatically and without more, follow.” (emphasis added)

“Of course, schools must have the resources to succeed. To the great detriment of our students, however, we have twisted these unarguable truths into the wrongheaded notion that dollars alone equal success. How well education funds are spent matters every bit as much, and probably more so, than how much is spent. New Jersey has spent billions of dollars in the former-Abbott districts only to see those districts continue to fail large portions of their students. Until we as a state are willing to look beyond the narrow confines of the existing funding formula – tinkering here, updating there – we risk living Albert Einstein’s now infamous definition of insanity: doing the same thing over and over again and expecting a different result.”

First, I would point out that starting with the line “the conclusion is inescapable” is one of the first red flags that most of what follows will be a load of BS. But that aside… let’s take a look at some of these other statements. I’m not sure who the Commissioner thinks is advancing the “historically dubious view that all we need…blah…blah… blah… dollarize … blah… blah,” but I would point out that the central issue here is that a well organized, appropriately distributed, sufficiently funded state school finance system provides the necessary underlying condition for getting the job done – achieving the desired standards, etc. (Besides, nothing could ever equal the reformy dubiousness of this graph… or these!)

This isn’t about arguing that money in and of itself solves all ills. But money is clearly required. It’s a prerequisite condition. More on that below. The claim that others are advancing such a historically dubious view is absurd. Nor is it the basis for the current state school finance system, or the court order that led to the previous (not current) system! [background on current system here]

Equally ridiculous is the phrase about these “unarguable truths.” Again, when I see a phrase like this, my BS detector nearly explodes. Again, I’m not sure who the commissioner thinks is advancing some “wrongheaded notion” that “dollars alone equal success,” but I assure you that while dollars alone don’t equal success, equitable and adequate resources are a necessary underlying condition for success.

Indeed, the current state school finance system is built on attempts to discern the dollars needed to provide the necessary programs and services to meet the state outcome objectives [I’ll set aside the junk comparisons to Common Core costs listed in the report for now]. But the focus isn’t/wasn’t on the dollars, but rather the programs and services – which, yes… ultimately do have to be paid for with… uh… dollars.

Under the prior Abbott litigation and resulting funding distributions, the focus was entirely on the specific programs and services required for improving outcomes of children in low income communities (early childhood education programs, adequate facilities, etc.). In fact, that was one of the persistent concerns among Abbott opponents… that the programs/services must be provided under the court mandate, regardless of their cost (not that the dollars must be provided regardless of their use) and in place of any broader, more predictable systematic formula. So, perhaps the answer is to go back to the Abbott model?

Ultimately, to establish a state school finance formula (which is a formula for distributing aid), you’ve got to “dollarize” this stuff. But that doesn’t by any stretch of the imagination lead to the assumption that the dollars create – directly – regardless of use – the outcomes. That’s just ridiculous. And the report provides no justification behind its attack on this mythical claim.

In fact, these statements convey a profound ignorance of even the recent history of school finance in New Jersey.

The Reality!

Now that I’m done with that, let’s correct the record on a few points.

New Jersey has an “average” achievement gap given its income gap

I’m not sure how many times I’ll have to correct the current NJDOE and its commissioner on their repeated misrepresentation of NAEP achievement gap data. This is getting old and it’s certainly indicative that the current administration is unconcerned with presenting any remotely valid information on the state of New Jersey schools. Given what we’ve seen in previous presentations I guess I shouldn’t be surprised.

In any case, here’s my most recent run of the data comparing income gaps and NAEP outcome gaps. Across the horizontal axis in this graph is the difference in income between those above the reduced lunch income threshold and those below the free lunch income threshold. New Jersey and Connecticut have among the largest gaps in income between these two groups. Keep in mind that the same income thresholds are used across all states, despite the fact that the cost of comparable quality of life varies quite substantially (nifty calculator here). On the vertical axis are the gaps in NAEP scores between the two groups.

 Figure 1. Income Gaps and Achievement Gaps

Slide1

As we can see, states with larger gaps in income between the groups also have larger gaps in scores between the two groups. Quite honestly, this is not astounding. It’s dumb logic. And that’s why it’s so inexcusable for Cerf & Co. to keep returning to this intellectually & analytically dry well.

Most importantly, NJ’s gap is right on the line. That is, given its income gap, NJ falls right where we would expect – on the line. NJ’s income-related achievement gap is right in line with expectations!

Is that good enough? Well, not really. There’s still work to be done. But the bogus claim that NJ has the 2nd largest achievement gap has to stop.
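The “on the line” reasoning above amounts to a regression-residual check: fit the line relating income gaps to achievement gaps, then see how far a given state sits from that line. A minimal sketch with synthetic data (not the actual NAEP or income figures behind Figure 1):

```python
# Sketch of the Figure 1 logic: regress states' achievement gaps on their
# income gaps and compute a state's residual from the fitted line.
# All numbers below are synthetic placeholders, not actual state data.
import numpy as np

rng = np.random.default_rng(7)
income_gap = rng.uniform(30, 70, size=50)                # hypothetical income gaps
score_gap = 10 + 0.3 * income_gap + rng.normal(0, 2, size=50)

slope, intercept = np.polyfit(income_gap, score_gap, 1)

# A hypothetical state with a large income gap, sitting "on the line":
state_income_gap = 65
state_score_gap = 10 + 0.3 * state_income_gap
predicted = slope * state_income_gap + intercept
residual = state_score_gap - predicted
print(f"Residual from fitted line: {residual:.2f}")
# A residual near zero means the state's achievement gap is about what its
# income gap predicts - a large RAW gap, but an average CONDITIONAL gap.
```

This is why ranking states by raw gap size, without conditioning on income gaps, is the bogus comparison described above.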

New Jersey has posted impressive NAEP gains given its spending increases

Now let’s take a look at how disadvantaged kids in NJ have actually done on a few of the NAEP tests in recent years when compared to disadvantaged kids in similar states in the region.  The pictures pretty much tell the story.

Figure 2. NAEP 8th grade Math for Children Qualified for Free Lunch

Slide3

Figure 3. NAEP 4th grade Reading for Children Qualified for Free Lunch

Slide4

Figure 4. NAEP 8th Grade Math for Children of Maternal HS Dropouts

Slide5

Even Eric Hanushek’s recent data make NJ look pretty darn good in terms of NAEP gains achieved relative to the additional resources provided!

Figure 5. Relationship between Change in Per Pupil Spending and Overall NAEP Gain


Figure 6. Relationship between Change in % Spending per Pupil and Overall NAEP Gain


Figure 7. Relationship between Starting Point and Gain over Time


For more on these last few slides and the data from which they are generated, see this post.

Arguably, given these results, doing the same thing over and over again and expecting the SAME result might be entirely rational!

Money Matters & Equitable and Adequate Funding is a Necessary Underlying Condition for Success

Finally, a substantial body of literature exists to refute the absurd rhetoric and policy preferences of the NJDOE school funding report – most specifically the veiled assertion that reducing funding to low income children is the way to reduce the achievement gap.

In a recent report titled Revisiting the Age Old Question: Does Money Matter in Education? I review the controversy over whether, how and why money matters in education, evaluating the current political rhetoric in light of decades of empirical research. I ask three questions, and summarize the responses to those questions as follows:

Does money matter? Yes. On average, aggregate measures of per pupil spending are positively associated with improved or higher student outcomes. In some studies, the size of this effect is larger than in others and, in some cases, additional funding appears to matter more for some students than others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent – in other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.

Do schooling resources that cost money matter? Yes. Schooling resources which cost money, including class size reduction or higher teacher salaries, are positively associated with student outcomes. Again, in some cases, those effects are larger than others and there is also variation by student population and other contextual variables. On the whole, however, the things that cost money benefit students, and there is scarce evidence that there are more cost-effective alternatives.

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

While there may in fact be better and more efficient ways to leverage the education dollar toward improved student outcomes, we do know the following:

  • Many of the ways in which schools currently spend money do improve student outcomes.
  • When schools have more money, they have greater opportunity to spend productively. When they don’t, they can’t.
  • Arguments that across-the-board budget cuts will not hurt outcomes are completely unfounded.

In short, money matters, resources that cost money matter and more equitable distribution of school funding can improve outcomes. Policymakers would be well-advised to rely on high-quality research to guide the critical choices they make regarding school finance.

Regarding the politicized rhetoric around money and schools, which has become only more bombastic and less accurate in recent years, I explain the following:

Given the preponderance of evidence that resources do matter and that state school finance reforms can effect changes in student outcomes, it seems somewhat surprising that not only has doubt persisted, but the rhetoric of doubt seems to have escalated. In many cases, there is no longer just doubt, but rather direct assertions that schools can do more than they are currently doing with less than they presently spend; that money is not a necessary underlying condition for school improvement; and, in the most extreme cases, that cuts to funding might actually stimulate improvements that past funding increases have failed to accomplish.

To be blunt, money does matter. Schools and districts with more money clearly have greater ability to provide higher-quality, broader, and deeper educational opportunities to the children they serve. Furthermore, in the absence of money, or in the aftermath of deep cuts to existing funding, schools are unable to do many of the things they need to do in order to maintain quality educational opportunities. Without funding, the efficiency tradeoffs and innovations being broadly endorsed are suspect. One cannot trade off spending money on class size reductions against increasing teacher salaries to improve teacher quality if funding is not there for either – if class sizes are already large and teacher salaries non-competitive. While these are not the conditions faced by all districts, they are faced by many.

It is certainly reasonable to acknowledge that money, by itself, is not a comprehensive solution for improving school quality. Clearly, money can be spent poorly and have limited influence on school quality. Or, money can be spent well and have substantive positive influence. But money that’s not there can’t do either. The available evidence leaves little doubt: Sufficient financial resources are a necessary underlying condition for providing quality education.

There certainly exists no evidence that equitable and adequate outcomes are more easily attainable where funding is neither equitable nor adequate. There exists no evidence that more adequate outcomes will be attained with less adequate funding. Both of these contentions are unfounded and quite honestly, completely absurd.

Related sources:

Baker, B.D. (2012) Revisiting the Age Old Question: Does Money Matter in Education. Shanker Institute. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Baker, B.D., Welner, K. (2011) School Finance and Courts: Does Reform Matter, and How Can We Tell? Teachers College Record 113 (11) p. –

How Modern School Finance/Education Policy Works: Lessons from New York

I’ll admit that the more I do this stuff – the more I write about today’s education policy environment, and especially the environment around school funding – the more cynical I get. And few states have done more to encourage my cynicism of late than New York. But I suspect that the tales from the trenches in many other states might be quite similar. So let me use New York as a prototype of the twists and turns and warped logic of modern state education policy. New York education policy has followed a four-step process:

Step 1: Slither out from court order by rigging low-ball foundation aid formula

As I noted in another recent post, several years back the New York Court of Appeals ordered that the state legislature provide sufficient funding (specifically to New York City) to achieve a “sound basic education,” which was ultimately equated with a “meaningful high school education.” The city and the governor’s office presented the court with alternative estimates of what that would cost. The state (governor/legislature/regents), as might be expected, sought the “less expensive” option. And the court largely took the state’s side. That is, the court ordered that the system be fixed, but largely (uncritically, but for some dissenting minority opinion) accepted the state’s proposal for fixing it.

The state achieved their low-ball estimate by pulling a few classic tricks, some of which have been used in other states. First, the state based their minimum funding level on the average spending of existing districts meeting the state standards – but had set a relatively low bar for those standards (a bar most districts were already surpassing anyway). Then they chose to look only at the “instructional” share of current spending (lopping off a large chunk of spending that’s actually needed to operate a school). Rhode Island recently pulled the same garbage, but instead of looking at instructional spending for districts within Rhode Island, they used instructional spending in the neighboring states of Massachusetts, Connecticut and New Hampshire (okay… NH doesn’t border RI… does it?… but don’t tell their Commissioner… ‘cuz including NH allowed them to bring the average down! See link above).

The final step in their low-ball analysis was to look only at the average spending of the lower-half spending districts that meet the state standards – assuming those districts to be the “efficient” ones, better reflecting minimum “costs.” Of course, what this does in New York State is to eliminate from the calculation nearly every district in the Rockland, Westchester, NYC and Long Island regions. So… the base level of funding is essentially the average instruction-only spending of lower-half spending districts that have somewhat below-average current outcomes and lie somewhere between Syracuse and Buffalo. That makes sense, right? That should give us a reasonable ballpark cost for New York City, Mount Vernon or Yonkers, right?
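The compounding effect of those three filters is easy to see in code. This is a stylized sketch with invented districts and dollar figures – nothing here comes from actual New York data – showing how each filter pushes the “cost” estimate down:

```python
# Stylized version of the low-ball procedure: (1) keep only districts
# meeting the (low) standard, (2) count only instructional spending,
# (3) average only the lower-spending half of the survivors.
# All district names and dollar amounts are fabricated.

districts = [
    # (name, meets_standard, instructional_per_pupil, total_per_pupil)
    ("A", True,   7000, 12000),
    ("B", True,   6500, 11000),
    ("C", True,   9000, 16000),
    ("D", False,  8000, 14000),
    ("E", True,  10000, 18000),
    ("F", True,   6000, 10500),
]

meeting = [d for d in districts if d[1]]               # filter 1
instructional = sorted(d[2] for d in meeting)          # filter 2
lower_half = instructional[: len(instructional) // 2]  # filter 3
foundation = sum(lower_half) / len(lower_half)

# For contrast: the average of total (not just instructional) spending
full_avg = sum(d[3] for d in districts) / len(districts)
```

In this toy example the “foundation” figure lands far below the average total spending per pupil – each filter is individually defensible-sounding, but stacked together they manufacture a very low number.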

Even for my 2012-13 analyses below, the foundation level per pupil is set at only $6,570 – the amount assumed to be the average instructional spending per pupil needed in New York State to achieve state standards. So then, how does that stack up against alternative estimates of what would actually be needed to achieve specific state outcome targets?

I don’t have time to explain the chart below in great detail, but I do provide complete analysis/explanation in this report on New York State school finance.

In short, what Figure 1 shows us, in PURPLE, is the foundation level – the target funding calculated to be needed by districts in each poverty quintile under the state’s own proposed remedy to its constitutional violation. The PURPLE bars represent the amount of money a district would have under the foundation aid formula, as a combination of state aid and the minimum required local effort.

The blue bars come from a cost model produced a few years back by William Duncombe of Syracuse University, which estimates the average spending actually required to achieve a 90% proficiency rate on state assessments (the average having drifted upward over time, making the 80% standard relatively meaningless – again, see the report). The red arrows show the gap between the estimated costs of reasonable outcome goals and the guaranteed funding under the foundation formula.

Figure 1:


The point here is simply to show a) how much the state low-balled the target funding using their approach vs. a more rigorous approach, and b) how those funding gaps increase quite dramatically for higher poverty districts. In fact, the target funding level is not that far off for low poverty districts, but it’s only slightly better than half of the cost of comparable outcomes for high poverty districts.

Step 2: Conjure annual excuses for why the state can’t afford to fund even its own low-balled targets for local districts

Given Figure 1 above, it would be bad enough even if the state did follow through and fund its formula. The formula itself was/is grossly insufficient – determined by bogus calculations and exclusions of data, all toward the end goal of generating the lowest politically palatable estimate of the cost of providing a sound basic education in New York.

But no… no… low-balling the cost wasn’t nearly far enough for the NY legislature and Gov(ernors) to go. The next step was to say – We can’t afford it (they were saying this even before the economy tanked, and they set out a multiyear phase-in)! We can’t afford our own low-ball estimate (while simultaneously decrying that estimate as somehow overly generous?).

Did they cut back just a little from their target? Oh… say… give districts about 90% or 80% (uh… that would actually be a big cut) of what the formula said they needed? Nope. They went much deeper than that. In fact, as I showed in one recent post, as student population needs escalate (according to the state’s own Pupil Need Index), under-funding with respect to foundation targets grows in some cases to over $4,000 per pupil, and in New York City to over $3,000 per pupil.

Figure 2.


As I showed in that same post, among the most screwed large districts in the state, several receive from the state in general foundation aid only about half (or less) of what they should receive under the STATE’S OWN LOW-BALL FORMULA!

Figure 3.


Let’s be clear here. I’m not talking about shortfalls from the relatively high cost targets in that first graph. I’m talking about state aid shortfalls relative to the STATE’S OWN LOW-BALL Foundation Aid model – the model represented by the purple bars in the first graph. Note also that the state, in proposing this foundation model that it has subsequently underfunded, essentially declared that low-ball model to be the empirical manifestation of its own state constitutional obligation. It’s their own freakin’ definition of their constitutional obligation… and they’ve chosen to ignore it.

Step 3: Pretend that it’s all the teachers’ fault and use that as a basis for holding hostage additional funding that should have gone to high need districts years ago!

Oh… but it doesn’t end there!

Riding the national, Duncanian wave of new normalcy (which I’ve come to learn is an extreme form of innumeracy) & reformyness, the only possible cause of lagging achievement in New York State is bad teachers – greedy, overpaid teachers with fat pensions – and protectionist unions who won’t let us fire them. Clearly, the lagging performance of low-income and minority districts in New York State has absolutely nothing at all to do with the lack of financial resources under the low-balled aid formula that the state has chosen to not even half fund for the past 5 years. Nah… that couldn’t have anything to do with it. Besides, money certainly has nothing to do with providing the decent working conditions and pay which might be leveraged to recruit and retain teachers.

And we all know that if New York State’s average per pupil spending is high, or so the Gov proclaims, then spending clearly must be high enough in each and every one of the state’s high need districts! (right… because averages always represent what everyone has and needs, right? Reformy innumeracy rears its ugly head again!).

So it absolutely must be that no teacher in NY has ever been evaluated at all, or fired for being bad, even though we know for sure that at least half of them stink. The obvious solution is that they must be evaluated by egregiously flawed metrics – and we must ram those metrics down their throats.

In fact, the New York legislature and Governor even found it appropriate to hold hostage additional state aid if districts don’t adopt teacher evaluation plans compliant with the state’s own warped demands and ill-conceived policy framework.

As I understand it, legislation passed this past year actually tied receipt of state general aid to compliance with the state teacher evaluation mandate. That is, in order to receive any increase in state general/foundation aid over the prior year, a district would have to file, and have accepted, its teacher evaluation plan.

That’s it – we’ll take away their general state aid – their foundation aid – the aid they are supposed to be getting in order to comply with that court order of several years back. The aid they are constitutionally guaranteed under that order. I have trouble accepting the supposed constitutional authority of a state legislature and governor to cut back general aid on this basis, where they’ve already failed to provide most of the aid they themselves identified as constitutionally adequate under court order. But I guess that’s for the New York court system to decide.

If nothing else, it is thoroughly obnoxious, arbitrary and capricious, and grossly inequitable treatment. I hear the reformers (who understand neither math nor school finance) whine: But why… why is it inequitable to require that poor and rich districts alike follow state teacher and principal evaluation guidelines? Setting aside the junk nature of that evaluation system and the bogus measures on which it rests (and the fact that the reformers’ fav-fab charters have largely, and rightfully, ignored the eval mandate), it is inequitable because districts serving higher-poverty children stand to lose more money per child as a result of non-compliance. And they’ve already been squeezed.

And here’s how that plays out. As I understand it, if districts don’t comply by January, they face the threat of losing the small increase in state aid they received for the current year (compared to 11-12). So, they’d lose it retroactively, part way through the year. And guess what? Because higher-need districts received a marginally greater increase in state aid, they’d lose more per pupil. But the gaps shown above already include that oh-so-generous increase! That’s right: the poorer you are, the bigger the financial penalty for non-compliance with the teacher evaluation mandate – and the bigger the financial hole the state has put you in to begin with!
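The penalty mechanics reduce to a few lines of arithmetic. This is a hypothetical sketch – the function name and dollar figures are mine, not the state’s – but it captures why the penalty scales with the need-based aid increase:

```python
# Hypothetical sketch of the retroactive non-compliance penalty:
# a non-compliant district forfeits its year-over-year aid increase,
# so districts that received a larger (need-based) increase lose more
# per pupil. All dollar amounts and enrollments are invented.

def penalty_per_pupil(aid_now, aid_prior, enrollment, compliant):
    """Aid lost per pupil if the evaluation plan is not accepted."""
    if compliant:
        return 0.0
    increase = max(aid_now - aid_prior, 0)  # only the increase is at risk
    return increase / enrollment

# Two invented districts of equal size: the higher-need one got a
# bigger aid increase, so it stands to lose more per pupil.
high_need = penalty_per_pupil(52_000_000, 50_000_000, 10_000, compliant=False)
low_need = penalty_per_pupil(20_500_000, 20_000_000, 10_000, compliant=False)
```

Under this logic the penalty is largest exactly where the underlying funding gaps are already largest.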

Figure 4. State aid Per Pupil Before and After Non-Compliance Penalty by Student Need


Figure 5. Compliance Penalty by Student Need

This recent article explains that Hempstead, already underfunded by the largest per-pupil amount of any large district in the state, stands to lose another $3.5 million in aid if it does not come to agreement on a teacher evaluation plan. State general aid is for the general provision of education to these kids – to pay for enough teachers, classrooms, etc. It’s about the day-to-day operations of schools to ensure the provision of a sound basic education. This funding shouldn’t be held hostage over reformy whims.

Note that for many districts I have likely understated the amount of aid they would lose, because I have counted only changes to general foundation aid, including the “gap elimination adjustment” and partial restoration of those funds. (It would appear, for example, that the potential losses to Hempstead reported in the news are closer to that district’s total aid change, not just the foundation/GEA change.)

Step 4: Protect billions in state aid still being allocated to districts with far fewer additional student needs/costs

And let us not forget that New York State was one of the shining stars – a poster child – of my report with Sean Corcoran for the Center for American Progress, in which we chronicled how states actually use their aid systems to make equity worse, not better. While the NY Gov and Legislature have continued to shed crocodile tears (in purely political terms) about their fiscal dire straits, the state persists in protecting billions in direct state aid and indirect tax relief subsidies that largely support the state’s lower- and lowest-need local public school districts.

Figure 6 shows that, based on initial calculations of state general aid to local districts by poverty (left-hand panel), even after allocating state general aid there remains a $1,100 per-pupil gap in state and local revenue between higher- and lower-poverty districts. But after the state “tweaks” the general aid distribution to provide minimum aid to the wealthiest districts and increase aid to middle/upper-middle-class districts, and then adds on “tax relief” subsidies, the gap between higher- and lower-poverty districts increases to $2,300 per pupil. Yep – NY State is actually using billions in state funding to make the system less equitable! Read the report below for a more thorough explanation/analysis!
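The before/after arithmetic works roughly like this. The component dollar figures below are invented; only the $1,100 and $2,300 gaps are chosen to mirror the pattern reported in the figure:

```python
# Stylized arithmetic for how add-ons to a base aid formula can widen,
# rather than narrow, the revenue gap between lower- and higher-poverty
# districts. All component figures are fabricated for illustration.

low_poverty = {"base_revenue": 14_000, "min_aid_tweak": 600, "tax_relief": 900}
high_poverty = {"base_revenue": 12_900, "min_aid_tweak": 0, "tax_relief": 300}

# Gap after the initial formula run, before the "tweaks"
gap_before = low_poverty["base_revenue"] - high_poverty["base_revenue"]

# Gap after minimum-aid tweaks and tax relief subsidies are layered on
gap_after = sum(low_poverty.values()) - sum(high_poverty.values())
```

Because the add-ons flow disproportionately to lower-poverty districts, every “tweak” dollar widens the gap the base formula was nominally designed to close.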

Figure 6. School Finance Pork in New York!


Baker, B.D., Corcoran, S.P.(2012) The Stealth Inequalities of School Funding: How Local Tax Systems and State Aid Formulas Undermine Equality. Washington, DC. Center for American Progress. http://www.americanprogress.org/wp-content/uploads/2012/09/StealthInequities.pdf

And that is how modern state education policy works!