Blog

Friday Thoughts: Is there really a point to advocating both standardization and choice?

I’ve long been perplexed that the Thomas B. Fordham Institute lists these as its top two policy priorities:

  1. Implementing the Common Core
  2. Advancing Choice

Their new web site layout makes this more obvious.

More recently, a report released by the Council on Foreign Relations (referred to largely as the Rice-Klein report in the media and on Twitter) argued that our “failing” education system is a national security concern, and that the road to addressing that concern involves:

  1. expanding the Common Core State Standards initiative to include subjects beyond math and English Language Arts; and
  2. expanding charter schools and vouchers.

Now, as I understand it, there’s at least a subtle difference between these two sources on the point regarding vouchers and charter schools in that Fordham does not appear these days to be out front on promoting vouchers and instead seems to be favoring charter expansion (avoiding the word “voucher” but welcoming “other approaches that provide parents and children solid options and the capacity to make maximum use of them”).

Let me be clear that this post isn’t about favoring or slamming either vouchers or the common core, but rather about pointing out that favoring both is entirely inconsistent, unless there’s some weird, warped agenda behind it all. This post IS about slamming the two when used in combination. It just doesn’t make sense. Let’s throw into this mix other policies promoting standardization of the operations of traditional public schools, like forcing those schools to make personnel decisions based largely on student assessment data.

Collectively, what we have here is a massive effort, on the one hand, to require traditional public school districts to adopt a common curriculum and, ultimately, common assessments for evaluating student success on that curriculum, and then to force those districts to evaluate, retain and/or dismiss their teachers based on student assessment data – while, on the other hand, expanding publicly financed subsidies so that more children can attend schools that would not be required to do any of these things (in many cases, for example, relieving charter schools of teacher evaluation requirements).

For example, if we believe that improving understanding of core scientific concepts is important for our national security or economic competitiveness, why would we be trying to increase the number of students who opt out of those standards, opting instead to attend fundamentalist religious institutions which may be decidedly anti-science? It seems like it would have to be one or the other. Certainly, the Fordham Institute appears concerned with the importance of teaching science, and evolution specifically. When they simultaneously promote “other” choice alternatives, are they suggesting the regulation of science curriculum in those alternatives?

Also, if one believes that competitive pressures create improvement across schools (by stimulating innovation), why set up totally different rules – absurd constraints, in fact – for the largest set of schools in the mix? That seems rather counterproductive and certainly limits any potential for real innovation. My critique all along of Race to the Top as a stimulus for innovation was that RTTT was anything but; it was instead a bribe to get states to fast-track a handful of preferred and completely unfounded reformy template policies – effectively squelching any real innovation that might otherwise have occurred.

One might instead argue for forcing all schools – public, private (if voucher receiving) and charter – to adopt the common core and evaluate teachers with student test data – and to simultaneously promote a broad based choice program. Yeah… let’s try really hard to make all schools the same and then let individuals choose among them? What we would have is a program that allows parents to choose which school adopts the common core better, and uses testing data better when firing teachers. That doesn’t seem to make a whole lot of sense, either.

No matter how you cut it, combining these two broad preferences leads to a ridiculous mix of policies, whichever side you’re coming from (unless, of course, you’re trying to come from both at once).

So, this all has me wondering whether the real objective here – among advocates of these seemingly contradictory policies – is actually to make traditional public schooling so utterly unbearable for both teachers and students – by expanding the testing and standards driven culture, extending curricular standards into areas previously untouched, sucking any remaining creativity out of teaching, and mechanizing the teaching workforce in traditional public schools – that even the worst of the less-regulated alternatives seems more desirable for future generations of both teachers and students.


The Principal’s Dilemma

This is a bit of a tangential post for this blog, but it’s a topic a few of us have been tweeting about and discussing for the past day or so.

In a series of recent blog posts and in a forthcoming article, I have discussed the potential problems of using bad versus entirely inappropriate measures for determining teacher effectiveness. I have pointed out, for example, that using value-added measures to estimate teacher effectiveness, and then to determine whether a teacher should be denied tenure or have tenure removed, might raise due process concerns arising from the imprecision and potential outright inaccuracy of teacher effectiveness estimates derived from such methods.

I have also explained that in some states, like New Jersey, which have adopted Student Growth Percentile measures as an evaluation tool, where those measures are used as a basis for dismissing teachers, teachers (or their attorneys) might simply rely on the language of the authors of those methods to point out that the measures are neither designed nor intended to attribute responsibility for the measured student growth to the teacher. Where attribution of responsibility is off the table, dismissing a teacher on an assumption of ineffectiveness based on these measures is entirely inappropriate, and a potential violation of the teacher’s due process rights.

But the problem is that state legislatures are increasingly mandating that these measures absolutely be used when making high stakes personnel decisions – that, for example, such measures count for a significant percentage of the final decision (see notes here) to grant tenure or remove tenure from a teacher, and in some cases (like NY) that these measures be the absolute determinant (a teacher cannot be rated as good if they have bad value-added ratings). Some state statutes and regulations provide more flexibility, but essentially require that principals and/or district officials develop their own systems and measures which generally conform to value-added or SGP methods, or include them as measures within the evaluation process.

Enter the principal’s dilemma. I would argue that state policymakers have, in many regards, quickly passed ill-conceived copy-and-paste legislation from one state to another, with little substantive input from the constituents who actually have to implement this stuff. And, as is made clear by the groundswell of opposition in states like New York – by principals in particular – many of those charged with the on-the-ground implementation of these policies are, shall we say, a bit concerned. But what to do?

A principal might be concerned, for example, that if she actually follows through with implementation of these ill-conceived fast-tracked policies, and uses the recommended or required measures or follows the preferred methods for developing her own measures, that she might end up being backed into violating the due process rights of teachers.  That is, the principal might, in effect, be required to dismiss a teacher based on measures that the principal understands full well are neither reliable nor valid for determining that teacher’s effectiveness.

So, can the principal simply refuse to implement state policy? My guess is that even if the district board of education agreed in principle with the principal, that the state would threaten some action against the local school district – applying sufficient pressure (perhaps financially) – such that the local board of education would take action against the principal. And, because the principal would be failing to fulfill her official duties as defined in state statutes and regulations, the principal would have no legal leg to stand on – though might at least have a clear conscience to carry with her in search of a more reasonable state that has avoided such foolish, restrictive policies.

The principal might instead halfheartedly comply with the letter of the state statutes, but still vocally oppose the statutes and regulations in blogs, on Twitter and in local op-ed columns. This is where we might think the principal would be on safer ground. Unfortunately, recent legal precedent suggests that even in this case, the principal might be at a loss for a winning legal defense if the local school board is pressured into action against her. To the extent that the principal’s public airing of concerns with the newly adopted policies relates to her own official duties as a principal, she may not even be able to make a First Amendment argument in her own defense regarding her concerns with the current direction of public policy on teacher evaluation – even though the principal might actually be a pretty good source of opinion on the matter. In Garcetti, the “Supreme Court held that speech by a public official is only protected if it is engaged in as a private citizen, not if it is expressed as part of the official’s public duties.”

An awkward situation indeed. It would seem that the only choice for a principal who does not want to jeopardize her own career is to suck it up, be quiet and do what she knows is wrong – violating the due process rights of one teacher after another by being the hand that implements ill-conceived policies drawn up by those with little or no comprehension of what they’ve actually done.

Is this really how we want our schools to be run?

Note: Reformy policy is particularly schizophrenic regarding deference to principals and respect for their decision making capacity. Consider that two key elements of the reformy teacher effectiveness policy template are a) highly restrictive guidelines/matrices/rating systems for teacher evaluation and b) mutual consent hiring and placement policies. Mutual consent policies, coupled with anti-seniority preference policies (part of the same package), require that when a teacher is to be hired into or placed in a specific school within a district, district officials must have the consent of the school principal in order to make such a placement. These policies presume that principals make only good personnel decisions but that district officials are far more likely to make bad ones. These policies also ignore that districts retain latitude to place principals and, further, that there might actually be a case where the district office wishes to place a top notch teacher in a school that currently has weak leadership – but where that weak leader might be inclined to deny the high quality teacher. It’s just a silly policy with no basis in practicality or in research. But at its core, the mutual consent policy asserts that the principal is all-knowing and the best person to make personnel decisions. However, these mutual consent policies are often included in the very same packages which then require the principal to a) rate teacher effectiveness in accordance with a prescriptive rubric and b) tenure and/or de-tenure teachers in accordance with that rubric on highly restrictive timelines (3 good years to tenure, 2 bad and you’re out). Put really simply… it’s one or the other. Either principals’ expertise should be respected or not. Simultaneously advocating both perspectives seems little more than an effort to confuse and undermine the efficient operation of public school systems!

Baseless Reformy Thoughts from Connecticut (& How this year’s reforms improved decades of past performance!?)

This utterly absurd post appeared yesterday on the CT Ed Reform blog:

http://ctedreform.org/blog/2012/04/poverty-is-not-to-blame-ct%E2%80%99s-low-income-students-rank-48th-in-the-nation-while-ma%E2%80%99s-rank-2nd/

Essentially, the argument goes:

  1. CT’s achievement gap is worse than achievement gaps in states like Massachusetts and New Jersey and in particular, CT’s low income students perform less well than low income students in those states.
  2. Massachusetts has recently adopted reforms to teacher evaluation, which is obviously why Massachusetts has a smaller achievement gap and better performance among low income students. (the post cites New Jersey as well)
  3. THEREFORE, WE KNOW THAT POVERTY IS NOT THE ISSUE…. IT’S TEACHER EVALUATION (and Charter schools, and other reformy stuff)!
  4. Therefore, the solution is to pass SB24 in its original form, which includes such fun things as student test based evaluation of teachers (3 good years to tenure, 2 bad and you’re out), additional funding for and expansion of charter schools – which we know can overcome this pesky poverty distraction!

Now, before I even begin here, I found it most absurd that this particular Ed Reform posting attributed progress made in Massachusetts over the past two decades to policies adopted in the past two years! As they put it:

We think folks would be hard-pressed to argue that low-income students right over the border in Massachusetts or New Jersey face very different circumstances at home than the low-income students in Connecticut.  So, what actions have our neighboring states taken to address their achievement gaps that Connecticut hasn’t?  Put bluntly, they have adopted education reform policies very similar to the ones proposed in Governor Malloy’s original education reform bill.  They have adopted or implemented policies that evaluate teachers on the basis of student performance, that rank schools and districts within a tiered intervention framework, and that provide the Commissioner with the authority to intervene in the lowest performing schools and districts.

That’s just funny! Ridiculous, in fact – making me wonder if this really was just an April Fools’ joke. Let’s make this really plain and simple.

Policies adopted in Massachusetts in the past two years and not yet even fully implemented did not cause low income children in Massachusetts to outperform low income children in Connecticut between 1992 and 2009!

And about New Jersey: I’m not quite sure what policies (legislation) they are talking about, since legislation regarding teacher evaluation is still in its early incubation stages. Pilot studies have begun in a handful of districts, but to suggest that pilot studies being implemented and evaluated now somehow explain the past performance of low income students in New Jersey is, well, just dumb.

Now, Massachusetts and New Jersey have in fact implemented reforms that Connecticut has not – SCHOOL FINANCE REFORMS (in Mass, coupled with accountability reforms in the 1990s). Both states have more systematically targeted funding to higher need districts. And for a review of the literature on the effects of such school finance reforms, with specific references to New Jersey and Massachusetts, see:

  1. http://www.tcrecord.org/library/abstract.asp?ContentId=16106
  2. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

While Connecticut has selectively driven magnet aid to Hartford and New Haven, Connecticut has left other high need districts out entirely. Further, in recent years, as shown in a previous post, Connecticut charter schools have substantively segregated students by income within Hartford, New Haven and Bridgeport.

Here’s a quick snapshot, using 2009 data, of the relationship between current spending per pupil and U.S. Census poverty rates for districts enrolling over 2,000 pupils within New Jersey, Massachusetts and Connecticut. The notable feature of these graphs, indicated by the r-squared values, is that in both Massachusetts and New Jersey, current spending per pupil is far more predictably a function of differences in local district poverty rates (adjusted for regional cost variation). In New Jersey and Massachusetts, poverty variation explains more than a third to nearly half of the variation in per pupil spending; in Connecticut, less than 1/6!
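For readers who want to reproduce this kind of comparison, the r-squared values can be computed in a few lines. Here’s a minimal sketch using made-up, purely illustrative district figures (the actual data come from the Census sources listed below):

```python
import numpy as np

# Hypothetical district-level figures, purely illustrative; the real
# numbers come from the Census fiscal survey and SAIPE poverty files.
poverty_rate = np.array([0.05, 0.10, 0.15, 0.22, 0.30])       # child poverty share
spending_pp = np.array([16000, 14200, 17500, 15000, 18900])   # current spending per pupil ($)

# r-squared of the simple linear relationship: the share of spending
# variation "explained" by poverty variation across districts.
r = np.corrcoef(poverty_rate, spending_pp)[0, 1]
print(round(r ** 2, 2))  # 0.33 -- about a third, in the NJ/MA range described above
```

Swap in the real district data for any state and the same two lines of math reproduce the r-squared reported on each graph.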

Financial data: http://www.census.gov/govs/school/

Poverty data: http://www.census.gov/did/www/saipe/data/schools/data/index.html

Now, on to other issues: That achievement gap!

Yes, Connecticut does have a relatively large achievement gap, but that gap has to be put in context against similar states, as I explain here. The short version of this story is that the low income/non-low income achievement gaps across states are largely a function of the income gaps between the two groups. Here’s my graph of that relationship:

In the upper right hand corner of this graph are the states with both large income gaps between poor and non-poor kids and with large achievement gaps between them. Yes, Connecticut’s gaps are larger than those of Mass or New Jersey, and those are perhaps the most relevant comparison states (the post got that right – but that’s about all).

We know the correlates of student achievement across CT schools: Poverty!

As it turns out, across these three states, in each case, lower income students perform less well on NAEP. Children qualifying for reduced price lunch perform less well than those not qualifying for subsidized meals at all, and children qualifying for free lunch perform less well than those qualifying for reduced price lunch. That’s why, in some cases, I choose to parse these populations in comparisons where most schools serve children below the upper threshold.

Data source: http://nces.ed.gov/nationsreportcard/naepdata/dataset.aspx

Data source: http://nces.ed.gov/nationsreportcard/naepdata/dataset.aspx

It also turns out that if we just go nutty with lots of different measures across Connecticut districts to identify those factors that are most highly correlated with student achievement measures (all data can be found here: http://sdeportal.ct.gov/Cedar/WEB/ct_report/DTHome.aspx & http://www.nces.ed.gov/ccd/bat) we find that various measures of poverty or household income are pretty darn highly associated with student outcome measures! This would seem to suggest that perhaps poverty and measures related to it do likely matter in some way. We’re talking about correlations near and above .80 here between % free/reduced and CMT scores across districts!
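That kind of correlation scan is simple to run once the data are downloaded and merged. A sketch, with hypothetical district values standing in for the actual CEDaR/CCD downloads:

```python
import pandas as pd

# Hypothetical district-level table; the real variables come from the
# CT CEDaR portal and the NCES Common Core of Data linked above.
df = pd.DataFrame({
    "pct_free_reduced": [0.08, 0.15, 0.35, 0.55, 0.72, 0.90],
    "median_income_k": [110, 95, 70, 55, 45, 38],       # household income, $1000s
    "cmt_mean_score": [265, 258, 240, 228, 215, 205],   # district mean scale score
})

# One correlation matrix across every measure at once -- the "go nutty
# with lots of measures" approach. With the real CT data, the magnitude
# of the correlation between % free/reduced and CMT scores lands near
# or above .80 across districts.
print(df.corr().round(2))
```

With the full set of poverty, income and ELL measures added as columns, the same one-liner produces the entire correlation analysis referenced below.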

We know which districts have greater needs, by various measures!

The reality is that, by any number of measures of income or poverty, we know which Connecticut districts have greater student needs – more low income students and, in many cases, more children with limited English proficiency. Here are the scatterplots of the relationships between the poverty, income and ELL measures used in the above correlation analysis.

Yep. It’s all pretty straightforward. Various measures of income and poverty are pretty highly related across Connecticut. It’s a highly socioeconomically and racially segregated state. And student outcome measures remain highly correlated with socio-economic measures and with racial composition of school districts!

Malloy’s plan does little or nothing to help them financially

We also know from my previous posts that the Malloy plan does little or nothing to infuse additional resources into the highest need districts. Here it is again!

Additional data sources suggest that CT charters serve fewer needy kids and spend more per pupil than surrounding district schools

But the Malloy plan does include a substantial boost in funding for charter schools, which, as I have shown in previous posts, tend to serve the less needy kids within the highest need settings in the state. Further, I have shown in those earlier posts that Connecticut charter schools don’t appear to be systematically financially disadvantaged when compared to traditional public schools.

I recalled the other day that the new U.S. Department of Education school site data set released a short while ago includes per pupil spending figures for charter schools in some states. Among those states is Connecticut. The report and data also include information on shares of children by school who are low income.

Here are a few quick snapshots of how Connecticut charter schools in Hartford, New Haven and Bridgeport compare in terms of spending and poverty to “regular” public schools in each of those host districts.

First, without charter names:

Data source: http://www2.ed.gov/about/offices/list/opepd/ppss/reports.html#comparability-state-local-expenditures

[Note that the data set has the variable labeled as “school poverty rate” when in fact it is, I believe, a % free or reduced lunch measure. I’ve left the data label as it is in the original data set]

Now, with the names:

Data source: http://www2.ed.gov/about/offices/list/opepd/ppss/reports.html#comparability-state-local-expenditures

[Note that the data set has the variable labeled as “school poverty rate” when in fact it is, I believe, a % free or reduced lunch measure. I’ve left the data label as it is in the original data set]

A few things are notable here.

First, overall, there’s simply no upward tilt in per pupil spending by school poverty rate – though I’m only looking here within higher poverty settings (with the trendline determined by the regular public schools only). In other words, higher poverty schools don’t generally have more resources per pupil than lower poverty ones within these cities; my experience with similar data in other settings indicates that such variations are most often explained by the distribution of children with disabilities (also scarce in CT charters).

Second, as I illustrated in my previous post, the charter schools in these cities stand out in terms of the populations they serve (by low income status). New Haven schools have somewhat of a spread of low income rates, but Hartford and Bridgeport are all crunched up against the 100% mark (not all at 100% though, and with some more spread when I use free lunch only). In each case, charter % low income is lower than most regular schools in the host district.

Third, consistent with my previous analysis (but likely because the state reported the data to USDOE), many charter schools – more than not, and all Achievement First schools – appear to be spending not only more than schools serving similar student populations (by income status), of which there are very few in these settings, but also more than district schools serving much lower income student populations.

These data certainly raise questions about the validity of the current policy push in Connecticut, and even more questions about the stated reasons for that push – that is, to the extent that anyone truly believes the absurd rhetoric that test-based teacher evaluation policies and expanding higher spending, lower poverty charter schools are the solution to Connecticut’s achievement gap.

Firing teachers based on bad (VAM) versus wrong (SGP) measures of effectiveness: Legal note

In the near future my article with Preston Green and Joseph Oluwole on legal concerns regarding the use of Value-added modeling for making high stakes decisions will come out in the BYU Education and Law Journal. In that article, we expand on various arguments I first laid out in this blog post about how use of these noisy and potentially biased metrics is likely to lead to a flood of litigation challenging teacher dismissals.

In short, as I have discussed on numerous occasions on this blog, value-added models attempt to estimate the effect of the individual teacher on growth in measured student outcomes. But these models tend to produce very imprecise estimates with very large error ranges, jumping around a lot from year to year. Further, individual teacher effectiveness estimates are highly susceptible to even subtle changes in model variables, and failure to address key omitted variables can lead to systematic model biases which may even lead to racially disparate teacher dismissals (see here and, for follow-up, here).

Value-added modeling as a basis for high stakes decision making is fraught with problems likely to be vetted in the courts. These problems are most likely to come to light in the context of overly rigid state policies requiring that teachers be rated poorly if they receive low scores on the quantitative component of evaluations, and dictating that teachers be put on watch and/or de-tenured after two years of bad evaluations (see my post with NYC data on problems with this approach).

Significant effort has been applied toward determining the reliability, validity and usefulness of value-added modeling for inferring school, teacher, principal and teacher preparation institution effectiveness. Just see the program from this recent conference.

As implied above, it is most likely that when cases challenging dismissal based on VAM make it to court, deliberations will center on whether these models are sufficiently reliable or valid for making such judgments – whether teachers are able to understand the basis for which they have been dismissed and whether it is assumed that they have had any control over their fate.  Further, there exist questions about how the methods/models may have been manipulated in order to disadvantage certain teachers.

But what about those STUDENT GROWTH PERCENTILES being pitched for similar use in states like New Jersey?  While on the one hand the arguments might take a similar approach of questioning the reliability or validity of the method for determining teacher effectiveness (the supposed basis for dismissal), the arguments regarding SGPs might take a much simpler approach. In really simple terms SGPs aren’t even designed to identify the teacher’s effect on student growth. VAMs are designed to do this, but fail.

When VAMs are challenged in court, one must show that they have failed in their intended objective. But it’s much, much easier to explain in court that SGPs make no attempt whatsoever to estimate the portion of student growth that is under the control of – and therefore attributable to – the teacher (see here for more explanation). As such, it is, on its face, inappropriate to dismiss a teacher on the basis of a low classroom (or teacher) aggregate student growth metric like SGP. Note also that even if integrated into a “multiple measures” evaluation model, if the SGP data become the tipping point or a significant basis for such decisions, the entire system becomes vulnerable to challenge.*

The authors (and vendor) of SGPs, in a very recent reply to my original critique of SGPs, noted:

Unfortunately Professor Baker conflates the data (i.e. the measure) with the use. A primary purpose in the development of the Colorado Growth Model (Student Growth Percentiles/SGPs) was to distinguish the measure from the use: To separate the description of student progress (the SGP) from the attribution of responsibility for that progress.

http://www.ednewscolorado.org/2011/09/13/24400-student-growth-percentiles-and-shoe-leather

That is, the authors and purveyors clearly state that SGPs make no ATTRIBUTION OF RESPONSIBILITY for progress to either the teacher or the school. The measure itself – the SGP – is entirely separable from attribution to the teacher (or school) of responsibility for that measure!

As I explain in my response, here, this point is key. It’s all about “attribution” and “inference.” This is not splitting hairs; this is a – or the – central point! It is my experience from expert testimony that judges are more likely to be philosophers than statisticians (an empirical question, if someone knows?). Thus, quibbling over the meaning of these words is likely to go further than quibbling over the statistical precision and reliability of VAMs. And the quibbling here is relatively straightforward – and far more than mere quibbling, I would argue.

A due process standard for teacher dismissal would, at the very least, require that the measure upon which dismissal was based – where the basis was teaching “ineffectiveness” – be a measure that was intended to INFER a teacher’s effect on student learning growth: a measure which would allow ATTRIBUTION OF [TEACHER] RESPONSIBILITY for that student growth, or lack thereof. This is a very straightforward, non-statistical point.**

Put very simply: on its face, SGP is entirely inappropriate as a basis for determining teacher “ineffectiveness” leading to dismissal.*** By contrast, VAM is, on its face, appropriate, but in application it fails to provide sufficient protections against wrongful dismissal.

There are important implications for pending state policies and for current and future pilot programs regarding teacher evaluation in New Jersey and other SGP states like Colorado. First, regarding legislation, it would be entirely inappropriate, and a recipe for disaster, to mandate that soon-to-be-available SGP data be used in any way tied to high stakes personnel decisions like de-tenuring or dismissal. That is, SGPs should be suggested neither explicitly nor implicitly as a basis for determining teacher effectiveness. Second, local school administrators would be wise to consider carefully how they choose to use these measures, if they choose to use them at all.

Notes:

*I have noted on numerous occasions on this blog that in teacher effectiveness rating systems that a) use arbitrary performance categories, slicing decisive, arbitrary cut points through noisy metrics, and b) use a weighted structure of percentages putting all factors alongside one another (rather than applying them sequentially), the quantified metric can easily drive the majority of decisions, even if weighted at a seemingly small share (20% or so). If the quantified metric is the component of the evaluation system that varies most, and if we assume that variation to be “real” (valid), the quantified metric is likely to be 100% of the tipping point in many evaluations, despite being only 20% of the weighting.
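A tiny simulation makes the arithmetic of this point concrete (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated teachers

# Hypothetical 0-100 component scores. Observation ratings cluster
# tightly (nearly everyone is rated similarly), while the test-based
# metric varies widely.
obs = rng.normal(80, 3, n)   # observation component (low variance)
vam = rng.normal(50, 20, n)  # test-based component (high variance)

# The quantitative metric counts for "only" 20% of the composite...
composite = 0.8 * obs + 0.2 * vam

# ...yet the composite tracks the 20% component far more closely than
# the 80% component, so the noisy metric drives who lands in the
# bottom category.
r_vam = np.corrcoef(composite, vam)[0, 1]
r_obs = np.corrcoef(composite, obs)[0, 1]
print(round(r_vam, 2), round(r_obs, 2))  # roughly 0.86 vs 0.51
```

Because the weighted components sit alongside one another, whichever component varies most dominates the ranking, regardless of its nominal weight.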

A critical flaw with many legislative frameworks for teacher evaluation, and with district adopted policies, is that they place the quantitative metrics alongside other measures, including observations, in a weighted calculation of teacher effectiveness. It is this parallel treatment of the measures that permits the test driven component to override all other “measures” when it comes to the ultimate determination of teacher effectiveness – and, in some cases, whether the teacher is tenured or dismissed. A simple, logical resolution to this problem is to use the quantitative measures as a first step – a noisy pre-screening – in which administrators, perhaps central office human resources, review the data to determine whether they indicate potential problem areas across schools and teachers, knowing full well that these might be false signals due to data error and bias. The data, used in this way at this step, might then guide district administration on where to allocate additional effort in classroom observations in a given year. Used this way, the quantified measures might ideally improve the efficiency of time allocation in a comprehensive evaluation model, but would not serve as the tipping point for decision making. I suspect, however, that even used in this more reasonable way, administrators will realize over time that the initial signals tend not to be particularly useful.

**Indeed, one can also argue that a VAM regression merely describes the relationship between having teacher X and achieving growth Y, controlling for A, B, C and so on (where A, B and C include various student characteristics, classroom level characteristics and school characteristics). To the extent that one can effectively argue that a VAM model is merely descriptive and does not provide a basis for valid inference, similar arguments can be made. BUT, in my view, this is still more subtle than the OUTRIGHT FAILURE OF SGP to even consider A, B and C – factors that are clearly outside of the teacher’s control yet clearly affect student outcomes.

***A non-trivial point: if you review the program from the AEFP conference I mentioned above, or the existing literature on this point, you will find numerous articles and papers critiquing the use of VAM for determining teacher effectiveness. But there are none critiquing SGP. Is this because it is well understood that SGPs are an iron-clad method that overcomes the problems of VAM? Absolutely not. Academics will evaluate and critique anything that claims to have a specific purpose. Scholars have not critiqued the usefulness of SGPs for inferring teacher effectiveness, and have not evaluated their reliability or validity for this purpose, BECAUSE SCHOLARS UNDERSTAND FULL WELL THAT THEY ARE NEITHER DESIGNED NOR INTENDED FOR THIS PURPOSE.

A Few Additional CT Charter Figures

I was admittedly in a bit of a rush the other day to pull together some figures on CT charter schools based largely on data I had previously compiled, some of which only included Achievement First charter schools.  Here, I include all charter schools in Hartford, New Haven and Bridgeport, and address only the % Free Lunch numbers using the most recent available data from the NCES Common Core of Data, which are from 2009-10.  A few quick points are in order.

First, these are not “old” data per se. They are lagged one year behind the most recent official state data (2010-11). Current year (2011-12) data would not be appropriate for use until the close of the year; thus 2010-11 would be the most recent complete data, if available. Also, these types of data tend to be relatively stable over time. They don’t shift much over a two-year period, but I’ll keep updating as complete end-of-year data become available. The burden of reporting accuracy falls on the schools and districts.

Second, this is not a “study.” A study, in my view, requires far more extensive analysis than this. And yes, this is a topic on which I have conducted those more extensive analyses (though not specifically involving CT charter schools). This is a blog, and in this post and the previous post on CT charter schools I have merely rendered graphs of the existing data as reported by the schools. There’s no data editing involved and no tricky statistical analysis (like the regression model of wages in my CT teacher post, which comes from previous work). It’s just graphs. Then why bother? Well, I bother because much of what I see in the ongoing debate over CT charter schools (and charters in some other locations) is guided by misinformation, or at least misconceptions (of charters beating the odds with the “same” students – proving poverty doesn’t matter! nor does money?). Misinformation that is easily enough corrected with a simple graph or two, or a map, or even a table of the numbers. Hey… all of these numbers are available to each and every one of you. I’ve provided posts in the past where I explain how to get them and how to summarize and graph them. I wish someone else would save me the time and go make their own graphs, or at least present and discuss the existing data to provide relevant context for current policy discussions. But alas, I’ve not seen that happening (though a few individuals have jumped into the game). Thus, I stick my nose, uninvited, into another state’s business once again.

All of that said, here are a few more graphs:

The upshot of these graphs is that it would certainly be unfair to criticize Achievement First specifically for serving fewer low-income children than district schools in these major cities. In fact, in both New Haven and Hartford, the Achievement First charters have the highest low-income concentrations among the charters, and in Bridgeport they are not the lowest.

It is also important to understand that districts have, to a large extent, induced economic segregation themselves through their own magnet school programs. I’ve addressed the same issue regarding Newark, NJ in the past. So, economic segregation within these cities is not entirely driven by the presence of charters, but rather by the complex mix of district traditional and magnet schools coupled with the introduction and expansion of charters.

 

SB24 won’t solve CT’s real Teacher Equity Problems

Connecticut’s SB 24 appears to be little more than boilerplate reformy legislation which, like similar legislation in other states, creates a massive smokescreen concealing the very real problems facing Connecticut school districts. I addressed in a previous post my concern that SB24’s emphasis on charter expansion as a solution for high-poverty districts is misguided, mainly because most of the successful charter schools in CT are currently achieving their successes at least in part by NOT serving high-poverty populations. Another part may be the additional resources of these schools, used for such things as increased school time supported by increased teacher salaries. But SB24 comes with few resources attached. The other major elements of SB24 involve teacher “effectiveness,” with significant emphasis on the use of student performance measures for teacher evaluation. For numerous posts on this topic, see: https://schoolfinance101.wordpress.com/category/race-to-the-top/value-added-teacher-evaluation/

A few points are in order before I move on.

First, even if value-added measures make up only about 20% of an evaluation system, with observations and other measures covering the rest, if the value-added measures vary most (which they are likely to, simply because they will be reported as statistically norm-referenced), then the value-added measures likely become the tipping point more often than not. This is hugely problematic, given our inability to fully remove bias from these measures, and the fact that they remain so damn noisy as to hardly be useful at all (and are easily manipulated to yield different results for individual teachers).
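A quick simulation illustrates why the highest-variance component dominates a weighted composite. All of the weights, noise levels, and the cutoff below are invented for illustration:

```python
import random

random.seed(1)

# Value-added gets only 20% of the weight, but is far noisier
# (larger spread) than the observation score.
W_VAM, W_OBS = 0.2, 0.8

def final_score(true_quality):
    obs = true_quality + random.gauss(0, 0.05)  # tight observation score
    vam = true_quality + random.gauss(0, 0.50)  # noisy, norm-referenced VAM
    return W_VAM * vam + W_OBS * obs, vam, obs

# Simulate many teachers of identical true quality, then see how often
# the VAM draw alone pushes a teacher below a decision cutoff.
teachers = [final_score(0.5) for _ in range(10_000)]
below = [t for t in teachers if t[0] < 0.45]
vam_driven = sum(1 for score, vam, obs in below if vam < 0.5 and obs >= 0.45)
print(f"{len(below)} of 10,000 fall below the cutoff; "
      f"{vam_driven} of those have acceptable observation scores")
```

Even at a 20% weight, most of the teachers who land below the cutoff get there on the strength of a bad VAM draw, not their observations.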

Second, arguing that somehow using these noisy, potentially biased measures for personnel management or even mass deselection of teachers will somehow improve the equity of the distribution of teachers across advantaged and disadvantaged schools is simply absurd! This is especially the case if absolutely no attention is paid to existing underlying disparities in working conditions and teacher compensation.

The Real Connecticut Problem(s)

So, let’s take a look at what’s really going on in Connecticut regarding the distribution and compensation of teachers. But, let me begin with a bit of background literature on the relationship between funding and teacher quality. Rather than reinvent the wheel here, allow me to rely on a section of a policy brief I wrote last fall for Shanker Institute:

The Coleman report looked at a variety of specific schooling resource measures, most notably teacher characteristics, finding positive relationships between these traits and student outcomes. A multitude of studies on the relationship between teacher characteristics and student outcomes have followed, producing mixed messages as to which matter most and by how much.[i] Inconsistent findings on the relationship between teacher “effectiveness” and how teachers get paid (by experience and education) added fuel to the “money doesn’t matter” fire. Since a large proportion of school spending necessarily goes to teacher compensation, and (according to this argument) since we’re not paying teachers in a manner that reflects or incentivizes their productivity, then spending more money won’t help.[ii] In other words, the assertion is that money spent on the current system doesn’t matter, but it could if the system were to change.

Of course, in a sense, this is an argument that money does matter. But it also misses the important point about the role of experience and education in determining teachers’ salaries, and what that means for student outcomes.

While teacher salary schedules may determine pay differentials across teachers within districts, the simple fact is that where one teaches is also very important in determining how much he or she makes.[iii] Arguing over attributes that drive the raises in salary schedules also ignores the bigger question of whether paying teachers more in general might improve the quality of the workforce and, ultimately, student outcomes. Teacher pay is increasingly uncompetitive with that offered by other professions, and  the “penalty” teachers pay increases the longer they stay on the job.[iv]

A substantial body of literature has accumulated to validate the conclusion that both teachers’ overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olsen (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[v] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[vi] In addition, more recent studies have tackled the specific issues of relative pay noted above. Loeb and Page showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[vii]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[viii]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[ix]

With regard to teacher quality and school racial composition, Hanushek, Kain, and Rivkin (2004) note: “A school with 10 percent more black students would require about 10 percent higher salaries in order to neutralize the increased probability of leaving.”[x] Others, however, point to the limited capacity of salary differentials to counteract attrition by compensating for working conditions.[xi]

Finally, it bears noting that those who criticize the use of experience and education in determining teachers’ salaries must of course produce a better alternative, and there is even less evidence behind increasingly popular ways to do so than there is to support the policies they intend to replace. In a perfect world, we could tie teacher pay directly to productivity, but contemporary efforts to do so, including performance bonuses based on student test results,[xii] have thus far failed to produce concrete results in the U.S. More promising efforts to measure productivity, such as new teacher evaluations that incorporate heavily-weighted teacher productivity measures based on their students’ test scores, are still a work in progress, and there is not yet evidence that they will be any more effective (or cost-effective) in attracting, developing or retaining high-quality teachers.

To summarize, despite all the uproar about paying teachers based on experience and education, and its misinterpretations in the context of the “Does money matter?” debate, this line of argument misses the point. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.

So then, how does this all play out in Connecticut? From the reformy rhetoric being so casually tossed about, one would think that all of those urban CT teachers must already be paid lavishly, and certainly more than enough to get the best and brightest CT college grads to want to teach in Bridgeport or New Britain.

Let’s start with wages relative to non-teaching professions. Allegretto, Corcoran and Mishel identify Connecticut as having a relatively average teaching penalty.

In Connecticut, the average weekly wages of teachers are about 77.6% of the average weekly wages of similarly educated non-teachers.

But, we also know that Connecticut’s good schools and districts are pretty darn good. So perhaps the issue isn’t so much about the average, but about the disparities. Besides, the core rhetoric around the proposed reforms seems to be much about the achievement gap in CT, which is indeed large even when corrected for the income gap.

Let’s quickly revisit my representation of which districts in CT are most disadvantaged, when we look at the relationship between cost- and need-adjusted per-pupil expenditures and average current outcomes:

Expressing the funding as difference from the average, and throwing some cutpoints into the picture to create some fun groups for comparison, I get:

So, I’ve got the advantaged districts which have high adjusted spending and high outcomes. I’ve got my overall group of disadvantaged districts which have low adjusted spending and low outcomes, and a particularly screwed subset of these districts which I call severely disadvantaged (including Bridgeport and New Britain).

Now, recall, that the funding side of the Malloy plan isn’t going to do a whole lot to help out these districts.

Notice that huge infusion of funding represented by the red triangles relative to prior year Net Current Expenditures. Oh wait. There really isn’t any. But what’s funding got to do with it anyway? (go back and read the above section!)

So, I’ve taken the teacher level salary and characteristics data to estimate two different models of the differences in teacher characteristics between the advantaged and disadvantaged districts and between the advantaged and severely disadvantaged districts. First, let’s look simply at salary parity at constant teacher characteristics. NOTE THAT IT WOULD TAKE NOT ONLY AN EQUAL, BUT HIGHER SALARY, FOR EXAMPLE, TO GET TEACHERS WITH SPECIFIC QUALIFICATIONS TO WORK IN BRIDGEPORT AS OPPOSED TO WESTPORT.

Here’s the salary model, with comments in the margins:

All else equal, teachers in disadvantaged districts are still behind their peers in advantaged districts within the same labor market. Reformy platitudes and test based evaluation will not fix that!

Now, here’s a logistic regression of the likelihood that a teacher is a novice, or in his/her first 3 years of teaching. This is a commonly used marker for teacher quality inequity, because a substantial body of literature has found that concentrations of novice teachers (i.e. teachers with less than 3 or 4 years of experience) can have significant negative effects on student outcomes.[1] Rivkin, Hanushek, and Kain (2005) find that teacher experience is important in the first two years of a teaching career (but not thereafter).[2] Hanushek and Rivkin note that: “we find that identifiable school factors – the rate of student turnover, the proportion of teachers with little or no experience, and student racial composition – explain much of the growth in the achievement gap between grades 3 and 8 in Texas schools.”[3] Notably, evidence from a variety of state and local contexts provides a consistent picture that higher concentrations of novice teachers are associated with negative effects on student outcomes.

Here are the models:

So, kids in classrooms in severely disadvantaged or generally disadvantaged districts are each about 20% more likely to face novice teachers. Note that they are also more likely to be in larger classes, specifically if they are in the severe disparity group!
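To make the logistic scale concrete, here is a toy version of such a model. The coefficients are invented for illustration, chosen so the disadvantaged-district gap comes out at roughly the 20% described above; they are NOT the estimates from my models:

```python
import math

# Toy logistic model of the probability that a teacher is a novice
# (first 3 years). Coefficients are hypothetical.
INTERCEPT = -1.60   # baseline log-odds, advantaged districts
B_DISADV = 0.22     # generally disadvantaged districts
B_SEVERE = 0.24     # severely disadvantaged districts

def p_novice(group="advantaged"):
    """Convert the group's log-odds to a predicted probability."""
    logit = INTERCEPT + {"advantaged": 0.0,
                         "disadvantaged": B_DISADV,
                         "severe": B_SEVERE}[group]
    return 1 / (1 + math.exp(-logit))

base = p_novice()
for group in ("disadvantaged", "severe"):
    p = p_novice(group)
    print(f"{group}: {p:.3f} ({p / base - 1:+.0%} vs. advantaged)")
```

With these made-up coefficients, a baseline probability of about 0.17 rises to about 0.20, i.e. the roughly 20% higher likelihood of facing a novice teacher.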

Again, SB24’s reformy platitudes will do nothing to remedy this disparity.

Put simply, the SB24 teacher effectiveness provisions are a massive smokescreen that does little or nothing to address persistent underlying disparities across CT districts. Worse, the misguided emphasis on reducing job security and focusing on problematic performance metrics will likely do more harm than good for children in the most disadvantaged districts.

 

 REFERENCES

[i] Hanushek, E.A. (1971) Teacher Characteristics and Gains in Student Achievement: Estimation Using Micro Data. American Economic Review 61 (2) 280-288

Clotfelter, C.T., Ladd, H.F., Vigdor, J.L. (2007) Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review 26 (2007) 673–682

Goldhaber, D., Brewer, D. (1997) Why Don’t Schools and Teachers Seem to Matter? Assessing the Impact of Unobservables on Educational Productivity. The Journal of Human Resources, 32 (3) 505-523

Ehrenberg, R. G., & Brewer, D. J. (1994). Do school and teacher characteristics matter? Evidence from High School and Beyond. Economics of Education Review, 13(1), 1-17.

Ehrenberg, R. G., & Brewer, D. J. (1995). Did teachers’ verbal ability and race matter in the 1960s? Economics of Education Review, 14(1), 1-21.

Jepsen, C. (2005). Teacher characteristics and student achievement: Evidence from teacher surveys. Journal of Urban Economics, 57(2), 302-319.

Jacob, B. A., & Lefgren, L. (2004). The impact of teacher training on student achievement: Quasi-experimental evidence from school reform. Journal of Human Resources, 39(1),50-79.

Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 471.

Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains. Review of Educational Research, 73(1), 89-122.

For a recent review of studies on the returns to teacher experience, see:

Rice, J.K. (2010) The Impact of Teacher Experience: Examining the Evidence and Policy Implications. National Center for Analysis of Longitudinal Data in Educational Research.

[ii] Some go so far as to argue that half or more of teacher pay is allocated to “non-productive” teacher attributes, and so it follows that that entire amount of funding could be reallocated toward making schools more productive.

See, for example, a recent presentation to the NY State Board of Regents from September 13, 2011 (page 32), slides by Stephen Frank of Education Resource Strategies: http://www.p12.nysed.gov/mgtserv/docs/SchoolFinanceForHighAchievement.pdf

[iii] Lankford, H., Loeb., S., Wyckoff, J. (2002) Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis 24 (1) 37-62

[iv] Allegretto, S.A., Corcoran, S.P., Mishel, L.R. (2008) The Teaching Penalty: Teacher Pay Losing Ground. Washington, DC: Economic Policy Institute.

[v] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of stay in teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[vi] David N. Figlio (2002) “Can Public Schools Buy Better-Qualified Teachers?” Industrial and Labor Relations Review 55, 686-699. David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55 267-271. Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation. 28 (2) 465-498.

[vii] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[viii] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics. April, 49-71

See also:

Downes, T. A., Figlio, D. N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[ix] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

[x] Hanushek, Kain, Rivkin, “Why Public Schools Lose Teachers,” Journal of Human Resources 39 (2) p. 350

[xi] Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy , Vol.6, No.3, Pages 399–438

Clotfelter, Charles T., Elizabeth Glennie, Helen F. Ladd, and Jacob L. Vigdor. 2008. Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92: 1352–70.

[xii] For recent studies specifically on the topic of “merit pay,” each of which generally finds no positive effects of merit pay on student outcomes, see:

Glazerman, S., Seifullah, A. (2010) An Evaluation of the Teacher Advancement Program in Chicago: Year Two Impact Report. Mathematica Policy Research Institute. 6319-520

Springer, M.G., Ballou, D., Hamilton, L., Le, V., Lockwood, J.R., McCaffrey, D., Pepper, M., and Stecher, B. (2010). Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching. Nashville, TN: National Center on Performance Incentives at Vanderbilt University.

Marsh, J. A., Springer, M. G., McCaffrey, D. F., Yuan, K., Epstein, S., Koppich, J., Kalra, N., DiMartino, C., & Peng, A. (2011). A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses. Final Evaluation Report. RAND Corporation & Vanderbilt University.


[1] See Charles T. Clotfelter, Helen F. Ladd and Jacob L. Vigdor, “Who Teaches Whom? Race and the distribution of novice teachers,” Economics of Education Review 24, no. 4 (August 2005): 377-392; Charles T. Clotfelter, Helen F. Ladd and Jacob L. Vigdor, “Teacher sorting, teacher shopping, and the assessment of teacher effectiveness,” Sanford Institute of Public Policy, Duke University, 2004; and Rivkin, Hanushek, and Kain, “Teachers, schools, and academic achievement.”

[2] Rivkin, Hanushek, and Kain, “Teachers, schools, and academic achievement.”

Snapshots of Connecticut Charter School Data

In several previous posts I have addressed the common argument among charter advocacy organizations (notably, not necessarily those out there doing the hard work of actually running a real charter school, but the pundits who claim to speak on their behalf) that charter schools do more with less, while serving comparable student populations. This argument appears to be a central theme of current policy proposals in Connecticut, which, among other things, would substantially increase funding for urban charter schools while doing little to provide additional support for high-need traditional public school districts. For more on that point, see here.

I’ve posted some specific information on Connecticut charter schools in previous posts, but have not addressed them more broadly. Here, I provide a run-down of simple descriptive data, widely available through two major credible sources. Easy enough to replicate any/all of these analyses on your own with the publicly available data:

Connecticut State Department of Education (CEDaR) reports

National Center for Education Statistics Common Core of Data

Since the common claim is that charters do more (outcomes) with less (funding) and while serving the same kids (demographics), it is relevant to walk through each of these prongs of the argument step by step.

DEMOGRAPHIC COMPARISONS

These graphs focus on Connecticut’s most acclaimed, high-flying charter schools, those affiliated with Achievement First, and the graphs are relatively self-explanatory.

Note: % Free Lunch information comes from the 2009-10 NCES Common Core of Data and includes all schools identified as being located within the city limits. % ELL data are from the 2010-11 CEDaR system and include Achievement First charters and district schools (leading to smaller numbers of total schools due to special school and other charter exclusions). Special education data are gathered from individual school snapshot reports (CEDaR).

For fun, in this one, I’ve also noted the position of Capital Prep – which is a magnet school, and it is well understood that the student populations at Hartford magnets are substantively different from Hartford regular public schools. But strangely, there is even substantial rhetoric out there about this school being an example of beating the odds!?!

Finally:

Put very simply – Achievement First Charter schools DO NOT SERVE STUDENT POPULATIONS COMPARABLE TO DISTRICT POPULATIONS.

I have explained previously how this is relevant to broader policy discussions. Specifically, it is relevant to the claim that these schools can serve as a model for expansion yielding similar outcomes for all children in New Haven, Bridgeport or Hartford. In very simple terms, there are not enough non-low income, non-disabled and non-ELL kids around in these settings to broadly replicate the outcomes that these schools may be achieving.  Again, this public policy perspective contrasts with the parental choice perspective. While from a public policy perspective we are concerned that these outcomes may be merely a function of selective demography, from a personal/parental choice perspective within any one of these cities, the concern is only for the outcomes, and achieving those outcomes by having a desirable peer group is as desirable as achieving those outcomes by providing higher quality service.

FINANCIAL & OTHER RESOURCE COMPARISONS

Below (at the end of the post) I provide an important explanation/discussion of issues in comparing charter school and traditional public district finances. First and foremost, it is important to understand simply from the above comparisons that these schools serve substantively different student populations; thus equal dollar inputs are, from the outset, an inappropriate fairness metric. But the complexities go beyond that. In CT and other locations, host districts retain responsibility for transportation and special education costs, even for students attending charters. Thus, it would be reasonable, as I did in a previous post, to subtract those expenditures from district budgets when comparing to charter spending. On the other side, charters do often have to lease facilities at their own expense, which in a state like CT would typically run about $1,500 to $2,000 per pupil (more in NYC, similar in NJ). But, while charter advocates would have you believe that districts have $0 cost of facilities, that is not necessarily true. For CT public districts, plant operations expenses tend to be on the order of $1,000 to $2,000 per pupil, and large urban districts maintaining significant capital stock with significant deferred maintenance tend to be toward the high end. More discussion of the factors that cut each way is in the note at the end of the post. So, here’s a quick run-down on charter and district expenditures in CT, cut different ways (all expressed in per-pupil terms, and with respect to district/charter % Free or Reduced Price Lunch shares):

So… after taking out special education and transportation, charters appear relatively well resourced.

EVEN IF WE ASSUME THAT THE NET DIFFERENCE IN FACILITIES COST IS ABOUT $1,000 PER PUPIL BETWEEN CHARTER AND DISTRICT SCHOOLS, CHARTERS ARE IN PRETTY GOOD SHAPE IN CT.  (That assumption would pull the $1,000 per pupil off the charter estimates above). This would assume the facilities maintenance/operations/debt service in hosts to be about $1,000 and lease/operations/maintenance for charters to be about $2,000 per pupil.
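That adjustment amounts to simple arithmetic. A hypothetical example, with dollar figures invented but within the ranges discussed above:

```python
# Illustrative arithmetic only: dollar figures are hypothetical, chosen
# within the ranges discussed above, not actual CT budget data.
charter_ppe = 13_000         # reported charter per-pupil spending
district_ppe = 14_500        # district per-pupil spending, after
                             # removing special education & transportation

charter_facilities = 2_000   # charter lease/operations/maintenance per pupil
district_facilities = 1_000  # district maintenance/operations/debt per pupil

# Net facilities disadvantage for the charter: $1,000 per pupil,
# pulled off the charter side of the comparison.
net_gap = charter_facilities - district_facilities
adjusted_charter_ppe = charter_ppe - net_gap

print(adjusted_charter_ppe, "vs. district", district_ppe)
```

Even after netting out the facilities gap, the charter figure remains within shouting distance of the district figure, which is the point of the all-caps claim above.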

Here’s an alternative angle (from previous post)

I also showed in a previous post that for Amistad, the funding difference translates to both a class size advantage and salary advantage:

Class sizes are more mixed in Hartford, but in Bridgeport (the least well funded urban district), Achievement First offers much smaller class size:

OUTCOMES

The final prong of the argument involves those higher outcomes – those beating the odds with the same kids and less money – outcomes.  Here are a few samples of the 5th grade math outcomes by district, focusing on the position of the Achievement First charter schools. I’ve graphed the school level 5th grade math 2010-11 Connecticut Mastery Test mean scale score by school level % Free Lunch (prior year). It’s important to understand that these charter schools not only have much lower % Free Lunch but also tend to have low ELL populations and also have much lower shares of enrollment with disabilities.

Here’s Hartford, where the Achievement First school looks so unlike nearly every Hartford public school reporting 5th grade math scores that it’s hard to even make a comparison. But, the two dots over near the Achievement First school do perform similarly.

Comparisons are similarly ridiculous in Bridgeport.

But more reasonable in New Haven! Even then, Amistad and Elm City Prep fall somewhat in line with New Haven schools serving similar % Free Lunch.

A statewide look at 7th grade math scores provides a better showing especially for Achievement First schools, but the analysis is hardly decisive. Note that this graph uses % Free or Reduced Lunch from CEDaR sources. Using such a high income threshold for low income status tends to mash schools in urban districts against the right hand side of the figure, removing some important variation.  I’ll redo with % free if/when I get the chance. This graph includes schools statewide, including affluent suburban schools. Among the notable features of the graph is that low income status matters, whether for charter schools or for traditional district schools. Most fall along the trendline.

In this case, the Achievement First schools in particular have higher math mean scale scores than traditional public schools serving the same % Free Lunch, BUT… this DOES NOT ACCOUNT FOR THE ADDITIONAL DIFFERENCES IN ELL AND SPECIAL EDUCATION WHICH MAY (WILL) SUBSTANTIALLY INFLUENCE THESE COMPARISONS!

Perhaps most importantly, these scatterplots are essentially little more than descriptive comparisons of mean scale scores against schools similar on a single parameter (% Free Lunch). BUT, even this simple adjustment serves to undermine the current rhetoric in Connecticut, as I discussed in a previous post.
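That single-parameter comparison can be sketched in a few lines: fit a trendline of mean scale score on % Free Lunch, then judge a school by its residual. The school data points below are invented, not actual CMT scores:

```python
# Invented (school-level) data: (% free lunch, mean scale score).
schools = [
    (10, 260), (25, 252), (40, 246), (55, 240), (70, 233), (85, 227),
]

# Ordinary least-squares fit of score on % free lunch.
n = len(schools)
mean_x = sum(x for x, _ in schools) / n
mean_y = sum(y for _, y in schools) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in schools)
         / sum((x - mean_x) ** 2 for x, _ in schools))
intercept = mean_y - slope * mean_x

def residual(pct_free_lunch, score):
    """Positive = above the trendline for schools at that poverty level."""
    return score - (intercept + slope * pct_free_lunch)

# Is a school scoring 240 at 45% free lunch above or below expectation?
print(round(residual(45, 240), 1))
```

A raw mean of 240 might look weak statewide, yet sit near or above the line once poverty is accounted for, which is exactly why even this crude adjustment undermines the rhetoric.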

NOTE: Charter-District School Spending Comparisons & the Facilities Cost Issue

A study frequently cited by charter advocates, authored by researchers from Ball State University and Public Impact, compared the charter versus traditional public school funding deficits across states, rating states by the extent to which they under-subsidize charter schools.[1] The authors identify no state or city where charter schools are fully, equitably funded. But simple direct comparisons between subsidies for charter schools and public districts can be misleading, because public districts may still retain some responsibility for expenditures associated with charters that fall within their district boundaries or that serve students from their district. For example, under many state charter laws, host districts or sending districts retain responsibility for providing transportation services, subsidizing food services, or providing funding for special education services. Revenues provided to host districts to provide these services may show up on host district financial reports, and if the service is financed directly by the host district, the expenditure will also be incurred by the host, not the charter, even though the services are received by charter students. Drawing simple direct comparisons thus can result in a compounded error: host districts are credited with an expense on children attending charter schools, but children attending charter schools are not credited to the district enrollment. In a per-pupil spending calculation for the host district, this inflates the numerator (district expenditures) while deflating the denominator (pupils served), significantly inflating the district’s apparent per-pupil spending. Concurrently, the charter expenditure is deflated.

Correct budgeting would reverse those two entries, subtracting the expense from the district's calculation while adding the in-kind funding to the charter school's. Further, in districts like New York City, the city Department of Education incurs the expense of providing facilities to several charters. That is, the City's budget, not the charter budgets, incurs yet another expense that serves only charter students. The Ball State/Public Impact study errs egregiously on all of these fronts, assuming in each and every case that the revenue reported by charter schools and by traditional public schools buys the same range of services, and that those services are provided exclusively for the students in that sector (district or charter).

Charter advocates often argue that charters are most disadvantaged in financial comparisons because charters must often cover, out of their annual operating expenses, the costs of leasing facilities space. It is indeed true that charters are not afforded the ability to levy taxes to carry public debt to finance construction of facilities. But it is incorrect to assume, when comparing expenditures, that traditional public school facilities are already paid for and carry no associated costs while charter schools must bear the full burden of leasing at market rates – essentially an "all versus nothing" comparison. First, public districts do have ongoing maintenance and operations costs for facilities, as well as payments on debt incurred for capital investment, including new construction and renovation. The average "capital outlay" expenditure of public school districts in 2008-09 was over $2,000 per pupil in New York State, nearly $2,000 per pupil in Texas, and about $1,400 per pupil in Ohio (enrollment-weighted averages generated from the U.S. Census Bureau's Fiscal Survey of Local Governments, Elementary and Secondary School Finances 2008-09, variable tcapout: http://www2.census.gov/govs/school/elsec09t.xls).

Second, charter schools finance their facilities through a variety of mechanisms, with many in New York City operating in space provided by the city, many charters nationwide operating in space fully financed with private philanthropy, and many holding lease agreements for privately or publicly owned facilities. New York City is not alone in its choice to provide full facilities support for some charter school operators (http://www.thenotebook.org/blog/124517/district-cant-say-how-many-millions-its-spending-renaissance-charters). Thus, the common characterization that charter schools front 100% of facilities costs from operating budgets with no public subsidy, while traditional public school facilities are "free" of any costs, is wrong in nearly every case, and in some cases there exists no facilities cost disadvantage whatsoever for charter operators.

Baker and Ferris (2011) point out that while the Ball State/Public Impact study claims that charter schools in New York State are severely underfunded, the New York City Independent Budget Office (IBO), in a more refined analysis focusing only on New York City charters (the majority of charters in the state), found that charter schools housed within Board of Education facilities were comparably subsidized relative to traditional public schools (2008-09). In revised analyses, the IBO found that co-located charters (in 2009-10) actually received more than city public schools, while charters housed in private space continued to receive less (after discounting occupancy costs).[1] That is, the funding picture around facilities is more nuanced than is often suggested.

Batdorff, M., Maloney, L., May, J., Doyle, D., & Hassel, B. (2010). Charter School Funding: Inequity Persists. Muncie, IN: Ball State University.

NYC Independent Budget Office (2010, February). Comparing the Level of Public Support: Charter Schools versus Traditional Public Schools. New York: Author.

NYC Independent Budget Office (2011). Charter Schools Housed in the City's School Buildings Get More Public Funding per Student than Traditional Public Schools. http://ibo.nyc.ny.us/cgi-park/?p=272

NYC Independent Budget Office (2011). Comparison of Funding: Traditional Schools vs. Charter Schools: Supplement. http://www.ibo.nyc.ny.us/iboreports/chartersupplement.pdf

Additional Figures

Administrative expenses in charters often include facilities lease agreements in addition to any recruitment/marketing expenses and growth/expansion.

A comment on the “I pay your salary” and “I pay twice for schools” arguments

Taxpayer outrage arguments are in style these days (as if they ever really go out of style). Two particular taxpayer outrage arguments that have existed for some time seem to be making a bit of a resurgence of late. Or at least I think I've been seeing these arguments a bit more lately in the blogosphere and on Twitter. First, since this is the era of crapping on public school teachers and arguing for increased accountability specifically on teachers for improving student outcomes, there's the "I pay your salary so you should cower to my every demand" argument (only a few warped individuals take the argument quite this far, but sadly, I've heard them do it!). Second, there's the persistent "I pay for those schools and don't even use them" argument, or its variant, "I pay twice for schools because I send my kids to private schools."

I (the taxpayer) pay your (the teacher) salary

This is a strange, obnoxious, and easily dismantled argument. It's not that being sensitive to the demands of school constituents is unimportant; rather, it's more important to be sensitive to the preferences of the broader public regarding their public schools than to be hypersensitive to any one loud-mouthed individual who would invoke this obnoxious argument. I say more about that broader public under the next topic below.

For this one, a simple hypothetical is in order. Let's assume the individual invoking this argument owns a residential property valued at $350,000 in a school district serving 5,000 students, where the district spends about $15,000 per pupil per year and the effective property tax rate for schools is about 1.5%. So, the school property tax bill on this house is 1.5% x $350k = $5,250. Meanwhile, the district's total budget is 5,000 x $15,000 = $75 million. So this one household is contributing far less than 1% (about 0.007%) of the district budget (which works out to about $4.20 of a $60,000 teacher salary). And other households, owners of other property types within the district, and the broader base of taxpayers contributing to the state aid pot and any federal revenue sources all play a part in paying the salaries of teachers in this school.
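For anyone who wants to check the arithmetic, here is the same hypothetical worked through in a short script. The figures come straight from the example above; only the variable names are mine:

```python
# The "I pay your salary" hypothetical, worked through step by step.

home_value = 350_000
effective_school_tax_rate = 0.015
pupils = 5_000
per_pupil_spending = 15_000
teacher_salary = 60_000

household_tax_bill = effective_school_tax_rate * home_value  # $5,250
district_budget = pupils * per_pupil_spending                # $75,000,000

# This household's share of the total budget, and of any one salary
share_of_budget = household_tax_bill / district_budget       # about 0.007%
share_of_one_salary = share_of_budget * teacher_salary       # about $4.20

print(f"tax bill: ${household_tax_bill:,.0f}")
print(f"share of district budget: {share_of_budget:.3%}")
print(f"share of one $60,000 salary: ${share_of_one_salary:.2f}")
```

Four dollars and twenty cents is the "salary" this taxpayer is paying.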

This is by no means to suggest that any one person’s “say” in a district should be proportionate to one’s tax bill as a share of the budget. But rather, that one voice is one voice from the broader mix of voices that contribute to the financing of and shaping of public goods and services like schools.

[implications of disproportionate philanthropic giving from within or outside a district raise other serious questions to be addressed another day]

I (the taxpayer) pay for those public schools and don’t even use them!

Thus is the nature of public versus private goods. In the simplest model, taxpayers in a municipality contribute property taxes for a mix of public services, including local parks, fire protection, police and schools. I probably use our public park much less than others, and I rarely get a chance to attend the summer concert series. Should I get a refund for my contributed share, so that I can put that money towards buying Broadway tickets or a family vacation instead? Hey, I’m paying twice for entertainment and don’t even attend those free concerts in the park. And, I never sat on one of those benches. Can I please get a refund for my share of the cost of installing those benches and maintaining them, so I can invest in my own benches in my own back yard? And about those fancy fire trucks. We’ve had a few house fires in my town in my time living there. Why am I paying for the fire trucks to go to someone else’s house? Let’s say I live in a town that has public tennis courts, but I decide I want to join a private tennis club. Should I get a refund for the public courts I don’t use? In the amount of property tax I contributed? How about roads? Should I have to pay for roads I don’t ever intend to drive on?

Of late, I've been seeing the private school parent argument that "I pay twice for schools, since I pay my taxes for the public schools and pay private tuition." This one is frequently invoked in conversations about voucher and tuition tax credit programs. It should be noted that many residents of any given community pay taxes for schools – and all of the other stuff above – but may or may not use any or all of it. Families without school-aged children also pay for schools. Further, since schools are financed by a mix of local, state, and federal revenues, lots of different people, within and outside of any given community, contribute to the financing of that community's schools (to the extent that the schools receive intergovernmental revenue). Thus is the nature of publicly financed services.

But there’s even more to it than that. The above statements make the uninformed assumption that one receives absolutely no benefit from the presence and quality of these public goods and services simply because one does not make direct use of them. In reality (as well as in economic theory – which doesn’t always match reality), there’s this thing called capitalization! There is value to living in a community with such amenities as nice parks, good schools and police and fire protection. That value exists whether you actually use those things or not. That value is reflected in property values. As the quality and mix of services changes, those changes may be reflected in property values. Communities that have relatively better schools over time (even as reflected in crude grading systems in state accountability systems) see increases to property values. Residential property owners, not just those with kids in the public schools, see this benefit.

In short, the "I pay twice" or "I pay for a service or amenity I don't use" argument presents a dreadful oversimplification and misunderstanding of very basic principles of the provision of public goods and services.

Instead, if taxpayers really want something to fuss about, read my previous post!


Revisiting NJOSA & the Lakewood Effect

The current version of the New Jersey Opportunity Scholarship Act (NJOSA) would pilot tuition tax credits for private schooling in the following locations:

  • Asbury Park City School District
  • Camden City School District
  • Elizabeth City School District
  • Lakewood City School District
  • Newark City School District
  • City of Orange School District
  • Passaic City School District, and
  • City of Perth Amboy School District

http://www.njleg.state.nj.us/2012/Bills/S2000/1779_I1.PDF

http://www.njspotlight.com/stories/12/0316/0145/

NJOSA is often pitched publicly as a scholarship program that would allow students trapped in failing urban districts to exercise the choice to select a better alternative – implicit in this argument is that any private school option a student might choose would necessarily be a better alternative. Also implicit in the rhetoric around NJOSA is that the program is mainly focused on kids in places like Camden and Newark – the stereotypical New Jersey urban centers.

NJOSA would provide scholarships to children in families below the 250% income threshold for poverty. The text of the bill indicates that eligible children are those either attending a chronically failing school in one of the districts above or eligible to enroll in such school in the following year (which would seem to include any child within the attendance boundaries of these districts even if presently already enrolled in private schools).

“children either attending a chronically failing school or eligible to enroll in a chronically failing school in the next school year.”

I have discussed NJOSA numerous times on this blog, specifically focusing on the Lakewood effect here & here.

Many in New Jersey probably already understand that the above list contains some intriguing outliers, but I suspect few understand just how big these outlier effects are. One would naturally assume that Newark, for example, would be the major target of NJOSA scholarship recipients? Right? That’s our stereotypical urban core with failing schools from which kids need to escape.

Here’s what the Newark private school market looks like.

This map uses data on individual private schools, their locations, and enrollments from the National Center for Education Statistics' 2007-08 Private School Universe Survey, which also includes classifications of religious affiliation/status. Purple circles are religious private schools, and green circles are those whose primary affiliation is listed as non-religious (independent of a specific church/religion). Circle size indicates enrollment: bigger circles are bigger schools.

I also use U.S. Census Bureau American Community Survey data to identify the number of total children and children in families below the 250% income threshold attending private school within each Public Use Micro Data Area (PUMA). Blue numbers indicate total private enrollments, and red numbers indicate low income private school enrollments.

Currently, there are about 3,400 privately schooled students residing in Newark, and about 2,000 of them actually fall below the 250% poverty-income threshold. So that's a sizeable number of Newark children who might qualify for NJOSA scholarships, in addition to others who might apply who are presently enrolled in public schools.

It would seem by the language in the bill that a current privately schooled student would merely have to be eligible to attend their local public school, but not actually do so.

Here’s what the Passaic/Clifton private school market looks like (neither one is big enough to be its own PUMA):

The Passaic/Clifton PUMA has nearly as many low-income private school enrolled children as Newark – 1,619 – despite a much smaller total population. And by far the largest private school in the area is Yeshiva Ktana.

But the most striking example is that of Lakewood, as I have discussed in the past. Since Lakewood remains in this bill, even though there’s nothing really new I’m presenting here, I felt the need to reiterate just how big a deal this is.

Here’s the Lakewood private school marketplace & current enrollments:

Based on the Census ACS data from a few years back, there were over 17,000 privately schooled students in Lakewood, and OVER 10,400 OF THOSE STUDENTS WERE IN FAMILIES THAT REPORTED THEMSELVES AS BEING BELOW THE 250% POVERTY-INCOME THRESHOLD!

Recall that Newark had about 2,000 low income private school enrolled children.

Orange/East Orange combined have under 900.

All of the cities around Asbury Park combined have about 400 (meaning that Asbury Park alone is likely much less).

Camden about 1,300

Elizabeth about 1,000

The entire area (several towns/districts) around Perth Amboy has about 1,000 (meaning that Perth Amboy itself is likely only a fraction of that amount)

And again, Lakewood, over 10,000! (and Passaic, another significant amount)

In other words, all of the other locations combined do not have the sum total of low income private school enrolled children that Lakewood has. Lakewood would likely be the epicenter of NJOSA scholarship distribution. I noted in my first post on this topic that if the average scholarship amounts were as proposed, the Lakewood Yeshiva schools would stand to take in as much as $67 million per year in these indirect taxpayer subsidies.

The clever subversion of taxpayer rights

I have a secondary, related concern when it comes to Tuition Tax Credits, these days, often framed as “Opportunity Scholarship Acts.”

Tuition Tax Credit programs create an indirect subsidy of private schooling, whereas vouchers provide a direct subsidy. The latter is a more honest approach, and one that at least allows for legal recourse by concerned taxpayers – even if they eventually lose. It is currently the case that voucher programs providing direct subsidies to families, even where the majority of those families choose to use their subsidy for religious schooling, are constitutional under the U.S. Constitution (but not under some state constitutions, which expressly prohibit the use of public funding for religious education). Specifically, the U.S. Supreme Court has determined that these subsidies do not violate the Establishment Clause of the U.S. Constitution, because the distribution of the subsidy is mediated through individual/family choices and the subsidy/voucher program (at least as designed in Cleveland) is neutral to religion (see: http://www.oyez.org/cases/2000-2009/2001/2001_00_1751 – the dissent is worth listening to).

This is not to say, however, that a state might not be vulnerable to legal challenge over a voucher system if it could be shown that the state had actually made policy decisions with the intent of guiding students and resources toward specific religious schools/institutions, but rather that the Cleveland model did pass muster. One might certainly scrutinize the NJ legislature's choice to include Lakewood in NJOSA, with the Lakewood Yeshiva schools essentially the primary beneficiary of the program. This would seem somewhat analogous to a 1990s scenario in which New York State redrew one district's boundaries so as to encompass a single homogeneous religious community (see: http://www.oyez.org/cases/1990-1999/1993/1993_93_517). Could New York now go back and pilot a voucher program in Kiryas Joel instead? Would the choice of a homogeneous religious community to pilot a voucher program violate the Establishment Clause? Would it be substantively different from the more "neutral" Cleveland voucher program? Maybe.

But here’s the kicker with Tuition Tax Credit programs. They are indirect subsidies, generated by providing full tax credits to corporations that gift money to a state-approved (but independently governed) scholarship entity. Thus, a hole of “X” is created in the state budget. That hole is paid for by the fact that the state no longer has to allocate state aid (greater than or equal to X) to local public districts whose students accept the scholarships to attend private schools instead. It’s the mathematical equivalent of simply allocating the same sum in state revenue directly to private schools, but it’s achieved indirectly through a third-party entity.
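The equivalence can be made concrete with a stylized sketch. All numbers below are invented, and the simple two-entry budget model is mine, not the structure of any actual state's accounts:

```python
# Stylized model of the state's bottom line under the two routes by which
# the same sum X can flow to private schooling.

def state_budget_change(revenue_change, outlay_change):
    """Net change to the state's fiscal position."""
    return revenue_change - outlay_change

X = 10_000_000           # sum flowing to private schooling (invented)
aid_savings = 10_000_000 # state aid no longer owed as scholarship students leave districts

# Route 1: tuition tax credit (indirect). Revenue falls by X (the credited
# donations); aid outlays fall by the savings.
net_indirect = state_budget_change(revenue_change=-X, outlay_change=-aid_savings)

# Route 2: direct subsidy (voucher-style). Revenue is untouched; outlays
# gain X in payments to private schools but lose the same aid savings.
net_direct = state_budget_change(revenue_change=0, outlay_change=X - aid_savings)

print(net_indirect, net_direct)  # identical bottom line; only the legal form differs
```

The arithmetic is the same either way; what differs, as discussed next, is who has standing to challenge it.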

Who cares? Why is that important? If the state has gamed this system to favor and disproportionately subsidize a specific religion, can’t we still do something about it? The answer is: probably not, at least via legal action! The U.S. Supreme Court has recently determined that taxpayers do not have legal standing to challenge the distribution of these indirect subsidies. As far as we can tell, no one really has the right to challenge these policies for potentially violating the Establishment Clause. If it were a voucher program – a direct subsidy – taxpayers would most likely at least have the right to challenge the policy in court, even if it were eventually determined that the policy was constitutional (sufficiently similar to the Cleveland model). But the indirect tuition tax credit approach cleverly permits the diversion of tax revenues while entirely negating taxpayers’ right to challenge that diversion. See: http://www.oyez.org/cases/2010-2019/2010/2010_09_987

In other words, the court never even gets to address the substantive question of whether the legislature has intentionally gone out of its way to favor and subsidize a specific religion.

(Real) Graph vs (Fake) Graph Friday

This post provides a quick follow-up to yesterday’s (late-night) post, in which I critiqued a questionable graph from an NJDOE presentation: State of NJ Schools presentation 2-29-2012

It turns out that the slide presentation had many comparable graphs that deserve at least some attention. First, there’s this graph which attempts to argue that early reading proficiency is a statewide issue, and not just a problem of low income urban neighborhoods:

Rather impressive eh? Certainly gives the impression that early reading deficits are concentrated not in the poorest districts but in the least poor ones.

Why would someone make such an argument? Well, one reason would be if it were being coupled with arguments to redistribute funding to those less poor districts to help them out – to argue that educational “risk” is not concentrated in poor districts, but rather distributed across all districts.

The problem here is that it’s completely absurd to compare total counts of non-proficient students across groups without any regard for the total number of students in each group. That is, what matters is the percentage of kids who are proficient in each poverty group. Well, here’s what that picture ends up looking like:

Pretty much as we might expect. Lack of reading proficiency in 3rd grade, as measured on state assessments, is a much bigger problem in higher poverty districts – with poverty here measured as % Free Lunch, and with reading proficiency tabulated for general test takers.
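A quick invented example shows why raw counts mislead here: the least-poor group can contribute the largest count of non-proficient students simply because it contains the most students, even while having by far the lowest non-proficiency rate. These numbers are made up for illustration, not NJ data:

```python
# Raw counts versus rates of non-proficiency across poverty groups.

groups = {
    # poverty group: (total 3rd graders, non-proficient 3rd graders)
    "low poverty":  (100_000, 15_000),
    "mid poverty":  (40_000, 10_000),
    "high poverty": (20_000, 9_000),
}

for name, (total, nonprof) in groups.items():
    # The count alone says "low poverty" has the biggest problem;
    # the rate says the opposite.
    print(f"{name}: {nonprof:,} non-proficient ({nonprof / total:.0%} of group)")
```

The low-poverty group posts the largest raw count (15,000) but the lowest rate (15%), while the high-poverty group's 9,000 non-proficient students represent 45% of its kids.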

Here’s the next graph, which compares charter school reading and math proficiency rates in Newark to Newark Public Schools:

In this case, the title is somewhat appropriate in that charter school performance does indeed vary in Newark. But the graph is pretty much meaningless and deceptive.

The graph relates average Language Arts and Math proficiency across schools showing basically that schools which are higher on one are also higher on the other. That’s really no big surprise. But the graph ignores entirely the substantive student population differences that explain a large portion of the difference in these proficiency rates. The graph appears to be not-so-subtly constructed to reinforce the central point of this section of the presentation slides – that charters outperform district schools.  That point continues to be built on analyses that were already thoroughly debunked many times over. This graph goes a step further by then cherry picking a few charters to name – all of which appear superior to the “District.”

So, what does it look like if we take all of these schools, separate the district into its individual schools, and plot the combined proficiency rates with respect to % Free Lunch? Well, here it is (includes NJASK3 through NJASK8; no HSPA):

Yes, this graph reinforces the title of the NJDOE graph, but in a much more reasonable light. That said, there are a number of other student population factors that would need to be accounted for in a more thorough analysis. 

Among other things, while the first graph appears to suggest that TEAM Academy is a relative laggard compared to schools like North Star or Robert Treat, my representation here shows that TEAM is actually further above its expected performance than either of the other two. TEAM simply serves a lower-income population than the other two. Further, district schools serving similar populations do similarly well. And several charter schools do as poorly as (and worse than) comparable district schools.
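For readers curious how "expected performance" works mechanically, here is a minimal sketch: fit proficiency against % Free Lunch across schools, then judge each school by its residual (distance above or below the fitted line) rather than by its raw proficiency rate. The school labels and numbers below are fabricated placeholders, not actual New Jersey data, and a real analysis would control for more than one student population factor:

```python
# Judge schools by their residual from a poverty-adjusted expectation,
# using a simple ordinary least squares fit computed by hand.

schools = {
    "School A": (0.85, 0.58),  # (% free lunch, proficiency rate)
    "School B": (0.60, 0.72),
    "School C": (0.55, 0.78),
    "School D": (0.90, 0.42),
    "School E": (0.75, 0.60),
}

xs = [fl for fl, _ in schools.values()]
ys = [p for _, p in schools.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# OLS slope and intercept: proficiency = a + b * (% free lunch)
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x

for name, (fl, p) in schools.items():
    residual = p - (a + b * fl)  # positive = beating expectation for its poverty level
    print(f"{name}: free lunch {fl:.0%}, actual {p:.0%}, residual {residual:+.3f}")
```

A school with a middling raw rate but a high-poverty population can post a larger positive residual than a school with a higher raw rate serving fewer poor students, which is exactly the TEAM Academy point above.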