Blog

Teachers Unions: Scourge of the Nation?

UPDATED: 1/29/2015

Let me start by stating that I am somewhat agnostic on the question of whether teachers unions are generally good or bad for the overall quality of our education system and for educational equity.  In my personal experience as a young teacher in the early 1990s, I had my issues with my local teachers unions (in New York State in particular), resulting in some pretty heated battles with local and regional union officials [and some pretty nasty internal politics in my own school].  As a young teacher, I was anything but a fan of the teachers union. But unlike many of my TFA pals [I was a few years too early for TFA, but had friends & later colleagues in the first few waves] who stuck it out in teaching for only a year or two and may have developed similar negative feelings toward their local union, I did outgrow that initial reaction, which in my view was somewhat isolated, and partly a function of my own youthful ignorance.  I didn’t stick it out in public school teaching much longer than that [the local union actually ran me out!], but I did have the unique experience of working in an elite private school that had a union, and I worked in that school during a contract renegotiation.

The idea for this post first came about when I read the following quote in an article in the Economist. This has to be among the most utterly stupid statements I think I’ve ever read in my life:

…no Wall Street financier has done as much damage to American social mobility as the teachers’ unions have. http://www.economist.com/node/21564556

And then there’s this more recent quote:

Many schools are in the grip of one of the most anti-meritocratic forces in America: the teachers’ unions, which resist any hint that good teaching should be rewarded or bad teachers fired. http://www.economist.com/news/leaders/21640331-importance-intellectual-capital-grows-privilege-has-become-increasingly

Now… these quotes are ridiculous at many levels.  Most notably, the first quote is stupid simply because one could never possibly contrive a reasonable, quantifiable comparison of the supposed negative effects of either the individual hedge fund manager or the supposed monolithic “teachers union.” It’s the empirical equivalent of arguing whether Superman can beat up the Hulk. It’s just asinine.

UPDATE: The second quote above comes from a piece that subsequently implies that teachers’ unions are a major, if not the primary, cause of educational inequality across children – specifically between rich and poor children. Here’s a little more on the topic of “teacher equity” in particular. (Post 1 | Post 2)

On the heels of this quote came the Thomas B. Fordham Institute report rating the strength of teachers unions – or unionization more generally – across states.  Perhaps the most useful aspect of this report is that it provides us with insights regarding the heterogeneity of unionization across American states.  Unions and unionization are not monolithic.

As recognized by the Fordham report, we really don’t have an American education system. We have 51 systems. They are all somewhat different, with different standards, different funding systems, different union rules and protections, and different student outcomes.  The existing variation across our state systems of education alone renders the Economist statement utterly stupid and misguided.  That variation also provides some fun opportunities to explore the relationship between TB Fordham’s characterization of teachers’ union strength across states and other features of state education systems.

In this post, I use data from several reports that attempt to characterize state education systems to probe two main questions – whether there exists any association between general indicators of education quality across states and union strength, and whether there exists any association between indicators of educational equality across states and union strength.

How is union strength related to funding levels and funding fairness?

Along with colleagues at the Education Law Center of New Jersey, I have for the past few years been preparing annual reports on education funding fairness. In the Funding Fairness report, we use a statistical model applied to three years of national data on all school districts to project cost-adjusted per-pupil state and local revenues for all districts and for state averages nationally, and we characterize the overall fairness – progressiveness or regressiveness – of state school finance systems. Below, I evaluate the relationship between “union strength rank” from the TB Fordham report and funding “levels” (an indicator of adequacy) and funding “fairness” (whether higher poverty districts receive systematically more, or less, funding per pupil than lower poverty districts in that state).

An important caveat here, since I like to pick on inappropriate graphs myself: I really should not be making scatterplots where the x-axis variable is a “rank” measure, because rank is not an interval measure. But this is purely for illustrative purposes, so please forgive my misuse of rank data in this way! [or at least if you slam me for it, acknowledge that I pointed this out!]
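For the statistically inclined: because a rank is ordinal rather than interval, the defensible summary statistic for these comparisons is a rank correlation such as Spearman’s rho, rather than a linear trend read off a scatterplot. Here’s a minimal sketch; every number below is made up for illustration and comes from neither the Fordham report nor our funding reports.

```python
def spearman_rho(x, y):
    """Spearman's rank correlation, assuming no ties in either list."""
    n = len(x)
    rank = lambda v: {val: i + 1 for i, val in enumerate(sorted(v))}
    rx, ry = rank(x), rank(y)
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical union strength ranks (1 = strongest) vs. per-pupil funding
union_rank = [1, 2, 3, 4, 5, 6, 7, 8]
funding = [14200, 13100, 12800, 11900, 12500, 10400, 9800, 9900]

# Negative rho here means stronger-union states (low rank numbers)
# tend to have higher funding levels.
print(round(spearman_rho(union_rank, funding), 3))
```

Note that rho says nothing about causation; it only summarizes the monotonic association, which is the most one can honestly claim from rank data.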

Figure 1

In Figure 1 we can see that states with stronger teachers unions [left hand end] tend to have more adequate overall funding levels. It is, however, more clearly the case that states with weak teachers unions (ranked 45th to 50th) tend to have particularly low adjusted funding levels. This is certainly not to suggest any direction of causation. That’s the whole trick here. Most of this is probably quite circular – endogenous. [the union cynic might argue that this merely shows that teachers’ unions have extorted funds from the taxpayer] That is, states which tend to be more educated and progressive happen both to have stronger teachers unions and to spend more on education – except for states like California that, by historical artifact of referendum, have systematically deprived their education systems for decades.

Figure 2

Perhaps more to the point of the Economist assertion, we see that states with weaker teachers unions also tend to have less fair funding distributions – or are systems where it is more likely that high poverty districts have systematically fewer resources per pupil than lower poverty ones.  Again, this result is likely a function of the endogenous relationships mentioned previously.

See: http://www.schoolfundingfairness.org/

UPDATE: So, wait a second, if stronger union states tend to have fairer funding distributions, might that actually enhance equity? In a really big, important and substantive way? Hmmm….

How is union strength related to competitiveness of teacher pay?

Here, I look at the relationship between union strength and the relative wage of teachers compared to non-teachers in the same state.  This is a particularly important comparison for two reasons. First, the relative competitiveness of teacher wages likely has significant effects on the quality of individuals who choose to enter the teacher workforce rather than pursue other employment opportunities (selecting from high school into college); overall wage competitiveness can have long-run effects on overall teacher workforce quality.  Second, this is the one comparison in this post where we might hypothesize a direct, easily interpreted relationship. That is, we might expect stronger unions to lead to more competitive wages.  Here, I compare the weekly wage percentage (teacher wage as a percent of non-teacher wage) from the Economic Policy Institute with the TB Fordham union strength rank.

Figure 3

Somewhat to my own surprise, this relationship is actually quite strong!… with states having stronger teachers unions also having generally more competitive teacher wages.

See: http://www.epi.org/publication/the_teaching_penalty_an_update_through_2010/

Is union strength associated with NAEP achievement levels?

Now, the usual retort to teacher union bashing is to point out that states like New Jersey and Massachusetts have strong unions and also have high NAEP scores, and states like Alabama and Mississippi have weak unions and low NAEP scores.  Yeah… okay… but clearly there’s a lot goin’ on there that has little or nothing to do with unions.  But let’s indulge this premise a little further with some additional graphs just to see the patterns.

In these first few figures I present the relationship between NAEP scores for children in families above 185% of the poverty income level (not on free or reduced-price lunch) and union strength. Note that the patterns are similar for scores for children qualified for reduced-price lunch or for free lunch, but I’ve not included them here… ‘cuz there are already enough graphs in this post. I’d be happy to share them though.  In general, what we see in Figure 4 and Figure 5 is that NAEP scores for non-low income kids tend to be slightly lower – with little clear pattern – in weak union states.

Figure 4

Figure 5

Figure 6, however, clarifies that NAEP scores tend to be higher for non-low income children in states where incomes are higher for non-low income children.

Figure 6 (but income dictates NAEP)

We can use the information in Figure 6 to adjust the NAEP scores for household income differences – that is, to ask whether scores are higher or lower than would be expected, given each state’s income levels.  When we make that adjustment, we get Figures 7 and 8.
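Mechanically, that adjustment amounts to regressing state NAEP means on household income and treating each state’s residual as its “income-adjusted” score. Here’s a minimal sketch of the idea; the incomes and scores below are invented purely for illustration and are not the data behind Figures 7 and 8.

```python
def ols_residuals(x, y):
    """Residuals from a simple least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

income = [55, 61, 48, 72, 66]       # hypothetical median incomes ($1,000s)
naep   = [238, 242, 231, 249, 240]  # hypothetical state NAEP means

# A positive residual means the state scores better than its income predicts.
adjusted = ols_residuals(income, naep)
print([round(r, 1) for r in adjusted])
```

The residuals sum to zero by construction, so the adjusted measure is purely relative: above or below expectation, given income.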

Figure 7 (income adjusted NAEP)

Figure 8 (income adjusted NAEP)

Still, we see that adjusted NAEP scores are somewhat, though hardly systematically, lower in states with weaker unions. What we certainly do not see is that NAEP scores are systematically lower in states with stronger unions. That is, unions certainly aren’t driving NAEP scores into the ground!

But, while the second set of graphs is more appropriate than the first, both are dreadfully oversimplified characterizations of complex relationships.

Is union strength associated with NAEP achievement gaps?

This question is perhaps most on target with the Economist claim. Following the Economist’s logic, one might assert that teachers unions likely lead to larger achievement gaps, thus limiting social mobility. Measuring poverty-related achievement gaps and comparing them across states is tricky, as I’ve discussed in numerous previous posts. Specifically, the size of the achievement gap between kids not qualified for free or reduced-price lunch and those who are qualified tends to be highly related to the size of the income gap between the two groups – as shown in Figure 9! That is, we can’t just do straight-up achievement gap comparisons; we must adjust for the income gap.

Figure 9 (Income Gaps and NAEP Gaps)

Figure 10 and Figure 11 present the income-gap-adjusted achievement gaps in relation to union strength rank.  What we see is little or no relationship between union strength and achievement gaps. While this does not illustrate that stronger unions lead to smaller achievement gaps… it also does not by any stretch illustrate that stronger unions lead to larger achievement gaps – an expectation that might reasonably be derived from the claim made in the Economist.

Figure 10

Figure 11

Then again… these are still cursory… descriptive analyses – using only two variables at a time to characterize education systems that are far more complex than can be legitimately characterized with only two variables at a time. It’s exploratory. It’s a start… and there’s certainly more to be explored here… but likely questions that can never be satisfactorily untangled with available data.

See: https://schoolfinance101.wordpress.com/2011/09/13/revisiting-why-comparing-naep-gaps-by-low-income-status-doesnt-work/

Is union strength associated with NAEP achievement growth?

Finally, I suspect that some curmudgeonly reactors to this post will attempt to argue that weak union states have seen more growth in NAEP achievement over time. Well, Figure 12 kind of thwarts that notion as well. Not much relationship there either, though it is certainly the only graph in this post that shows even the slightest upward tilt.

Figure 12

But alas, even that tiny upward tilt is a function of the fact that states that saw the greatest growth on NAEP were simply the states that had and still have the lowest overall performance levels – as shown in Figure 13. And, states with lower average performance levels – now and then – tend to have weaker unions.

Figure 13

For a more thorough discussion on this point, see: https://schoolfinance101.wordpress.com/2012/07/27/learning-from-really-bad-graphs-ill-informed-conclusions-thoughts-on-the-new-pepg-catching-up-report/

Conclusions

So what does this all mean then? Are unions good, or are they bad? Do they increase inequality and lower quality? It’s certainly difficult given the data provided above to swallow the bold assertion in the Economist that teachers’ unions are the scourge of the nation and primary cause of declining social mobility.  That’s just a load of unsubstantiated crap!

But then what can we learn here? Well, it is perhaps important that there appears to be at least some likely indirect and certainly endogenous relationship between unionization and funding fairness and funding levels. And as I’ve discussed in related research, funding fairness and funding levels – and school finance reforms that improve equity and adequacy – do matter!  To summarize:

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

See also this post in which I probe more specifically the changes in achievement gaps over time in Massachusetts and New Jersey.

Further, the potentially more direct relationship between unionization and relative competitiveness of teacher wages compared to other labor market opportunities may be important in the long run.  In a related policy brief from last winter, I noted:

To summarize, despite all the uproar about paying teachers based on experience and education, and its misinterpretations in the context of the “Does money matter?” debate, this line of argument misses the point. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.

http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

So, while nothing in this post puts to rest the big – unanswerable – questions of the overall equity and quality effects of teachers unions on our supposedly monolithic American public education system, these analyses do at least raise serious questions about the notion that teachers unions are the scourge of the nation and the cause of all of the supposed – also unfounded – ills of American public schooling.

Cheers! It’s good to be back!

Friday Afternoon Graphs: Graduate Degree Production in Educational Administration 1992 to 2011

I’ll let the pictures tell the story this time. [UPDATED – Errors in original]

Data source: http://nces.ed.gov/ipeds/datacenter/DataFiles.aspx

School Labels & Housing Values: Potential consequences of NJDOE’s new arbitrary & capricious school ratings

There exists relatively broad agreement in the empirical literature that the perceived quality of local public goods and services – including local public schools – significantly influences the value, as represented in demand/sales prices, of residential property. In other words, perceived school quality affects housing prices and housing values. All else equal, one pays a premium to live in a school district, or an attendance zone within a district, that is associated with a “good” school.

Indeed, this “capitalization” of school quality (perceived or real) into home values is at the root of much of the disparity underlying highly residentially segregated state education systems. It’s a long-run, complex, chicken-and-egg cycle sort of thing. Some communities have more, which allows them to spend more… to improve perceived quality… and capitalize that value into their home/property values, increasing the town’s ability to raise revenue further and raising barriers to entry for families with lower incomes.

Realtors, the real estate industry, state and local publications like New Jersey Monthly, and national publications like Newsweek and U.S. News drool over oversimplified characterizations of good and bad schools. As trivial as this stuff may seem to many of us, it is consequential, or at least can be.

Beyond magazine ratings, state school rating schemes have been shown to be consequential for home values. The key is that summary-type ratings – broad classifications or grades, ACCURATELY REFLECTING QUALITY OR NOT – seem to have the most significant impact. For example, in one recent study specifically evaluating post-NCLB classification schemes and other metrics, the authors found that “Results show that while all school quality measures tested have some explanatory power, school district ratings and performance index, which are comprehensive measures of school quality, are the most appropriate measures and are readily capitalized into housing prices.”[1] In one of the better known studies on this topic, David Figlio evaluated the influence of Florida’s letter grading system on home values, finding:

This paper provides the first evidence of the effects of school grade assignment on the housing market. Our results suggest that the housing market responds significantly to the new information about schools provided by these “school report cards,” even when taking into consideration the test scores or other variables used to construct these same grades. These results suggest that innocuous-seeming school classifications may have large distributional implications, and that policy-makers should exercise caution when classifying schools.

http://bear.warrington.ufl.edu/figlio/house0502.pdf

Now, the caveat to Figlio’s findings is that the initial shock to housing prices of revealed grades may fade with time.

These findings raise significant questions about the potential impact on housing values in the attendance boundaries of schools granted these new labels by state agencies, in accordance with their NCLB waiver applications.  In their waiver applications, state agencies were (seemingly) under the gun to find ways to classify as problem and/or failing schools not exclusively poor minority schools in the inner city. Indeed, many set out to make poor, minority schools their primary target.  As I’ve shown in recent posts on New York and New Jersey, states did indeed classify as failing largely those schools that are predominantly poor, predominantly minority and in the inner city.

But, in their effort to marginally diversify their “bad” schools lists, states also proposed achievement gap metrics and subgroup metrics for identifying “other” more diverse and less poor schools for disruptive state intervention.  Most of these schools in New Jersey ended up classified as “focus” schools – that is, “we’re watching you, and we’re going to push interventions on you through our regional achievement centers” schools.  Here’s the list of “focus” schools in generally non-low-income communities (middle and upper income) in New Jersey:

Table 1. Focus Schools in Non-Low-Income Districts

http://www.state.nj.us/education/reform/PFRschools/Priority-Focus-RewardSchools.pdf

A number of “focus” schools occur along the Northeast Corridor around Middlesex County. This is particularly true of “focus” schools in non-low-income (lighter blue) districts.

Figure 1. Locations of Focus, Priority and Reward Schools

All of these schools achieved their “focus” status by having large achievement gaps between two groups either by race, language proficiency or poverty (or disability?… no detail is provided!), rather than by low average or overall performance. Many are middle schools, in part because middle schools serve as a funneling point within mid-sized suburban districts, where children from neighborhood schools first come together in a single location (or perhaps two locations), creating sufficient subgroup sample sizes for calculating gaps.

Notably, a school can only have a measurable achievement gap between ethnic groups if it has at least 30 tested students in each group!  So really, most of the “focus” schools in middle and upper middle class New Jersey districts are middle schools in more diverse districts.

Far fewer of the more affluent schools in the state even have at least 30 members of disadvantaged minority groups taking state assessments in a given year! As such, racial achievement gaps cannot even be calculated for these districts.

Yes, gaps are a problem… but these measures… and resultant classifications are a twisted combination of ignorant and arbitrary.

Ignorant, arbitrary or otherwise, these classifications may have significant consequences for home values. And homeowners in these districts (and those in poor urban “priority” school zones) should be rightfully outraged at this potentially highly consequential abuse of data. [and of course those in “reward” school zones can quietly bask in the glory of their unearned accolades]

After all, it is the broad labeling that matters more than precise and nuanced characterizations of actual schooling quality!

Figure 3 shows the average proficiency rates of the “reward” schools and “focus” schools in Middlesex County – focusing only on those schools with fewer than 20% of children qualified for free lunch. That is, lower poverty schools.  In terms of overall proficiency, the “focus” schools fit reasonably into the broader mix of schools in Middlesex County.

My intent here is certainly not to downplay the gaps that may persist in these schools, though it’s really important to acknowledge that you can only even measure a gap if diversity exists to begin with. My point in this graph, and in this post in general, is that the state has created a labeling system that misuses measures that weren’t very good to begin with to produce arbitrary and capricious school labels that may have real and substantial consequences for home values. In many cases here, districts that are home to a “focus” school are immediately adjacent to districts that are home to “reward” schools (an equally unearned label!).

Figure 3.

The kicker here is that even if the public were to become wise to the questionable veracity of these labels, the state has used this labeling system in the context of granting itself near-unilateral authority to exercise substantial control over the operations of these schools [an authority which may not actually exist!].

So, it’s not just about the labels – which may be entirely meaningless – but it’s also about – much more about – a substantial threat to local governance of those schools. Now, I’ll admit that I have mixed feelings about “local governance,” because it is often local governance that reinforces disparities across children and schools.

But, that said, the state’s choice to use these labels quite explicitly as a threat to local governance – rather than merely as a “label” to increase awareness and encourage increased local accountability – may increase the consequences for local home values. That is, prospective home buyers may be more likely to avoid purchasing homes in neighborhoods or districts where they perceive that they may lose control of their schools to the state, and this effect may be much greater than the effect of a negative label alone. Further, it’s entirely possible that in these middle class communities otherwise perceived as having pretty good schools, the public perception would be that proposed state interventions are more likely to make the schools worse than better (in addition to the threat of intervention itself).

Indeed, these are empirical questions and ones I hope to explore over the next few years as annual housing sales data are released.

Gap measurement in NJ: Largest Within-School Gaps: schools with the largest in-school proficiency gap between the highest-performing subgroup and the combined proficiency of the two lowest-performing subgroups. Schools in this category have a proficiency gap between these subgroups of 43.5 percentage points or higher. see: http://www.state.nj.us/education/reform/PFRschools/TechnicalGuidance.pdf
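For readers who like to see the arithmetic, here is a rough sketch of how that within-school gap rule might be computed, incorporating the minimum subgroup size of 30 discussed earlier. The subgroup names and proficiency rates are hypothetical, and whether NJDOE weights the two lowest subgroups by number of tested students (as below) or simply averages their rates is my assumption – the technical guidance linked above is the authority.

```python
def within_school_gap(subgroups):
    """subgroups: dict of name -> (n_tested, proficiency_rate_pct).
    Only subgroups meeting the minimum n of 30 are compared."""
    rates = {k: v for k, v in subgroups.items() if v[0] >= 30}
    if len(rates) < 3:
        return None  # too few reportable subgroups to apply the rule
    ordered = sorted(rates.values(), key=lambda v: v[1])
    # Combined proficiency of the two lowest groups, weighted by n tested
    (n1, r1), (n2, r2) = ordered[0], ordered[1]
    combined_low = (n1 * r1 + n2 * r2) / (n1 + n2)
    return ordered[-1][1] - combined_low

gap = within_school_gap({
    "white": (120, 82.0),
    "black": (45, 38.0),
    "hispanic": (35, 41.0),
    "asian": (12, 90.0),   # below the minimum n, so excluded
})
print(round(gap, 1))  # a school is flagged if the gap is >= 43.5
```

Note how the minimum-n rule drives everything: a school with only one or two reportable subgroups simply has no measurable gap, which is exactly why the “focus” label falls on the more diverse middle schools rather than the most affluent, most homogeneous ones.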


Data, Data, Data? Dissecting & Debunking NJDOE’s State of the Schools Message

Time again for an NJ State of the Schools Address, as reported HERE in NJ Spotlight (with absolutely no critical questioning or reporting whatsoever! More or less spoon-fed regurgitation).

As I’ve written a number of times on this blog, state officials in New Jersey have decided on a specific marketing/messaging plan in order to support current policy initiatives. Those policy initiatives involve:

  1. expanding NJDOE authority to impose desired “reforms” (charter/management takeover, staff replacement, etc.) on specific schools otherwise not under their direct authority.
  2. cutting funding from higher poverty, higher need districts and shifting it toward lower poverty, lower need ones.
  3. expanding charter schooling and promoting other “innovations” in high poverty concentration schools.

The supposed impetus for these reforms is that New Jersey faces a very large achievement gap between low income and non-low income children (one that is largely mis-measured). While it would seem inconsistent to suggest reducing funding in low income districts and shifting it to others, the creative messaging has been that the additional resources are quite possibly the source of the harm… or at the very least those resources are doing no good. Thus, the path to improvement for low income kids is to transfer their resources to others.  What I have found most disturbing about this messaging – other than the ridiculous message itself! – is the flimsy logic and disingenuous presentations of DATA that have been used to advance the argument.

Look, if the message is going to be about Data, Data, Data – then now is the time to take a more thorough, context-sensitive look at the data and try to better understand what’s really going on.

Let’s do a walk through of some of the information presented in the most recent state of the schools presentation.

Here’s a link to the slides from the recent presentation:

http://www.state.nj.us/education/news/2012/0919con.pdf

NJDOE Message

The most recent state of the schools presentation is now in the post-NCLB waiver era, where we are now presented with those template classifications of schools as Priority, Focus and Reward schools.
The state of the schools presentation revolves to a large extent around these categories, because it is those Priority schools that are the target of the most immediate and disruptive interventions.

Below are the slides that were presented to characterize schools by their performance category. The message to be conveyed by these slides was:

  1. Priority Schools are overspenders (or at least very well resourced)
  2. Priority Schools have very well paid teachers who have slightly higher than average experience
  3. Yet still, priority schools have really crummy outcomes!

Therefore, we must have wide latitude to intervene!

EXHIBIT A – PRIORITY SCHOOLS SPEND MORE(?)

EXHIBIT B – PRIORITY SCHOOLS HAVE HIGH PAID TEACHERS & LOW OUTCOMES!

EXHIBIT C- GAPS REMAIN LARGE

Omitted Information What about demographic differences?

Clearly, a few things are being overlooked in the first two slides, which characterize Priority schools as schools with plenty of resources that simply don’t get the job done. Now, there’s a little more to the story than that!

Most notably, as I show below, priority schools have about 80% of children qualified for free lunch, while reward schools have less than 10%! Yet as the NJDOE slide above shows, at the high end these school districts spend slightly under 30% more than the state average. Notably, this shoddy comparison does not compare these districts to others in their own labor market.

Indeed, New Jersey more than other states has put some money into these districts. See “Is school funding fair?” But, let’s be clear, these margins of funding difference, while helpful, hardly make these districts – given their needs – flush with excess resources!

In fact, the strongest empirical research on this topic suggests that it would take something on the order of 100% more per-pupil funding for a district that is 100% low income versus a district that is 0% low income. Here, we are looking at nearly that extreme a low-income differential, but nowhere near that extreme a funding differential! So while these districts are better off than similar districts in other states, implying that they’ve got more than enough to close achievement gaps is a huge stretch.
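To put rough numbers on that claim: one common way to express it is a linear poverty weight, where a weight of 1.0 means a 100% low-income district needs double the funding of a zero-poverty district. The base figure and the weight below are illustrative assumptions, not estimates for New Jersey.

```python
def needed_per_pupil(base, poverty_share, poverty_weight=1.0):
    """Per-pupil need under a simple linear poverty weight."""
    return base * (1 + poverty_weight * poverty_share)

base = 10000  # hypothetical per-pupil cost for a zero-poverty district
need_80 = needed_per_pupil(base, 0.80)  # need at ~80% low income (priority schools)
print(round(need_80))                     # implied per-pupil need
print(round((need_80 / base - 1) * 100))  # implied premium over the base, in percent
```

Under these assumptions, an ~80% low-income district would need roughly an 80% premium over the zero-poverty base – far more than the under-30% premium shown in the NJDOE slide.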

But do those demographic differences matter?

This figure shows just how much the demographic differences represented above matter with respect to student achievement, and specifically how much school demography continues to dictate the performance classification of schools under the NJDOE waiver plan.

As I pointed out on a recent post, NJDOE has basically flagged schools in low income neighborhoods for experimentation and substantial disruption (closure, etc.) with an option to override any/all local input.

Notably, this pattern is likely better than it would otherwise be because of New Jersey’s past efforts to target additional resources to high need settings, including pre-kindergarten programs, smaller class sizes and more competitive teacher salaries than might otherwise exist in these settings.

What about the teacher pay and teacher characteristics claim?

But what about those salaries? The NJDOE slides present a picture of teachers who – by their argument – are certainly paid enough. In fact, setting aside (ignoring entirely) the demography of the schools, the implication of the NJDOE slides is: hey… we’re paying these teachers a few thousand more than the average teacher in the state, but clearly they just aren’t very good, or at least there are a bunch of them who aren’t and need to be fired! Further, they have slightly more experience than teachers in other schools… yet they still stink… indicating that experience clearly doesn’t matter. Notice that they didn’t present degree levels.

Okay… now let’s do a legitimate walkthrough of the most recent available data on NJ teachers with respect to the performance categories of schools. I use the 2011-12 Fall Staffing Reports and I fit a regression model of teacher salaries for all elementary and middle level classroom teachers (secondary later if I get a chance). In that model, my goal is to compare the salary a teacher would make:

  • at the same experience level
  • with the same degree level
  • having the same job code
  • working full time
  • in the same labor market (and type of district in that market)
  • in the same year

That is, I’m comparing apples with apples. This first graph shows the average difference in salary on the above comparison bases, statewide. Statewide, teachers in priority schools are earning a lower salary and teachers in reward schools a higher salary than teachers in “all other schools.” But these averages do mask some important differences across labor markets.
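The apples-to-apples comparison described above can be sketched as a regression of salary on experience, degree, and labor market, plus indicators for school classification. This is only a rough illustration with synthetic data and hypothetical column names, not the author's actual model or the actual NJ staffing files:

```python
# A minimal sketch of the "same everything else" salary comparison described
# above. All data here are fabricated; the real analysis uses the 2011-12
# NJ Fall Staffing Reports, and every column name below is hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "experience": rng.integers(0, 30, n),
    "ma_degree": rng.integers(0, 2, n),  # 1 = holds an MA
    "labor_market": rng.choice(["north", "central", "south"], n),
    "school_type": rng.choice(["priority", "focus", "reward", "other"], n),
})
# Fabricated salary process: market base + experience + degree premium,
# with priority schools paying somewhat less, all else equal.
base = {"north": 55000, "central": 50000, "south": 47000}
df["salary"] = (df["labor_market"].map(base)
                + 1200 * df["experience"]
                + 4000 * df["ma_degree"]
                - 1500 * (df["school_type"] == "priority")
                + rng.normal(0, 2000, n))

# Salary conditional on experience, degree level, and labor market,
# with school classification dummies ("other" as the reference group).
model = smf.ols(
    "salary ~ experience + ma_degree + C(labor_market)"
    " + C(school_type, Treatment('other'))",
    data=df,
).fit()
print(model.params.filter(like="school_type"))
```

The coefficient on the priority indicator is then the average salary difference for otherwise comparable teachers, which is the quantity plotted in the graphs that follow.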

Here are the North Jersey/NY projected teacher salaries by experience level, where Newark carries significant weight in the model. Priority school salaries by experience are in blue, reward in red. On average, the differences are rather subtle: reward school salaries jump ahead in the mid-range, while priority school salaries fall behind there and rise again later. But it’s really important to understand that simply having roughly the same salary does not mean that salary is actually competitive for recruiting and retaining teachers of comparable qualifications! In fact, getting teachers to work in a high need setting is likely to require a substantively higher wage!

As I explain in a recent review of the literature on this topic: With regard to teacher quality and school racial composition, Hanushek, Kain, and Rivkin (2004) note: “A school with 10 percent more black students would require about 10 percent higher salaries in order to neutralize the increased probability of leaving.” Others, however, point to the limited capacity of salary differentials to counteract attrition by compensating for working conditions. See: http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

  • Hanushek, Kain, Rivkin, “Why Public Schools Lose Teachers,” Journal of Human Resources 39 (2) p. 350
  • Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy , Vol.6, No.3, Pages 399–438
  • Clotfelter, Charles T., Elizabeth Glennie, Helen F. Ladd, and Jacob L. Vigdor. 2008. Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92: 1352–70.

Now let’s look at South Jersey, which appears to be the source of most of the deficit that shows up statewide. In the South Jersey/Philly metro, teachers in priority schools are making a much lower wage, especially in the mid-range. Non-classified and reward schools lead the way on salaries across most of the experience range. Hey… is this chicken or egg? Do salaries matter, or are more advantaged schools simply able to pay higher salaries?

One issue that NJDOE appears to be ignoring entirely is that the classification of these schools may actually lead to additional teacher sorting – making it even harder to staff priority schools with high quality teachers down the line.

Here are the degree levels of classroom teachers in these schools – something notably absent in the NJDOE presentation. The differences between priority and reward schools are quite striking.

PRIORITY SCHOOLS HAVE FAR MORE TEACHERS WITH ONLY A BA AND FEWER WITH AN MA THAN REWARD SCHOOLS!

Finally, here are the concentrations of novice teachers, where a sizable body of research literature points to the problem of teacher churn in high need schools and the relationship between high novice teacher concentrations and lower student outcomes.

What about the performance of low income children in New Jersey?

Again, part of the message being presented in the state of the schools address is that New Jersey in particular has failed its low income children – as indicated by the suspect over-time proficiency rate graphs presented above. These graphs are coupled with the funding/resource graphs to imply that funding is clearly unhelpful at best, and harmful at worst, when it comes to fixing the achievement gap.

As I’ve written on this blog before, New Jersey has made substantive gains in recent decades for low income children. Further, to make comparisons of achievement gaps, one must focus on the most comparable measures and most comparable settings. In one recent blog post, I compared Massachusetts, Connecticut and New Jersey, which are most similar in terms of income distributions and the characteristics of those above and below the free/reduced income thresholds. The following graphs show that children of HS dropouts and low income children in NJ and MA both have higher levels of performance and have outpaced the gains in performance of similar children in Connecticut and Rhode Island (but especially CT!).

What has New Jersey done to improve performance of low income children?

I also elaborated in that previous post that one key difference between these states is that NJ and MA, more than the others, have shifted resources toward higher need districts. The first graph shows the disruption over time in the relationship between district income and district resources. MA and NJ have most significantly disrupted this relationship, providing systematically more resources per pupil in lower income districts.

This second graph shows the pattern across districts by poverty in each state. Note that in CT, while a few high poverty districts (Hartford and New Haven) have higher current spending, the CT pattern is less systematic. Further, in those few districts, much of the additional spending is granted through magnet school aid, and thus may have limited positive impact on the districts’ neediest students.

To the best of my understanding, teacher tenure laws are/were strong in each of these states. Few if any districts in these states base teacher evaluation heavily on student test scores – especially during the periods represented in the graphs above – which predate Race to the Top. That is, clearly the differences in low income achievement growth between these states have little/nothing to do with state teacher evaluation policy. To go even further, NJ and CT have relatively small charter school market share, so charter school market share likely is not a major factor either.

Further, as explained in this report, and in this article, substantive and sustained school finance reforms do matter! And the evidence on the effectiveness of these reforms far outweighs the more speculative reforms being suggested as replacements for funding in New Jersey.

What does NJDOE & the current administration propose to do about future funding?

Finally, as I noted previously, the current direction of policy initiatives is to attempt to reshuffle funding away from higher poverty/need districts and toward lower poverty/need ones. Here’s the graph from the previous post.

The Strange Logic of it All?

Coupling this DOOHNIBOR (uh… reverse Robin Hood) strategy with arguments for disruptive reforms in high poverty settings is illogical at best, and reckless and irresponsible at worst.

Children in high poverty settings in New Jersey have made substantive gains over time.

It is quite likely that New Jersey’s investments in the schools and communities of these children have played a significant role in those gains.

Yet, even in New Jersey, where the state has made those efforts, poverty-related disparities do persist and require attention.

There is little or no evidence that expanded charter schooling is substantively improving the outcomes of our lowest income children, largely because those “successful charter schools” of which we most often speak are not serving our lowest income children in any significant numbers, and in some cases are increasing concentrations of disadvantaged children left behind in district schools.

And there’s little evidence that either New Jersey’s failures or gains are a function of an oversimplified good teacher/bad teacher dichotomy, suggesting a need for oversimplified reformy solutions like teacher deselection and/or pay-for-test scores.

Despite the state’s efforts to provide support to high poverty settings/schools, teacher wages still are not where they necessarily need to be in those districts to recruit and retain a high quality applicant pool year after year. There remain disparities in teacher qualifications, including novice teacher concentrations. Teacher quality disparities may be/are an issue – but not in the way they are presently being framed!

These are the basic issues that need to be addressed. They aren’t sexy. They aren’t reformy. They aren’t consistent with the current marketing/messaging of NJDOE.

But they are based on data, data, data, DATA, DATA and more freakin’ Data!

And there’s a lot more where that came from!


Teacher Salaries, Demographics & Financial Disparities in the Chicago Metro

No time to really write much here today, but I do have a few figures to share. I’m posting these mainly because I keep seeing so many ridiculous a-contextual… and in many cases simply wrong statements about Chicago teachers’ salaries.  As I understand it, salaries are not really the main issue in the contract dispute… but rather… the teacher evaluation system. I’ve already written extensively about the types of teacher evaluation frameworks that I believe are being deliberated here, but I’m not following the issue minute by minute.

This post may be most relevant! 

Someone has to just say no to ill-conceived teacher evaluation policies. Perhaps this is the time.

That aside, there are typically two ways one might choose to compare teacher salaries to determine how they fit into their competitive context. One is to compare teacher salaries to non-teachers of similar age and education level. The overall competitiveness of teacher salaries tends to influence the quality of entrants to the profession. The other is to compare teacher salaries – for similar teachers – across districts within the same labor market.

When taking the latter approach, it is also important to consider the demographic differences across settings. All else equal, teachers will gravitate toward jobs with more desirable working conditions. So, in high need urban settings, equal compensation alone would be insufficient.

Bear in mind that I’ve explained on numerous previous posts how Chicago is among the least well funded large urban districts in the nation!

So, here’s a quick run-down on salaries and student populations – and funding equity (or lack thereof) – in pictures and tables.

Figure 1. Concentration of Predominantly Black and Hispanic Schools and Low Income Districts (and resource inequity)

[this paper explains the model behind the funding disparity analysis]

Figure 2. Demographics of Selected School Districts

Figure 3. Salary by Experience Generated from Model of Teacher Level Data (publicly available here)

So, in the mix, Chicago salaries for the first several years of experience are relatively average – or even slightly above. But, they do trail off at higher levels of experience and eventually fall behind. Remember though that comparable salaries would be generally insufficient for recruiting/retaining comparable teachers in a higher need setting.

Other even higher poverty, higher minority concentration districts like Harvey and Dolton are even more disadvantaged in terms of teacher salary competitiveness.

For more on the importance of teacher salaries, see: http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Cheers!

ADDENDUM

I’ve been fielding a few random comments along the lines of “so what… Chicago’s outcomes still stink and they clearly spend more than they should, and pay their teachers more than they should for those stinky outcomes!”  Some of these comments point to higher graduation rates in Springfield, coupled with lower spending. Of course, this comparison assumes that it would cost the same in Springfield and Chicago to accomplish similar outcomes. So, I ran a check based on models I’ve run for recent academic papers. The models are fully elaborated here:

Baker.AEFP.NY_IL.Unpacking.Jan_2012

Specifically, I estimate models to adjust for the various costs faced by districts toward achieving common outcome goals. Those models account for differences in the student population served, differences in regional labor costs and differences in economies of scale (really only affects small districts).

These graphs show the relationship between need and cost adjusted operating expenditure per pupil and student outcome measures. The first uses the state assessment scores, centered around the average district – and averaging these centered scores across all grades and tests. It’s like a combined outcome index of all test scores.
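The combined outcome index described above can be sketched in a few lines: center each test's scores on the average district, then average those centered scores across grades and tests. The data below are made up purely to show the mechanics:

```python
# A rough sketch of the combined outcome index described above: center each
# grade/subject score around the (unweighted) district average, then average
# the centered scores across all tests. All numbers here are fabricated.
import pandas as pd

# One row per district; columns are hypothetical grade/subject scale scores.
scores = pd.DataFrame({
    "math_g4": [210, 230, 250],
    "ela_g4":  [205, 220, 245],
    "math_g8": [260, 270, 300],
}, index=["district_a", "district_b", "district_c"])

centered = scores - scores.mean()      # center each test on the district mean
outcome_index = centered.mean(axis=1)  # average across grades/tests
print(outcome_index)
```

By construction the index averages to zero across districts, so a district's value reads directly as above- or below-average combined test performance.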

Chicago falls pretty much in line here. It has very low need/cost adjusted spending… and, well… low outcomes. But they certainly don’t appear to have lower outcomes than expected given their resources!

The second uses graduation rates.

It’s a little harder to judge what’s going on here… but Chicago still does not appear to be substantially out of line. Graduation rates can vary for any multitude of reasons… including having lower standards for graduation.

In other words, I’m not buying the argument that “yeah… but… Chicago still ain’t cuttin’ it… even with what it has.” Is it possible to have what you need and still not cut it? Yes. It is certainly possible that a district would have far more adequate resources and still do a crummy job. But, Chicago does not appear to be such a case.


Ed Waivers, Junk Ratings & Misplaced Blame: Jersey Edition

I’ve been writing over the past few weeks about NCLB waivers and the schools that are being targeted by states under the waiver program as targets for federally endorsed state intervention. [all of which is built on highly suspect legal/governance assumptions]

My concerns here operate at a number of levels. First, the current Federal Administration has again used an “incentive” application process to coerce states to adopt really, really ill-conceived policy frameworks. These policy frameworks consist of two major parts:

  1. school and district performance classification schemes that are largely if not entirely built on misinterpretation and misrepresentation of generally low quality data; and
  2. poorly vetted, ill-conceived, aggressive/abrupt (closure, turnaround) intervention strategies as likely (if not more so) to do harm as they are to do any good.

So… yeah… it boils down to ramming bad, disruptive restructuring plans down the throats of schools/districts/communities that have been classified by biased, and unjustifiable measures. Further, much of this is being proposed without carefully evaluating whether there exists legal authority to do any of it.

Junk Classifications 101

So, let’s take a look at how the school classifications have played out in New Jersey. New Jersey, like other states, proposed to classify its worst schools as Priority schools (subject to immediate disruptive intervention), the next lowest set as Focus schools (the you’re-next/we’re-watching-you schools), and another set as Reward schools (you kick ass, so we’re gonna give you a prize!).

Matt Di Carlo over at Shanker Blog has given considerable attention to the issue of state school grading systems and the extent to which they measure, or even attempt to measure, school effects on student test scores (not to be conflated with actual school “effectiveness”), or instead simply capture the compounded influence of a variety of student background factors on various accountability measures. In other words, are school ratings simply classifying poor minority schools as bad schools, thus branding their teachers and administrators as necessarily ineffective, while not even attempting to actually discern their effectiveness?

Further, in my last post on New York City schools I showed that while there were subtle differences in mean teacher percentile rank across schools rated as the worst (priority) versus those rated best (good standing), a) there were still many “best” schools where teacher average test score effect was much lower than in “worst” schools and b) schools that had lower income students and more minority students were still much more likely to be rated as among the “worst” even if their teacher “effects” were similar.

New Jersey Classifications

This first figure shows the demographic composition of schools by their classification. Perhaps the most astounding feature of this graph is that priority schools are nearly 100% black and Hispanic, while reward schools have very low levels of low income, black or Hispanic students.

Here are a few maps to illustrate the geographic distribution of priority, focus and reward schools, for those who know Jersey. We can see that the priority schools are concentrated in the larger, poorest urban centers and focus schools in and around other poor cities/towns.

Not surprisingly, the reward schools for the most part are scattered through the more affluent suburbs of northern Bergen County and out through the most affluent areas of north central NJ (Morris/Somerset/Hunterdon). Okay… I was actually surprised that they had concocted a rating system that was so absurdly biased. The second set of maps shows that there are some reward schools in the northern half of the city of Newark (the area with lower black population share).

Underlying Measures for Classification

It was assumed that states would be proposing ratings based on a mix of status and improvement measures… and that doing so would somehow mitigate the extent of demographic bias in the classifications. States could also use subgroup and achievement gap measures. States wouldn’t, for example, simply be proposing to step in and close down all of the majority low income and minority schools and turn them over to private management/or otherwise displace their entire teaching and administrative staffs.

Of course, the measures available in most states aren’t always that useful for sifting through the demographic biases. New Jersey’s are particularly bad. The following figure shows the racial and low income composition of schools by the types of measures that determined their status. Both the progress ratings and the performance level ratings are hugely biased! As it turns out, so are the achievement gap and subgroup measures. Notably, many affluent New Jersey districts (where the reward schools are) likely have too few low income or minority students to even report gaps.

Remedying Poverty by Deprivation?

In my analysis of New York State, I also showed that priority schools are far more likely to appear in school districts that have been most underfunded by the state of New York relative to its own promised school funding formula (the one the state adopted/proposed as a remedy to court order several years back).

Now, New York state has one of the worst state school finance systems in the nation. One in which districts with more needy students have systematically fewer resources. New Jersey is a far cry from New York in this regard. New Jersey has done better than most states with respect to funding equity and adequacy. 

And compared to demographically similar states, New Jersey has some positive results to show for its overall funding effort and for its targeting to high poverty districts.

But lately, New Jersey has started down a different road in state school finance policy. The state has chosen in recent and proposed for future years to significantly underfund their own legislatively adopted state school finance formula.

That in mind, the following slides present an analysis somewhat similar to that presented in New York State, but looking forward instead of back. I’m not proposing some lofty “what should be” funding levels based on academic analysis here. Rather, I’m simply looking at the extent to which New Jersey is currently, and proposed to fund districts under its own formula SFRA. This is the formula that was adopted by the legislature under the previous administration and was subsequently upheld by the state court. More on these issues in a later post.

I’ve not had time to reconstruct my own simulations of SFRA projected out over the next several years, so I’ve used data pulled together by the Education Law Center and SOS NJ, in which SOS NJ projected out the SFRA funding shortfalls for each district for the next 5 years. The figure below shows that in the current year, funding shortfalls from the current legislated formula are smaller in districts that are home to priority and focus schools (note that the formula itself significantly reduced targeted effort to these districts when it was implemented).

But, over the next few years, it is expected that as these schools – priority and focus – are subjected to takeover/overhaul/closure – their districts will be increasingly shorted in their funding with respect to what the formula estimates.  That is, the overall strategy here appears to be to identify high need schools for takeover/closure and then systematically and substantially reduce their financial support over time.

Cumulatively, over the next five years, districts of priority schools stand to lose much more on a per pupil basis (relative to what the formula dictates they should receive) than districts of reward schools.

Put bluntly, the goal is to “reform”(?) priority and focus schools and close achievement gaps by taking all of that harmful money away from them and giving it to others who are far less needy! Yeah… that’ll learn-’em!

This is all strangely consistent with the framing of the commissioner’s report-that-was-not-a-report on school funding and achievement gaps in New Jersey. In that report, Commissioner Cerf essentially proposed (via a series of bad and worse graphs) that the road toward closing New Jersey’s achievement gap should be paved by reducing funding to high need minority districts and shifting it to lower need, lower minority concentration districts. Strange logic indeed.

And these reductions presented above don’t account fully for the plethora of other alterations proposed to the state school funding formula that might further reduce funding to higher need districts – funding to districts that are home to priority and focus schools.

The following posts critique some of the proposed changes, and address other related issues:

Closing Thoughts

As I noted on my previous post, I can hear the reformy outcry now that this is all warranted because we’ve provided poor and minority kids the worst schools and worst teachers for so many years. This is merely an attempt to remedy this persistent, intractable disparity.  The problem with this logic is the placement of blame (in addition to the questionable legal authority and ill-conceived remedies).

We’re not measuring school performance here. There’s no basis in these classification schemes for implying that the teachers and administration are the ones who failed the children. These are junk, gerrymandered classification schemes. They are based on arbitrary distinctions being made with inadequate data/information.

Follow up on Ed Waivers, Junk Rating Systems & Misplaced Blame – New York City’s “Failing” Schools

About a week ago, I put up a post explaining a multitude of concerns I have with the current NCLB waiver process and how it is playing out at the state level. To summarize, what we have here is the executive branch of the federal government coercing state officials to simply ignore existing federal statutes, by granting waivers to state officials who adopt the current administration’s preferred education reform strategies. Setting aside the legal/governance concerns, which are huge, few if any of these preferred strategies are informed by any sound research/analysis.

Equally if not more disturbing is how this waiver process is playing out at ground level, and the message it sends.  Once again (as in Race to the Top) the administration has encouraged the adoption of ill-conceived homogenized policy frameworks across states. States are encouraged, through the waiver application process, to propose how they will abuse data yielded by their generally inadequate data systems to inappropriately classify local public schools as a) in good standing, b) focus or c) priority.  States have been granted some flexibility as to how they will abuse their own data to gerrymander local schools into these categories.

Once schools are placed into these categories – regardless of any validity check on the meaning of those categories – those schools become subject to a prescribed set of largely unproven state intervention options – with yet another layer of complete disregard for statutory and constitutional authority (state statutes & constitution) of states to take such action.

In short, what we have is the federal executive branch  using authority it doesn’t have to grant states authority they don’t necessarily have, to unilaterally impose substantive changes on individual local public schools.

But I digress, yet again.

So, what about those categories? And how do they break down? In other words, which districts and which children are most likely to be subjected to these interventions/experimentation?

I started last week with the state of New York, pointing out that on average, New York State has (ab)used its currently available data to characterize as priority schools and focus schools, primarily those schools that are a) high in poverty, b) high in minority concentration, c) low in taxable property wealth and d) low in aggregate household income per capita.

To rub salt in the wound, even though back in 2003 New York State was ordered to correct deficiencies in school funding across districts, and even though the state itself proposed a relatively inadequate formula to address those disparities, the state has continued to ignore its own formula – shorting by the largest amounts those districts that are home to the most priority schools. (For a thorough analysis, see: https://schoolfinance101.com/wp-content/uploads/2010/01/ny-aid-policy-brief_fall2011_draft6.pdf)

This is where we last left off:

But, my analysis last week largely left out New York City schools. Bear in mind that New York City like many other high need districts around the state continues to be substantially shorted under the state’s own proposed foundation funding formula.

Demographics in New York City

First, let’s do a walk-through of the demographic characteristics of New York City schools by their classification under the new state rating system. Bear in mind that the degree of variation in demography across schools in New York City is somewhat more limited than across the state as a whole. On average, New York City schools have higher minority concentrations and higher low income concentrations than schools in the state as a whole.

The following figures play out largely as one might expect – with priority schools having a) the highest concentrations of low income students, b) elevated concentrations of black and Hispanic students, c) and the highest concentrations of LEP/ELL and special education students.

In other words, it would certainly seem that while reformy-rhetoric dictates that demography should not determine destiny, demography remains a pretty strong determinant of a school’s post-nclb-waivery-classification-status.

When I run statistical tests of the relationship between demographic factors and the likelihood that a school is identified as a “priority” school, I find:

  1. A 1% increase in % Free Lunch (controlling for grade level) is associated with a 4.6% (p<.01) increase in the likelihood of being classified as a “priority” school.
  2. A 1% increase in % Black (controlling for grade level) is associated with a 1.2% (p<.01) increase in the likelihood of being classified as a “priority” school.
  3. A 1% increase in % Hispanic (controlling for grade level) is associated with a 2.1% (p<.01) increase in the likelihood of being classified as a “priority” school.
  4. A 1% increase in % Special Education (controlling for grade level) is associated with a 13% (p<.01) increase in the likelihood of being classified as a “priority” school.

Resources by Demographics in New York City

My next cut at the NYC data explores the position of priority schools with respect to a) low income students, b) special education students and c) per pupil spending (school level).

In the first two figures, we see that priority secondary schools (or at least those serving students through the secondary grades) are somewhat spread out by % free or reduced lunch. Note that bubble/diamond/triangle size indicates school size. However, there don’t appear to be any priority high schools among the lowest poverty schools, and a relatively large share appear among the higher poverty schools. They are relatively average, compared to schools with similar % free or reduced lunch, in terms of their school site spending.

In the second figure, we can see that those high schools identified as priority all have at least a minimum threshold of children with disabilities. But, all high schools with few or no children with disabilities are in good standing. That includes numerous relatively small high schools that also have much higher per pupil spending even though they have far fewer children in special education.

Priority middle schools also have relatively average per pupil spending compared to schools with similar concentrations of low income or special education students – but they all tend to have relatively high concentrations of low income children and children with disabilities.

None of the middle schools with lower concentration of low income children or children in special education are classified as priority schools.

For elementary schools, priority schools (again in red… but somewhat hidden behind others) tend to be very high in concentrations of low income students and moderately high in concentrations of children with disabilities. Again, they are relatively average in their per pupil spending compared to similar schools.

Outcomes in New York City

Now, the findings in the previous section might… might… on the outside chance suggest that there really is something about these priority schools that warrants additional investigation. After all, they do have similar resources to other schools serving high need populations. And while the priority schools tend to have high need populations, other schools with similarly high need populations and similar resource levels are either “focus” schools or “in good standing.”

But, it is still important to remember that the state has NOT identified as priority schools any schools that serve low need populations and do less well in terms of outcomes compared to other schools serving low need populations with comparable resource levels.

For this next graph, I took the NYC teacher level value-added data and averaged them to the school level for all teachers in each school, as in my previous post on NYC charter school teachers. (caveats included on previous post). Note that I’ve removed quite a bit of the variation in these value-added scores by aggregating them to the school level prior to constructing this graph.
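The aggregation step described above, and why it removes variation, can be shown in a few lines. Column names and numbers here are hypothetical, not the actual NYC value-added file:

```python
# A minimal sketch of averaging teacher-level value-added scores up to the
# school level. Data and column names are hypothetical.
import pandas as pd

teachers = pd.DataFrame({
    "school": ["A", "A", "B", "B", "B", "C"],
    "va_score": [0.10, -0.30, 0.20, 0.40, 0.00, -0.10],
})

# One mean per school; the teacher-to-teacher spread within each school
# is discarded, which is why school-level aggregates vary much less than
# the underlying teacher-level scores.
school_means = teachers.groupby("school")["va_score"].mean()
print(school_means)
```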

While the differences in mean teacher value-added do fall in the right rank order – highest mean for good standing, second for focus and lowest for priority, the variations among schools around these means certainly muddy the waters a bit. Yes, the means are different, but the boxes overlap quite substantially!  And the consequences of being in one group versus another are quite substantial.

Shouldn’t it be the case that no priority schools have “better” average teachers (by the limited noisy measure used here) than schools in “good standing?” How does it make sense that there are schools in “good standing” that fall well below the average for teacher value-added of “priority” schools, even if those cases are relatively rare?

For one last statistical shot at teasing out what’s going on here, I ran a logistic regression to figure out a) whether and to what extent these differences in value-added are significant predictors of landing in priority status and b) whether a school that has more low income and minority students is more likely to land in priority status – even if it has the same teacher value-added scores?  That is, even if those statistically “bad” teachers aren’t to blame!

In other words, is there racial/socio-economic bias in the school ratings among schools with similar teacher value added?

Here it is:

For interpretation, I used the average value-added percentile of teachers here (in place of the standardized value-added score).

What this output shows is that for a 1 percentile rank increase in the school average teacher value added, a school is about 8.4% less likely (91.6% as likely) to be classified as priority. Having a higher aggregate value-added teacher percentile rank significantly reduces the likelihood that a school is identified as priority. That makes sense… but certainly isn’t the whole story… and as the graph above shows, there’s a fair amount of variation within each category.

Here’s the problem. Even among schools that have the same aggregate teacher value added percentile:

  1. schools with a 1 percentage point higher share of children qualifying for free or reduced price lunch are 4.4% more likely to be classified as priority
  2. schools with 1 percentage point more black children are 8.7% more likely to be classified as priority
  3. schools with 1 percentage point more Hispanic children are nearly 8% more likely to be classified as priority
  4. schools with 1 percentage point more children with disabilities are 9% more likely to be classified as priority.

And each of these biases is statistically significant, and of non-trivial magnitude as well.
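For those wondering how the “% more/less likely” figures translate from regression output: they read naturally as odds ratios. A quick sketch of the conversion (the coefficient value below is illustrative, chosen to match the ~8.4% figure, not pulled from my actual model):

```python
import math

# The "X% more/less likely" figures above read naturally as odds ratios
# from the logistic regression. A fitted coefficient b on a predictor
# multiplies the odds of priority status by exp(b) per one-unit increase.
def odds_ratio(coef):
    """Convert a logistic regression coefficient to an odds ratio."""
    return math.exp(coef)

# Illustrative only: a coefficient of about -0.088 on the school-average
# value-added percentile implies odds multiplied by ~0.916 per percentile,
# i.e., roughly 8.4% lower odds of priority status.
print(round(odds_ratio(-0.088), 3))
```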

Let’s review what this means in simple, blunt terms.

These findings mean that even in schools where the teachers have the same average value added rank/percentile, schools with more low income, minority and special education children are more likely to face anti-democratic intervention!!!!!

Now that doesn’t seem to make a whole lot of sense when the supposed reason for the need for these interventions is that these poor, minority schools simply have all of the “bad teachers,” and when a central strategy to be employed is to replace/displace the school staff.

Closing Thought

I can hear the reformy outcry that this whole multi-level coercive illegal power grab to impose reformy intervention is in fact a critical step toward guaranteeing that demography isn’t destiny… to make widely known the fact that we’ve continued to provide minority and low income students the worst teachers and the worst schools. And that those teachers and schools must go [even if many of them outperform teachers in schools in “good standing” on their noisy/biased VAM estimate?]! Even if that means complete disregard for our current system of government. And even if that means that the parents, children and employees in these schools have to be forced to forgo additional constitutional and statutory protections (depending on the imposed reform).

It would be one thing if the measurement systems we were using to classify these schools as “failing” were valid for making such decisive declarations. That is, valid for making the assertion that it is the quality of the school and its teachers – and other controllable factors – that are primarily responsible for the performance.

Such arguments would be more reasonable if the disparate impact shown here were actually a disparate impact of school quality variation, rather than a disparate impact of the biased application of school ratings constructed from inadequate measures and inappropriate analysis (utter disregard for error, bias, etc., etc., etc.).

Ed Waivers, Junk Rating Systems & Misplaced Blame: Case 1 – New York State

I hope over the next several months to compile a series of posts where I look at what states have done to achieve their executive-granted waivers from federal legislation. Yeah… let’s be clear here that all of this starts with an executive decision to outright ignore – and intentionally and explicitly undermine – federal legislation. Yeah… that legislation may have some significant issues. It might just suck entirely. Nonetheless, this precedent is a scary one both in concept and in practice. Even when I don’t like the legislation in question, I’m really uncomfortable having someone unilaterally override or undermine it. It makes me all the more uncomfortable when that unilateral disregard for existing law is being used in a coercive manner – using access to federal funding to coerce states to adopt reform strategies that the current administration happens to prefer. The precedent at the federal level that legislation perceived as inconvenient can and should simply be ignored seems to encourage state departments of education to ignore statutory and constitutional provisions within their states that might be perceived as similarly inconvenient.

Setting all of those really important civics issues aside – WHICH WE CERTAINLY SHOULD NOT BE DOING – the policies being adopted under this illegal (technical term – since it’s in direct contradiction to a statute, with full recognition that this statute exists) coercive framework are toxic, racially disparate and yet another example of misplaced blame.

States receiving waivers have generally followed through by using their assessment data in contorted and entirely inappropriate ways to create designations of schools and districts, where those designations then permit state officials to step in and take immediate actions to change the governance, management and whatever else they see fit to change in these schools (and whether they have such legal authority or not).

Priority schools are the bottom of the heap – the bottom 5% – and are subject to the most aggressive and most immediate unilateral interventions (seemingly with complete disregard for existing state statutory or constitutional rights of attending children, their parents or local taxpayers, as well as explicit disregard for existing federal law).

Implicit in these classifications – and the proposed response interventions – is the assumption that priority schools are simply poorly run schools – schools with crummy leaders and lots of bad, lazy, pathetic and uncaring teachers… who have thus caused their school to achieve priority status. They clearly must go… or at least deserve one heck of a shaking up! Couldn’t possibly be anyone else’s fault. After all, the state must have clearly already done its part to provide sufficient financial resources, etc. etc. etc. It must be the bad teachers and crappy principals. That’s all it can be! Therefore, we must have immediate wide-reaching latitude to step in and kick out the bums – and heck – just close those schools and send those kids elsewhere, or convert those schools to “limited public access, privately governed and managed institutions” (privately managed charters) where layers of constitutional rights for employees and students may be sacrificed.

New York State’s Waiver Hit List

New York State Education Department released their hit list of schools recently.

http://www.p12.nysed.gov/accountability/ESEADesignations.html

Here’s a quick rundown of some key characteristics of schools and districts under each designation.

Demographics of Hit List Schools Statewide

I did a quick merge of the above classification data with data from the 2011 NYSED school report cards to generate the graph below, weighted by school enrollment.

https://reportcards.nysed.gov/

Notably, schools in “good standing” are lowest BY FAR in the percent of children qualified for free lunch and the percent of children who are black or Hispanic, and are also generally lower in the percent of children who are limited in their English language proficiency. Race and poverty differences are particularly striking!

In short, the Obama/Duncan administration has given NY State officials license to experiment disproportionately on low income and minority children – or for that matter – simply close their schools. No attempt to actually legitimately parse “blame” or consider the possibility that the state itself might share in that blame.

AFTER ALL, NEW YORK STATE CONTINUES TO MAINTAIN ONE OF THE MOST REGRESSIVE STATE SCHOOL FINANCE SYSTEMS IN THE COUNTRY! 

And the underlying disparities in that system are quite striking!

But perhaps, if we just require all of these priority schools to be turned over to charter operators from New York City, they can work their miracles statewide – for less money – with the same kids – and generating decisively better outcomes????? And of course, dramatically reducing labor costs??? Okay… so the data don’t really support any of that.

Economics of Hit List School Districts Statewide

Here’s another perspective on the districts that house schools across these categories. New York State’s school funding model derives a combined wealth ratio based on two key factors that seem to be strong determinants of the local revenue those districts can raise on their own – income per pupil (aggregate income of residents divided by district enrollment) and taxable assessed property value per pupil. Here’s how these two measures play out across categories (excluding New York City schools, where income and property wealth a) are difficult to accurately compare with the rest of the state and b) would disproportionately skew the comparisons).
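For the curious, the combined wealth ratio concept can be sketched simply (the actual NYSED formula has more moving parts; this is just the idea):

```python
# Simplified sketch of a combined wealth ratio (the actual NYSED formula
# has additional detail): each wealth measure is expressed relative to the
# state average, then the two ratios are averaged.
def combined_wealth_ratio(income_pp, property_pp,
                          state_income_pp, state_property_pp):
    income_ratio = income_pp / state_income_pp
    property_ratio = property_pp / state_property_pp
    return 0.5 * income_ratio + 0.5 * property_ratio

# A district at exactly the state average on both measures scores 1.0;
# lower-wealth districts fall below 1.0 and draw more formula aid.
print(combined_wealth_ratio(100_000, 400_000, 100_000, 400_000))
```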

Say it ain’t so? Really, are the districts of schools that are in “good standing” that much higher income and that much higher in taxable property wealth than the schools that are identified as priority schools? Couldn’t be. The economics of these local communities clearly has nothing to do with it? Does it? It’s those damn lazy, apathetic teachers and their greedy overpaid administrators!

State Funding of Hit List School Districts Statewide

A while back, I wrote this brief:  NY Aid Policy Brief_Fall2011_DRAFT6

In the brief, I explain the layers of problems of the New York State foundation aid formula. I also wrote this blog post: https://schoolfinance101.wordpress.com/2011/11/08/more-inexcusable-inequalities-new-york-state-in-the-post-funding-equity-era/

In the brief, I explain how the current New York State school foundation aid formula is hardly equitable or adequate for meeting the needs of children attending the state’s highest need districts. But to rub salt in the wound – FOR THE PAST SEVERAL YEARS, THE GOVERNOR AND LEGISLATURE HAVE CHOSEN TO DISREGARD ENTIRELY THEIR OWN WOEFULLY INADEQUATE STATE AID FORMULA.

Even worse, when the Governor and Legislature have levied CUTS TO THAT FORMULA, they have levied those cuts such that they disproportionately cut more state aid per pupil from the higher need districts. As of 2011-12, some high need districts including the city of Albany had shortfalls in state funding (from what would be expected if the foundation formula was actually funded) that were greater than the total foundation aid they were actually receiving.

So, here’s one last graph for my statewide analysis – in which I summarize the foundation formula shortfalls per pupil by district, for the schools in each class. The foundation formula shortfall compares:

  1. What should be: full foundation formula aid that would be received per total aidable foundation pupil unit, using the 2011-12 Foundation Level (on Page 21, here: http://www.oms.nysed.gov/faru/PDFDocuments/Primer12-13A.pdf) and multiplying that foundation level times the regional cost index and pupil need index (and then determining the state share per the formula as described in the above linked document).  To be concise, this is the “what should be” target, but is based on last year’s target. So I’m being generous to the state, because the foundation level should have gone up again from 2011-12 to 2012-13 (see p. 21)
  2. What is: actual state foundation aid to be received from 2012-13 Aid Runs, with foundation aid adjusted for Gap Elimination Adjustment and partial restoration of GEA.
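Putting the two pieces together, the shortfall calculation is conceptually simple, even if the real formula carries many more adjustments. A hypothetical sketch, with all numbers made up:

```python
# Hypothetical sketch of the shortfall comparison above; the real
# computation involves the full state-share formula and GEA adjustments,
# and all numbers here are fabricated for illustration.
def foundation_shortfall_per_pupil(foundation_level, regional_cost_index,
                                   pupil_need_index, state_share,
                                   actual_aid_per_pupil):
    # "What should be": foundation level adjusted for cost and need,
    # times the state's share of that target.
    should_be = (foundation_level * regional_cost_index
                 * pupil_need_index * state_share)
    # "What is": actual aid received. A positive result is a shortfall.
    return should_be - actual_aid_per_pupil

gap = foundation_shortfall_per_pupil(6_500, 1.1, 1.4, 0.8, 6_000)
print(round(gap))
```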

So… here are the funding gaps by status, with respect to the state’s own inadequate funding targets:

So, with respect to its own formula, the state is pretty much underfunding everyone. Of course, as I’ve noted on many previous occasions, the state is also dumping disproportionate unneeded tax relief aid into the wealthiest districts, which also happen to have the schools in “good standing.”

What we have here is that the state is most substantially underfunding – WITH RESPECT TO ITS OWN FUNDING FORMULA – the districts that house the majority of children enrolled in schools that the state has identified as “priority” schools.

Hey… here’s an idea for ya’… WHY NOT MAKE IT A PRIORITY TO ACTUALLY FUND THESE SCHOOL DISTRICTS AT LEAST AT THE LEVEL THAT YOU, THE STATE, HAVE DECLARED ADEQUATE FOR THEM?

Is it reasonable to play this subversive “blame the teachers and administrators” game – kicking them out and closing their schools – when it is you… the STATE… that has shorted them disproportionately on funding – FOR YEARS – with respect to your own funding formula benchmarks?

We can discuss the adequacy of those funding benchmarks another day. (read the brief: NY Aid Policy Brief_Fall2011_DRAFT6)

Additional analysis of New York City schools hopefully in the near future!

What do the available data tell us about NYC charter school teachers & their jobs?

This post is about rolling out some of the leftover data I have from my various endeavors this summer. These include data from New York State personnel master files (PMFs) linked to New York City public schools and charter schools, NYC teacher value-added scores, and various bits of data on New York City charter and district schools, including school site budget/annual financial report information.

Here, I use these data combined with some of my previous stuff, to take a first, cursory shot at characterizing the teaching workforce of charter school teachers in New York City. All findings use data from 2008 to 2010.

To summarize the following figures, New York City charter school teachers:

  1. Are relatively inexperienced (but not all in their first 3 years)
  2. Are young (but not all 22 or 23 years old)
  3. Have longer contract years
  4. Are paid well for their experience, with a portion of the additional pay covering the additional time
  5. Have smaller classes to teach
  6. Work in schools that spend much more than surrounding district schools
  7. Work in schools that serve much less needy student populations than surrounding district schools
  8. Have 4th grade students with relatively “average” to below average scale score outcomes compared to schools serving similar populations
  9. In some cases, have 8th grade students with high average scale score outcomes compared to schools serving similar populations
  10. Where data were available, have value-added scores which vary from the citywide average in both directions, with KIPP being the lowest and Uncommon schools the highest (in the aggregate). Notably, Uncommon schools also have consistently smaller class sizes and the fewest low income students.

That’s it. Nothing particularly surprising. Nothing astounding. No miracles, but, a subset of schools that are in some ways different from district schools. Further, they are different in ways that perhaps aren’t that sexy… aren’t that “reformy.” Salaries rise with experience and they just happen to rise faster than district school salaries in some cases. Class sizes are small… especially at the middle school level… even though we are often told that class size reduction is only important in early grades. And, well, the outcomes are kind of mixed.

Experience & Age

The first figure shows that the typical NYC charter school teacher had about 6 years of experience in 2010 (some anomalies occur in the 2008 data). NYC district school teachers have about double that. But it should be noted that some of this difference is likely explained by the average age of the schools themselves, in addition to higher turnover (a topic for a later date).

One would expect that as the schools mature, their staffs will also mature somewhat – unless they actually make a concerted effort to excess teachers beyond a specific experience level. If they do begin to accumulate more experienced staff, they will also accumulate the higher expenses associated with a more experienced staff and accumulated retirees (assuming any stay long enough to retire from these schools).

The average charter school teacher is between 25 and 30 years old, compared to the average district school teacher being around 40.

I’ll admit, I’m having some trouble reconciling average years of experience being around 6 and age around 25, but it would appear for the most part that the experience and age line up such that most went directly into teaching from their undergraduate studies. That is, there aren’t likely many later career changers here.

Contracts & Compensation

Personnel master file data report that teachers in Harlem Village Academies, Achievement First, KIPP, Uncommon and Lighthouse schools are on 11 and 12 month contracts. Harlem Children’s Zone was similar but did not report 2010 data. By contrast, BOE contracts were 10 months (according to averages taken from full time individual teachers in the data).

Average salaries were highest in 2010 in KIPP schools, with BOE schools second, and other high profile charter chains not far behind. However, these averages are a function of degree levels, experience levels and contract months.

So, we can use a regression model to isolate those effects and compare “otherwise similar” teachers, and to determine the wage differential associated with the additional contract months (to the extent that contract months vary within the charters).

This next figure uses a regression model to compare the salaries of “otherwise similar” teachers by job classification, full time status, degree and experience level. On average, a teacher with all of the same attributes can make a salary that is $8,000 higher in an Uncommon school and nearly $6,000 higher in an Achievement First school. Teachers in HCZ and KIPP schools make an annual salary about $4,000 higher than those in BOE schools. Controlling for months worked, KIPP and HCZ salaries are comparable to district salaries, while Uncommon and Achievement First salaries remain higher.
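For those interested in how such a comparison works mechanically, here’s a minimal sketch with fabricated data – regress salary on the controls plus a network dummy, and read the dummy’s coefficient as the “otherwise similar teacher” differential (the real model includes more controls than this):

```python
import numpy as np

# Minimal sketch of the "otherwise similar teachers" comparison. All data
# here are fabricated; the real model also controls for job class, degree
# level, etc.
rng = np.random.default_rng(0)
n = 200
experience = rng.integers(0, 15, n).astype(float)
months = rng.choice([10.0, 11.0, 12.0], n)     # contract months
network = rng.integers(0, 2, n).astype(float)  # 0 = BOE, 1 = charter network

# Simulate salaries with a built-in $4,000 network differential.
salary = (45_000 + 2_000 * experience + 1_500 * (months - 10)
          + 4_000 * network + rng.normal(0, 1_000, n))

# OLS: the network dummy's coefficient is the salary differential for
# teachers with the same experience and contract length.
X = np.column_stack([np.ones(n), experience, months, network])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)
print(round(coef[3]))  # recovers roughly the simulated 4,000 differential
```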

I’ve often asserted that many teachers might be willing to, or interested in, working a longer year for a higher annual salary. After all, it might be desirable to earn more doing what you do best and are professionally trained to do, rather than searching for other part-time summer work that does not always take advantage of your expertise. That may not be the case for all teachers, but it would likely be the case for at least some (as it is for these charter teachers). More pay for more time. Makes sense.

The following figure uses the regression model to lay out the predicted salaries for a teacher in 2009 across experience levels. Notably, much like BOE school salaries, NYC charter teacher salaries do increase with experience. In particular, consistent with what I’ve shown previously for salaries in Texas, New Jersey and Connecticut high profile charters, Achievement First and Uncommon schools pay what appears to be a significant salary premium. None pay significantly less than district schools, and none are flat with respect to experience (like Gulen school salaries in NJ or TX). KIPP and others track for the most part with district salaries.

Resources & Working Conditions

Now, not to beat an issue into the ground, but the following figure summarizes the per pupil operating expenditure differences between NYC charters and district schools (with the data becoming cleaner, more accurate and precise each time I take a swipe at them). These margins of difference are ever so slightly higher than those in my previous study, but the model is similar. High profile charters in NYC substantially outspend district schools serving similar student populations.

I’ve addressed critiques of these figures previously and have provided sensitivity analysis and discussion here: https://schoolfinance101.wordpress.com/2012/05/07/no-excuses-really-another-look-at-our-nepc-charter-spending-figures/

The full length report on these and related figures is here:

  • Baker, B.D., Libby, K., & Wiley, K. (2012). Spending by the Major Charter Management Organizations: Comparing charter school and local public district financial resources in New York, Ohio, and Texas. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/spending-major-charter.

Now, for this next figure, I use a regression model to compare the demographics of NYC charter schools to district schools, where the dependent measure is the population characteristic, and the independent measures include a) grade range served, b) year of data and c) borough of location of the school. That is, these comparisons are of the student population compared to same grade level schools in the same borough.

Uncommon schools in particular have uncommonly low rates of the lowest income children. This is typical in New Jersey as well (for North Star Academy in Newark). They also have low LEP/ELL rates, but so do they all. Achievement First and KIPP schools also have low rates of the lowest income children – quite a bit lower than district schools. Success Academies in particular have very low rates of LEP/ELL children. All have relatively low rates of children with disabilities.

These differences – when combined – make for very different school environments than for district schools in the same borough.

Now, despite serving much less needy student populations, these charter schools tend to have much smaller class sizes than district schools serving the same grade levels. HCZ schools have small elementary and middle school class sizes – from six to eight fewer students in a class than district schools. Uncommon schools have very small middle grades classes, as do Village Academies and Achievement First schools. Interestingly, despite their financial dominance, KIPP schools have smaller class sizes than district schools, but not to the extent of the others (note that some of KIPP’s financial resources are allocated to such endeavors as KIPP to College).

A really interesting twist here is that the emphasis on smaller classes appears to be at the middle school level. My gut instinct as a former middle school science teacher tells me that this makes sense. But if one sticks strictly to the strongest findings in the research literature, one would expect targeting at the elementary level and would have less justification for targeting small classes at the middle level. But perhaps small middle school class sizes are actually part of the secret sauce of successful charter schools?

NYSED School Report Cards: https://reportcards.nysed.gov/

Outcomes

Now, with smaller class sizes, higher paid teachers working longer years, and less needy students, we would not only expect a higher average performance level, but we’d also likely expect higher average rates of gain in these schools. And lottery based studies of NYC charters have revealed some positive findings of differences in performance between lotteried-in and lotteried-out students. It’s critically important to understand that while the sorting process may have been randomized in these studies, the contexts – peer groups, etc. – into which students have been sorted are anything but. Again, different peers, different levels of resources – class sizes, length of year, teacher salaries, etc. And some of this stuff is the stuff of difference that is of particular interest for setting policy.

The figures I offer below are merely descriptive. Again I use regression models to compare the outcomes of schools that are similar in terms of students served. But regression models in this case are used for descriptive purposes (as they often are). I’m really just describing the average difference in outcomes between the charters and schools serving similar populations. In the third slide below, I do use the average teacher value-added outcomes for those schools – but note that very few teachers in any given NYC school actually have sufficient numbers of students in tested grade levels for generating the value added estimates.

In the first figure here, we see that 4th grade assessment performance in many NYC charters is, on average, lower than in district schools. The vertical axis indicates the number of points (around a mean near 680) above or below the mean of similar district schools. Most differences are well less than 1 standard deviation (which is about 17 points – and a standard deviation would be a pretty big difference; again, these are levels, not gains). 4th grade math scores are higher in Uncommon and Achievement First schools. Most other 4th grade scores are similar to or lower than district schools. In v1 (version 1) models, I use % free or reduced price lunch (which varies little across many of these schools) and in version 2, I use percent free lunch only. Charter schools compare less favorably when I evaluate them on the basis of their free lunch populations alone.
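To make the standard deviation comparison concrete, converting point gaps into effect-size units is just division:

```python
# The post cites a scale-score standard deviation of about 17 points
# around a mean near 680. Converting point gaps into SD units makes the
# "well less than 1 standard deviation" claim concrete.
def points_to_sd(points, sd=17.0):
    return points / sd

# e.g., a 5-point charter-district gap is under a third of an SD:
print(round(points_to_sd(5), 2))
```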

Uncommon Schools and Village Academies appear somewhat stronger on their eighth grade performance, while others are a mixed bag. Again, these are level (scale) scores, relative to schools serving similar populations. Now, there has been at least some discussion of attrition as a factor in Village Academy performance. I’ve personally found attrition to be a major issue in Uncommon’s Newark school, North Star. But that’s an issue for another day. Matt Di Carlo over at Shanker Blog did a nice explanatory piece on the role of attrition the other day.

Here, what we have are average scale scores that are quite a bit higher than “demographically similar” schools. That said, when I account for free lunch instead of free or reduced lunch, those differences are somewhat muted. That is, when I compare against schools with comparable shares of the lowest income children, positive charter performance differences are muted and negative ones larger.

Again, Harlem Children’s Zone scores are relatively low, especially when comparing on the basis of free lunch (v2 models). KIPP scores are pretty much comparable to demographically similar schools, and Achievement First and others are a mixed bag.

Finally, this figure shows the average value-added scores for teachers in these schools. Uncommon Schools are the only ones that appear to have noticeably above average teachers. One might stretch the data here beyond their capacity to argue that perhaps Uncommon schools are getting results for the premium they pay their teachers… and the fact that they generally provide those teachers with smaller classes. Not an entirely unreasonable story, though a) the underlying dynamic is likely far more complex than this and b) the value added metrics used here are anything but stable and/or decisive.

NYC Teacher Value Added Scores: http://www.ny1.com/content/top_stories/156599/now-available–2007-2010-nyc-teacher-performance-data#doereports

Implications

So, what does this all tell us? Perhaps not much… while at the same time, quite a bit more than we may have already known. At the very least, these data should serve to clear up common misconceptions & reformy misrepresentations of the NYC charter sector.

No… these schools are not staffed by Peace Corps-like, minimum wage missionaries (not that I’ve really heard this one all that much). They’re getting paid reasonably, and getting paid a premium for at least some of the extra time and effort. Is it enough to sustain the model? Can these schools afford this down the road? Who knows?

No… these schools are NOT proving that money and class size don’t matter.  That would be difficult to prove with a subset of schools that for the most part spend more than district schools and provide smaller class sizes.

And No… these schools are not totally and invariably kicking butt on all student outcome measures, be they performance level measures or performance gains.

And finally… No… these schools are not doing it all… with the same kids!

But this isn’t just a laundry list of stuff to try to hold against these schools. Rather, it’s an attempt to lay out more clearly with publicly available data, what’s going on here, in an effort to move the conversation forward, beyond the usual talking points and into stuff that may really matter!

Helicopters can improve minority college attendance & other misguided policy implications: Comments on the Brookings “Voucher” Study

Here’s my quick response to the Brookings report released yesterday on the long term effects of vouchers on a randomized pool of participants in New York City.

Let’s say I conducted a study in which I rented a fleet of helicopters and used those helicopters to, on a daily basis, transport a group of randomly selected students from Camden, NJ to elite private day schools around NJ and Philadelphia. I then compared the college attendance patterns of the kids participating in the helicopter program to 100 other kids from Camden who also signed up for the program but were not selected and stayed in Camden public schools. It turns out that I find that the helicopter kids were more likely to attend college – therefore I conclude logically that “helicopters improve college attendance among poor, minority kids.” The simple policy solution then is to rent more helicopters and use them to send kids, well, wherever. After all, it’s the helicopters that matter????? Clearly, that would be a ridiculous assertion.

Similarly, the “major” findings of the new Brookings study were that, in particular, black participants in the voucher program seemed to have an increased likelihood of attending college. The study involved a randomized pool of individuals who qualified, applied and received the vouchers (and attended a private school) and those who qualified, applied and didn’t receive the vouchers.

The study purports to find [or at least the media spin on it does] that “vouchers” as a treatment worked especially for black students. I won’t spend my time quibbling over design and statistical issues here, because I think the simplest issue to address – the big one – is the definition of the treatment itself.

This is not necessarily a study of whether “vouchers” as a treatment affect long run outcomes. Just like my hypothetical above had little to do with helicopters! Rather, it is a study of using “vouchers” as a funding mechanism to place a relatively small sample of low income minority children into a set of schools with fewer low income and minority peers. Schools that happen to be private schools. So it’s not really a study of whether “private schools” are more (or less) effective either. As such, the study really has little or no policy implications for “voucher systems” themselves, or private schooling.

Personally, I was struck to find that the only reference to peer group or peer composition was in a single sentence at the end of the report – but this sentence really says it all:

To the extent that student learning is dependent on peer quality the impacts reported here could easily change.

Yeah… that’s no throwaway line here! In fact it has the potential to completely re-frame the entire paper.

So what is the treatment?

Well, the use of a “voucher” system to alter the educational setting for a group of kids is most certainly not the treatment. Voucher is merely the mechanism used here to achieve the treatment.  It may be a policy mechanism that is useful under limited circumstances to achieve changes in educational setting. But the “voucher” is NOT the treatment.

And the use of vouchers in this narrow context may have few if any policy implications for new voucher programs in Indiana or Louisiana if they do not result in low income minority students being better integrated with higher income peers predisposed to have college aspirations.

The sector of schooling is most certainly not the treatment either – public or private school – catholic or non-religious school. While the sector of schooling is a variable in this analysis, it also may or may not have anything to do with the characteristics of the educational setting that most influenced college going behaviors. There’s a whole separate body of literature on that topic that is notably absent in this report. And it’s quite possible that we could find a policy mechanism which has nothing to do with either vouchers or private, or catholic schools which shifts more low income minority students into educational settings that promote college going behaviors.

So before we get into some huge tizzy about “voucher” and “private school” effects, let’s go back and define the treatment in this study for what it actually is, and then try to figure out what it means in terms of effective policies for increasing college attendance among low-income and minority students.

Tangent:  I found this paragraph particularly interesting:

The voucher offer also has a much larger impact than does exposure to a more effective teacher. Elementary school teachers who are one standard deviation more effective than the average teacher are estimated to lift their students’ probability of going to college by 0.49 percentage points at age 20, relative to a mean of 38 percent, an increment of 1.25 percent (Chetty et al. 2011b). If one extrapolates that finding (as the researchers do not) to three years of effective teaching, the impact is 3.75 percent. The impacts identified here for African American students—an increase of 24 percent—are many times as large.

Basically, what this paragraph/extrapolation boils down to is the distinct possibility that the variations in setting (largely peer group) achieved in this voucher study yield what appear to be stronger effects than the measured variations in teacher effect in the Chetty study (and those teacher-effect measures are quite noisy, which may matter here). In other words, quite possibly, peer composition is actually the strongest in-school effect on long-term student outcomes!?? [Note that this is speculative, based on the somewhat questionable comparison made in the paragraph above.]
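For what it’s worth, the back-of-the-envelope arithmetic in the quoted paragraph can be reproduced in a few lines. This is just a sketch of the quoted extrapolation, not of either study’s actual model; all the input figures come straight from the quote (which rounds its intermediate results to 1.25 and 3.75 percent, slightly below what 0.49/38 actually gives):

```python
# Reproducing the back-of-the-envelope comparison from the quoted paragraph.
# Inputs are the figures given in the quote; nothing here reflects the
# underlying studies' models or standard errors.

base_rate = 38.0          # mean college-going probability at age 20 (percent)
teacher_effect_pp = 0.49  # 1-SD teacher effect, in percentage points (Chetty et al.)

# Relative effect of one year with a 1-SD more effective teacher
# (the quote rounds this to 1.25 percent).
one_year_relative = teacher_effect_pp / base_rate * 100

# The report's extrapolation to three years of such teaching
# (the quote rounds this to 3.75 percent).
three_year_relative = 3 * one_year_relative

# The voucher study's reported relative impact for African American students.
voucher_relative = 24.0

print(f"1-SD teacher effect: {one_year_relative:.2f}% relative increase")
print(f"Three-year extrapolation: {three_year_relative:.2f}% relative increase")
print(f"Voucher impact is roughly {voucher_relative / three_year_relative:.1f}x "
      f"the three-year teacher extrapolation")
```

Even on the quote’s own (questionable) terms, the voucher/setting effect comes out several times larger than the extrapolated teacher effect, which is what makes the peer-composition reading above so tempting.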

Second Tangent: Quite honestly, the cost comparison comments in the following paragraph are so shoddy and so poorly documented that they do much to undermine the report, and they should be cut.

These impacts are somewhat larger than the long-term impacts of the much more costly class-size intervention in Tennessee. Dynarski et al. (2011) estimate that being assigned to a smaller class in the early elementary grades increased college enrollment rates among African Americans by 19 percent (5.8 percentage points on a base of 31 percent). Reduction of class size in Tennessee was estimated to cost $12,000 per student (Dynarski et al. 2011), whereas the social cost of the SCSF intervention was about $4,200 per student to the foundation and reduced costs to the taxpayer by reducing the number of students who would require instruction within the public sector. If the government had paid for the voucher, the expenditure could have taken the form of a simple transfer from the public sector to the private sector, which in the long run need not add to the per-pupil cost of education. In fact, it could decrease costs because Catholic schools spend less on average than public schools. Around the time of the SCSF evaluation, New York City public schools spent more than $5,000 per student, as compared to $2,400 at Catholic schools (Howell and Peterson 2006, 92).

First, this paragraph and the sources it cites provide little if any solid evidence regarding “costs” and “expenditures.” The only support for the claim that Catholic school costs were about $2,400 per pupil is that the Archdiocese of NY said so at the time (p. ii), and the $5,000 public district figure is ballparked out of nowhere. The paragraph also compares estimated expenditures on one strategy (class size reduction) in one context (Tennessee) at one point in time with partial subsidies for another strategy in another context at another point in time. A while back, I criticized another report, also by Matt Chingos (nothing personal, I generally like his work), in which he referred to class size reduction as the “Most Expensive School Reform” without legitimately comparing the costs of CSR to any other reform. The paragraph above is strikingly similar in its gaping holes of logic and evidence. I could go on and on. The authors also make no attempt to provide reasonable assumptions and estimates for the full cost of operating and scaling up a large-scale voucher system. As such, this stuff really has no place in this paper. For a more thorough discussion/analysis of public/private school spending, see: http://nepc.colorado.edu/publication/private-schooling-US

Since the authors didn’t actually conduct any real analysis of schooling resources and finances, they really shouldn’t have gone there in their conclusions. This kind of back-of-the-napkin, half-baked cost-savings assertion really cheapens a study that does have some interesting findings to offer.