Ed Schools – The Sequel: Rise of the Intellectually Dead

Warning: The following post contains the elitist musings of an ivory tower professor who has only professed at major research universities, who attended a selective liberal arts college & received his doctorate from an Ivy League institution (well… a branch of one… Teachers College at Columbia).

A while back, I wrote a post on “ed schools” the point of which was to show the shift in production of degrees that had occurred between the early 1990s and late 2000s. When I wrote that first post, ed schools were coming under fire from DC think tanks like the National Council on Teacher Quality (NCTQ), which seemed largely unable to understand the most basic issues of degree production in education (I’m unsure they’ve learned much since then!). And now, it would appear that our esteemed U.S. Secretary of Education has decided that ed schools and teacher preparation will be of primary interest in the second term of this administration.

The problem, as I previously indicated, with most of this rhetoric about ed schools – their supposed failure of society and production of generations of ill-equipped American youth – is that the rhetoric of “ed school” assumes a static definition of ed school: one rooted in a 1950s-to-1970s characterization of the regional public teachers college, and built on an assumption that teachers obtain their training and a teaching credential – for the one thing they teach – through a single institution as the core of their undergraduate education. Being “teachers colleges,” these schools are obviously lax on admission standards, have curricula that are neither academically rigorous nor practical, etc. etc. etc. (the conflicting rhetoric in this regard is fun to follow – too much theory… no practical application… but not academically rigorous, etc.), and well… simply must be replaced by a vast set of alternative routes/pathways/programs!

In short, the vast majority of the critique of teacher education assumes this monolithic AND STATIC entity of teacher preparation housed in state colleges and universities. Emporia State in Kansas – that’s you! Montclair in NJ – that’s you! West Georgia – you too! And those state flagships with teacher prep programs? Damn you Rutgers, Michigan, Illinois for producing increasing numbers of underqualified teachers! The wrath of NCTQ and now Arne Duncan will be upon you!

But degree & credential production in education has not entirely been static over time. In fact, anything but! There are clearly emerging trends. And if we believe that there really has been a decline in the academic quality of those receiving credentials in education, it would behoove us to take a close look at those trends. But since no one else seems to be doing that – especially not NCTQ – I figured I should take another shot at it.

A couple of key points are in order. FIRST – it is important to understand that these days, many initial teaching credentials are already granted through alternate routes outside of undergraduate programs, and to individuals with degrees in fields other than education. In addition to non-degree alternate routes, which I cannot even capture with the data in this post, many initial teaching credentials are granted through graduate programs – at the masters degree level – and an even larger share of additional (second/third) credentials received by practicing teachers is obtained through graduate programs. Individual teachers may have collected a handful of different credentials, all from different institutions.

So, let’s take a look at undergraduate and masters degree production trends.

Undergraduate Training

Undergraduate degree production in “education” fields generally (most of which involves teacher preparation) has been mostly stable over time. Using 1994 Carnegie Classifications (the most stratified system of Carnegie classifications of the past few decades: see end of post for definitions), we see that what were the public “teachers colleges” (Comprehensive 1… as opposed to those labeled as “Teachers Colleges”) still hold the lion’s share, but their share has declined over time. Research Universities, which produced around 14% in 1990, now produce closer to 10% (those are your state flagships & major private universities). So… the major traditional public college and university role is declining slightly in market share.

That loss is being picked up by what is actually a very small subset of colleges – colleges that also tend to be relatively small and not so prestigious. These are the “LA – Liberal Arts 2” colleges. It’s quite striking that growth in this subset is sufficient to shift the market shares of major state universities and comprehensive regional colleges. Incidentally, LA2s were among the first to rapidly expand their production of online and distance MBAs… around the same time they started tapping the ed market. (This period overlaps with a trend among financially strapped, less selective colleges making the move to change their name to “university.”)

[Figure: Slide19]

Patterns are also relatively stable by the Barron’s competitiveness ratings. Notably, colleges right in the middle of the competitiveness ratings have the largest market share. I know this conflicts with reformy ideas that all ed degrees are produced by the worst colleges – but at the undergrad level, it’s a pretty normal distribution. Competitive colleges have a consistent 50% market share. They are not the top third. They are also not the bottom! They are… the middle… as one would expect for a profession with modest (at best) earnings expectations.

The next two categories out from there – one up (Very Competitive) and one down (Less Competitive) – each have just under 20%. But, the “less competitive” group seems to be showing an uptick (they are also heavy on those LA2s!). Highly Competitive and Non-Competitive are also relatively comparable, but with non-competitive slightly outpacing highly-competitive.

 

[Figure: Slide20]

Masters Degrees

It’s in the production of masters degrees where the real fun stuff is happening. First, let’s take a look at what’s been happening across institutions by type. Note that Comprehensive colleges were, in large part, designed to deliver bachelors and masters degree programs and many from early on had large education programs and teacher preparation programs in particular. But we see in the figure below that the market share of masters degree production for Comp1s has declined over time. So too has the market share for masters degrees for Research Universities (including state flagship universities).

Amazingly, it’s those LA2s again that have risen dramatically in degree production. These lower tier liberal arts colleges (we’re not talkin’ Williams, Haverford, etc… which are LA1s. Those schools aren’t crankin’ up masters in Ed… and they’re also not changing their name to Williams University, etc.) have become the second largest producers of masters degrees in education. Bear in mind that liberal arts colleges, as classified in the 1990s, were never really intended to be handing out graduate degrees – much less massive numbers of them. LA2s have gone from only about 1% of ed masters production in 1990 to over 10% by 2011.

[Figure: Slide23]

The next figure reclassifies these schools by the competitiveness of their undergraduate programs (since we lack competitiveness measures for graduate programs). What we see here is that masters programs housed in “LESS COMPETITIVE” undergraduate colleges are the ones that are creeping up in market share. To a significant extent, these are online, credential granting programs run through LA2s.

[Figure: Slide24]

So, what we have here is a rather dramatic expansion of graduate credentials in education being handed out by what some (including myself) might characterize as relatively low quality, non-selective undergraduate institutions that were never meant to be handing out graduate degrees to begin with. But perhaps that’s just my ivory tower, Research I perspective.

Now let’s take a look at the top 20 Masters degree producers in the early 1990s and then in the most recent three years. In the early 1990s, the largest producers were crankin’ out a few thousand over a three year period. These included some early entrants – pre-online era – to the degree mass-production game like Lesley College and National Louis U. But, there were also many programs housed in brick and mortar public universities in the mix, including both state flagships (UT Austin, Ohio State) and other pretty solid academic schools (Harvard, Columbia/TC). Arguably, these [the public colleges in particular] are the schools now taking the brunt of the blame for the state of teacher preparation – Northern Arizona, Northern Colorado, Eastern Michigan, etc.

[Figure: Slide26]

But who has actually been crankin’ out the masters degrees and credentials in recent years? And, if there is a decline and pending crisis in education training/preparation, who might instead be to blame? Below is the more recent production of graduate degrees/credentials. First and foremost, we’ve now got schools crankin’ out over 3,000 per year – or 9k per 3 years. Phoenix, Walden and Grand Canyon together produce more masters degrees than many of the next several combined. There is a substantial gap in production before one reaches the first traditional teacher preparation program on the list.

Is it possible that the emphasis on traditional “ed schools” within state boundaries as the obvious source of our problems is misplaced?

[Figure: Slide25]

Graduate Degree Production in Educational Leadership/Administration

I’ve got one last bit to address here and that’s training in educational leadership/administration, a topic I’ve written about in my academic publications (see below). Degree production in educational leadership has followed many of the same trends we see in education more generally. And there has been a comparable push to provide more “alternatives” for gaining access to principal, supervisor and district leadership credentials. NOTE – if you think some of what I’m displaying here makes education grad degree production look like a cesspool, I assure you that when it comes to the production of MBAs, the picture is equally if not even more ugly! (One can buy an MBA almost anywhere… perhaps even more easily than a degree in ed admin… and in many cases which I have observed directly, the level of academic rigor, even within major universities, is hardly different!)

The figure below shows that major research universities have played a declining role in the production of graduate degrees (all levels) in educational administration. Again, it’s those entrepreneurial LA2s that are crankin’ up the production – moving into 2nd place among institution types.

[Figure: Slide7]

Now let’s take a look specifically at doctoral degrees. One can almost kind of understand the mass production of masters degrees, which in education are often tied to obtaining specific certifications, perhaps in additional fields of specialization (special education, etc.). Yes, in many states, administration degrees are structured such that the masters is coupled with building level certification and the doctorate with district level certification. Even then, how many doctorates does any one institution need to be cranking out? And who should be granting that level of degree?

By 1990s Carnegie classifications, doctorates should have been largely granted by Research and Doctoral Universities. Comprehensive colleges were generally masters producing schools, not doctoral granting institutions. These strata were, in fact, intended to reflect the capacity of institutions to grant certain types/levels of degrees.

Already by the early 1990s, Nova Southeastern had pioneered mass production of the education doctorate. But outside of the Nova model, most major producers of doctorates were actual universities (okay… a bit harsh… since Nova actually is a university, and has a pretty well defined, conventional curriculum for its graduate programs).

[Figure: Slide12]

In the most recent years, Nova Southeastern has remained strong… but now right up there are such stellar academic powerhouses as Walden, Capella and Phoenix! (and Argosy)… many of which probably occasionally show up as side-bar advertisements on my blog! (as they do when I log into facebook).

A notable change in the past few years is the entrance of USC and Penn to this mix, with their new practitioner preparation programs, which apparently crank out a sizable number of doctorates per year. This raises the interesting question of whether leading universities should try to get into the mass production game. Is the system overall better for it, even if those institutions have to sacrifice some quality in order to mass produce? We’ll have to see if they can keep up with the Waldens and Capellas over the next several years.

[Figure: Slide14]

Closing Thoughts

To me, these trends are pretty astounding, and serious consideration of these trends must play into any discussion that alarmists might have about the supposed decline in the quality of teacher and administrator preparation (to the extent these alarmists give serious consideration to anything).  Those ringing these alarm bells seem more than happy to suggest that the obvious problem lies with traditional “ed schools” (read, regional and state flagship public colleges and universities) and that the obvious solution is to provide more alternative routes, online options – teacher preparation by MOOC…  (and likely not a MOOC delivered by Stanford U. faculty… but rather through Walden, Capella and the like) & expansion of schools relying on imported, short term labor supply.

I also find it strange, to say the least, that those who argue that the problem is that our teachers don’t come from the upper third of college graduates seem to believe that the solution is to expand the types of programs that tend to grow most rapidly among colleges that cater to the bottom third (less & non-competitive). To those reformy alarmists who feel they’ve identified the obvious problems and logical solutions, the above data should make sufficiently clear that we’ve already gone down that road.

Further, I’m thoroughly unconvinced that new models purporting to be more selective in the teachers they prepare, but relying largely on a self-credentialing model (we use our teachers to credential our teachers… and only accept as graduate students those who work in our schools?) focused primarily on ideological & cultural indoctrination, are a step in the right direction. I have little doubt they’ll find a captive audience to self-credential and maintain a viable “business model” (by requiring their own teachers to take courses delivered by their peers & bosses to achieve the credentials needed to keep their jobs), but this endogenous, back-patting, self-validating model is no way to train the future teacher workforce.*

All of this raises the question: what next? Where do we go from here? How do we achieve integrity and quality in the production of degrees and credentials, and more broadly in the training and preparation of future teachers and administrators? I really don’t have any answers for these questions right now. But I’m pretty sure that the last two decades have taken us in the wrong direction!

Related Research

Baker, B.D., Orr, M.T., Young, M.D. (2007) Academic Drift, Institutional Production and Professional Distribution of Graduate Degrees in Educational Administration. Educational Administration Quarterly, 43(3), 279-318.

Baker, B.D., Fuller, E. The Declining Academic Quality of School Principals and Why it May Matter. Baker.Fuller.PrincipalQuality.Mo.Wi_Jan7

Baker, B.D., Wolf-Wendel, L.E., Twombly, S.B. (2007) Exploring the Faculty Pipeline in Educational Administration: Evidence from the Survey of Earned Doctorates 1990 to 2000. Educational Administration Quarterly, 43(2), 189-220.

Wolf-Wendel, L., Baker, B.D., Twombly, S., Tollefson, N., & Mahlios, M. (2006) Who’s Teaching the Teachers? Evidence from the National Survey of Postsecondary Faculty and Survey of Earned Doctorates. American Journal of Education, 112(2), 273-300.

1994 Carnegie Classifications

  • Research Universities I: These institutions offer a full range of baccalaureate programs, are committed to graduate education through the doctorate, and give high priority to research. They award 50 or more doctoral degrees each year. In addition, they receive annually $40 million or more in federal support.
  • Research Universities II: These institutions offer a full range of baccalaureate programs, are committed to graduate education through the doctorate, and give high priority to research. They award 50 or more doctoral degrees each year. In addition, they receive annually between $15.5 million and $40 million in federal support.
  • Doctoral Universities I: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the doctorate. They award at least 40 doctoral degrees annually in five or more disciplines.
  • Doctoral Universities II: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the doctorate. They award annually at least ten doctoral degrees—in three or more disciplines—or 20 or more doctoral degrees in one or more disciplines.
  • Master’s (Comprehensive) Universities and Colleges I: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the master’s degree. They award 40 or more master’s degrees annually in three or more disciplines. [Includes typical regional, within-state public normal schools/teachers colleges]
  • Master’s (Comprehensive) Universities and Colleges II: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the master’s degree. They award 20 or more master’s degrees annually in one or more disciplines.
  • Baccalaureate (Liberal Arts) Colleges I: These institutions are primarily undergraduate colleges with major emphasis on baccalaureate degree programs. They award 40 percent or more of their baccalaureate degrees in liberal arts fields and are restrictive in admissions.
  • Baccalaureate Colleges II: These institutions are primarily undergraduate colleges with major emphasis on baccalaureate degree programs. They award less than 40 percent of their baccalaureate degrees in liberal arts fields or are less restrictive in admissions. [Includes many cash-strapped, relatively non-selective, smaller private liberal arts colleges]
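The definitions above amount to a simple set of threshold rules. As a toy illustration only (my own simplification – the actual 1994 classification applied additional criteria, and the liberal-arts-share and admissions tests that split Baccalaureate I from II are omitted here), the strata can be sketched as:

```python
def classify_1994(doctorates_per_year, doctoral_disciplines,
                  masters_per_year, masters_disciplines,
                  federal_support_musd):
    """Toy classifier for the 1994 Carnegie thresholds listed above.

    Simplified: omits the liberal-arts-share and admissions criteria
    that separate Baccalaureate I from Baccalaureate II.
    federal_support_musd is annual federal support in millions of dollars.
    """
    if doctorates_per_year >= 50 and federal_support_musd >= 40:
        return "Research I"
    if doctorates_per_year >= 50 and federal_support_musd >= 15.5:
        return "Research II"
    if doctorates_per_year >= 40 and doctoral_disciplines >= 5:
        return "Doctoral I"
    if ((doctorates_per_year >= 10 and doctoral_disciplines >= 3)
            or doctorates_per_year >= 20):
        return "Doctoral II"
    if masters_per_year >= 40 and masters_disciplines >= 3:
        return "Master's (Comprehensive) I"
    if masters_per_year >= 20:
        return "Master's (Comprehensive) II"
    return "Baccalaureate (I or II)"
```

Which makes the point of the post concrete: by these strata, any school granting more than a handful of doctorates was expected to be a Doctoral or Research university, so masters and doctoral mass production by Baccalaureate II colleges is exactly the anomaly described above.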

*I still like to believe that the most important background attribute of a “good teacher” or school leader is enthusiasm for one’s own learning – constantly seeking intellectual growth and challenge – and that this attribute is often revealed in the types of advanced studies an individual chooses to pursue. To me, even if the Relay model does tap into a set of graduates of more selective colleges, if the Relay program itself is little more than a workshop on “no excuses” classroom disciplinary practices and typical inspiring edu-guru staff development fodder, then the Relay model is antithetical to developing truly good teachers. A workshop or two and perhaps some practical guidance from peers or teacher leaders – okay. But a graduate degree based on this stuff? Are you kidding? (just watch the Relay GSE videos here: http://www.relayschool.org/videos?vidid=5)

When Disinformation is Fueled by Misinformation! CHANCELLOR TISCH, YOU ARE WRONG!

Very recently, I posted a critique of the recent technical report on New York State median growth percentiles to be used in that state’s teacher and principal evaluation system.

Today, I read this piece in the NY Post – an editorial by NY State Board of Regents Chancellor Merryl Tisch, and well, MY HEAD ALMOST EXPLODED!

The point of the editorial is to encourage NY City’s teachers and DOE to agree to a teacher evaluation system based on supposedly objective measures – where “objective measures” seems largely to be code language for estimates of teacher effectiveness derived from student assessment data.

First, I have written several previous posts on the usefulness of NYC’s value-added model for determining teacher effectiveness.

  1. the NYC VAM model retains some persistent biases
  2. the NYC VAM model is highly unstable from year to year
  3. the NYC VAM results capture only a handful of teachers per school and their results tend to jump all over the place
  4. adopting the NCTQ irreplaceables logic, the NYC VAM data are so noisy that few if any teachers are persistently irreplaceable
  5. for various reasons, it is unlikely that these are just early glitches in the system that will get better with time

Setting aside this long list of concerns about the NYC VAM results, I now turn to the NYSED – state median growth percentile data (which actually seem inferior to the NYC VAM model/estimates). In her editorial, Chancellor Tisch proclaims:

The student-growth scores provided by the state for teacher evaluations are adjusted for factors such as students who are English Language Learners, students with disabilities and students living in poverty. When used right, growth data from student assessments provide an objective measurement of student achievement and, by extension, teacher performance.

Let me be blunt here. CHANCELLOR TISCH – YOU ARE WRONG! FLAT OUT WRONG! IRRESPONSIBLY & PERHAPS NEGLIGENTLY WRONG!

[now, one might quibble that Chancellor Tisch has merely stated that the measures are “adjusted for” certain factors and she has not claimed that those adjustments actually work to eliminate bias. Further, she has merely declared that the measures are “objective” and not that they are accurate or precise. Personally, I don’t find this deceptive language at all comforting!]

Indeed, the measures attempt – but fail – to sufficiently adjust for key factors. They retain substantial biases, as identified in the state’s own technical report. And they are subject to many of the same error concerns as the NYC VAM model. Given the findings of the state’s own technical report, it is irresponsible to suggest that these measures can and should be immediately considered for making personnel and compensation decisions.

Finally, as I laid out in my previous blog post, to suggest that “growth data from student assessments provide an objective measure of student achievement, and, by extension, teacher performance” IS A HUGE UNWARRANTED STRETCH!

While I might concur with the follow-up statement from Chancellor Tisch that “We should never judge an educator solely by test scores, but we shouldn’t completely disregard student performance and growth either,” I would argue that school leaders/peer teachers/personnel managers should absolutely have the option to completely disregard data that have high potential to be sending false signals, either as a function of persistent bias or error. Requiring action based on biased and error-prone data (rather than permitting those data to be reasonably mined to the extent they may, OR MAY NOT, be useful) is a toxic formula for public schooling quality.

The one thing I can’t quite figure out here is which is the misinformation and which is the disinformation. In any case, both are wrong!

The rest of what I have to say, I’ve already said. But, so readers don’t have to click the link below to access the previous post, I’ve pasted the entire thing below. Enjoy!

COMPLETE PREVIOUS POST!

I was immediately intrigued the other day when a friend passed along a link to the recent technical report on the New York State growth model, the results of which are expected/required to be integrated into district level teacher and principal evaluation systems under that state’s new teacher evaluation regulations. I did as I often do and went straight for the pictures – in this case, the scatterplots of the relationships between various “other” measures and the teacher and principal “effect” measures. There was plenty of interesting stuff there, some of which I’ll discuss below.

But then I went to the written language of the report – specifically the report’s conclusions (albeit in DRAFT form). The conclusions were only two short paragraphs long, despite much to ponder being provided in the body of the report. The authors’ main conclusion was as follows:

The model selected to estimate growth scores for New York State provides a fair and accurate method for estimating individual teacher and principal effectiveness based on specific regulatory requirements for a “growth model” in the 2011-2012 school year. p. 40

http://engageny.org/wp-content/uploads/2012/06/growth-model-11-12-air-technical-report.pdf


Updated Final Report: http://engageny.org/sites/default/files/resource/attachments/growth-model-11-12-air-technical-report_0.pdf

Local copy of original DRAFT report: growth-model-11-12-air-technical-report

Local copy of FINAL report: growth-model-11-12-air-technical-report_FINAL

Unfortunately, the multitude of graphs that immediately preceded this conclusion undermines it entirely. But first, allow me to address the egregious conceptual problems with the framing of this conclusion.

First Conceptually

Let’s start with the low-hanging fruit here. First and foremost, nowhere in the technical report, nowhere in their data analyses, do the authors actually measure “individual teacher and principal effectiveness.” And quite honestly, I don’t give a crap if the “specific regulatory requirements” refer to such measures in these terms. If that’s what the authors are referring to in this language, that’s a pathetic copout. Indeed, it may have been their charge to “measure individual teacher and principal effectiveness based on requirements stated in XYZ.” That’s how contracts for such work are often stated. But that does not obligate the authors to conclude that this is actually what has been statistically accomplished. And I’m just getting started.

So, what is being measured and reported?  At best, what we have are:

  • An estimate of student relative test score change on one assessment each for ELA and Math (scaled to growth percentile) for students who happen to be clustered in certain classrooms.

THIS IS NOT TO BE CONFLATED WITH “TEACHER EFFECTIVENESS”

Rather, it is merely a classroom aggregate statistical association based on data points pertaining to two subjects being addressed by teachers in those classrooms, for a group of children who happen to spend a minority share of their day and year in those classrooms.

  • An estimate of student relative test score change on one assessment each for ELA and Math (scaled to growth percentile) for students who happen to be clustered in certain schools.

THIS IS NOT TO BE CONFLATED WITH “PRINCIPAL EFFECTIVENESS”

Rather, it is merely a school aggregate statistical association based on data points pertaining to two subjects being addressed by teachers in classrooms that are housed in a given school under the leadership of perhaps one or more principals, vps, etc., for a group of children who happen to spend a minority share of their day and year in those classrooms.
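For readers unfamiliar with the mechanics, here is a toy sketch of how such measures get built (my own simplified construction – the decile binning, score scales, and simulation numbers are illustrative assumptions, not the state’s actual model): each student’s “growth percentile” is just a percentile rank of the current score among peers with similar prior scores, and the classroom Mean Growth Percentile is an average of those ranks.

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_percentiles(prior, current, n_bins=10):
    """Toy student growth percentiles: rank each student's current
    score among peers with similar prior scores (decile bins)."""
    prior = np.asarray(prior, dtype=float)
    current = np.asarray(current, dtype=float)
    edges = np.quantile(prior, np.linspace(0, 1, n_bins + 1))
    bin_id = np.clip(np.digitize(prior, edges[1:-1]), 0, n_bins - 1)
    sgp = np.empty_like(current)
    for b in range(n_bins):
        mask = bin_id == b
        ranks = current[mask].argsort().argsort()       # 0..k-1 within bin
        sgp[mask] = 100.0 * (ranks + 0.5) / mask.sum()  # ~1-99 scale
    return sgp

def mean_growth_percentile(sgp, classroom):
    """Classroom aggregate: the 'MGP' attributed to each teacher."""
    classroom = np.asarray(classroom)
    return {c: float(sgp[classroom == c].mean()) for c in np.unique(classroom)}

# 1,000 simulated students in 40 classrooms; growth owes nothing to teachers
n = 1000
classroom = rng.integers(0, 40, n)
prior = rng.normal(500, 50, n)
current = prior + rng.normal(10, 25, n)
sgp = growth_percentiles(prior, current)
mgp = mean_growth_percentile(sgp, classroom)
```

Note that even here, where growth owes nothing to any teacher, the classroom MGPs scatter around 50 – which is exactly why treating such an aggregate as “teacher effectiveness” requires evidence that the model actually isolates a teacher’s contribution.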

Now Statistically

Following are a series of charts presented in the technical report, immediately preceding the above conclusion.

[Figure: Classroom Level Rating Bias]

[Figure: School Level Rating Bias]

And there are many more figures displaying more subtle biases, but biases that for clusters of teachers may be quite significant and consequential.

Based on the figures above, there certainly appears to be, both at the teacher, excuse me – classroom, and principal – I mean school level, substantial bias in the Mean Growth Percentile ratings with respect to initial performance levels on both math and reading. Teachers with students who had higher starting scores and principals in schools with higher starting scores tended to have higher Mean Growth Percentiles.

This might occur for several reasons. First, it might just be that the tests used to generate the MGPs are scaled such that it’s just easier to achieve growth in the upper ranges of scores. I came to a similar finding of bias in the NYC value added model, where schools having higher starting math scores showed higher value added. So perhaps something is going on here. It might also be that students clustered among higher performing peers tend to do better. And, it’s at least conceivable that students who previously had strong teachers and remain clustered together from year to year, continue to show strong growth. What is less likely is that many of the actual “better” teachers just so happen to be teaching the kids who had better scores to begin with.

That the systemic bias appears greater in the school level estimates than in the teacher level estimates suggests that the teacher level estimates may actually be even more biased than they appear. The aggregation of otherwise less biased estimates should not reveal more bias.
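This point is easy to demonstrate with a quick toy simulation (my own sketch under stated assumptions, not the state’s model): give teacher-level estimates a modest bias tied to school prior scores, bury it in teacher-level noise, and the bias looks weak at the teacher level but emerges clearly once estimates are averaged up to schools. Conversely, if the teacher estimates were truly unbiased, aggregating them could not manufacture a school-level pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

n_schools, teachers_per = 200, 10
school_prior = rng.normal(0, 1, n_schools)  # standardized school mean prior score

# teacher estimates = modest bias tied to prior scores + lots of teacher-level noise
bias_slope, noise_sd = 0.3, 1.0
teacher_est = (bias_slope * school_prior[:, None]
               + rng.normal(0, noise_sd, (n_schools, teachers_per)))
school_est = teacher_est.mean(axis=1)  # aggregation averages the noise away

r_teacher = np.corrcoef(np.repeat(school_prior, teachers_per),
                        teacher_est.ravel())[0, 1]
r_school = np.corrcoef(school_prior, school_est)[0, 1]
# the identical underlying bias shows up far more strongly at the school level
```

The correlation with prior scores is substantially larger for the school aggregates than for the individual teacher estimates, even though the bias built into the simulation is the same at both levels – noise masks it at the teacher level, aggregation unmasks it.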

Further, as I’ve mentioned several times on this blog previously, even if there weren’t such glaringly apparent overall patterns of bias, there still might be underlying biased clusters. That is, groups of teachers serving certain types of students might have ratings that are substantially WRONG, either in relation to observed characteristics of the students they serve or their settings, or of unobserved characteristics.

Closing Thoughts

To be blunt – the measures are neither conceptually nor statistically accurate. They suffer significant bias, as shown and then completely ignored by the authors. And inaccurate measures can’t be fair. Characterizing them as such is irresponsible.

I’ve now written 2 articles and numerous blog posts in which I have raised concerns about the likely overly rigid use of these very types of metrics when making high stakes personnel decisions. I have pointed out that misuse of this information may raise significant legal concerns. That is, when district administrators do start making teacher or principal dismissal decisions based on these data, there will likely follow some very interesting litigation over whether this information really is sufficient for upholding due process (depending largely on how it is applied in the process).

I have pointed out that the originators of the SGP approach have stated in numerous technical documents and academic papers that SGPs are intended to be a descriptive tool and are not for making causal assertions (they are not for “attribution of responsibility”) regarding teacher effects on student outcomes. Yet, the authors persist in encouraging states and local districts to do just that. I certainly expect to see them called to the witness stand the first time SGP information is misused to attribute student failure to a teacher.

But the case of the NY-AIR technical report is somewhat more disconcerting. Here, we have a technically proficient author working for a highly respected organization – American Institutes for Research – ignoring all of the statistical red flags (after waving them), and seemingly oblivious to gaping conceptual holes (commonly understood limitations) between the actual statistical analyses presented and the concluding statements made (and language used throughout).

The conclusions are WRONG – statistically and conceptually. And the author needs to recognize that being so damn bluntly wrong may be consequential for the livelihoods of thousands of individual teachers and principals! Yes, it is indeed another leap for a local school administrator to use their state approved evaluation framework, coupled with these measures, to actually decide to adversely affect the livelihood and potential career of some wrongly classified teacher or principal – but the author of this report has given them the tool and provided his blessing. And that’s inexcusable.

Teachers Unions: Scourge of the Nation?

UPDATED: 1/29/2015

Let me start by stating that I myself am somewhat agnostic on the question of whether teachers unions are generally good or bad for the overall quality of our education system and for educational equity. In my personal experience as a young teacher in the early 1990s, I had my issues with my local teachers unions (in New York State in particular), resulting in some pretty heated battles with local and regional union officials [and some pretty nasty internal politics in my own school]. As a young teacher, I was anything but a fan of the teachers union. But unlike many of my TFA pals [I was a few years too early for TFA, but had friends & later colleagues in the first few waves] who only stuck it out in teaching for a year or two and may have developed similar negative feelings toward their local union, I did outgrow that initial reaction – which in my view was somewhat isolated, and partly a function of my own youthful ignorance. I didn’t stick it out in public school teaching much longer than that [the local union actually ran me out!], but I did have the unique experience of working in an elite private school that had a union, and I worked in that school during a contract renegotiation.

The idea for this post first came about when I read the following quote in an article in the Economist. This has to be among the most utterly stupid statements I think I’ve ever read in my life:

…no Wall Street financier has done as much damage to American social mobility as the teachers’ unions have. http://www.economist.com/node/21564556

And then there’s this more recent quote:

Many schools are in the grip of one of the most anti-meritocratic forces in America: the teachers’ unions, which resist any hint that good teaching should be rewarded or bad teachers fired. http://www.economist.com/news/leaders/21640331-importance-intellectual-capital-grows-privilege-has-become-increasingly

Now… these quotes are ridiculous at many levels. Most notably, the first quote is stupid simply because one could never possibly contrive a reasonable quantifiable comparison of the supposed negative effects of either the individual hedge fund manager or the supposed monolithic “teachers union.” It’s the empirical equivalent of arguing whether Superman can beat up Hulk. It’s just asinine.

UPDATE: The second quote above comes from a piece that subsequently implies that teachers’ unions are a major, if not the primary cause of educational inequality across children- specifically between rich and poor children. Here’s a little more on the topic of “teacher equity” in particular. (Post 1 | Post 2)

On the heels of this quote came the Thomas B. Fordham Institute report rating the strength of teachers unions – or unionization more generally – across states.  Perhaps the most useful aspect of this report is that it provides us with insights regarding the heterogeneity of unionization across American states.  Unions and unionization are not monolithic.

As recognized by the Fordham report, we really don’t have an American education system. We have 51 systems. They are all somewhat different, with different standards, different funding systems, different union rules and protections, and different student outcomes. The existing variation across our state systems of education alone renders the Economist statement utterly stupid and misguided. That variation also provides some fun opportunities to explore the relationship between TB Fordham’s characterization of teachers’ union strength across states and other features of state education systems.

In this post, I use data from several reports that attempt to characterize state education systems to probe two main questions – whether there exists any association between general indicators of education quality across states and union strength, and whether there exists any association between indicators of educational equality across states and union strength.

How is union strength related to funding levels and funding fairness?

For the past few years, along with colleagues at the Education Law Center of New Jersey, I have been preparing annual reports on education funding fairness. In the Funding Fairness report, we use a statistical model on three years of national data on all school districts to project the cost-adjusted per pupil state and local revenues for all districts and state averages nationally, and we characterize the overall fairness – progressiveness or regressiveness – of state school finance systems. Below, I evaluate the relationship between “union strength rank” from the TB Fordham report and funding “levels” (an indicator of adequacy) and funding “fairness” (whether higher poverty districts receive systematically more, or less, funding per pupil than lower poverty districts in that state).
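
To make the fairness concept concrete, here is a deliberately minimal sketch of the core idea (fabricated districts; the actual Funding Fairness model is a multi-year regression that also adjusts for factors like labor costs, district size and density): regress per-pupil revenue on district poverty rate within a state, and read progressiveness off the sign of the slope.

```python
import numpy as np

def funding_fairness_slope(poverty_rate, revenue_pp):
    """Slope of per-pupil revenue regressed on poverty rate.
    Positive slope = progressive (higher-poverty districts get more);
    negative slope = regressive."""
    slope, _intercept = np.polyfit(poverty_rate, revenue_pp, 1)
    return slope

# Hypothetical districts within one state
poverty = np.array([0.05, 0.15, 0.30, 0.50, 0.70])
revenue = np.array([12000, 12500, 13500, 15000, 16500])
print(funding_fairness_slope(poverty, revenue) > 0)  # True: a progressive system
```
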

An important caveat here, since I like to pick on inappropriate graphs myself: I really should not be making scatterplots where the x-axis variable is a “rank” measure. Rank is an ordinal, not an interval, measure. But this is purely for illustrative purposes, so please forgive my misuse of rank data in this way! [or at least if you slam me for it, acknowledge that I pointed this out!]
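
For the record, a rank-appropriate summary of these relationships would be a rank correlation, which uses only the ordering of the data. A minimal, self-contained sketch with made-up numbers (not the actual state data):

```python
def rankdata(xs):
    """Ranks 1..n (assumes no ties, as in this toy example)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical: union strength rank (1 = strongest) vs. per-pupil funding
union_rank = [1, 2, 3, 4, 5, 6]
funding = [17000, 15500, 16000, 13000, 12500, 11000]
print(spearman(union_rank, funding))  # strongly negative: stronger unions, higher funding
```
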

Figure 1

In Figure 1 we can see that states with stronger teachers unions [left hand end] tend to have more adequate overall funding levels. It is however more clearly the case that states with weak teachers unions (ranked 45 to 50th) tend to have particularly low adjusted funding levels. This is certainly not to suggest any direction of causation. That’s the whole trick here. Most of this is probably quite circular – endogenous. [the union cynic might argue that this merely shows that teachers’ unions have extorted funds from the taxpayer] That states which tend to be more educated and progressive happen to both have stronger teachers unions and to spend more on education – but for those states like California that by historical artifact referendum have systematically deprived their education systems for decades.

Figure 2

Perhaps more to the point of the Economist assertion, we see that states with weaker teachers unions also tend to have less fair funding distributions – or are systems where it is more likely that high poverty districts have systematically fewer resources per pupil than lower poverty ones.  Again, this result is likely a function of the endogenous relationships mentioned previously.

See: http://www.schoolfundingfairness.org/

UPDATE: So, wait a second, if stronger union states tend to have fairer funding distributions, might that actually enhance equity? In a really big, important and substantive way? Hmmm….

How is union strength related to competitiveness of teacher pay?

Here, I look at the relationship between union strength and the relative wage of teachers compared to non-teachers in the same state.  This is a particularly important comparison for two reasons. First of all, the relative competitiveness of teacher wages likely has significant effects on the quality of individuals who choose to enter the teacher workforce versus other employment opportunities (selecting from HS into College).  Overall wage competitiveness can have long run effects on overall teacher workforce quality.  Further, this is the one comparison I make in this post where we might hypothesize a direct, easily interpreted relationship. That is, we might expect stronger unions to lead to more competitive wages.  Here, I compare the weekly wage % (teacher percent of non-teacher) from the Economic Policy Institute with the TBF union strength rank.

Figure 3

Somewhat to my own surprise, this relationship is actually quite strong!… with states having stronger teachers unions also having generally more competitive teacher wages.

See: http://www.epi.org/publication/the_teaching_penalty_an_update_through_2010/

Is union strength associated with NAEP achievement levels?

Now, the usual retort to teacher union bashing is to point out that states like New Jersey and Massachusetts have strong unions and also have high NAEP scores, and states like Alabama and Mississippi have weak unions and low NAEP scores.  Yeah… okay… but clearly there’s a lot goin’ on there that has little or nothing to do with unions.  But let’s indulge this premise a little further with some additional graphs just to see the patterns.

In these first few figures I present the relationship between NAEP scores for children in families above the 185% income level for poverty (not on free or reduced lunch) and union strength. Note that the patterns are similar for scores for children qualified for reduced lunch or for free lunch, but I’ve not included them here… ‘cuz there are already enough graphs in this post. I’d be happy to share them though.  In general, what we see in Figure 4 and Figure 5 is that NAEP scores for non-low income kids tend to be slightly lower – with little clear pattern – in weak union states.

Figure 4

Figure 5

Figure 6, however, clarifies that NAEP scores tend to be higher for non-low income children in states where incomes are higher for non-low income children.

Figure 6 (but income dictates NAEP)

We can use the information in Figure 6 to adjust the NAEP scores (are they higher or lower than would be expected, given the income levels) for household income differences.  When we make that adjustment, we get Figures 7 and 8.
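
The income adjustment here amounts to taking residuals: fit NAEP scores on household income across states, then score each state by its deviation from the fitted line. A minimal sketch with fabricated numbers (the income measure and scale are placeholders, not the actual data):

```python
import numpy as np

def income_adjusted_scores(income, naep):
    """Residuals from a linear fit of NAEP on income: positive values
    mean a state scores above what income alone would predict."""
    slope, intercept = np.polyfit(income, naep, 1)
    predicted = slope * np.asarray(income) + intercept
    return np.asarray(naep) - predicted

income = [55, 60, 65, 70, 80]      # hypothetical median household income ($1000s)
naep = [240, 246, 248, 255, 262]   # hypothetical NAEP scale scores
resid = income_adjusted_scores(income, naep)
print(np.round(resid, 1))  # residuals sum to ~0 by construction
```
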

Figure 7 (income adjusted NAEP)

Figure 8 (income adjusted NAEP)

Still, we see that adjusted NAEP scores are somewhat, though hardly systematically, lower in states with weaker unions. What we certainly do not see here is that NAEP scores are systematically lower in states with stronger unions. That is, unions certainly aren’t driving NAEP scores into the ground!

But, while the second set of graphs is more appropriate than the first, both are dreadfully oversimplified characterizations of complex relationships.

Is union strength associated with NAEP achievement gaps?

This question is perhaps most on target with the Economist claim. Following the Economist’s logic, one might assert that teachers unions likely lead to larger achievement gaps, thus limiting social mobility. Measuring poverty-related achievement gaps and comparing them across states is tricky, as I’ve discussed in numerous previous posts. Specifically, the size of the achievement gap between kids not qualified for free or reduced lunch and those qualified for either tends to be highly related to the size of the income gap between the two groups – as shown in Figure 9! That is, we can’t just do straight-up achievement gap comparisons; we must adjust for the income gap.

Figure 9 (Income Gaps and NAEP Gaps)

Figure 10 and Figure 11 present the income gap adjusted achievement gaps in relation to union strength rank. What we see is little or no relationship between union strength and achievement gaps. While this does not illustrate that stronger unions lead to smaller achievement gaps… it also does not, by any stretch, illustrate that stronger unions lead to larger achievement gaps – an expectation that might reasonably be derived from the claim made in the Economist.

Figure 10

Figure 11

Then again… these are still cursory, descriptive analyses – using only two variables at a time to characterize education systems that are far more complex than can legitimately be characterized that way. It’s exploratory. It’s a start… and there’s certainly more to be explored here, though these are likely questions that can never be satisfactorily untangled with available data.

See: https://schoolfinance101.wordpress.com/2011/09/13/revisiting-why-comparing-naep-gaps-by-low-income-status-doesnt-work/

Is union strength associated with NAEP achievement growth?

Finally, I suspect that some curmudgeonly reactors to this post will attempt to argue that weak union states have seen more growth in NAEP achievement over time. Well, Figure 12 kind of thwarts that notion as well. There’s not much relationship there either, though it is the only graph in this post that shows even the slightest upward tilt.

Figure 12

But alas, even that tiny upward tilt is a function of the fact that states that saw the greatest growth on NAEP were simply the states that had and still have the lowest overall performance levels – as shown in Figure 13. And, states with lower average performance levels – now and then – tend to have weaker unions.

Figure 13

For a more thorough discussion on this point, see: https://schoolfinance101.wordpress.com/2012/07/27/learning-from-really-bad-graphs-ill-informed-conclusions-thoughts-on-the-new-pepg-catching-up-report/

Conclusions

So what does this all mean then? Are unions good, or are they bad? Do they increase inequality and lower quality? It’s certainly difficult given the data provided above to swallow the bold assertion in the Economist that teachers’ unions are the scourge of the nation and primary cause of declining social mobility.  That’s just a load of unsubstantiated crap!

But then what can we learn here? Well, it is perhaps important that there appears to be at least some likely indirect, and certainly endogenous, relationship between unionization and funding fairness and funding levels. As I’ve discussed in related research, funding fairness and funding levels – and school finance reforms that improve equity and adequacy – do matter! To summarize:

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more  adequate funding with more accountability for its use may be most promising.

http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

See also this post in which I probe more specifically the changes in achievement gaps over time in Massachusetts and New Jersey.

Further, the potentially more direct relationship between unionization and relative competitiveness of teacher wages compared to other labor market opportunities may be important in the long run.  In a related policy brief from last winter, I noted:

To summarize, despite all the uproar about paying teachers based on experience and education, and its misinterpretations in the context of the “Does money matter?” debate, this line of argument misses the point. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.

http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

So, while nothing in this post puts to rest the big – unanswerable – questions of the overall equity and quality effects of teachers unions on our supposed monolithic American public education system, these analyses do at least raise serious questions about the notion that teachers unions are the scourge of the nation and the cause of all of the supposed – also unfounded – ills of American public schooling.

Cheers! It’s good to be back!

Friday Afternoon Graphs: Graduate Degree Production in Educational Administration 1992 to 2011

I’ll let the pictures tell the story this time. [UPDATED – Errors in original]

Data source: http://nces.ed.gov/ipeds/datacenter/DataFiles.aspx

Data, Data, Data? Dissecting & Debunking NJDOE’s State of the Schools Message

Time again for an NJ State of the Schools Address, as reported HERE in NJ Spotlight (with absolutely no critical questioning/reporting whatsoever! More or less spoon-fed regurgitation).

As I’ve written a number of times on this blog, state officials in New Jersey have decided on a specific marketing/messaging plan in order to support current policy initiatives. Those policy initiatives involve:

  1. expanding NJDOE authority to impose desired “reforms” (charter/management takeover, staff replacement, etc.) on specific schools otherwise not under their direct authority.
  2. cutting funding from higher poverty, higher need districts and shifting it toward lower poverty, lower need ones.
  3. expanding charter schooling and promoting other  “innovations” in high poverty concentration schools.

The supposed impetus for these reforms is that New Jersey faces a very large achievement gap between low income and non-low income children (one that is largely mis-measured). While it would seem inconsistent to suggest reducing funding in low income districts and shifting it to others, the creative messaging has been that the additional resources are quite possibly the source of the harm… or at the very least those resources are doing no good. Thus, the path to improvement for low income kids is to transfer their resources to others.  What I have found most disturbing about this messaging – other than the ridiculous message itself! – is the flimsy logic and disingenuous presentations of DATA that have been used to advance the argument.

Look, if the message is going to be about Data, Data, Data – then now is the time to take a more thorough, context-sensitive look at the data and try to better understand what’s really going on.

Let’s do a walk through of some of the information presented in the most recent state of the schools presentation.

Here’s a link to the slides from the recent presentation:

http://www.state.nj.us/education/news/2012/0919con.pdf

NJDOE Message

The most recent state of the schools presentation is now in the post-NCLB waiver era, where we are now presented with those template classifications of schools as Priority, Focus and Reward schools.
The state of the schools presentation revolves to a large extent around these categories, because it is those Priority schools that are the target of the most immediate and disruptive interventions.

Below are the slides that were presented to characterize schools by their performance category. The message to be conveyed by these slides was:

  1. Priority Schools are overspenders (or at least very well resourced)
  2. Priority Schools have very well paid teachers who have slightly higher than average experience
  3. Yet still, priority schools have really crummy outcomes!

Therefore, we must have wide latitude to intervene!

EXHIBIT A – PRIORITY SCHOOLS SPEND MORE(?)

EXHIBIT B – PRIORITY SCHOOLS HAVE HIGH PAID TEACHERS & LOW OUTCOMES!

EXHIBIT C- GAPS REMAIN LARGE

Omitted Information: What about demographic differences?

Clearly, a few things are being overlooked in the first two slides, which claim to characterize Priority schools as schools with plenty of resources that simply don’t get the job done. Now, there’s a little more to the story than that!

Most notably, as I show below, priority schools have about 80% of children qualified for free lunch, and reward schools less than 10%! Yet as the NJDOE slide above shows, at the high end these school districts spend slightly under 30% more than the state average. What’s more, this shoddy comparison does not compare these districts to others in their own labor market.

Indeed, New Jersey more than other states has put some money into these districts. See “Is school funding fair?” But, let’s be clear, these margins of funding difference, while helpful, hardly make these districts – given their needs – flush with excess resources!

In fact, the strongest empirical research on this topic suggests that it would take an additional 100% or so per pupil funding for a district that is 100% low income versus a district that is 0% low income. Here, we are looking at nearly that extreme of low income differential, and not nearly that extreme of funding support! So while these districts are better off than similar districts in other states, implying that they’ve got more than enough to close achievement gaps is a huge stretch.
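
To put rough numbers on that claim, here is a stylized calculation (my illustration, not the research model itself): with a poverty weight of 1.0, which corresponds to the “additional 100%” finding, a district’s funding target scales linearly with its low-income share.

```python
def target_funding(base_pp, low_income_share, poverty_weight=1.0):
    """Per-pupil funding target under a simple linear poverty weight.
    With weight 1.0, a 100% low-income district needs double the base."""
    return base_pp * (1 + poverty_weight * low_income_share)

base = 12000  # hypothetical state-average per-pupil spending
print(target_funding(base, 1.0))  # fully low-income district: double the base
print(target_funding(base, 0.8))  # ~80% low-income, like the priority schools
```

On this logic, a district at roughly 80% low income would warrant roughly 80% more than the average district – which dwarfs the “slightly under 30% more” shown in the NJDOE slide.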

But do those demographic differences matter?

This figure shows just how much the demographic differences represented above matter with respect to student achievement, and specifically how much school demography continues to dictate the performance classification of schools under the NJDOE waiver plan.

As I pointed out in a recent post, NJDOE has basically flagged schools in low income neighborhoods for experimentation and substantial disruption (closure, etc.), with an option to override any/all local input.

Notably, this pattern is likely better than it would otherwise be because of New Jersey’s past efforts to target additional resources to high need settings, including pre-kindergarten programs, smaller class sizes and more competitive teacher salaries than might otherwise exist in these settings.

What about the teacher pay and teacher characteristics claim?

But what about those salaries? The NJDOE slides present a picture of teachers who – by their argument – are certainly paid enough. In fact, setting aside (ignoring entirely) the demography of the schools, the implication of the NJDOE slides is: hey… we’re paying these teachers a few thousand more than the average teacher in the state, but clearly they just aren’t very good, or at least there are a bunch of them who aren’t and need to be fired! Further, they have slightly more experience than teachers in other schools… yet they still stink… indicating that experience clearly doesn’t matter. Notice that they didn’t present degree levels.

Okay… now let’s do a legitimate walkthrough of the most recent available data on NJ teachers with respect to the performance categories of schools. I use the 2011-12 Fall Staffing Reports and I fit a regression model of teacher salaries for all elementary and middle level classroom teachers (secondary later if I get a chance). In that model, my goal is to compare the salary a teacher would make:

  • at the same experience level
  • with the same degree level
  • having the same job code
  • working full time
  • in the same labor market (and type of district in that market)
  • in the same year

That is, I’m comparing apples with apples. This first graph shows the average difference in salary on the above comparison bases, statewide. Statewide, teachers in priority schools are earning a lower salary and teachers in reward schools a higher salary than teachers in “all other schools.” But these averages do mask some important differences across labor markets.
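
The apples-to-apples comparison can be sketched as follows (simulated data and simplified controls; the actual model is fit to the Fall Staffing Reports with the full control set listed above): regress log salary on the controls plus a school-category indicator, and read the category differential off the indicator’s coefficient.

```python
import numpy as np

def category_salary_gap(controls, category_dummy, log_salary):
    """OLS of log salary on controls plus a 0/1 category dummy; the
    dummy's coefficient approximates the percent salary difference for
    that category, holding the controls constant."""
    design = np.column_stack([np.ones(len(log_salary)), controls, category_dummy])
    beta, *_ = np.linalg.lstsq(design, log_salary, rcond=None)
    return beta[-1]

# Simulated teachers: controls are experience (years) and MA degree (0/1)
rng = np.random.default_rng(0)
n = 500
exper = rng.integers(0, 30, n)
ma = rng.integers(0, 2, n)
priority = rng.integers(0, 2, n)  # 1 = teaches in a priority school
# Salaries built so priority schools pay ~3% less, all else equal
log_sal = 10.8 + 0.02 * exper + 0.05 * ma - 0.03 * priority + rng.normal(0, 0.01, n)
gap = category_salary_gap(np.column_stack([exper, ma]), priority, log_sal)
print(round(gap, 3))  # recovers a gap near -0.03
```
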

Here are the North Jersey/NY projected teacher salaries by experience level, where Newark carries significant weight in the model. Priority school salaries by experience are in blue, reward in red. On average, the differences are rather subtle: reward school salaries jump ahead in the mid-range, where priority school salaries fall behind, though priority salaries rise again later. But it’s really important to understand that simply having roughly the same salary does not mean that salary is actually competitive for recruiting and retaining teachers of comparable qualifications! In fact, getting teachers to work in a high need setting is likely to require a substantively higher wage!

As I explain in a recent review of the literature on this topic: With regard to teacher quality and school racial composition, Hanushek, Kain, and Rivkin (2004) note: “A school with 10 percent more black students would require about 10 percent higher salaries in order to neutralize the increased probability of leaving.” Others, however, point to the limited capacity of salary differentials to counteract attrition by compensating for working conditions. See: http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

  • Hanushek, Kain, Rivkin, “Why Public Schools Lose Teachers,” Journal of Human Resources 39 (2) p. 350
  • Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy , Vol.6, No.3, Pages 399–438
  • Clotfelter, Charles T., Elizabeth Glennie, Helen F. Ladd, and Jacob L. Vigdor. 2008. Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92: 1352–70.

Now let’s look at South Jersey, which appears to be the source of most of the deficit that shows up statewide. In the South Jersey/Philly metro, teachers in priority schools are making a much lower wage, especially in the mid-range. Non-classified and reward schools lead the way on salaries across most of the experience range. Hey… is this chicken or egg? Do salaries matter, or are more advantaged schools simply able to pay higher salaries?

One issue that NJDOE appears to be ignoring entirely is that the classification of these schools may actually lead to additional teacher sorting – making it even harder to staff priority schools with high quality teachers down the line.

Here are the degree levels of classroom teachers in these schools – something notably absent in the NJDOE presentation. The differences between priority and reward schools are quite striking.

PRIORITY SCHOOLS HAVE FAR MORE TEACHERS WITH ONLY A BA AND FEWER WITH AN MA THAN REWARD SCHOOLS!

Finally, here are the concentrations of novice teachers, where a sizable body of research literature points to the problem of teacher churn in high need schools and the relationship between high novice teacher concentrations and lower student outcomes.

What about the performance of low income children in New Jersey?

Again, part of the message being presented in the state of the schools address is that New Jersey in particular has failed its low income children – as indicated by the suspect over-time proficiency rate graphs presented above. These graphs are coupled with the funding/resource graphs to imply that funding is clearly unhelpful at best and harmful at worst when it comes to fixing the achievement gap.

As I’ve written on this blog before, New Jersey has made substantive gains in recent decades for low income children. Further, to make comparisons of achievement gaps, one must focus on the most comparable measures and most comparable settings. In one recent blog post, I compared Massachusetts, Connecticut and New Jersey – which in terms of income distributions, and the characteristics of those above and below the free/reduced income thresholds, are most similar. The following graphs show that children of HS dropouts and low income children in NJ and MA both have higher levels of performance and have outpaced the gains of similar children in Connecticut and Rhode Island (but especially CT!).

What has New Jersey done to improve performance of low income children?

I also elaborated in that previous post that one key difference between these states is that NJ and MA, more than the others, have shifted resources toward higher need districts. The first graph shows the disruption over time in the relationship between district income and district resources. MA and NJ have most significantly disrupted this relationship, providing systematically more resources per pupil in lower income districts.

This second graph shows the pattern across districts by poverty in each state. Note that in CT, while a few high poverty districts (Hartford and New Haven) have higher current spending, the CT pattern is less systematic. Further, in those few districts, much of the additional spending is granted through magnet school aid, and thus may have limited positive impact on the districts’ neediest students.

To the best of my understanding, teacher tenure laws are/were strong in each of these states. Few if any districts in these states base teacher evaluation heavily on student test scores – especially during the periods represented in the graphs above – which predate Race to the Top. That is, clearly the differences in low income achievement growth between these states have little/nothing to do with state teacher evaluation policy. To go even further, NJ and CT have relatively small charter school market share, so charter school market share likely is not a major factor either.

Further, as explained in this report, and in this article, substantive and sustained school finance reforms do matter! And the evidence on the effectiveness of these reforms far outweighs the more speculative reforms being suggested as replacements for funding in New Jersey.

What does NJDOE & the current administration propose to do about future funding?

Finally, as I noted previously, the current direction of policy initiatives is to attempt to reshuffle funding away from higher poverty/need districts and toward lower poverty/need ones. Here’s the graph from the previous post.

The Strange Logic of it All?

Coupling this DOOHNIBOR (uh… reverse robinhood) strategy with arguments for disruptive reforms in high poverty settings is illogical at best and reckless and irresponsible at worst.

Children in high poverty settings in New Jersey have made substantive gains over time.

It is quite likely that New Jersey’s investments in the schools and communities of these children have played a significant role in those gains.

Yet, even in New Jersey, where the state has made those efforts, poverty-related disparities do persist and require attention.

There is little or no evidence that expanded charter schooling is substantively improving the outcomes of our lowest income children, largely because those “successful charter schools” of which we most often speak are not serving our lowest income children in any significant numbers, and in some cases are increasing concentrations of disadvantaged children left behind in district schools.

And there’s little evidence that either New Jersey’s failures or gains are a function of an oversimplified good teacher/bad teacher dichotomy, suggesting a need for oversimplified reformy solutions like teacher deselection and/or pay-for-test scores.

Despite the state’s efforts to provide support to high poverty settings/schools, teacher wages still are not where they necessarily need to be in those districts to recruit and retain a high quality applicant pool year after year. There remain disparities in teacher qualifications, including novice teacher concentrations. Teacher quality disparities may be/are an issue – but not in the way they are presently being framed!

These are the basic issues that need to be addressed. They aren’t sexy. They aren’t reformy. They aren’t consistent with the current marketing/messaging of NJDOE.

But they are based on data, data, data, DATA, DATA and more freakin’ Data!

And there’s a lot more where that came from!


Teacher Salaries, Demographics & Financial Disparities in the Chicago Metro

No time to really write much here today, but I do have a few figures to share. I’m posting these mainly because I keep seeing so many ridiculous a-contextual… and in many cases simply wrong statements about Chicago teachers’ salaries.  As I understand it, salaries are not really the main issue in the contract dispute… but rather… the teacher evaluation system. I’ve already written extensively about the types of teacher evaluation frameworks that I believe are being deliberated here, but I’m not following the issue minute by minute.

This post may be most relevant! 

Someone has to just say no to ill-conceived teacher evaluation policies. Perhaps this is the time.

That aside, there are typically two ways one might choose to compare teacher salaries to determine how they fit into their competitive context. One is to compare teacher salaries to non-teachers of similar age and education level. The overall competitiveness of teacher salaries tends to influence the quality of entrants to the profession. The other is to compare teacher salaries – for similar teachers – across districts within the same labor market.

When taking the latter approach, it is also important to consider the demographic differences across settings. All else equal, teachers will gravitate toward jobs with more desirable working conditions. So, in high need urban settings, equal compensation alone would be insufficient.
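To make that second comparison concrete, here’s a minimal sketch of comparing districts at fixed experience levels. The salary schedules below are entirely made-up illustrative numbers, not actual Chicago or suburban figures:

```python
# Hypothetical salary schedules (annual salary by years of experience).
# These numbers are purely illustrative, not actual district data.
schedules = {
    "Chicago":  {0: 50500, 5: 60000, 10: 71000, 20: 81000},
    "Suburb A": {0: 48000, 5: 58500, 10: 74000, 20: 95000},
}

def salary_at(schedule, years):
    """Linearly interpolate a salary schedule at a given experience level."""
    steps = sorted(schedule)
    if years <= steps[0]:
        return float(schedule[steps[0]])
    if years >= steps[-1]:
        return float(schedule[steps[-1]])
    for lo, hi in zip(steps, steps[1:]):
        if lo <= years <= hi:
            frac = (years - lo) / (hi - lo)
            return schedule[lo] + frac * (schedule[hi] - schedule[lo])

# Compare "similar teachers" across districts at fixed experience points.
for yrs in (3, 15):
    ratio = salary_at(schedules["Chicago"], yrs) / salary_at(schedules["Suburb A"], yrs)
    print(yrs, round(ratio, 3))
```

The point of holding experience constant is that district averages alone conflate the salary schedule with the seniority mix of the workforce.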

Bear in mind that I’ve explained in numerous previous posts how Chicago is among the least well funded large urban districts in the nation!

So, here’s a quick run-down on salaries and student populations – and funding equity (or lack thereof) – in pictures and tables.

Figure 1. Concentration of Predominantly Black and Hispanic Schools and Low Income Districts (and resource inequity)

[this paper explains the model behind the funding disparity analysis]

Figure 2. Demographics of Selected School Districts

Figure 3. Salary by Experience Generated from Model of Teacher Level Data (publicly available here)

So, in the mix, Chicago salaries for the first several years of experience are relatively average – or even slightly above. But, they do trail off at higher levels of experience and eventually fall behind. Remember though that comparable salaries would be generally insufficient for recruiting/retaining comparable teachers in a higher need setting.

Other even higher poverty, higher minority concentration districts like Harvey and Dolton are even more disadvantaged in terms of teacher salary competitiveness.

For more on the importance of teacher salaries, see: http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Cheers!

ADDENDUM

I’ve been fielding a few random comments along the lines of “so what… Chicago’s outcomes still stink and they clearly spend more than they should, and pay their teachers more than they should for those stinky outcomes!”  Some of these comments point to higher graduation rates in Springfield, coupled with lower spending. Of course, this comparison assumes that it would cost the same in Springfield and Chicago to accomplish similar outcomes. So, I ran a check based on models I’ve run for recent academic papers. The models are fully elaborated here:

Baker.AEFP.NY_IL.Unpacking.Jan_2012

Specifically, I estimate models to adjust for the various costs faced by districts toward achieving common outcome goals. Those models account for differences in the student population served, differences in regional labor costs and differences in economies of scale (really only affects small districts).

These graphs show the relationship between need and cost adjusted operating expenditure per pupil and student outcome measures. The first uses the state assessment scores, centered around the average district – and averaging these centered scores across all grades and tests. It’s like a combined outcome index of all test scores.
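The index construction itself is simple. Here’s a minimal sketch with made-up district scores for three grade/test combinations (names and values are hypothetical, for illustration only):

```python
from statistics import mean

# Hypothetical district scale scores by grade/test; values illustrative only.
scores = {
    "District A": {"math_4": 212, "ela_4": 208, "math_8": 275},
    "District B": {"math_4": 198, "ela_4": 201, "math_8": 260},
    "District C": {"math_4": 205, "ela_4": 204, "math_8": 268},
}
tests = ["math_4", "ela_4", "math_8"]

# Center each test around the across-district average, then average the
# centered values into one combined outcome index per district.
centers = {t: mean(d[t] for d in scores.values()) for t in tests}
index = {dist: mean(d[t] - centers[t] for t in tests)
         for dist, d in scores.items()}
```

By construction the index is centered near zero across districts, so a district’s value reads as "above or below the average district, pooled over all tests and grades."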

Chicago falls pretty much in line here. It has very low need/cost adjusted spending… and, well… low outcomes. But they certainly don’t appear to have lower outcomes than expected given their resources!

The second uses graduation rates.

It’s a little harder to judge what’s going on here… but Chicago still does not appear to be substantially out of line. Graduation rates can vary for a multitude of reasons… including having lower standards for graduation.

In other words, I’m not buying the argument that “yeah… but… Chicago still ain’t cuttin’ it… even with what it has.” Is it possible to have what you need and still not cut it? Yes. It is certainly possible that a district would have far more adequate resources and still do a crummy job. But, Chicago does not appear to be such a case.

Ed Waivers, Junk Ratings & Misplaced Blame: Jersey Edition

I’ve been writing over the past few weeks about NCLB waivers and the schools that are being targeted by states under the waiver program as targets for federally endorsed state intervention. [all of which is built on highly suspect legal/governance assumptions]

My concerns here operate at a number of levels. First, the current Federal Administration has again used an “incentive” application process to coerce states to adopt really, really ill-conceived policy frameworks. These policy frameworks consist of two major parts:

  1. school and district performance classification schemes that are largely if not entirely built on misinterpretation and misrepresentation of generally low quality data; and
  2. poorly vetted, ill-conceived, aggressive/abrupt (closure, turnaround) intervention strategies that are as likely (if not more so) to do harm as to do any good.

So… yeah… it boils down to ramming bad, disruptive restructuring plans down the throats of schools/districts/communities that have been classified by biased, and unjustifiable measures. Further, much of this is being proposed without carefully evaluating whether there exists legal authority to do any of it.

Junk Classifications 101

So, let’s take a look at how the school classifications have played out in New Jersey. New Jersey, like other states, proposed to classify its worst schools as Priority schools – subject to immediate disruptive intervention; the next lowest set as Focus schools – the you’re next/we’re watching you schools; and another set as “reward” schools – the you kick ass so we’re gonna give you a prize schools!

Matt Di Carlo over at Shanker Blog has given considerable attention to the issue of state school grading systems and the extent to which they measure or even attempt to measure school effects on student test scores (not to be conflated with actual school “effectiveness”), or instead simply capture the compounded influence of a variety of student background factors on various accountability measures. In other words, are school ratings simply classifying poor minority schools as bad schools – and thus branding their teachers and administrators as necessarily ineffective – while not even attempting to actually discern their effectiveness?

Further, in my last post on New York City schools I showed that while there were subtle differences in mean teacher percentile rank across schools rated as the worst (priority) versus those rated best (good standing), a) there were still many “best” schools where teacher average test score effect was much lower than in “worst” schools and b) schools that had lower income students and more minority students were still much more likely to be rated as among the “worst” even if their teacher “effects” were similar.

New Jersey Classifications

This first figure shows the demographic composition of schools by their classification. Perhaps the most astounding feature of this graph is that priority schools are nearly 100% black and Hispanic, while reward schools have very low levels of low income, black or Hispanic students.

Here are a few maps to illustrate the geographic distribution of priority, focus and reward schools, for those who know Jersey. We can see that the priority schools are concentrated in the larger, poorest urban centers and focus schools in and around other poor cities/towns.

Not surprisingly, the reward schools for the most part are scattered through the more affluent suburbs of northern Bergen County and out through the most affluent areas of north central NJ (Morris/Somerset/Hunterdon). Okay… I was actually surprised that they had concocted a rating system that was so absurdly biased. The second set of maps shows that there are some reward schools in the northern half of the city of Newark (the area with lower black population share).

Underlying Measures for Classification

It was assumed that states would be proposing ratings based on a mix of status and improvement measures… and that doing so would somehow mitigate the extent of demographic bias in the classifications. States could also use subgroup and achievement gap measures. States wouldn’t, for example, simply be proposing to step in and close down all of the majority low income and minority schools and turn them over to private management/or otherwise displace their entire teaching and administrative staffs.

Of course, the measures available in most states aren’t always that useful for sifting through the demographic biases. New Jersey’s are particularly bad. The following figure shows the racial and low income composition of schools by the types of measures that determined their status. Both the progress ratings and the performance level ratings are hugely biased! As it turns out, so are the achievement gap and subgroup measures. Notably, many affluent New Jersey districts (where the reward schools are) likely have too few low income or minority students to even report gaps.

Remedying Poverty by Deprivation?

In my analysis of New York State, I also showed that priority schools are far more likely to appear in school districts that have been most underfunded by the state of New York relative to its own promised school funding formula (the one the state adopted/proposed as a remedy to court order several years back).

Now, New York State has one of the worst state school finance systems in the nation – one in which districts with more needy students have systematically fewer resources. New Jersey is a far cry from New York in this regard, and has done better than most states with respect to funding equity and adequacy.

And compared to demographically similar states, New Jersey has some positive results to show for its overall funding effort and for its targeting to high poverty districts.

But lately, New Jersey has started down a different road in state school finance policy. The state has chosen in recent years, and proposes in future years, to significantly underfund its own legislatively adopted state school finance formula.

That in mind, the following slides present an analysis somewhat similar to that presented for New York State, but looking forward instead of back. I’m not proposing some lofty “what should be” funding levels based on academic analysis here. Rather, I’m simply looking at the extent to which New Jersey currently funds, and proposes to fund, districts under its own formula, SFRA. This is the formula that was adopted by the legislature under the previous administration and was subsequently upheld by the state court. More on these issues in a later post.

I’ve not had time to reconstruct my own simulations of SFRA projected out over the next several years, so I’ve used data pulled together by the Education Law Center and SOS NJ, in which they (SOS NJ) have projected the SFRA funding shortfalls for each district over the next 5 years. The figure below shows that in the current year, funding shortfalls from the current legislated formula are smaller in districts that are home to priority and focus schools (note that the formula itself significantly reduced targeted effort to these districts when it was implemented).

But, over the next few years, it is expected that as these schools – priority and focus – are subjected to takeover/overhaul/closure – their districts will be increasingly shorted in their funding with respect to what the formula estimates.  That is, the overall strategy here appears to be to identify high need schools for takeover/closure and then systematically and substantially reduce their financial support over time.

Cumulatively, over the next five years, districts of priority schools stand to lose much more on a per pupil basis (relative to what the formula dictates they should receive) than districts of reward schools.

Put bluntly, the goal is to “reform”(?) priority and focus schools and close achievement gaps by taking all of that harmful money away from them and giving it to others who are far less needy! Yeah… that’ll learn-’em!

This is all strangely consistent with the framing of the Commissioner’s report-that-was-not-a-report on school funding and achievement gaps in New Jersey. In that report, Commissioner Cerf essentially proposed (via a series of bad and worse graphs) that the road toward closing New Jersey’s achievement gap should be paved by reducing funding to high need minority districts and shifting it to lower need, lower minority concentration districts. Strange logic indeed.

And these reductions presented above don’t account fully for the plethora of other alterations proposed to the state school funding formula that might further reduce funding to higher need districts – funding to districts that are home to priority and focus schools.

The following posts critique some of the proposed changes, and address other related issues:

Closing Thoughts

As I noted in my previous post, I can hear the reformy outcry now – that this is all warranted because we’ve provided poor and minority kids the worst schools and worst teachers for so many years, and that this is merely an attempt to remedy a persistent, intractable disparity. The problem with this logic is the placement of blame (in addition to the questionable legal authority and ill-conceived remedies).

We’re not measuring school performance here. There’s no basis in these classification schemes for implying that the teachers and administration are the ones who failed the children. These are junk, gerrymandered classification schemes. They are based on arbitrary distinctions being made with inadequate data/information.

Follow up on Ed Waivers, Junk Rating Systems & Misplaced Blame – New York City’s “Failing” Schools

About a week ago, I put up a post explaining a multitude of concerns I have with the current NCLB waiver process and how it is playing out at the state level. To summarize, what we have here is the executive branch of the federal government coercing state officials to simply ignore existing federal statutes, by granting waivers to state officials who adopt the current administration’s preferred education reform strategies. Setting aside the legal/governance concerns, which are huge, few if any of these preferred strategies are informed by any sound research/analysis.

Equally if not more disturbing is how this waiver process is playing out at ground level, and the message it sends.  Once again (as in Race to the Top) the administration has encouraged the adoption of ill-conceived homogenized policy frameworks across states. States are encouraged, through the waiver application process, to propose how they will abuse data yielded by their generally inadequate data systems to inappropriately classify local public schools as a) in good standing, b) focus or c) priority.  States have been granted some flexibility as to how they will abuse their own data to gerrymander local schools into these categories.

Once schools are placed into these categories – regardless of any validity check on the meaning of those categories – those schools become subject to a prescribed set of largely unproven state intervention options, with yet another layer of complete disregard for whether states have the statutory and constitutional authority (under state statutes & constitutions) to take such action.

In short, what we have is the federal executive branch using authority it doesn’t have to grant states authority they don’t necessarily have, to unilaterally impose substantive changes on individual local public schools.

But I digress, yet again.

So, what about those categories? And how do they break down? In other words, which districts and which children are most likely to be subjected to these interventions/experimentation?

I started last week with the state of New York, pointing out that on average, New York State has (ab)used its currently available data to characterize as priority schools and focus schools, primarily those schools that are a) high in poverty, b) high in minority concentration, c) low in taxable property wealth and d) low in aggregate household income per capita.

To rub salt in the wound, even though back in 2003 New York State was ordered to correct deficiencies in school funding across districts, and even though the state itself proposed a relatively inadequate formula to address those disparities, the state has continued to ignore its own formula – shorting, by the largest amounts, those districts that are home to the most priority schools. (For a thorough analysis, see: https://schoolfinance101.com/wp-content/uploads/2010/01/ny-aid-policy-brief_fall2011_draft6.pdf)

This is where we last left off:

But, my analysis last week largely left out New York City schools. Bear in mind that New York City like many other high need districts around the state continues to be substantially shorted under the state’s own proposed foundation funding formula.

Demographics in New York City

First, let’s do a walk-through of the demographic characteristics of New York City schools by their classification under the new state rating system. Bear in mind that the degree of variation in demography across schools in New York City is somewhat more limited than across the state as a whole. On average, New York City schools have higher minority concentrations and higher low income concentrations than schools statewide.

The following figures play out largely as one might expect – with priority schools having a) the highest concentrations of low income students, b) elevated concentrations of black and Hispanic students, and c) the highest concentrations of LEP/ELL and special education students.

In other words, it would certainly seem that while reformy-rhetoric dictates that demography should not determine destiny, demography remains a pretty strong determinant of a school’s post-nclb-waivery-classification-status.

When I run statistical tests of the relationship between demographic factors and the likelihood that a school is identified as a “priority” school, I find:

  1. A 1% increase in % Free Lunch (controlling for grade level) is associated with a 4.6% (p<.01) increase in the likelihood of being classified as a “priority” school.
  2. A 1% increase in % Black (controlling for grade level) is associated with a 1.2% (p<.01) increase in the likelihood of being classified as a “priority” school.
  3. A 1% increase in % Hispanic (controlling for grade level) is associated with a 2.1% (p<.01) increase in the likelihood of being classified as a “priority” school.
  4. A 1% increase in % Special Education (controlling for grade level) is associated with a 13% (p<.01) increase in the likelihood of being classified as a “priority” school.
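Those “% increase in likelihood” figures read like exponentiated logit coefficients (odds ratios). Here’s a minimal sketch of that conversion – the coefficients below are illustrative values chosen so that they reproduce the percentages listed, not the actual model output:

```python
import math

# Illustrative logit coefficients (log-odds change per 1-point change in
# each percentage). NOT the actual estimates -- chosen only so that the
# exponentiated values match the percentages reported in the list above.
coefs = {
    "pct_free_lunch": 0.045,
    "pct_black": 0.012,
    "pct_hispanic": 0.021,
    "pct_special_ed": 0.122,
}

def pct_change_in_odds(b):
    """Convert a logit coefficient to a percent change in the odds."""
    return (math.exp(b) - 1) * 100

for name, b in coefs.items():
    print(f"{name}: {pct_change_in_odds(b):+.1f}% odds of priority status")
```

Note that for small coefficients the odds-ratio change is close to the raw coefficient times 100, which is why the conversion matters more for the larger special education effect.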

Resources by Demographics in New York City

My next cut at the NYC data explores the position of priority schools with respect to a) low income students, b) special education students and c) per pupil spending (school level).

In the first two figures, we see that priority secondary schools (or at least those serving students through the secondary grades) are somewhat spread out by % free or reduced lunch. Note that bubble/diamond/triangle size indicates school size. However, there don’t appear to be any priority high schools among the lowest poverty schools, and a relatively large share appear among the higher poverty schools. They are relatively average, compared to schools with similar % free or reduced lunch, in terms of their school site spending.

In the second figure, we can see that those high schools identified as priority all have at least a minimum threshold of children with disabilities. But, all high schools with few or no children with disabilities are in good standing. That includes numerous relatively small high schools that also have much higher per pupil spending even though they have far fewer children in special education.

Priority middle schools also have relatively average per pupil spending compared to schools with similar concentrations of low income or special education students – but they all tend to have relatively high concentrations of low income children and children with disabilities.

None of the middle schools with lower concentration of low income children or children in special education are classified as priority schools.

For elementary schools, priority schools (again in red… but somewhat hidden behind others) tend to be very high in concentrations of low income students and moderately high in concentrations of children with disabilities. Again, they are relatively average in their per pupil spending compared to similar schools.

Outcomes in New York City

Now, the findings in the previous section might… might… on the outside chance suggest that there really is something about these priority schools that warrants additional investigation. After all, they do have similar resources to other schools serving high need populations. And while the priority schools tend to have high need populations, other schools with similarly high need populations and similar resource levels are either “focus” schools or “in good standing.”

But, it is still important to remember that the state has NOT identified as priority schools any schools that serve low need populations and do less well in terms of outcomes compared to other schools serving low need populations with comparable resource levels.

For this next graph, I took the NYC teacher level value-added data and averaged them to the school level for all teachers in each school, as in my previous post on NYC charter school teachers. (caveats included on previous post). Note that I’ve removed quite a bit of the variation in these value-added scores by aggregating them to the school level prior to constructing this graph.

While the differences in mean teacher value-added do fall in the right rank order – highest mean for good standing, second for focus and lowest for priority, the variations among schools around these means certainly muddy the waters a bit. Yes, the means are different, but the boxes overlap quite substantially!  And the consequences of being in one group versus another are quite substantial.

Shouldn’t it be the case that no priority schools have “better” average teachers (by the limited noisy measure used here) than schools in “good standing?” How does it make sense that there are schools in “good standing” that fall well below the average for teacher value-added of “priority” schools, even if those cases are relatively rare?

For one last statistical shot at teasing out what’s going on here, I ran a logistic regression to figure out a) whether and to what extent these differences in value-added are significant predictors of landing in priority status and b) whether a school that has more low income and minority students is more likely to land in priority status – even if it has the same teacher value-added scores?  That is, even if those statistically “bad” teachers aren’t to blame!

In other words, is there racial/socio-economic bias in the school ratings among schools with similar teacher value added?

Here it is:

For interpretation, I used the average value-added percentile of teachers here (in place of the standardized value-added score).

What this output shows is that for a 1 percentile rank increase in the school average teacher value-added, a school is about 8.4% less likely (91.6% as likely) to be classified as priority. Having a higher aggregate value-added teacher percentile rank significantly reduces the likelihood that a school is identified as priority. That makes sense… but certainly isn’t the whole story… and as the graph above shows, there’s a fair amount of variation within each category.

Here’s the problem. Even among schools that have the same aggregate teacher value added percentile:

  1. schools with 1% higher free or reduced price lunch are 4.4% more likely to be classified as priority
  2. schools with 1% more black children are 8.7% more likely to be classified as priority
  3. schools with 1% more Hispanic children are nearly 8% more likely to be classified as priority
  4. schools with 1% more children with disabilities are 9% more likely to be classified as priority.

And each of these biases is significant, and of non-trivial magnitude, as well.
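To see how the two steps fit together – aggregating teacher percentiles to school means, then applying the estimated odds multiplier – here’s a minimal sketch. The school names and teacher percentiles are hypothetical; the ~0.916 odds multiplier per percentile is the only number taken from the discussion above:

```python
# Hypothetical teacher-level value-added percentiles, keyed by school.
teacher_pct = {
    "School X": [38, 45, 52, 41],
    "School Y": [60, 72, 55, 66],
}

# Step 1: aggregate teacher percentiles to a school mean. (As noted above,
# this discards much of the teacher-level variation.)
school_mean = {s: sum(v) / len(v) for s, v in teacher_pct.items()}

# Step 2: apply the estimated effect -- each 1-percentile gain in
# school-mean VA multiplies the odds of priority status by ~0.916.
odds_ratio_per_pct = 0.916
gap = school_mean["School Y"] - school_mean["School X"]
relative_odds = odds_ratio_per_pct ** gap  # Y's odds relative to X's
```

Even a large gap in mean value-added only shifts the odds; the demographic coefficients then shift them back, which is exactly the bias documented in the list above.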

Let’s review what this means in simple, blunt terms.

These findings mean that even in schools where the teachers have the same average value added rank/percentile, schools with more low income, minority and special education children are more likely to face anti-democratic intervention!!!!!

Now that doesn’t seem to make a whole lot of sense when the supposed reason for the need for these interventions is that these poor, minority schools simply have all of the “bad teachers,” and when a central strategy to be employed is to replace/displace the school staff.

Closing Thought

I can hear the reformy outcry that this whole multi-level coercive illegal power grab to impose reformy intervention is in fact a critical step toward guaranteeing that demography isn’t destiny… to make widely known the fact that we’ve continued to provide minority and low income students the worst teachers and the worst schools. And that those teachers and schools must go [even if many of them outperform teachers in schools in “good standing” on their noisy/biased VAM estimate?]! Even if that means complete disregard for our current system of government. And even if that means that the parents, children and employees in these schools have to be forced to forgo additional constitutional and statutory protections (depending on the imposed reform).

It would be one thing if the measurement systems we were using to classify these schools as “failing” were valid for making such decisive declarations. That is, valid for making the assertion that it is the quality of the school and its teachers – and other controllable factors – that are primarily responsible for the performance.

Such arguments would be more reasonable if the disparate impact shown here were actually a disparate impact of school quality variation, rather than a disparate impact of biased school ratings constructed from inadequate measures and inappropriate analysis (utter disregard for error, bias, etc., etc., etc.).

Ed Waivers, Junk Rating Systems & Misplaced Blame: Case 1 – New York State

I hope over the next several months to compile a series of posts where I look at what states have done to achieve their executive granted waivers from federal legislation. Yeah… let’s be clear here, that all of this starts with an executive decision to ignore outright, undermine intentionally and explicitly, federal legislation. Yeah… that legislation may have some significant issues. It might just suck entirely. Nonetheless, this precedent is a scary one both in concept and in practice.

Even when I don’t like the legislation in question, I’m really uncomfortable having someone unilaterally over-ride or undermine it. It makes me all the more uncomfortable when that unilateral disregard for existing law is being used in a coercive manner – using access to federal funding to coerce states to adopt reform strategies that the current administration happens to prefer. The precedent at the federal level that legislation perceived as inconvenient can and should simply be ignored seems to encourage state departments of education to ignore statutory and constitutional provisions within their states that might be perceived similarly as inconvenient.

Setting all of those really important civics issues aside – WHICH WE CERTAINLY SHOULD NOT BE DOING – the policies being adopted under this illegal (technical term – since it’s in direct contradiction to a statute, with full recognition that this statute exists) coercive framework are toxic, racially disparate and yet another example of misplaced blame.

States receiving waivers have generally followed through by using their assessment data in contorted and entirely inappropriate ways to create designations of schools and districts, where those designations then permit state officials to step in and take immediate actions to change the governance, management and whatever else they see fit to change in these schools (and whether they have such legal authority or not).

Priority schools are the bottom of the heap, or bottom 5% and are subject to the most aggressive, and most immediate unilateral interventions (seemingly with complete disregard for existing state statutory or constitutional rights of attending children, their parents or local taxpayers, as well as explicit disregard for existing federal law).

Implicit in these classifications – and the proposed response interventions – is the assumption that priority schools are simply poorly run schools – schools with crummy leaders and lots of bad, lazy, pathetic and uncaring teachers… who have thus caused their school to achieve priority status. They clearly must go… or at least deserve one heck of a shaking up! Couldn’t possibly be anyone else’s fault. After all, the state must have clearly already done its part to provide sufficient financial resources, etc. etc. etc. It must be the bad teachers and crappy principals. That’s all it can be! Therefore, we must have immediate wide-reaching latitude to step in and kick out the bums – and heck – just close those schools and send those kids elsewhere, or convert those schools to “limited public access, privately governed and managed institutions” (privately managed charters) where layers of constitutional rights for employees and students may be sacrificed.

New York State’s Waiver Hit List

New York State Education Department released their hit list of schools recently.

http://www.p12.nysed.gov/accountability/ESEADesignations.html

Here’s a quick run-down of some key characteristics of schools and districts under each designation.

Demographics of Hit List Schools Statewide

I did a quick merge of the above classification data with data from the 2011 NYSED school report cards to generate the graph below, weighted by school enrollment.

https://reportcards.nysed.gov/

Notably, schools in “good standing” are lowest BY FAR in % of children qualified for free lunch, percent of children who are black, or Hispanic, and are also generally lower in percent of children who are limited in their English Language Proficiency. Race and poverty differences are particularly striking!

In short, the Obama/Duncan administration has given NY State officials license to experiment disproportionately on low income and minority children – or for that matter – simply close their schools. No attempt to actually legitimately parse “blame” or consider the possibility that the state itself might share in that blame.

AFTER ALL, NEW YORK STATE CONTINUES TO MAINTAIN ONE OF THE MOST REGRESSIVE STATE SCHOOL FINANCE SYSTEMS IN THE COUNTRY! 

And the underlying disparities in that system are quite striking!

But perhaps, if we just require all of these priority schools to be turned over to charter operators from New York City, they can work their miracles statewide – for less money – with the same kids – and generating decisively better outcomes????? And of course, dramatically reducing labor costs??? Okay… so the data don’t really support any of that.

Economics of Hit List School Districts Statewide

Here’s another perspective on the districts that house schools across these categories. New York State’s school funding model derives a combined wealth ratio based on two key factors that seem to be strong determinants of the local revenue those districts can raise on their own – Income per Pupil (aggregate income of residents divided by district enrollment) and Taxable Assessed Property Values per Pupil. Here’s how these two measures play out across categories (excluding New York City schools, where income and property wealth a) are difficult to accurately compare with the rest of the state and b) disproportionately weigh on the comparisons).
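As a rough sketch of how a combined wealth ratio works – with made-up numbers, and with the equal weighting being my simplification for illustration rather than the exact statutory formula – consider:

```python
# Hypothetical statewide averages (illustrative only, not actual NYSED data).
STATE_AVG_INCOME_PP = 100_000    # aggregate income per pupil, statewide
STATE_AVG_PROPERTY_PP = 400_000  # taxable assessed value per pupil, statewide

def combined_wealth_ratio(income_pp, property_pp):
    """Average of the income and property wealth ratios, each taken
    relative to the state average (a simplified reading of NY's CWR)."""
    income_ratio = income_pp / STATE_AVG_INCOME_PP
    property_ratio = property_pp / STATE_AVG_PROPERTY_PP
    return 0.5 * income_ratio + 0.5 * property_ratio

# A low-wealth district vs. an affluent one (made-up figures):
low = combined_wealth_ratio(55_000, 180_000)    # well below state average
high = combined_wealth_ratio(210_000, 900_000)  # well above state average
```

A ratio of 1.0 means a district with average wealth; the gap between “good standing” and “priority” districts in the figures amounts to a large gap in this ratio.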

Say it ain’t so? Really, are the districts of schools in “good standing” that much higher in income and that much higher in taxable property wealth than those of the schools identified as priority schools? Couldn’t be. The economics of these local communities clearly has nothing to do with it… does it? It’s those damn lazy, apathetic teachers and their greedy, overpaid administrators!

State Funding of Hit List School Districts Statewide

A while back, I wrote this brief:  NY Aid Policy Brief_Fall2011_DRAFT6

In the brief, I explain the layers of problems of the New York State foundation aid formula. I also wrote this blog post: https://schoolfinance101.wordpress.com/2011/11/08/more-inexcusable-inequalities-new-york-state-in-the-post-funding-equity-era/

In the brief, I explain how the current New York State school foundation aid formula is hardly equitable or adequate for meeting the needs of children attending the state’s highest need districts. But to rub salt in the wound – FOR THE PAST SEVERAL YEARS, THE GOVERNOR AND LEGISLATURE HAVE CHOSEN TO DISREGARD ENTIRELY THEIR OWN WOEFULLY INADEQUATE STATE AID FORMULA.

Even worse, when the Governor and Legislature have levied CUTS TO THAT FORMULA, they have levied those cuts such that they disproportionately cut more state aid per pupil from the higher need districts. As of 2011-12, some high need districts, including the city of Albany, had shortfalls in state funding (from what would be expected if the foundation formula were actually funded) that were greater than the total foundation aid they were actually receiving.

So, here’s one last graph for my statewide analysis – in which I summarize the foundation formula shortfalls per pupil by district, for the schools in each class. The foundation formula shortfall compares:

  1. What should be: full foundation formula aid that would be received per total aidable foundation pupil unit, using the 2011-12 Foundation Level (on page 21, here: http://www.oms.nysed.gov/faru/PDFDocuments/Primer12-13A.pdf), multiplying that foundation level by the regional cost index and pupil need index, and then determining the state share per the formula as described in the linked document. To be clear, this is the “what should be” target, but it is based on last year’s target – so I’m being generous to the state, because the foundation level should have risen again from 2011-12 to 2012-13 (see p. 21).
  2. What is: actual state foundation aid to be received from 2012-13 Aid Runs, with foundation aid adjusted for Gap Elimination Adjustment and partial restoration of GEA.
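The two-step comparison above boils down to a simple calculation, sketched below. Every number here is hypothetical; the real computation uses NYSED’s published foundation level, regional cost index (RCI), pupil need index (PNI), state sharing ratio, and aidable pupil counts, with the GEA adjustments applied to actual aid:

```python
# Sketch of the "what should be" vs. "what is" comparison. All numbers
# are hypothetical; the real calculation uses NYSED's published foundation
# level, regional cost index (RCI), pupil need index (PNI), state sharing
# ratio, and aidable pupil counts, with GEA adjustments to actual aid.
def foundation_shortfall_per_pupil(foundation_level, rci, pni,
                                   state_share, actual_aid_per_pupil):
    # "What should be": foundation level adjusted for regional costs and
    # pupil needs, with the state paying its formula share
    target_aid = foundation_level * rci * pni * state_share
    # "What is": actual aid received after cuts (e.g., the GEA)
    return target_aid - actual_aid_per_pupil

# Hypothetical high-need district: above-average needs (PNI = 1.5),
# average regional costs, a large state share, and aid well below target
shortfall = foundation_shortfall_per_pupil(
    foundation_level=6500, rci=1.0, pni=1.5,
    state_share=0.75, actual_aid_per_pupil=5500)
print(shortfall)  # 1812.5
```

Note how the need and cost indices scale the target upward for high-need districts, which is precisely why underfunding the formula hits those districts hardest in per-pupil terms.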

So… here are the funding gaps by status, with respect to the state’s own inadequate funding targets:

So, with respect to its own formula, the state is pretty much underfunding everyone. Of course, as I’ve noted on many previous occasions, the state is also dumping disproportionate unneeded tax relief aid into the wealthiest districts, which also happen to have the schools in “good standing.”

What we have here is a state that is most substantially underfunding – WITH RESPECT TO ITS OWN FUNDING FORMULA – the districts that house the majority of children enrolled in schools the state has identified as “priority” schools.

Hey… here’s an idea for ya’… WHY NOT MAKE IT A PRIORITY TO ACTUALLY FUND THESE SCHOOL DISTRICTS AT AT LEAST THE LEVEL THAT YOU, THE STATE, HAVE DECLARED ADEQUATE FOR THEM?

Is it reasonable to play this subversive “blame the teachers and administrators” game – kicking them out and closing their schools – when it is you, the STATE, that has shorted them disproportionately on funding – FOR YEARS – with respect to your own funding formula benchmarks?

We can discuss the adequacy of those funding benchmarks another day. (read the brief: NY Aid Policy Brief_Fall2011_DRAFT6)

Additional analysis of New York City schools hopefully in the near future!

Helicopters can improve minority college attendance & other misguided policy implications: Comments on the Brookings “Voucher” Study

Here’s my quick response to the Brookings report released yesterday on the long term effects of vouchers on a randomized pool of participants in New York City.

Let’s say I conducted a study in which I rented a fleet of helicopters and used those helicopters to, on a daily basis, transport a group of randomly selected students from Camden, NJ to elite private day schools around NJ and Philadelphia. I then compared the college attendance patterns of the kids participating in the helicopter program to 100 other kids from Camden who also signed up for the program but were not selected and stayed in Camden public schools. It turns out that I find that the helicopter kids were more likely to attend college – therefore I conclude logically that “helicopters improve college attendance among poor, minority kids.” The simple policy solution then is to rent more helicopters and use them to send kids, well, wherever. After all, it’s the helicopters that matter????? Clearly, that would be a ridiculous assertion.

Similarly, the “major” findings of the new Brookings study were that, in particular, black participants in the voucher program seemed to have an increased likelihood of attending college. The study involved a randomized pool of individuals who qualified, applied and received the vouchers (and attended a private school) and those who qualified, applied and didn’t receive the vouchers.

The study purports to find [or at least the media spin on it does] that “vouchers” as a treatment worked especially well for black students. I won’t spend my time quibbling over design and statistical issues here, because I think the simplest issue to address – the big one – is the definition of the treatment itself.

This is not necessarily a study of whether “vouchers” as a treatment affect long run outcomes. Just like my hypothetical above had little to do with helicopters! Rather, it is a study of using “vouchers” as a funding mechanism to place a relatively small sample of low income minority children into a set of schools with fewer low income and minority peers. Schools that happen to be private schools. So it’s not really a study of whether “private schools” are more (or less) effective either. As such, the study really has little or no policy implications for “voucher systems” themselves, or private schooling.

Personally, I was struck to find that the only reference to peer group or peer composition was in a single sentence at the end of the report – but this sentence really says it all:

To the extent that student learning is dependent on peer quality the impacts reported here could easily change.

Yeah… that’s no throwaway line here! In fact, it has the potential to completely reframe the entire paper.

So what is the treatment?

Well, the use of a “voucher” system to alter the educational setting for a group of kids is most certainly not the treatment. The voucher is merely the mechanism used here to deliver the treatment. It may be a policy mechanism that is useful, under limited circumstances, for achieving changes in educational setting. But the “voucher” is NOT the treatment.

And the use of vouchers in this narrow context may have few if any policy implications for new voucher programs in Indiana or Louisiana if they do not result in low income minority students being better integrated with higher income peers predisposed to have college aspirations.

The sector of schooling is most certainly not the treatment either – public or private school, Catholic or non-religious school. While the sector of schooling is a variable in this analysis, it may or may not have anything to do with the characteristics of the educational setting that most influenced college-going behaviors. There’s a whole separate body of literature on that topic that is notably absent from this report. And it’s quite possible that we could find a policy mechanism that has nothing to do with vouchers, private schools, or Catholic schools, but which shifts more low-income minority students into educational settings that promote college-going behaviors.

So before we get into some huge tizzy about “voucher” and “private school” effects, let’s go back and define the treatment in this study for what it actually is, and then try to figure out what it means in terms of effective policies for increasing college attendance among low-income and minority students.

Tangent:  I found this paragraph particularly interesting:

The voucher offer also has a much larger impact than does exposure to a more effective teacher. Elementary school teachers who are one standard deviation more effective than the average teacher are estimated to lift their students’ probability of going to college by 0.49 percentage points at age 20, relative to a mean of 38 percent, an increment of 1.25 percent (Chetty et al. 2011b). If one extrapolates that finding (as the researchers do not) to three years of effective teaching, the impact is 3.75 percent. The impacts identified here for African American students—an increase of 24 percent—are many times as large.

Basically, what this paragraph/extrapolation boils down to is the distinct possibility that the variations in setting (largely peer group) achieved in this voucher study yield what appear to be stronger effects than the measured (really noisily measured – which may matter here) variations in teacher effects in the Chetty study. In other words, quite possibly… peer composition is actually the strongest in-school influence on long-term student outcomes!?? [Note that this is speculative, based on the somewhat questionable comparison made in the above paragraph.]
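For what it’s worth, the arithmetic of the report’s own comparison is easy to reproduce – and reproducing it is in no way an endorsement of the comparison’s validity. The figures below are taken directly from the quoted paragraph:

```python
# Reproducing the arithmetic of the report's own teacher-effect
# extrapolation. Figures come straight from the quoted paragraph above;
# reproducing the math is not an endorsement of the comparison.
teacher_effect_per_year = 1.25   # % relative increase in college going,
                                 # per year, with a +1 SD teacher
teacher_effect_3yr = 3 * teacher_effect_per_year  # the report's extrapolation
voucher_effect = 24.0            # % relative increase reported for
                                 # black voucher participants

print(teacher_effect_3yr)                             # 3.75
print(round(voucher_effect / teacher_effect_3yr, 1))  # 6.4
```

By the report’s own numbers, then, the voucher effect is on the order of six times the extrapolated three-year teacher effect – which is exactly why the attribution question (peers? sector? vouchers?) matters so much.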

Second Tangent: Quite honestly, the cost comparison comments in the following paragraph are so shoddy and poorly documented that they do much to undermine the report and should be cut.

These impacts are somewhat larger than the long-term impacts of the much more costly class-size intervention in Tennessee. Dynarski et al. (2011) estimate that being assigned to a smaller class in the early elementary grades increased college enrollment rates among African Americans by 19 percent (5.8 percentage points on a base of 31 percent). Reduction of class size in Tennessee was estimated to cost $12,000 per student (Dynarski et al. 2011), whereas the social cost of the SCSF intervention was about $4,200 per student to the foundation and reduced costs to the taxpayer by reducing the number of students who would require instruction within the public sector. If the government had paid for the voucher, the expenditure could have taken the form of a simple transfer from the public sector to the private sector, which in the long run need not add to the per-pupil cost of education. In fact, it could decrease costs because Catholic schools spend less on average than public schools. Around the time of the SCSF evaluation, New York City public schools spent more than $5,000 per student, as compared to $2,400 at Catholic schools (Howell and Peterson 2006, 92).

First, this paragraph and the sources it cites provide little if any solid evidence regarding “costs” and “expenditures,” citing only that the Archdiocese of NY at the time said so (p. ii) regarding Catholic school costs being about $2,400 per pupil (and ballparking the $5k public district figure out of nowhere). This paragraph also compares estimated expenditures on one strategy (class size reduction) in one context (TN) at one point in time to partial subsidies for another strategy in another context at another point in time. A while back, I criticized another report, also by Matt Chingos (nothin’ personal – I generally like his work), in which he referred to class size reduction as the “Most Expensive School Reform” without legitimately comparing the costs of CSR to any other reform. The above paragraph is strikingly similar in its gaping holes of logic and evidence. I could go on and on. The authors also make no attempt to provide reasonable assumptions & estimates for the full cost of operating and scaling up a large-scale voucher system. As such, this stuff really has no place in this paper. For a more thorough discussion/analysis of public/private school spending, see: http://nepc.colorado.edu/publication/private-schooling-US

Since the authors didn’t actually conduct any real analysis of schooling resources/finances, they really shouldn’t have gone there in their conclusions. This kind of back-of-the-napkin, half-baked cost-savings assertion really cheapens a study that does have some interesting findings to offer.