Thoughts on “Randomized” vs. Randomized Charter School Studies

There’s much talk in education research about randomized controlled trials and truly “experimental” research being the “gold standard” for determining whether a specific intervention “works” or not. This is the basis for the Institute of Education Sciences’ What Works Clearinghouse. It is often argued that randomized, or experimental, studies are “good” and decisive, and that other approaches simply don’t match up. Therefore, if someone really wants to know what works or doesn’t with regard to a specific intervention or set of interventions, one need only review those randomized, experimental studies to identify the consensus finding.

There’s so much to discuss on these issues, including the extent to which truly randomized experiments can actually shed light on how interventions might play out in other settings or at scale. But I’ll stick to a much narrower focus in this post: just how randomized is randomized? Most recently, this question came to mind after reading this post addressing “experimental” vs. “non-experimental” studies of charter schools by Matt Di Carlo at Shanker Blog, and this post over at Jay P. Greene’s blog on RIGOROUS charter research (meaning experimental, or randomized).

There tend to be two types of studies done to determine the relative effectiveness of “charter schools” versus traditional “district schools.” The basic idea of either type is to determine the effect that “charter schooling” – or some specific set of policies/practices, instructional models and strategies bundled into “charter schooling” – has on students’ outcomes, compared to kids who don’t receive those strategies. That is, exposure to “charter schooling” is assumed to be a treatment, and non-exposure, whatever that constitutes, is the control.

One type of study tries to identify, after the fact, otherwise similar kids (matched pairs) attending a set of charter schools and a set of district schools in the same city, and then compares their achievement growth over time. These studies often fall short in two important ways: first, the variables available for matching students are often too crude to ensure the pairs really are comparable; and second, they rarely account for differences in settings and peer composition across the schools being compared.

The other type of study is often referred to as meeting the gold standard – as being a randomized, or lottery-based, study. It is assumed, since these studies are declared golden, that they necessarily resolve both concerns above. And it is possible that if these studies truly were randomized (or even could be), they could resolve those concerns. But they don’t (resolve these concerns), because they aren’t (really randomized).

First, what would a randomized study look like? Well, it would have to look something like this: we randomly take a group of kids – with consent or even against their will – and assign them to either the charter or the traditional school option. The mix of kids in each group is truly random and checked to ensure that the two groups are statistically representative (using better than the usual measures) of the population. Then, we have to make sure that all other “non-treatment” factors are equivalent, including access to facilities, resources, etc. – that is, anything that we don’t consider to be a feature of the treatment itself. This is especially important if we want to know whether expanding elements of the treatment is likely to work for a representative population. That is a randomized, controlled trial.

[Slide: diagram of a true randomized controlled trial]
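To make that diagram concrete, here’s a minimal simulation sketch in Python (all names and numbers are hypothetical, not drawn from any actual study). The point: with genuine random assignment and a decent sample, the two groups come out balanced on observed and unobserved characteristics alike – which is precisely what the lottery designs below can’t promise.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical student population with one observed and one unobserved trait.
prior_score = rng.normal(500, 100, n)   # observed: prior achievement
motivation = rng.normal(0, 1, n)        # unobserved: family motivation, etc.

# True RCT: a coin flip assigns each student to charter (1) or district (0).
treat = rng.integers(0, 2, n)

# Balance check: group means should match on BOTH traits --
# including the one we never measured.
for name, x in [("prior score", prior_score), ("motivation", motivation)]:
    diff = x[treat == 1].mean() - x[treat == 0].mean()
    print(f"{name}: treated-minus-control difference = {diff:.3f}")
```

Run it and both differences hover near zero, even for “motivation,” which we never measured and never matched on. That’s the whole point of randomization.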

So then, what’s randomized in a randomized charter school study? Or lottery-based study?  One might sketch out a lottery-based study as follows:

[Slide: diagram of a lottery-based charter school study]

Here, the study is really only randomized at one point in a long, complicated sequence – the lottery itself. Students and families have to decide they want to enter the lottery – that they are interested in attending a charter school – which will ultimately affect the composition of the charter school’s enrollments. Then, among those selecting into the pool, some students are randomly chosen to attend the charters, alongside others randomly chosen from the same non-random pool of lottery participants, while the rest are randomly selected to go, well, somewhere else… with a group of peers non-randomly chosen to end up in that same somewhere else.

So, while the studies compare the achievement of kids randomly chosen to those randomly un-chosen (thus comparing only those who tried to get a charter slot), the kids are shuffled into settings that are anything but randomly assigned, containing potentially vastly different peer groups and a variety of other differences in setting. Add to this the likelihood of non-random student attrition, further altering peer groups over time.
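A small toy simulation makes the contrast plain (Python again; the “motivation” variable and the application model are invented for illustration, not estimates of anything):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
motivation = rng.normal(0, 1, n)   # unobserved family trait (hypothetical)

# Step 1 (NOT random): families choose whether to enter the lottery.
# Assume, for illustration, that more motivated families apply more often.
applies = rng.random(n) < 1 / (1 + np.exp(-motivation))

# Step 2 (random): a fair lottery among applicants only.
pool = np.flatnonzero(applies)
winners = rng.choice(pool, size=pool.size // 2, replace=False)
is_winner = np.zeros(n, dtype=bool)
is_winner[winners] = True
is_loser = applies & ~is_winner

print("mean motivation, applicants:     ", round(motivation[applies].mean(), 2))
print("mean motivation, non-applicants: ", round(motivation[~applies].mean(), 2))
# Winners and losers ARE comparable to each other...
print("winners minus losers:            ",
      round(motivation[is_winner].mean() - motivation[is_loser].mean(), 2))
# ...but both differ from the population, and the losers then scatter into
# whatever schools will take them, alongside the non-applicants above.
```

Winners and losers are comparable to each other – that’s the one thing the lottery buys – but both differ from the population, and only the winners land in a common setting.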

As such, I very much prefer these studies to be referred to as “lottery-based” rather than randomized or experimental. These studies are randomized at only one step in this process, potentially conflating setting/peer effects with treatment effects, thus substantially compromising policy implications.

As with those matching studies, the types of variables used to check and/or correct for peer composition and non-randomness of attrition are often too imprecise to be useful.

One fun alternative would be to pull a switch, whereby the charter teachers, their model, instructional strategies, etc. would be traded with the district schools’ teachers, model and strategies – a confirmatory test to see whether the charter model’s effects are actually transferable (assuming there were effects to begin with).

[Slide: diagram of the charter/district teacher and model swap test]

Clearly, I’m asking way too much to expect that charter school research, or most other program/intervention research in education, be based on real RCTs. That’s not going to happen. And I’m not convinced it would be that useful for informing policy anyway. But my point in this post is to make it clear that the difference between the types of matched student studies done by CREDO, for example, and the studies being (mis)characterized as “gold standard” randomized studies is far more subtle than many are willing to admit – and NEITHER IS WHAT IT’S REALLY CRACKED UP TO BE!

Dumbest “School Finance” Tweet Ever?

Critics say only public systems can focus 100% on the children, but vast majority of K-12 $$ goes to employees not kids bit.ly/SLrNUn

— AEI Education (@AEIeducation) December 18, 2012

Twisted Truths & Dubious Policies: Comments on the NJDOE/Cerf School Funding Report

Yesterday, we were blessed with the release of yet another manifesto (as reported here on NJ Spotlight) from what has become the New Jersey Department of Reformy Propaganda. To be fair, it has become increasingly clear of late that this is simply the new model for State Education Agencies (see NYSED propaganda here), with the current US Dept of Education often leading the way.

Notably, there’s little change in this report from a) the last one or b) the Commissioner’s state of the schools address last spring.

The core logic of the original report remains intact:

  1. That NJ has a problem – and that problem is  the achievement gap between low income and non-low income kids;
  2. That spending money on these kids doesn’t help – in fact it might just hurt – but it’s certainly a waste;
  3. Therefore, the logical solution to improving the achievement gap is to reduce funding to districts serving low income and non-English speaking kids and shift that funding to others.

Here’s a quick walk-through…

The Crisis?

The new report, like the previous one, zeroes in on the problem of New Jersey’s achievement gap between low income and non-low income kids. Now, the reason the recent reports have focused so heavily on the achievement gap is that in the early days of this administration, the rhetoric was focused on the system as a whole being academically bankrupt. The simple response was to point out that NJ schools, by nearly any outcome measure, stack up quite favorably against nearly any other state’s. So they had to back off that rhetoric and move to the achievement gap thing. Here’s one of the justifying statements in the current report.

“Likewise, on the 2011 administration of the National Assessment of Educational Progress, New Jersey ranked 50th out of 51 states (including Washington, D.C.) in the size of the achievement gap between high- and low-income students in eighth grade reading.”

Of course, as I’ve pointed out again and again, and will reiterate below, this is an entirely bogus comparison.

The Proposed Solution?

Like the previous funding report from last winter, the primary recommendations in this new manifesto are to reduce funding adjustments for low income and non-English speaking kids, because we know they don’t need that funding and certainly couldn’t and obviously haven’t used it well. The report did back off from proposing one of the oldest tricks in the book for cutting aid to the poor – basing funding on average daily attendance – but likely backed off because they simply lack the legal authority to propose this change in this context, not out of any moral/ethical principle.

The Rationale?

The most bizarre section of the new report appears at the bottom of the second page. Here, the report’s author makes several bold, outlandish, unjustified and mostly factually incorrect statements, with little or no justification provided for any of the boldly stated points. It’s nearly as ridiculous as The Cartel.

Here are two of my favorite paragraphs:  

“The conclusion is inescapable: forty years and tens of billions of dollars later, New Jersey’s economically disadvantaged students continue to struggle mightily. There are undoubtedly many reasons for this policy failure, but chief among them is the historically dubious view that all we need to do is design an education funding formula that would “dollarize” a “thorough and efficient system of free public school” and educational achievement for every New Jersey student would, automatically and without more, follow.” (emphasis added)

“Of course, schools must have the resources to succeed. To the great detriment of our students, however, we have twisted these unarguable truths into the wrongheaded notion that dollars alone equal success. How well education funds are spent matters every bit as much, and probably more so, than how much is spent. New Jersey has spent billions of dollars in the former-Abbott districts only to see those districts continue to fail large portions of their students. Until we as a state are willing to look beyond the narrow confines of the existing funding formula – tinkering here, updating there – we risk living Albert Einstein’s now infamous definition of insanity: doing the same thing over and over again and expecting a different result.”

First, I would point out that opening with the line “the conclusion is inescapable” is one of the first red flags that most of what follows will be a load of BS. But that aside… let’s take a look at some of these other statements. I’m not sure who the Commissioner thinks is advancing the “historically dubious view that all we need…blah…blah… blah… dollarize … blah… blah,” but I would point out that the central issue here is that a well organized, appropriately distributed, sufficiently funded state school finance system provides the necessary underlying condition for getting the job done – achieving the desired standards, etc. (Besides, nothing could ever equal the reformy dubiousness of this graph… or these!)

This isn’t about arguing that money in and of itself solves all ills. But money is clearly required. It’s a prerequisite condition. More on that below. The claim that others are advancing such a historically dubious view is absurd. Nor is that view the basis for the current state school finance system, or the court order that led to the previous (not current) system! [background on current system here]

Equally ridiculous is the phrase about these “unarguable truths.” Again, when I see a phrase like this, my BS detector nearly explodes. Again, I’m not sure who the commissioner thinks is advancing some “wrongheaded notion” that “dollars alone equal success,” but I assure you that while dollars alone don’t equal success, equitable and adequate resources are a necessary underlying condition for success.

Indeed, the current state school finance system is built on attempts to discern the dollars needed to provide the necessary programs and services to meet the state outcome objectives [I’ll set aside the junk comparisons to Common Core costs listed in the report for now]. But the focus isn’t/wasn’t on the dollars, but rather the programs and services – which, yes… ultimately do have to be paid for with… uh… dollars.

Under the prior Abbott litigation and resulting funding distributions, the focus was entirely on the specific programs and services required for improving outcomes of children in low income communities (early childhood education programs, adequate facilities, etc.). In fact, that was one of the persistent concerns among Abbott opponents… that the programs/services must be provided under the court mandate, regardless of their cost (not that the dollars must be provided regardless of their use) and in place of any broader, more predictable systematic formula. So, perhaps the answer is to go back to the Abbott model?

Ultimately, to establish a state school finance formula (which is a formula for distributing aid), you’ve got to “dollarize” this stuff. But that doesn’t by any stretch of the imagination lead to the assumption that the dollars create – directly – regardless of use – the outcomes. That’s just ridiculous. And the report provides no justification behind its attack on this mythical claim.

In fact, these statements convey a profound ignorance of even the recent history of school finance in New Jersey.

The Reality!

Now that I’m done with that, let’s correct the record on a few points.

New Jersey has an “average” achievement gap given its income gap

I’m not sure how many times I’ll have to correct the current NJDOE and its commissioner on their repeated misrepresentation of NAEP achievement gap data. This is getting old and it’s certainly indicative that the current administration is unconcerned with presenting any remotely valid information on the state of New Jersey schools. Given what we’ve seen in previous presentations I guess I shouldn’t be surprised.

In any case, here’s my most recent run of the data comparing income gaps and NAEP outcome gaps. Across the horizontal axis in this graph is the difference in income between those above the reduced lunch income threshold and those below the free lunch income threshold. New Jersey and Connecticut have among the largest gaps in income between these two groups. Keep in mind that the same income thresholds are used across all states, despite the fact that the cost of comparable quality of life varies quite substantially (nifty calculator here). On the vertical axis are the gaps in NAEP scores between the two groups.

 Figure 1. Income Gaps and Achievement Gaps


As we can see, states with larger gaps in income between the groups also have larger gaps in scores between the two groups. Quite honestly, this is not astounding. It’s dumb logic. And that’s why it’s so inexcusable for Cerf & Co. to keep returning to this intellectually & analytically dry well.

Most importantly, NJ’s gap is right on the line. That is, given its income gap, NJ falls right where we would expect – on the line. NJ’s income-related achievement gap is right in line with expectations!
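For what it’s worth, “on the line” is just a statement about a regression residual. Here’s a sketch with placeholder numbers (Python; these are NOT the actual NAEP or income figures behind Figure 1, just invented values to show the computation):

```python
import numpy as np

# Placeholder (income_gap, score_gap) pairs for a handful of states --
# invented values standing in for the data plotted in Figure 1.
income_gap = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0])
score_gap = np.array([18.0, 21.0, 23.0, 27.0, 29.0, 33.0, 35.0])

# Fit the line: the score gap we'd expect given a state's income gap.
slope, intercept = np.polyfit(income_gap, score_gap, 1)

# A state with a large income gap AND a large raw score gap, like NJ:
nj_income_gap, nj_score_gap = 48.0, 34.0
expected = slope * nj_income_gap + intercept
residual = nj_score_gap - expected
print(f"expected gap {expected:.1f}, actual {nj_score_gap:.1f}, residual {residual:+.1f}")
# A residual near zero is what "right on the line" means: a big raw gap,
# but almost exactly the gap predicted by the income gap alone.
```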

Is that good enough? Well, not really. There’s still work to be done. But the bogus claim that NJ has the 2nd largest achievement gap has to stop.

New Jersey has posted impressive NAEP gains given its spending increases

Now let’s take a look at how disadvantaged kids in NJ have actually done on a few of the NAEP tests in recent years when compared to disadvantaged kids in similar states in the region.  The pictures pretty much tell the story.

Figure 2. NAEP 8th grade Math for Children Qualified for Free Lunch


Figure 3. NAEP 4th grade Reading for Children Qualified for Free Lunch


Figure 4. NAEP 8th Grade Math for Children of Maternal HS Dropouts


Even Eric Hanushek’s recent data make NJ look pretty darn good in terms of NAEP gains achieved relative to additional resources provided!

Figure 5. Relationship between Change in Per Pupil Spending and Overall NAEP Gain


Figure 6. Relationship between Change in % Spending per Pupil and Overall NAEP Gain


Figure 7. Relationship between Starting Point and Gain over Time


For more on these last few slides and the data from which they are generated, see this post.

Arguably, given these results, doing the same thing over and over again and expecting the SAME result might be entirely rational!

Money Matters & Equitable and Adequate Funding is a Necessary Underlying Condition for Success

Finally, a substantial body of literature exists to refute the absurd rhetoric and policy preferences of the NJDOE school funding report – most specifically the veiled assertion that reducing funding to low income children is the way to reduce the achievement gap.

In a recent report titled Revisiting the Age Old Question: Does Money Matter in Education? I review the controversy over whether, how and why money matters in education, evaluating the current political rhetoric in light of decades of empirical research.  I ask three questions, and summarize the response to those questions as follows:

Does money matter? Yes. On average, aggregate measures of per pupil spending are positively associated with improved or higher student outcomes. In some studies, the size of this effect is larger than in others and, in some cases, additional funding appears to matter more for some students than others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent – in other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.

Do schooling resources that cost money matter? Yes. Schooling resources which cost money, including class size reduction or higher teacher salaries, are positively associated with student outcomes. Again, in some cases, those effects are larger than others and there is also variation by student population and other contextual variables. On the whole, however, the things that cost money benefit students, and there is scarce evidence that there are more cost-effective alternatives.

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

While there may in fact be better and more efficient ways to leverage the education dollar toward improved student outcomes, we do know the following:

Many of the ways in which schools currently spend money do improve student outcomes.

When schools have more money, they have greater opportunity to spend productively. When they don’t, they can’t.

Arguments that across-the-board budget cuts will not hurt outcomes are completely unfounded.

In short, money matters, resources that cost money matter and more equitable distribution of school funding can improve outcomes. Policymakers would be well-advised to rely on high-quality research to guide the critical choices they make regarding school finance.

Regarding the politicized rhetoric around money and schools, which has become only more bombastic and less accurate in recent years, I explain the following:

Given the preponderance of evidence that resources do matter and that state school finance reforms can effect changes in student outcomes, it seems somewhat surprising that not only has doubt persisted, but the rhetoric of doubt seems to have escalated. In many cases, there is no longer just doubt, but rather direct assertions that: schools can do more than they are currently doing with less than they presently spend; the suggestion that money is not a necessary underlying condition for school improvement; and, in the most extreme cases, that cuts to funding might actually stimulate improvements that past funding increases have failed to accomplish.

To be blunt, money does matter. Schools and districts with more money clearly have greater ability to provide higher-quality, broader, and deeper educational opportunities to the children they serve. Furthermore, in the absence of money, or in the aftermath of deep cuts to existing funding, schools are unable to do many of the things they need to do in order to maintain quality educational opportunities. Without funding, efficiency tradeoffs and innovations being broadly endorsed are suspect. One cannot tradeoff spending money on class size reductions against increasing teacher salaries to improve teacher quality if funding is not there for either – if class sizes are already large and teacher salaries non-competitive. While these are not the conditions faced by all districts, they are faced by many.

It is certainly reasonable to acknowledge that money, by itself, is not a comprehensive solution for improving school quality. Clearly, money can be spent poorly and have limited influence on school quality. Or, money can be spent well and have substantive positive influence. But money that’s not there can’t do either. The available evidence leaves little doubt: Sufficient financial resources are a necessary underlying condition for providing quality education.

There certainly exists no evidence that equitable and adequate outcomes are more easily attainable where funding is neither equitable nor adequate. There exists no evidence that more adequate outcomes will be attained with less adequate funding. Both of these contentions are unfounded and quite honestly, completely absurd.

Related sources:

Baker, B.D. (2012) Revisiting the Age Old Question: Does Money Matter in Education. Shanker Institute. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Baker, B.D., Welner, K. (2011) School Finance and Courts: Does Reform Matter, and How Can We Tell? Teachers College Record 113 (11)

How Modern School Finance/Education Policy Works: Lessons from New York

I’ll admit that the more I do this stuff – the more I write about today’s education policy environment, and especially the environment around school funding – the more cynical I get. And few states have done more to encourage my cynicism of late than New York. But I suspect that the tales from the trenches in many other states might be quite similar. So let me use New York as a prototype of the twists and turns and warped logic of modern state education policy. New York education policy has followed a four-step process:

Step 1: Slither out from court order by rigging low-ball foundation aid formula

As I noted in another recent post, several years back the New York Court of Appeals ordered that the state legislature provide sufficient funding (specifically to New York City) to achieve a “sound basic education,” which was ultimately equated with a “meaningful high school education.” The city and the governor’s office presented to the court alternative estimates of what that would cost. The state (governor/legislature/regents), as might be expected, sought a “less expensive” option. And the court largely took the state’s side. That is, the court ordered that the system be fixed, but largely (uncritically, but for some dissenting minority opinion) accepted the state’s proposal to fix it.

The state achieved their low-ball estimate by pulling a few classic tricks, some of which have been used in other states. First, the state based their minimum funding level on the average spending of existing districts meeting the state standards – but had set a relatively low bar for those standards (a bar most districts were already surpassing anyway). Then they chose to look only at the “instructional” share of current spending (lopping off a large chunk of spending that’s actually needed to operate a school). Rhode Island recently pulled the same garbage, but instead of looking at instructional spending for districts within Rhode Island, they used instructional spending in the neighboring states of Massachusetts, Connecticut and New Hampshire (okay… NH doesn’t border RI… does it… but don’t tell their Commissioner… ‘cuz including NH allowed them to bring the average down! See link above).

The final step in their low-ball analysis was to look only at the average spending of the lower-spending half of districts meeting the state standards – assuming those districts to be the “efficient” ones, better reflecting minimum “costs.” Of course, what this does in New York State is eliminate from the calculation nearly every district in the Rockland, Westchester, NYC and Long Island regions. So… the base level of funding is essentially the average instruction-only spending of lower-spending districts that have somewhat below average outcomes and lie somewhere between Syracuse and Buffalo. That makes sense, right? That should give us a reasonable ballpark cost for New York City, Mount Vernon or Yonkers, right?
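The ratchet effect of those stacked filters is easy to reproduce. A sketch with invented districts (Python; every dollar figure is a placeholder, not New York data):

```python
import numpy as np

rng = np.random.default_rng(3)

# 500 hypothetical districts: total spending per pupil, plus a low-bar
# "meets standard" flag that most districts clear anyway.
spending = rng.normal(16_000, 3_000, 500)
meets_standard = rng.random(500) < 0.8

instructional_share = 0.6   # assumed share of spending labeled "instructional"

step1 = spending[meets_standard]          # keep only "successful" districts
step2 = step1 * instructional_share      # lop off non-instructional spending
step3 = step2[step2 <= np.median(step2)]  # keep only the lower-spending half

print(f"all districts, total spending: ${spending.mean():>9,.0f}")
print(f"successful districts only:     ${step1.mean():>9,.0f}")
print(f"instruction-only:              ${step2.mean():>9,.0f}")
print(f"lower half of those:           ${step3.mean():>9,.0f}  <- the 'foundation' level")
```

In this toy example, each filter sounds defensible on its own, and together they cut the estimated “cost” of an adequate education roughly in half before anyone argues about a single real dollar.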

Even for my 2012-13 analyses below, the foundation level is set at only $6,570 per pupil – the assumed average instructional spending per pupil needed in New York State to achieve state standards. So then, how does that stack up against alternative cost estimates of what would actually be needed to achieve specific state outcome targets?

I don’t have time to explain the chart below in great detail, but I do provide complete analysis/explanation in this report on New York State school finance.

In short, what Figure 1 shows us, in PURPLE, is the foundation level – the target funding calculated to be needed by districts in each poverty quintile under the state’s own proposed remedy to its constitutional violation. The PURPLE is the amount of money a district would have under the foundation aid formula, as a combination of state aid and levying the minimum required local effort.

The blue bars come from a cost model produced a few years back by William Duncombe of Syracuse University, in which he used that model to estimate the average spending actually required to achieve a 90% proficiency rate on state assessments (where the average had drifted over time, making the 80% standard relatively meaningless – again, see the report). The red arrows show the gap between the estimated costs of reasonable outcome goals and the guaranteed funding under the foundation formula.

Figure 1:


The point here is simply to show a) how much the state low-balled the target funding using their approach vs. a more rigorous approach, and b) how those funding gaps increase quite dramatically for higher poverty districts. In fact, the target funding level is not that far off for low poverty districts, but it’s only slightly better than half of the cost of comparable outcomes for high poverty districts.

Step 2: Conjure annual excuses for why the state can’t afford to fund even its own low-balled targets for local districts

Given Figure 1 above, it would be bad enough even if the state did follow through and fund its formula. The formula itself was/is grossly insufficient, determined by bogus calculations and filtrations (exclusions) of data, all toward the end goal of generating the lowest possible politically palatable estimate of the cost of providing a sound basic education in New York.

But no… no… low-balling the cost wasn’t nearly far enough for the NY legislature and Gov(ernors) to go. The next step was to say – we can’t afford it! (They were saying this even before the economy tanked, and they set out a multiyear phase-in.) We can’t afford our own low-ball estimate (while decrying that the estimate was somehow actually overly generous?).

Did they cut back just a little from their target? Oh… say… give districts about 90% or 80% (uh… that would actually be a big cut) of what the formula said they needed? Nope. They went much deeper than that. In fact, as I showed in one recent post, as student population needs escalate (according to the state’s own Pupil Need Index), under-funding with respect to foundation targets grows in some cases to over $4,000 per pupil, and in New York City to over $3,000 per pupil.

Figure 2.


As I showed in that same post, among the most screwed large districts in the state, several receive from the state in general foundation aid only about half (or less) of what they should receive under the STATE’S OWN LOW-BALL FORMULA!

Figure 3.


Let’s be clear here. I’m not talking about shortfalls from the relatively high cost targets in that first graph. I’m talking about state aid shortfalls relative to the STATE’S OWN LOW-BALL Foundation Aid model – the model represented by the purple bars in the first graph. Note also that the state, in proposing this foundation model that it has subsequently underfunded, essentially declared that low-ball model to be the empirical manifestation of its own state constitutional obligation. It’s their own freakin’ definition of their constitutional obligation… and they’ve chosen to ignore it.

Step 3: Pretend that it’s all the teachers’ fault and use that as a basis for holding hostage additional funding that should have gone to high need districts years ago!

Oh… but it doesn’t end there!

Riding the national, Duncanian wave of new normalcy (which I’ve come to learn is an extreme form of innumeracy) & reformyness, the only possible cause of lagging achievement in New York State is bad teachers – greedy, overpaid teachers with fat pensions – and protectionist unions who won’t let us fire them. Clearly, the lagging performance of low income and minority districts in New York State has absolutely nothing at all to do with lack of financial resources under the low-balled aid formula that the state has chosen to not even half fund for the past 5 years? Nah… that couldn’t have anything to do with it. Besides, money certainly has nothing to do with providing the decent working conditions and pay which might be leveraged to recruit and retain teachers.

And we all know that if New York State’s average per pupil spending is high, or so the Gov proclaims, then spending clearly must be high enough in each and every one of the state’s high need districts! (Right… because averages always represent what everyone has and needs, right? Reformy innumeracy rears its ugly head again!)

So it absolutely has to be the fact that no teacher in NY has ever been evaluated at all, or fired for being bad even though we know for sure that at least half of them stink. The obvious solution is that they must be evaluated by egregiously flawed metrics – and we must ram those metrics down their throats.

In fact, the New York legislature and Governor even found it appropriate to hold hostage additional state aid if districts don’t adopt teacher evaluation plans compliant with the state’s own warped demands and ill-conceived policy framework.

As I understand it, legislation passed this past year actually tied receipt of state general aid to compliance with the state teacher evaluation mandate. That is, in order to receive any increase in state general/foundation aid over the prior year, a district would have to file, and have accepted, its teacher evaluation plan.

That’s it – we’ll take away their general state aid – their foundation aid – the aid they are supposed to be getting in order to comply with that court order of several years back. The aid they are constitutionally guaranteed under that order. I’m having some trouble accepting the supposed constitutional authority of a state legislature and governor to cut back general aid on this basis – where they’ve already failed to provide most of the aid they themselves identified as constitutionally adequate under court order? But I guess that’s for the New York Court system to decide.

If nothing else, it is thoroughly obnoxious, arbitrary and capricious, and grossly inequitable treatment. I hear the reformers (who understand neither math nor school finance) whine… but why… why is it inequitable to require similarly that poor and rich districts follow state teacher and principal evaluation guidelines? Setting aside the junk nature of that evaluation system and the bogus measures on which it rests (and the fact that the reformers’ fav-fab charters have largely, rightfully ignored the eval mandate), it is inequitable because districts serving higher poverty children stand to lose more money per child as a result of non-compliance. And they’ve already been squeezed.

And here’s how that plays out. As I understand it, if districts don’t comply by January, they face the threat of losing the small increase in state aid they received for the current year (compared to 11-12). So, they’d lose it retroactively, part way through this year. And guess what? Because higher need districts received a marginally greater increase in state aid, they’d lose more per pupil. But the gaps shown above already include that oh-so-generous increase! That’s right: the poorer you are, the bigger the financial penalty for non-compliance with the teacher evaluation mandate – and the bigger the financial hole the state has put you in to begin with!
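The perverse arithmetic deserves spelling out (a quick sketch with invented dollar figures, not actual district aid numbers):

```python
# Invented figures: aid per pupil last year (11-12) and this year (12-13).
# Higher need districts got marginally larger increases, so the retroactive
# penalty -- clawing back this year's increase -- costs them more per pupil,
# even though they remain furthest below their foundation targets.
districts = [
    ("low need suburb", 2_000, 2_050, 500),     # (name, 11-12, 12-13, shortfall)
    ("high need city",  8_000, 8_300, 4_000),
]

for name, prior, current, shortfall in districts:
    at_risk = current - prior   # the year-over-year increase at stake
    print(f"{name}: ${at_risk}/pupil at risk, on top of a ${shortfall}/pupil "
          f"gap below its own formula target")
```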

Figure 4. State aid Per Pupil Before and After Non-Compliance Penalty by Student Need


Figure 5. Compliance Penalty by Student Need

This recent article explains that Hempstead, already underfunded by the largest per pupil amount of any large district in the state, stands to lose another $3.5 million in aid if it does not come to agreement on a teacher evaluation plan. State general aid is for the general provision of education to these kids – to pay for enough teachers, classrooms, etc. It’s about the day-to-day operations of schools to ensure the provision of a sound basic education. This funding shouldn’t be held hostage over reformy whims.

Note that for many districts I have likely understated the amount of aid they would lose, because I have counted only changes to general foundation aid, including the “gap elimination adjustment” and partial restoration of those funds. (It would appear, for example, that the potential losses to Hempstead reported in the news are closer to that district’s total aid change, not just the foundation/GEA change.)

Step 4: Protect billions in state aid still being allocated to districts with far fewer additional student needs/costs

And let us not forget that New York State was one of the shining stars – a poster child – of my report with Sean Corcoran for the Center for American Progress, where we chronicled how states actually use their aid systems to make equity worse, not better. While the NY Gov and Legislature have continued to shed elephant tears (in purely political terms) about their fiscal dire straits, the state persists in protecting billions in direct state aid and indirect tax relief subsidies that largely support the state’s lower and lowest need local public school districts.

Figure 6 shows that if we look at state general aid as initially calculated for local districts by poverty (left panel), even after allocating that aid there remains a $1,100 per pupil gap in state and local revenue between higher and lower poverty districts. But after the state “tweaks” the general aid distribution to provide minimum aid to the wealthiest districts and increase aid to middle/upper middle class districts, and then adds on “tax relief” subsidies, the gap between higher and lower poverty districts increases to $2,300 per pupil. Yep – NY State is actually using billions in state funding to make the system less equitable! Read the report below for a more thorough explanation/analysis!

Figure 6. School Finance Pork in New York!


Baker, B.D., Corcoran, S.P. (2012) The Stealth Inequalities of School Funding: How Local Tax Systems and State Aid Formulas Undermine Equality. Washington, DC: Center for American Progress. http://www.americanprogress.org/wp-content/uploads/2012/09/StealthInequities.pdf

And that is how modern state education policy works!

It’s good to be King: More Misguided Rhetoric on the NY State Eval System

Very little time to write today, but I must comment on this NY Post article on the bias I’ve been discussing in the NY State teacher/principal growth percentile ratings. Sociologist Aaron Pallas of TC and economist Sean Corcoran of NYU express appropriate concerns about the degree of bias found and reported in the technical report provided by the state’s own consultant developing the models. And the article overall raises concern that these problems were simply blown off. I would, and have, put it more bluntly. Here’s my replay of events – quoting the parties involved:

First, the state’s consultants designing their teacher and principal effectiveness measures find that those measures are substantively biased:

Despite the model conditioning on prior year test scores, schools and teachers with students who had higher prior year test scores, on average, had higher MGPs. Teachers of classes with higher percentages of economically disadvantaged students had lower MGPs. (p. 1) https://schoolfinance101.com/wp-content/uploads/2012/11/growth-model-11-12-air-technical-report.pdf

But instead of questioning their own measures, they decide to give them their blessing and pass them along to the state as being “fair and accurate.”

The model selected to estimate growth scores for New York State provides a fair and accurate method for estimating individual teacher and principal effectiveness based on specific regulatory requirements for a “growth model” in the 2011-2012 school year. p. 40 https://schoolfinance101.com/wp-content/uploads/2012/11/growth-model-11-12-air-technical-report.pdf

The next step was for the Chancellor to take this misinformation and polish it up as pure spin as part of the power play against the teachers in New York City (who’ve already had the opportunity to scrutinize what is arguably a better but still substantially flawed set of metrics). The Chancellor proclaimed:

The student-growth scores provided by the state for teacher evaluations are adjusted for factors such as students who are English Language Learners, students with disabilities and students living in poverty. When used right, growth data from student assessments provide an objective measurement of student achievement and, by extension, teacher performance. http://www.nypost.com/p/news/opinion/opedcolumnists/for_nyc_students_move_on_evaluations_EZVY4h9ddpxQSGz3oBWf0M

Then send in the enforcers…. This statement came from a letter sent to a district that did decide to play ball with the state on the teacher evaluation regulations. The state responded that… sure… you can adopt the system of multiple measures you propose – BUT ONLY AS LONG AS ALL OF THOSE OTHER MEASURES ARE SUFFICIENTLY CORRELATED WITH OUR BIASED MEASURES… AND ONLY AS LONG AS AT LEAST SOMEONE GETS A BAD RATING.

The department will be analyzing data supplied by districts, BOCES and/or schools and may order a corrective action plan if there are unacceptably low correlation results between the student growth subcomponent and any other measure of teacher and principal effectiveness… https://schoolfinance101.wordpress.com/2012/12/05/its-time-to-just-say-no-more-thoughts-on-the-ny-state-tchr-eval-system/

So… what’s my gripe today? Well, in this particular NY Post article we have some rather astounding quotes from NY State Commissioner John King, given the information above. Now, the last time I talked about John King, he was strutting about NY with a handy new graph of completely fabricated information on how to improve educational productivity. So, what’s King up to now? Here’s how John King explained the potential bias in the measures, and how that bias a) is possibly not bias at all, and b) even if it is, isn’t that big a problem:

“It’s a question of, is this telling you something descriptive about where talent is placed? Or is it telling you something about the classroom effect [or] school effect of concentrations of students?” said King.

“This data alone can’t really answer that question, which is one of the reasons to have multiple measures — so that you have other information to inform your decision-making,” he added. “No one would say we should evaluate educators on growth scores alone. It’s a part of the picture, but it’s not the whole picture.”

So, in King’s view, the bias identified in the AIR technical report might just be a signal as to where the good teachers really are. Kids in schools with lower poverty, kids in schools with higher average starting scores and kids in schools with fewer children with disabilities simply have the better teachers. While there certainly may be some patterned sorting of teachers by their actual effect on test scores, a) this proposition is less plausible than the expectation of a classroom/setting effect, and b) making this assumption when we can’t really tease out cause is a highly suspect approach to teacher evaluation (reformy thinking at its finest!).
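And here’s the thing: you don’t need any talent-sorting story to produce the pattern in the AIR report. A toy simulation (Python, all parameters invented) in which every teacher is identical by construction still generates the bias:

```python
import numpy as np

rng = np.random.default_rng(11)
n_classrooms, class_size = 200, 25

# Classroom poverty rate drives a drag on measured growth that has nothing
# to do with the teacher: mobility, summer loss, out-of-school resources.
poverty = rng.random(n_classrooms)

growth_score = np.empty(n_classrooms)
for i in range(n_classrooms):
    teacher_effect = 0.0                 # every teacher identical, by construction
    context_effect = -5.0 * poverty[i]   # setting/peer drag, outside teacher control
    student_noise = rng.normal(0, 3, class_size)
    growth_score[i] = teacher_effect + context_effect + student_noise.mean()

r = np.corrcoef(poverty, growth_score)[0, 1]
print(f"correlation(classroom poverty, 'teacher' growth score) = {r:.2f}")
# Strongly negative -- the pattern flagged in the technical report -- with
# zero true variation in teacher effectiveness anywhere in the data.
```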

The kicker is in how King explains why the potential bias isn’t a problem. King argues that the multiple measures approach buffers against over-reliance on the growth percentiles. As he states so boldly – “it’s part of the picture, but it’s not the whole picture.”

The absurdity here is that KING HAS DECLARED TO LOCAL OFFICIALS THAT ALL OTHER MEASURES THEY CHOOSE TO INCLUDE MUST BE SUFFICIENTLY CORRELATED WITH THESE GROWTH PERCENTILE MEASURES! That’s precisely what the letter quoted above, sent to one local official, says! Even if this weren’t the case, the growth percentiles – which may wrongly classify teachers for factors outside their control – might carry disproportionate weight in determining teacher ratings (merely as a function of the extent of variation, most of which is noise and much of the remainder of which is biased). But when you require that all other measures be correlated with this suspect measure, you’ve stacked the deck so that ratings are substantially if not entirely built on a flawed foundation.
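The deck-stacking is mechanical, too. Continuing the same toy setup (hypothetical numbers, and an arbitrary 0.5 acceptance threshold standing in for whatever the department considers “unacceptably low”): if an “other measure” survives only when it correlates with the biased growth score, unbiased measures get tossed while similarly biased ones sail through.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
poverty = rng.random(n)
true_quality = rng.normal(0, 1, n)   # actual teacher quality (unknowable in practice)

# The state's anchor measure: quality plus a poverty-driven bias plus noise.
growth_score = true_quality - 5.0 * poverty + rng.normal(0, 1, n)

# Two candidate "other measures" a district might propose:
fair_obs = true_quality + rng.normal(0, 1, n)                    # unbiased
biased_obs = true_quality - 4.0 * poverty + rng.normal(0, 1, n)  # shares the bias

for name, measure in [("unbiased observation", fair_obs),
                      ("poverty-biased observation", biased_obs)]:
    r = np.corrcoef(growth_score, measure)[0, 1]
    verdict = "accepted" if r > 0.5 else "corrective action plan"
    print(f"{name}: r = {r:.2f} -> {verdict}")
```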

THIS HAS TO STOP. STATE OFFICIALS MUST BE CALLED OUT ON THIS RIDICULOUS CONTORTED/DECEPTIVE & OUTRIGHT DISHONEST RHETORIC!

 

Note: King also tries to play up the fact that at any level of poverty, there are some teachers  getting higher or lower ratings. This explanation ignores the fact that much of the remaining variation in teacher estimates is noise.  Some will get higher or lower ratings in a given year simply because of the noise/instability in the measures. These variations may be entirely meaningless.

Forget the $300m Deal! Let’s talk $3.4 billion (or more)!

Sometime last week or so, Sockpuppets for Ed Reform marched on City Hall in NY, demanding that the city and the teachers union come to a deal on a teacher evaluation system compliant with the state’s new regulations for such systems, so that the district could receive an approximately $300 million grant payment associated with implementation of that system. Well, actually, it was more about trying to enrage the public – to suggest that the evil teachers union in particular was at fault for holding hostage, and potentially losing, this supposedly massive sum of funding.

As one can see from the signs the SFER protesters were displaying, the protest was much less clearly articulated than I’ve described above. One would think, from looking at stuff like this: http://nyulocal.com/wp-content/uploads/2012/11/DSC_0841.jpg, that this protest was actually about obtaining funding for the district – funding that would provide for substantive and sustained improvement to district programs/services.

But hey, far be it from SFER to actually carry placards that are in any way accurate or precise (or to have any clue what they are talking about). At this particular event in NYC, they even convinced a 15-year-old that the fight was really about funding.

So, we’ve got a protest that is presented as being about funding, but is really about a teacher evaluation system driven by student test scores, being carried out by a group that clearly has little or no understanding of either.

You know, I would typically give a group of undergrads a break on stuff like this.  Hey, they’re undergrads and have time to learn/develop the discipline/understanding of these complex topics. Heck, I was anything but a disciplined undergrad myself.  But unfortunately, this group has thus far displayed to me the worst attributes of the most intellectually lazy of today’s college students – a persistent pattern of copying and pasting low quality content from web sites and presenting it as novel content of their own. It’s as if their placards, and their entire website was generated by lifting content from “reformy-pedia.”

So then, what is the real story on what’s goin’ on with Teacher Evaluation and School Funding in New York State?

The State Evaluation System/Guidelines

I’ve written several posts recently about the state metrics for teacher evaluation and the state department of education push to get districts on board. I also wrote about the letter from the Chancellor of the Board of Regents which appeared in the NY Post, encouraging NYC in particular to get on board with that $300m RAW deal!

In my humble opinion, no one should sign on to a deal to implement a teacher evaluation system under the current NYSED guidelines, given the evidence I’ve laid out over the past few weeks. No one. Just say NO.

First, the state’s consultants designing their teacher and principal effectiveness measures find that those measures are substantively biased:

Despite the model conditioning on prior year test scores, schools and teachers with students who had higher prior year test scores, on average, had higher MGPs. Teachers of classes with higher percentages of economically disadvantaged students had lower MGPs. (p. 1) https://schoolfinance101.com/wp-content/uploads/2012/11/growth-model-11-12-air-technical-report.pdf

But instead of questioning their own measures, they decide to give them their blessing and pass them along to the state as being “fair and accurate.”

The model selected to estimate growth scores for New York State provides a fair and accurate method for estimating individual teacher and principal effectiveness based on specific regulatory requirements for a “growth model” in the 2011-2012 school year. p. 40 https://schoolfinance101.com/wp-content/uploads/2012/11/growth-model-11-12-air-technical-report.pdf

The next step was for the Chancellor to take this misinformation and polish it up as pure spin as part of the power play against the teachers in New York City (who’ve already had the opportunity to scrutinize what is arguably a better but still substantially flawed set of metrics). The Chancellor proclaimed:

The student-growth scores provided by the state for teacher evaluations are adjusted for factors such as students who are English Language Learners, students with disabilities and students living in poverty. When used right, growth data from student assessments provide an objective measurement of student achievement and, by extension, teacher performance. http://www.nypost.com/p/news/opinion/opedcolumnists/for_nyc_students_move_on_evaluations_EZVY4h9ddpxQSGz3oBWf0M

Then send in the enforcers…. This statement came from a letter sent to a district that did decide to play ball with the state on the teacher evaluation regulations. The state responded that… sure… you can adopt the system of multiple measures you propose – BUT ONLY AS LONG AS ALL OF THOSE OTHER MEASURES ARE SUFFICIENTLY CORRELATED WITH OUR BIASED MEASURES… AND ONLY AS LONG AS AT LEAST SOMEONE GETS A BAD RATING.

The department will be analyzing data supplied by districts, BOCES and/or schools and may order a corrective action plan if there are unacceptably low correlation results between the student growth subcomponent and any other measure of teacher and principal effectiveness… https://schoolfinance101.wordpress.com/2012/12/05/its-time-to-just-say-no-more-thoughts-on-the-ny-state-tchr-eval-system/

This is a raw deal, whether attached to what appears to be a pretty big bribe or not. And quite honestly, while $300 million is nothing to sneeze at, it pales in comparison to what the city schools are actually owed under the state’s own proposal for how it would fund its schools to comply with a court order of nearly a decade ago.

THE REAL ISSUE in NY State

Meanwhile, at the other end of the state – well, sort of – a different protest was going on. This protest, in Albany, actually was about funding: the fact that the state of New York has repeatedly cut state aid to local public school districts in each of the past few years, has systematically cut more per pupil funding from districts serving needier student populations, and has never once come close to providing the funding levels that the state’s own funding formula suggests are needed (actually, were needed back in 2007!).

Here’s a quick run-down on the state of school funding in New York:

  1. New York continues to maintain one of the least equitable school finance systems in the country, where districts serving higher concentrations of children in poverty have systematically less state and local revenue per pupil.
  2. New York State accomplishes these patterns of egregious disparity not merely by lack of effort, but by actually allocating substantial state resources – disproportionate state resources – toward buying down the tax rates of the state’s wealthiest districts and making other politically convenient state aid allocations to economically advantaged districts, at the expense of children in poverty.
  3. Even though the state was ordered by the NY court of appeals nearly a decade ago to provide adequate resources to children attending high need districts, and even though the court accepted the state’s own proposed funding formula to meet that goal (which was much lower than more rigorously determined spending targets), the state has chosen to not even come close to funding those targets and in recent years has systematically cut more funding from children with greater needs.

So, how does this all affect districts across New York State, and NYC in particular? I’m going to set a really low bar here for my comparisons. In response to the court order in the Campaign for Fiscal Equity case, the state of New York proposed a new school finance formula – a foundation aid formula – to begin implementation in 2007. It was actually a pretty lame, relatively low-balled funding formula to begin with, as explained here!

But even that low-balled estimate of what districts were supposed to get has never come close to being fully funded. Several large districts, including Albany, for example, receive in 2012-13 less than half of the state aid they are supposed to receive were the formula implemented.

The formula provides a target level of funding for each district based on student needs and regional costs. Then, the formula determines the share of that target funding that should come from the state. Then, the formula as actually implemented ignores all of that and provides a marginal increase or decrease (over what districts have historically received), maintaining the persistent inequities of the system.
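For readers unfamiliar with foundation aid, the arithmetic the previous paragraph describes looks roughly like this (a generic Python sketch; the need/cost weights are invented, the $6,570 base is borrowed from the figure cited earlier, and none of this is the actual New York statute):

```python
def foundation_aid(base, need_index, regional_cost, local_capacity_share):
    """Generic foundation-formula arithmetic: an adequacy target adjusted
    for student needs and regional costs, minus expected local effort."""
    target = base * need_index * regional_cost
    expected_local = target * min(local_capacity_share, 1.0)
    state_aid = max(target - expected_local, 0.0)
    return target, state_aid

# Two illustrative districts (all parameters invented):
for name, need, cost, capacity in [("low need suburb", 1.0, 1.3, 0.9),
                                   ("high need city", 1.8, 1.3, 0.3)]:
    target, aid = foundation_aid(6_570, need, cost, capacity)
    print(f"{name}: target ${target:,.0f}/pupil, state share ${aid:,.0f}/pupil")

# The complaint in the text: New York computes a target like this, then
# ignores it and hands out marginal changes over prior-year aid instead.
```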

The first figure below shows the difference between actual state foundation aid per pupil (after applying this trick they refer to as the gap elimination adjustment) and the aid calculated to be needed according to THE STATE’S OWN FORMULA for addressing regional costs and student needs. Districts are organized from low need (left) to high need (right) using the state’s own pupil need index. Bubble size indicates district enrollment. NYC is the BIG ONE! And we can see, by eyeballing the middle of that bubble, that NYC is being shorted between $3,000 and $4,000 per pupil. At 1 million kids, that’s about $3.4 billion… each year… every year… over time. No, not a $300m implementation grant, but $3.4 billion in annual operating funds. Yeah… the stuff that actually provides for smaller class sizes, decent teacher pay, up-to-date materials, supplies and equipment, and arts, music and all that other stuff!

[Figure: gaps between actual foundation aid and formula-calculated aid per pupil, by pupil need index; bubble size = enrollment]

The table below provides a closer look at districts with the largest funding gap between what the formula calculates is needed and what districts actually receive in state aid.

[Table: districts with the largest gaps between formula-calculated and actual state aid]

So, instead of talking about a one-shot $300m bribe to implement a bad system based on bad data – at a cost that may exceed the amount of the grant to begin with – perhaps it would make more sense to focus on that $3.4 billion deal! You know, the one state officials themselves promised in response to that court order all those years ago.

And when we do start taking more seriously this much bigger funding issue, don’t forget to send me a cool lookin’ knit protest hat!

Readings

Policy Brief on State Aid in New York (Summer 2011) NY Aid Policy Brief_Fall2011_DRAFT6

Baker, B.D., Welner, K.G. (2012) Evidence and Rigor: Scrutinizing the Rhetorical Embrace of Evidence-based Decision-making. Educational Researcher 41 (3) 98-101

Baker, B.D., Welner, K. (2011) School Finance and Courts: Does Reform Matter, and How Can We Tell? Teachers College Record 113 (11)

Baker, B.D., Corcoran, S.P. (2012) The Stealth Inequalities of School Funding: How Local Tax Systems and State Aid Formulas Undermine Equality. Washington, DC: Center for American Progress. http://www.americanprogress.org/wp-content/uploads/2012/09/StealthInequities.pdf

Baker, B.D., Sciarra, D., Farrie, D. (2012) Is School Funding Fair? Second Edition, June 2012. http://schoolfundingfairness.org/National_Report_Card_2012.pdf

Baker, B.D. (2012) Revisiting the Age Old Question: Does Money Matter in Education? Shanker Institute. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Baker, B.D., Welner, K.G. (2011) Productivity Research, the U.S. Department of Education, and High-Quality Evidence. Boulder, CO: National Education Policy Center. http://nepc.colorado.edu/publication/productivity-research

Friday Thoughts on Data, Assessment & Informed Decision Making in Schools

Some who read this blog might assume that I am totally opposed, in any and all circumstances, to using data in schools to guide decision-making. Despite my frequent public cynicism, I assure you that I believe much of the statistical information we collect on and in schools and school systems can provide useful signals regarding what’s working and what’s not, and may provide more ambiguous signals warranting further exploration – through both qualitative information gathering (observation, etc.) and additional quantitative information gathering.

My personal gripe is that thus far – especially in public policy – we’ve gone about it all wrong. Pundits and politicians seem to have an intense desire to impose certainty where there is little or none, and to impose rigid frameworks with precise goals which are destined to fail (or to make someone other than the politician look as if they’ve failed).

Pundits and politicians also feel the intense desire to over-sample the crap out of our schooling system – taking annual measurements on every child over multiple weeks of the school year, when strategic sampling of selected testing items across samples of students and settings might provide more useful information at lower cost and be substantially less invasive (NAEP provides one useful example). To protect the health of our schoolchildren, we don’t make them all walk around all day with rectal thermometers hanging out of… well… you know? Nor do political pollsters attempt to poll 100% of likely voters. Nor should we feel the necessity to have all students take all of the assessments all of the time, if our goal is to ensure that the system is getting the job done/making progress.

In my view, a central reason for testing and measurement in schools is what I would refer to as system monitoring, where system monitoring is best conducted in the least intrusive and most cost-effective way – such that the monitoring itself does not become a major activity of the system! We just need enough sampling density in our assessments to generate sufficiently precise estimates at each relevant level of the system.
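That “sampling density” point is ordinary survey statistics. Here’s a sketch (Python, invented numbers) of NAEP-style matrix sampling – estimating system-level performance from a fraction of students, each seeing a fraction of the items, instead of testing everyone on everything:

```python
import numpy as np

rng = np.random.default_rng(2)
n_students, n_items = 100_000, 60
ability = rng.normal(0.6, 0.15, n_students).clip(0, 1)  # P(correct) per student

# Census testing: every student answers every item.
census = rng.random((n_students, n_items)) < ability[:, None]
print(f"census estimate of system performance: {census.mean():.4f}")

# Matrix sampling: 5% of students, each given a random 10-item block.
sample = rng.choice(n_students, size=n_students // 20, replace=False)
blocks = rng.random((sample.size, 10)) < ability[sample, None]
estimate = blocks.mean()
std_err = blocks.mean(axis=1).std(ddof=1) / np.sqrt(sample.size)
print(f"matrix-sample estimate:                {estimate:.4f} "
      f"(approx. std. error {std_err:.4f})")
# Nearly the same system-level answer at about 1% of the item responses --
# the tradeoff being that no individual child gets a reportable score.
```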

I know there are those who would respond that testing everyone every year ensures that no kids fall through the cracks. If we did it my less intrusive way… kids who weren’t given all test questions in math in a given year might fall through some hypothetical math crack somewhere. But it is foolish to assume that NCLB-every-student-every-year testing regimes actually solve that problem. Further, high stakes testing with specific cut scores, either for graduation or grade promotion, violates one of the most basic tenets of statistical measurement of student achievement – that these measures are not perfectly precise. They can’t identify exactly where that crack is, or which kid actually fell through it! One can’t select a cut score and declare that the child one point above that score (who got one more question correct on that given day) is ready (with certainty) for the next grade (or to graduate) and that the child one point below is not. In all likelihood these two children are not different at all in their actual “proficiency” in the subject in question. We might be able to say – by thoughtful and rigorous analysis – that on average, students who got around this score in one year were likely to get a certain score in a later year, and perhaps even more likely to make it beyond remedial course work in college. And we might be able to determine whether students attending a particular school or participating in a particular program are more or less likely (yeah… probability again) to succeed in college.
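Here’s a minimal sketch of that cut-score problem, again with invented numbers – a cut score of 300 and a standard error of measurement of 10 points are assumptions, not any real test’s values:

```python
import numpy as np

# Invented, illustrative values: cut score 300, SEM of 10 points.
rng = np.random.default_rng(1)
cut, sem = 300, 10

true = rng.normal(300, 30, size=200_000)               # true proficiency
observed = true + rng.normal(0, sem, size=true.size)   # test-day score

just_above = true[(observed >= cut) & (observed < cut + 2)]
just_below = true[(observed >= cut - 2) & (observed < cut)]

print("P(truly below cut | scored just above):",
      round((just_above < cut).mean(), 2))
print("P(truly at/above cut | scored just below):",
      round((just_below >= cut).mean(), 2))
```

Under those assumptions, nearly half the kids scoring a point or two on either side of the cut are on the “wrong” side relative to their true proficiency – the two children a point apart really are statistically indistinguishable.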

Thoughtful analysis, and more importantly thoughtful USE, of testing data in schools requires a healthy respect for what those numbers can and cannot tell us… and a nuanced understanding that the numbers typically include a mix of non-information (noise: unexplainable, non-patterned variation), good information (true signal) and perhaps misinformation (false signal, or bias: variation caused by something other than what we think it’s caused by).
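For the concretely minded, here’s a toy decomposition of that mix. The variance shares are made up solely for illustration:

```python
import numpy as np

# Toy decomposition of an observed school measure into the three
# pieces named above. All variances are invented for illustration.
rng = np.random.default_rng(2)
n = 5_000

signal = rng.normal(0, 1.0, n)    # true signal: what we hope to measure
poverty = rng.normal(0, 1.0, n)   # an outside factor
bias = 0.5 * poverty              # false signal riding on that factor
noise = rng.normal(0, 1.5, n)     # unexplainable, non-patterned churn

observed = signal + bias + noise
total = np.var(observed)
print("signal share:", round(np.var(signal) / total, 2))
print("bias share:  ", round(np.var(bias) / total, 2))
print("noise share: ", round(np.var(noise) / total, 2))
```

In this toy setup, almost two-thirds of the variation in the observed measure is noise, and part of what looks like signal is really the bias component riding along with it.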

These issues apply generally to our use of student assessment data in schools and also apply specifically to an area I discuss often on this blog – statistical evaluation of teacher influence on tested student outcomes.

I was pleased to see the Shanker Blog column by Doug Harris a short while back in which Doug presented a more thoughtful approach to integrating value-added estimates into human resource management in the schooling context. Note that Doug’s argument is not new at all, nor is it really his own unique view. I first heard this argument in a presentation by Steve Glazerman (of Mathematica) at Princeton a few years ago. Steve also used the noisy medical screening comparison to explain the use of known-to-be-noisy information to assist in making more efficient decisions/taking more efficient steps in diagnosis. That is, with appropriate respect for the non-information in the data, we might actually find ways to use that information productively.

Last spring, I submitted an article (still under review) in which I, along with my coauthors Preston Green and Joseph Oluwole, explained:

As we have explained herein, value-added measures have severe limitations when attempting even to answer the narrow question of the extent to which a given teacher influences tested student outcomes. Those limitations are sufficiently severe such that it would be foolish to impose on these measures, rigid, overly precise high stakes decision frameworks.  One simply cannot parse point estimates to place teachers into one category versus another and one cannot necessarily assume that any one individual teacher’s estimate is necessarily valid (non-biased).  Further, we have explained how student growth percentile measures being adopted by states for use in teacher evaluation are, on their face, invalid for this particular purpose.  Overly prescriptive, overly rigid teacher evaluation mandates, in our view, are likely to open the floodgates to new litigation over teacher due process rights, despite much of the policy impetus behind these new systems supposedly being reduction of legal hassles involved in terminating ineffective teachers.

This is not to suggest that any and all forms of student assessment data should be considered moot in thoughtful management decision making by school leaders and leadership teams. Rather, that incorrect, inappropriate use of this information is simply wrong – ethically and legally (a lower standard) wrong. We accept the proposition that assessments of student knowledge and skills can provide useful insights both regarding what students know and potentially regarding what they have learned while attending a particular school or class. We are increasingly skeptical regarding the ability of value-added statistical models to parse any specific teacher’s effect on those outcomes. Further, the relative weight in management decision-making placed on any one measure depends on the quality of that measure and likely fluctuates over time and across settings. That is, in some cases, with some teachers and in some years, assessment data may provide leaders and/or peers with more useful insights.  In other cases, it may be quite obvious to informed professionals that the signal provided by the data is simply wrong – not a valid representation of the teacher’s effectiveness.

Arguably, a more reasonable and efficient use of these quantifiable metrics in human resource management might be to use them as a knowingly noisy pre-screening tool to identify where problems might exist across hundreds of classrooms in a large district. Value-added estimates might serve as a first step toward planning which classrooms to observe more frequently. Under such a model, when observations are completed, one might decide that the initial signal provided by the value-added estimate was simply wrong. One might also find that it produced useful insights regarding a teacher’s (or group of teachers’) effectiveness at helping students develop certain tested algebra skills.
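A minimal sketch of that two-stage logic, with invented noise levels – value-added as the noisy first pass, observation as the more reliable (and more expensive) second look:

```python
import numpy as np

# Invented parameters: VAM = truth + large noise; observation = truth
# + smaller noise. Bottom-decile VAM estimates trigger a closer look.
rng = np.random.default_rng(3)
n = 10_000

true_eff = rng.normal(0, 1.0, n)
vam = true_eff + rng.normal(0, 1.5, n)   # noisy pre-screen
obs = true_eff + rng.normal(0, 0.5, n)   # costlier, more reliable look

flagged = vam < np.quantile(vam, 0.10)
print("share of flagged whose observation looks average or better:",
      round((obs[flagged] > 0).mean(), 2))
print("share of flagged who are truly bottom-decile:",
      round((true_eff[flagged] < np.quantile(true_eff, 0.10)).mean(), 2))
```

Under these assumptions the pre-screen is useful but frequently wrong: a nontrivial share of flagged teachers look fine on closer inspection, which is precisely why the estimate should start a conversation rather than end one.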

School leaders or leadership teams should clearly have the authority to make the case that a teacher is ineffective and that the teacher, even if tenured, should be dismissed on that basis. It may also be the case that the evidence would actually include data on student outcomes – growth, etc. The key, in our view, is that the leaders making the decision – indicated by their presentation of the evidence – would show that they have used information reasonably to make an informed management decision. Their reasonable interpretation of relevant information would constitute due process, as would their attempts to guide the teacher’s improvement on measures over which the teacher actually had control.

By contrast, due process is violated where administrators/decision makers place blind faith in the quantitative measures, assuming them to be causal and valid (attributable to the teacher) and applying arbitrary and capricious cutoff-points to those measures (performance categories leading to dismissal).   The problem, as we see it, is that some of these new state statutes require these due process violations, even where the informed, thoughtful professional understands full well that she is being forced to make a wrong decision. They require the use of arbitrary and capricious cutoff-scores. They require that decision makers take action based on these measures even against their own informed professional judgment.

My point is that we can have thoughtful, data informed (NOT DATA DRIVEN) management in schools. We can and should! Further, we can likely have thoughtful data informed management (system monitoring) through far less intrusive methods than currently employed – taking advantage of advancements in testing and measurement, sampling design etc. But we can only take these steps if we recognize the limits of data and measurement in our education systems.

Unfortunately, as I see it, current policy efforts enforcing the misuse of assessment data (as illustrated here, here and here) and the misuse of estimates of teacher effectiveness based on those data (as illustrated here) will likely do far more harm than good. And I don’t see things turning the corner any time soon.

Until then, I may just have to stick to my current message of Just say NO!

It’s time to just say NO! More thoughts on the NY State Tchr Eval System

This post follows up on two recent posts. In the first, I criticized consultants to the State of New York for finding substantial patterns of bias in their estimates of principal (correction: school-aggregate) and teacher (correction: classroom-aggregate) median growth percentile scores, yet still declaring those scores fair and accurate. In the second, I criticized the Chancellor of the Board of Regents for her editorial attempting to strong-arm NYC into moving forward on an evaluation system adopting those flawed metrics – and declaring the metrics to be “objective” (implying both fair and accurate).

Let’s review. First, the AIR report on the median growth percentiles found, among other biases:

Despite the model conditioning on prior year test scores, schools and teachers with students who had higher prior year test scores, on average, had higher MGPs. Teachers of classes with higher percentages of economically disadvantaged students had lower MGPs. (p. 1)

In other words… if you are a teacher who so happens to have a group of students with higher initial scores, you are likely to get a higher rating, whether that difference is legitimately associated with your teaching effectiveness or not. And, if you are a teacher with more economically disadvantaged kids, you’re likely to get a lower rating. That is, the measures are biased – modestly – on these bases.

Despite these findings, the authors of the technical report chose to conclude:

The model selected to estimate growth scores for New York State provides a fair and accurate method for estimating individual teacher and principal effectiveness based on specific regulatory requirements for a “growth model” in the 2011-2012 school year. (p. 40)

I provide far more extensive discussion here!  But even a modest bias across the system as a whole can indicate the potential for substantial bias for underlying clusters of teachers serving very high poverty populations or very high or very low prior scoring students. In other words, THE MEASURE IS NOT ACCURATE – AND BY EXTENSION – IS NOT FAIR!!!!! Is this not obvious enough?
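To see why a “modest” systemwide bias is no comfort, consider this sketch, in which the size of the poverty penalty and the noise level are invented for illustration:

```python
import numpy as np

# Invented setup: a teacher's MGP equals true effectiveness plus noise,
# minus a small penalty tied to class poverty.
rng = np.random.default_rng(4)
n = 50_000

poverty = rng.uniform(0, 1, n)    # share of class that is low-income
true_eff = rng.normal(0, 1.0, n)
mgp = true_eff - 0.4 * (poverty - 0.5) + rng.normal(0, 1.0, n)

print("systemwide corr(MGP, poverty):",
      round(np.corrcoef(mgp, poverty)[0, 1], 2))   # looks "modest"

bottom = mgp < np.quantile(mgp, 0.10)              # "ineffective" label
print("P(bottom label | highest-poverty classes):",
      round(bottom[poverty > 0.9].mean(), 3))
print("P(bottom label | lowest-poverty classes):",
      round(bottom[poverty < 0.1].mean(), 3))
```

A systemwide correlation of about −0.08 sounds ignorable, yet in this toy system teachers of the highest-poverty classes are roughly half again as likely as their lowest-poverty peers to draw the “ineffective” label.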

The authors of the technical report were wrong – technically wrong – and I would argue morally and ethically wrong in providing NYSED their endorsement of these measures!  You just don’t declare outright, when your own analyses show otherwise, that a measure [to be used for labeling people] is fair and accurate!  [setting aside the general mischaracterization that these are measures of “teacher and principal effectiveness”]

Within a few days after writing this post, I noticed that Chancellor Merryl Tisch of the NY State Board of Regents had posted an op-ed in the NY POST attempting to strong-arm an agreement on a new teacher evaluation system between NYC teachers and the city. In the op-ed, the Chancellor opined:

The student-growth scores provided by the state for teacher evaluations are adjusted for factors such as students who are English Language Learners, students with disabilities and students living in poverty. When used right, growth data from student assessments provide an objective measurement of student achievement and, by extension, teacher performance.

As I noted in my post the other day, one might quibble that Chancellor Tisch has merely stated that the measures are “adjusted for” certain factors and has not claimed that those adjustments actually work to eliminate bias – which the technical report indicates THEY DO NOT. Further, she has merely declared that the measures are “objective,” not that they are accurate or precise. Personally, I don’t find this deceitful propaganda at all comforting! Objective or not – if the measures are biased, they are not accurate, and if they are not accurate, they are, by extension, not fair.

Sadly, the story of misinformation and disinformation doesn’t stop here. It only gets worse! I received a copy of a letter yesterday from a NY school district that had its teacher evaluation plan approved by NYSED. Here is a portion of the approval letter:

NYSED Letter

Now, I assume this language to be boilerplate. Perhaps not. I’ve underlined the good stuff. What we have here is NYSED threatening that it may enforce a corrective action plan on the district if the district uses any other measures of teacher or principal effectiveness that are not sufficiently correlated WITH THE STATE’S OWN BIASED MEASURES OF PRINCIPAL AND TEACHER EFFECTIVENESS!

This is the icing on the cake! This is sick, warped, wrong! Consultants to the state find that the measures are biased, and then declare them “fair and accurate.” The Chancellor spews propaganda that reliance on these measures must proceed with all deliberate speed (or ELSE!!!!!!!). Then the Chancellor’s enforcers warn individual district officials that they will be subjected to mind control – excuse me – departmental oversight – if they dare to present their own observational or other ratings of teachers or principals that don’t correlate sufficiently with the state-imposed, biased measures.

I really don’t even know what to say anymore??????????

But I think it’s time to just say no!


When Dummy Variables aren’t Smart Enough: More Comments on the NJ CREDO Study

This is a brief follow-up on the NJ CREDO study, which I wrote about last week when it was released. The major issues with that study were addressed in my previous post, but here I raise an additional, non-trivial issue that plagues much of our education policy research. The problem I raise today not only plagues the CREDO study (largely through no real fault of their own… but they need to recognize the problem), but also plagues many/most state and/or city level models of teacher and school effectiveness.

We’re all likely guilty at one point in time or another – guilty of using dummy variables that just aren’t precise enough to capture what it is we are really trying to measure. We use these variables because, well, they are available, and often greater precision is not. But the stakes can be high if using these variables leads us to misidentify schools for closure, teachers for dismissal, or supposed policy solutions deserving greater investment/expansion.

So… what is a dummy variable? A dummy variable is a simple binary indicator – as when we classify students as Poor or Non-poor using a single income cut-off, assigning the non-poor a value of “0” and the poor a value of “1.” Clearly, we’re losing much information when we take the entire range of income variation and lump it into two categories. And this can be consequential, as I’ve discussed on numerous previous occasions. For example, we might be estimating a teacher effectiveness model and comparing teachers who each have a class loaded with 1s and a few 0s. But there’s likely a whole lot of variation across those classes full of 1s – variation between classrooms with large numbers of very low income, single parent & homeless families versus the classroom where those 1s are marginally below the income threshold.

For those who’ve not really pondered this, consider that for 2011 NAEP 8th grade math performance in New Jersey, the gap between non-low income kids and reduced price lunch kids (185% income threshold for poverty) is about the same as the gap between free (130% income threshold) & reduced!

Slide4
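Here’s a minimal simulation of the information the 0/1 coding throws away. The income distribution and the score-income relationship are fabricated, chosen only to mimic the kind of within-category gap the NAEP figures above suggest:

```python
import numpy as np

# Invented: scores decline smoothly with economic disadvantage, but
# the dummy records only "below 185% of poverty" vs. not.
rng = np.random.default_rng(5)
n = 100_000

income_ratio = rng.lognormal(mean=0.4, sigma=0.6, size=n)  # income/poverty line
score = 250 + 20 * np.log(income_ratio) + rng.normal(0, 25, n)

poor = income_ratio < 1.85     # the single free/reduced-style cutoff
deep = income_ratio < 1.00     # below the poverty line itself
near = (income_ratio >= 1.30) & (income_ratio < 1.85)

print("mean score, everyone coded 1:", round(score[poor].mean(), 1))
print("  ...below 100% of poverty:  ", round(score[deep].mean(), 1))
print("  ...between 130% and 185%:  ", round(score[near].mean(), 1))
```

In this fabricated data, the kids just under the 185% cutoff outscore the deeply poor kids by roughly 16 scale points – yet every one of them carries the identical dummy code.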

The NJ CREDO charter school comparison study is just one example. CREDO’s method involves identifying matched students who attend charter schools and district schools, based on a set of dummy variables. In their NJ study, the indicators included an indicator for special education status and an indicator for children qualifying for free or reduced priced lunch (as far as one can tell from the rather sketchy explanation provided). If their dummy variables match, the students are considered matched – empirically THE SAME. Or, as stated in the CREDO study:

…all candidates are identical to the individual charter school student on all observable characteristics, including prior academic achievement.

Technically correct – identical on the measures used – but identical? Not likely!

The study also matched on prior test score, which does help substantially in providing additional differentiation within these ill-defined categories. But it is important to understand that annual learning gains – as well as initial scores/starting points – are affected by a child’s family income status. Lower income, even among the low income population, is associated with increased mobility (induced by housing instability). Quality of life during all those hours kids spend outside of school (including nutrition/health/sleep, etc.) affects children’s ability to fully engage in their homework and also likely affects summer learning/learning loss (access to summer opportunities varies by income/parental involvement, etc.). So – NO – it’s not enough to control only for prior scores. Continued deprivation influences continued performance and performance growth. As such, this statement in the CREDO report is quite a stretch (but is typical, boilerplate language for such a study):

The use of prior academic achievement as a match factor encompasses all the unobservable characteristics of the student, such as true socioeconomic status, family background, motivation, and prior schooling.

Prior scores DO NOT capture persistent differences in unobservables that affect the ongoing conditions under which children live, which clearly affect their learning growth!
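A sketch of exactly that point, with invented numbers: persistent deprivation drags down both the starting score and the subsequent growth, so matching on the starting score alone leaves the growth difference untouched.

```python
import numpy as np

# Invented setup: persistent (unobserved) deprivation lowers both the
# prior score and the ongoing rate of growth.
rng = np.random.default_rng(6)
n = 100_000

deprivation = rng.uniform(0, 1, n)
prior = 250 - 15 * deprivation + rng.normal(0, 20, n)
growth = 10 - 6 * deprivation + rng.normal(0, 8, n)

# Take students a matching routine would call "identical" on prior score...
band = (prior > 248) & (prior < 252)
lo = band & (deprivation < 0.3)
hi = band & (deprivation > 0.7)
# ...and note that their subsequent growth still differs sharply.
print("growth | matched prior, low deprivation: ", round(growth[lo].mean(), 1))
print("growth | matched prior, high deprivation:", round(growth[hi].mean(), 1))
```

In a charter-versus-district comparison, any sorting on that unmatched, persistent deprivation gets booked – wrongly – as a school effect.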

Now, one problem with the CREDO study is that we really don’t know which schools are involved in the study, so I’m unable here to compare the demographics of the schools actually included among charters with district schools. But, for illustrative purposes, here are a few figures that raise significant questions about the usefulness of matching charter students and district students on the basis of “special education” as a single indicator, and “free AND reduced” lunch qualification as a single indicator.

First, here are the characteristics of special education populations in Newark district and charter schools.

Slide1

As I noted in my previous post, nearly all special education students in Newark charter schools have mild specific learning disabilities, and the bulk of the rest have speech impairment. Yet students in district schools who may have received the same dummy variable coding are far more likely to have multiple disabilities, mental retardation, emotional disturbance, etc. It seems rather insufficient to code these groups with a single dummy variable… even if the classifications of the test-taker population were more similar than those of the total enrolled population (assuming many of the most severely disabled children were not in that test-taker sample?).

Now, here are the variations by income status – first for district and charter schools in the aggregate:

Slide2

Here, charters in Newark, as I’ve noted previously, generally have fewer low income students, but they have far fewer students below the 130% income threshold than they do between the 130% and 185% thresholds. It would be particularly interesting to be able to parse the blue regions even further, as I suspect that charters serve an even smaller share of those below the 100% threshold. Using a single dummy variable, any child in either the red or blue region was assigned a 1 and assumed to be the same (excuse me… “IDENTICAL?”). But, as it turns out, there is about twice the likelihood that a child with a 1 in a charter school was in a family between the 130% and 185% income thresholds. And that may matter quite a bit, as would additional differences within the blue region.

Here’s the distribution of free vs. reduced price lunch across NJ charter schools – among their free/reduced populations.

Slide3

While less than 10% of the free/reduced population in NPS is in the upper income bracket, a handful of Newark charter schools – including high flyers like Greater Newark, Robert Treat and North Star – have 20% to 30% of their (relatively small) low income populations in the upper bracket of low income. That is, for the “matched child” who attended Treat, North Star or Greater Newark, there was a 2 to 3 times greater chance than for their “peer” in NPS that they were from the higher (low) income group.

Again… CREDO likely worked with the data they had. However, I do find inexcusable the repeated sloppy use of the term “poverty” to refer to children qualified for free or reduced price lunch, and the failure of the CREDO report to a) address any caveats regarding the use of these measures or b) provide any useful comparison of the differences in overall demographic context between charter schools and district schools.

Ed Schools – The Sequel: Rise of the Intellectually Dead

Warning: The following post contains the elitist musings of an ivory tower professor who has only professed at major research universities, who attended a selective liberal arts college & received his doctorate from an Ivy League institution (well… a branch of one… Teachers College at Columbia).

A while back, I wrote a post on “ed schools,” the point of which was to show the shift in production of degrees that had occurred between the early 1990s and late 2000s. When I wrote that first post, ed schools were coming under fire from DC think tanks like the National Council on Teacher Quality (NCTQ), which seemed largely unable to understand the most basic issues of degree production in education (I’m unsure they’ve learned much since then!). And now, it would appear that our esteemed U.S. Secretary of Education has decided that ed schools and teacher preparation will be of primary interest in the second term of this administration.

The problem, as I previously indicated, is that most of this rhetoric about ed schools – their supposed failure of society and production of generations of ill-equipped American youth – assumes a static definition of “ed school,” rooted in a 1950s-to-1970s characterization of the regional public teachers college, and built on an assumption that teachers obtain their training and a teaching credential – for the one thing they teach – through a single institution as the core of their undergraduate education. Being “teachers colleges,” these schools are obviously lax on admission standards, have curricula that are neither academically rigorous nor practical, etc., etc. (the conflicting rhetoric in this regard is fun to follow – too much theory… no practical application… but not academically rigorous, etc.), and, well… simply must be replaced by a vast set of alternative routes/pathways/programs!

In short, the vast majority of the critique of teacher education assumes this monolithic AND STATIC entity of teacher preparation housed in state colleges and universities. Emporia State in Kansas – that’s you! Montclair in NJ – that’s you! West Georgia – you too! And those state flagships with teacher prep programs? Damn you Rutgers, Michigan, Illinois for producing increasing numbers of underqualified teachers! The wrath of NCTQ and now Arne Duncan will be upon you!

But degree & credential production in education has not been entirely static over time. In fact, anything but! There are clearly emerging trends. And if we believe that there really has been a decline in the academic quality of those receiving credentials in education, it would behoove us to take a close look at those trends. But since no one else seems to be doing that – especially not NCTQ – I figured I should take another shot at it.

A couple of key points are in order. First, it is important to understand that these days many initial teaching credentials are already granted through alternate routes outside of undergraduate programs, and to individuals with degrees in fields other than education. In addition to non-degree alternate routes, which I cannot capture with the data in this post, many initial teaching credentials are granted through graduate programs at the masters degree level, and an even larger share of additional second/third credentials earned by practicing teachers are obtained through graduate programs. Individual teachers may have collected a handful of different credentials, all from different institutions.

So, let’s take a look at undergraduate and masters degree production trends.

Undergraduate Training

Undergraduate degree production in “education” fields generally (most of which involves teacher preparation) has been relatively stable over time. Using 1994 Carnegie Classifications (the most stratified system of Carnegie classifications of the past few decades; see the end of this post for definitions), we see that what were the public “teachers colleges” (Comprehensive 1… as opposed to those labeled as “Teachers Colleges”) still hold the lion’s share of degrees produced, though their share has declined over time. Research Universities, which produced around 14% in 1990, now produce closer to 10% (those are your state flagships & major private universities). So… the major traditional public college and university role is declining slightly in market share.

That loss is being picked up by what is actually a very small subset of colleges – ones that also tend to be relatively small and not so prestigious. These are the Liberal Arts 2 (“LA2”) colleges. It’s quite striking that growth in this subset is sufficient to shift the market shares of major state universities and comprehensive regional colleges. Incidentally, LA2s were among the first to rapidly expand their production of online and distance MBAs… around the same time they started tapping the ed market. (This period overlaps with a trend among financially strapped, less selective colleges changing their names to “university.”)

Slide19
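For those curious about the arithmetic behind these market-share figures, it’s nothing fancier than the following pandas sketch. The completions counts are made up – stand-ins for IPEDS-style data, not the real numbers:

```python
import pandas as pd

# Made-up miniature of IPEDS-style completions data: institution type,
# year, and education degrees awarded. Not the real numbers.
df = pd.DataFrame({
    "year":    [1990, 1990, 1990, 2011, 2011, 2011],
    "type":    ["Research", "Comp1", "LA2"] * 2,
    "degrees": [14_000, 60_000, 1_000, 10_500, 55_000, 11_000],
})

# Each institution type's share of all education degrees, by year.
shares = (df.pivot_table(index="type", columns="year",
                         values="degrees", aggfunc="sum")
            .pipe(lambda t: t / t.sum()))
print(shares.round(3))
```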

Patterns are also relatively stable by the Barron’s competitiveness ratings. Notably, colleges right in the middle of the competitiveness ratings have the largest market share. I know this conflicts with reformy ideas that all ed degrees are produced by the worst colleges – but at the undergrad level, it’s a pretty normal distribution. Competitive colleges have a consistent 50% market share. Indeed, they are not the top third. They are also not the bottom! They are… the middle… as one would expect for a profession with modest (at best) earnings expectations.

The next two categories out from there – one up (Very Competitive) and one down (Less Competitive) – have just under 20% each. But the Less Competitive group seems to be showing an uptick (they are also heavy on those LA2s!). Highly Competitive and Non-Competitive are also relatively comparable, with Non-Competitive slightly outpacing Highly Competitive.


Slide20

Masters Degrees

It’s in the production of masters degrees where the real fun stuff is happening. First, let’s take a look at what’s been happening across institutions by type. Note that Comprehensive colleges were, in large part, designed to deliver bachelors and masters degree programs, and many had, from early on, large education programs and teacher preparation programs in particular. But we see in the figure below that the market share of masters degree production for Comp1s has declined over time. So too has the market share of Research Universities (including state flagship universities).

Amazingly, it’s those LA2s again that have risen dramatically in degree production. These lower tier liberal arts colleges (we’re not talkin’ Williams, Haverford, etc., which are LA1s. Those schools aren’t crankin’ up masters in Ed… and they’re also not changing their names to Williams University, etc.) have become the second largest producers of masters degrees in education. Bear in mind that liberal arts colleges, as classified in the 1990s, were never really intended to be handing out graduate degrees – much less massive numbers of them. LA2s have gone from only about 1% of ed masters production in 1990 to over 10% by 2011.

Slide23

The next figure reclassifies these schools by the competitiveness of their undergraduate programs (since we lack competitiveness measures for graduate programs). What we see here is that masters programs housed in “LESS COMPETITIVE” undergraduate colleges are the ones that are creeping up in market share. To a significant extent, these are online, credential granting programs run through LA2s.

Slide24

So what we have here is a rather dramatic expansion of graduate credentials in education being handed out by what some (including myself) might characterize as relatively low quality, non-selective undergraduate institutions that were never meant to be handing out graduate degrees to begin with. But perhaps that’s just my ivory tower, Research I perspective.

Now let’s take a look at the top 20 masters degree producers in the early 1990s and in the most recent three years. In the early 1990s, the largest producers were crankin’ out a few thousand over a three year period. These included some early entrants – pre-online era – to the degree mass-production game, like Lesley College and National Louis U. But there were also many programs housed in brick and mortar public universities in the mix, including both state flagships (UT Austin, Ohio State) and other pretty solid academic schools (Harvard, Columbia/TC). Arguably, these [the public colleges in particular] are the schools now taking the brunt of the blame for the state of teacher preparation – Northern Arizona, Northern Colorado, Eastern Michigan, etc.

Slide26

But who has actually been crankin’ out the masters degrees and credentials in recent years? And, if there is a decline and pending crisis in education training/preparation, who might instead be to blame? Below is the more recent production of graduate degrees/credentials. First and foremost, we’ve now got schools crankin’ out over 3,000 per year – or 9k per 3 years. Phoenix, Walden and Grand Canyon together produce more masters degrees than many of the next several producers combined. There is a substantial gap in production before one reaches the first traditional teacher preparation program on the list.

Is it possible that the emphasis on traditional “ed schools” within state boundaries as the obvious source of our problems is misplaced?

Slide25

Graduate Degree Production in Educational Leadership/Administration

I’ve got one last bit to address here, and that’s training in educational leadership/administration, a topic I’ve written about in my academic publications (see below). Degree production in educational leadership has followed many of the same trends we see in education more generally. And there has been a comparable push to provide more “alternatives” for gaining access to principal, supervisor and district leadership credentials. NOTE: if you think some of what I’m displaying here makes education grad degree production look like a cesspool, I assure you that when it comes to the production of MBAs, the picture is equally if not more ugly! (One can buy an MBA almost anywhere… perhaps even more easily than a degree in ed admin… and in many cases which I have observed directly, the level of academic rigor, even within major universities, is hardly different!)

The figure below shows that major research universities have played a declining role in the production of graduate degrees (all levels) in educational administration. Again, it’s those entrepreneurial LA2s that are crankin’ up the production – moving into 2nd place among institution types.

Slide7

Now let’s take a look specifically at doctoral degrees. One can almost understand the mass production of masters degrees, which in education are often tied to obtaining specific certifications, perhaps in additional fields of specialization (special education, etc.). Yes, in many states, administration degrees are structured such that the masters is coupled with building level certification and the doctorate with district level certification. Even then, how many doctorates does any one institution need to be cranking out? And who should be granting that level of degree?

By 1990s Carnegie classifications, doctorates should be (and have been) granted largely by Research and Doctoral universities. Comprehensive colleges were generally masters producing schools, not doctoral granting institutions. These strata were, in fact, intended to reflect the capacity of institutions to grant certain types/levels of degrees.

Already by the early 1990s, Nova Southeastern had pioneered mass production of the education doctorate. But outside of the Nova model, most major producers of doctorates were actual universities (okay… a bit harsh… since Nova actually is a university, and has a pretty well defined, conventional curriculum for its graduate programs).

Slide12

In the most recent years, Nova Southeastern has remained strong… but now right up there are such stellar academic powerhouses as Walden, Capella and Phoenix (and Argosy)… many of which probably show up occasionally as side-bar advertisements on my blog (as they do when I log into Facebook).

A notable change in the past few years is the entrance of USC and Penn to this mix, with their new practitioner preparation programs, which apparently crank out a sizable number of doctorates per year. This raises the interesting question of whether leading universities should try to get into the mass production game. Is the system overall better for it, even if those institutions have to sacrifice some quality in order to mass produce? We’ll have to see if they can keep up with the Waldens and Capellas over the next several years.

Slide14

Closing Thoughts

To me, these trends are pretty astounding, and serious consideration of them must play into any discussion that alarmists might have about the supposed decline in the quality of teacher and administrator preparation (to the extent these alarmists give serious consideration to anything). Those ringing the alarm bells seem more than happy to suggest that the obvious problem lies with traditional “ed schools” (read: regional and state flagship public colleges and universities) and that the obvious solution is to provide more alternative routes and online options – teacher preparation by MOOC (and likely not a MOOC delivered by Stanford U. faculty… but rather through Walden, Capella and the like) – & expansion of schools relying on imported, short term labor supply.

I also find it strange, to say the least, that those who argue that the problem is that our teachers don’t come from the upper third of college graduates seem to believe that the solution is to expand the types of programs that tend to grow most rapidly among colleges that cater to the bottom third (less & non-competitive). To those reformy alarmists who feel they’ve identified the obvious problems and logical solutions, the above data should make sufficiently clear that we’ve already gone down that road.

Further, I’m thoroughly unconvinced that new models purporting to be more selective in the teachers they prepare, but relying largely on a self-credentialing model (we use our teachers to credential our teachers… and only accept as graduate students those who work in our schools?) focused primarily on ideological & cultural indoctrination, are a step in the right direction. I have little doubt they’ll find a captive audience to self-credential and maintain a viable “business model” (by requiring their own teachers to take courses delivered by their peers & bosses to achieve the credentials needed to keep their jobs), but this endogenous, back-patting, self-validating model is no way to train the future teacher workforce.*

All of this raises the question: what next? Where do we go from here? How do we achieve integrity and quality in the production of degrees and credentials, and more broadly in the training and preparation of future teachers and administrators? I really don’t have any answers to these questions right now. But I’m pretty sure that the last two decades have taken us in the wrong direction!

Related Research

Baker, B.D., Orr, M.T., Young, M.D. (2007) Academic Drift, Institutional Production and Professional Distribution of Graduate Degrees in Educational Administration. Educational Administration Quarterly 43 (3) 279-318

Baker, B.D., Fuller, E. The Declining Academic Quality of School Principals and Why it May Matter. Baker.Fuller.PrincipalQuality.Mo.Wi_Jan7

Baker, B.D., Wolf-Wendel, L.E., Twombly, S.B. (2007) Exploring the Faculty Pipeline in Educational Administration: Evidence from the Survey of Earned Doctorates 1990 to 2000. Educational Administration Quarterly 43 (2) 189-220

Wolf-Wendel, L., Baker, B.D., Twombly, S., Tollefson, N., & Mahlios, M. (2006) Who’s Teaching the Teachers? Evidence from the National Survey of Postsecondary Faculty and Survey of Earned Doctorates. American Journal of Education 112 (2) 273-300

1994 Carnegie Classifications

  • Research Universities I: These institutions offer a full range of baccalaureate programs, are committed to graduate education through the doctorate, and give high priority to research. They award 50 or more doctoral degrees each year. In addition, they receive annually $40 million or more in federal support.
  • Research Universities II: These institutions offer a full range of baccalaureate programs, are committed to graduate education through the doctorate, and give high priority to research. They award 50 or more doctoral degrees each year. In addition, they receive annually between $15.5 million and $40 million in federal support.
  • Doctoral Universities I: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the doctorate. They award at least 40 doctoral degrees annually in five or more disciplines.
  • Doctoral Universities II: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the doctorate. They award annually at least ten doctoral degrees (in three or more disciplines) or 20 or more doctoral degrees in one or more disciplines.
  • Master’s (Comprehensive) Universities and Colleges I: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the master’s degree. They award 40 or more master’s degrees annually in three or more disciplines. [Includes typical regional, within-state public normal schools/teachers colleges]
  • Master’s (Comprehensive) Universities and Colleges II: These institutions offer a full range of baccalaureate programs and are committed to graduate education through the master’s degree. They award 20 or more master’s degrees annually in one or more disciplines.
  • Baccalaureate (Liberal Arts) Colleges I: These institutions are primarily undergraduate colleges with major emphasis on baccalaureate degree programs. They award 40 percent or more of their baccalaureate degrees in liberal arts fields and are restrictive in admissions.
  • Baccalaureate Colleges II: These institutions are primarily undergraduate colleges with major emphasis on baccalaureate degree programs. They award less than 40 percent of their baccalaureate degrees in liberal arts fields or are less restrictive in admissions. [Includes many cash-strapped, relatively non-selective, smaller private liberal arts colleges]

*I still like to believe that the most important background attribute of a “good teacher” or school leader is enthusiasm for one’s own learning – constantly seeking intellectual growth and challenge – and that this attribute is often revealed in the types of advanced studies an individual chooses to pursue. To me, even if the Relay model does tap into a set of graduates of more selective colleges, if the Relay program itself is little more than a workshop on “no excuses” classroom disciplinary practices and typical inspiring edu-guru staff development fodder, then the Relay model is antithetical to developing truly good teachers. A workshop or two and perhaps some practical guidance from peers or teacher leaders – okay. But a graduate degree based on this stuff? Are you kidding? (Just watch the Relay GSE videos here: http://www.relayschool.org/videos?vidid=5)