Some statistical context for Central Falls

Pundits have been tweeting and blogging the Central Falls, Rhode Island, high school story this week, with many cheering the bold “turnaround” strategy of firing all of the school’s teachers. Essentially, the district Superintendent has dismissed every teacher in the school, with the option for them to re-apply. The reason for the dismissal is that the school has performed very poorly in recent years on state assessments, and when teachers were asked to work extra time, their union resisted – or so the reports go.

Pundits seem to think this is a great idea: http://www.usatoday.com/news/education/2010-02-24-all-educators-fired_N.htm

But Joe Williams of Democrats for Education Reform, a political action group, says: “This is what real political cover can do for public education. You see very clear signals coming from Washington that the Obama administration is serious about turning around our worst schools.”

Central Falls has long been one of the worst-performing in Rhode Island. Just 7% of 11th-graders tested last fall were proficient in math and 55% were proficient in reading. In 2008, 52% of students graduated within four years.

And: http://www.southcoasttoday.com/apps/pbcs.dll/article?AID=/20100224/NEWS/100229948

U.S. Secretary of Education Arne Duncan applauded the plan, saying students only have one chance for an education.

“When schools continue to struggle we have a collective obligation to take action,” he said in a written statement.

The U.S. Department of Education does not play a role in deciding which model schools choose and did not know Wednesday whether Central Falls was the first to opt to get rid of its teachers, said Sandra Abrevaya, a department spokeswoman.

The decision won praise from Republican Gov. Don Carcieri, a former math teacher who supports Gist.

“We can no longer stand by as our schools underperform,” Carcieri said in a written statement. “While we have some excellent individual teachers, our students continue to be held back by a lack of a quality education and by union leadership that puts their self-interests above the interests of the students.”

So, according to the above cast of characters, the way to fix low-performing schools is simply to shake them up: get rid of the status quo crop of teachers, and others will be anxiously waiting in line to fill their shoes and fix these schools with the same or fewer resources and the same kids who’ve been there for years. I often point out that getting rid of your current teachers can really only lead to improvement if you are able to replace them with better ones. Are there 74 better teachers waiting in line outside Central Falls HS?

That issue aside, let’s take a look at where Central Falls fits in among Rhode Island High Schools in terms of a) student characteristics, b) spending per pupil and c) outcomes (based on 2006 data – which hasn’t shifted much over time). First, let’s look at the relationship between school level free/reduced lunch rates and combined (summed) proficiency rates across RI HS:

Yes, Central Falls is in tough shape – very high poverty and relatively low performing. But it is not really off the trendline (above it, if anything) for performance given its poverty level, and it does better than other high schools of similar poverty.

Now let’s look at Central Falls’ performance with respect to school site spending from Rhode Island’s IN$ITE database:

Central Falls spending is somewhat above average. But again, its student needs are far greater than average – in fact, they are on the outer edge of the entire distribution. So, it is unlikely that “somewhat above average” per pupil spending is going to fully compensate for their high needs.

Here is an oversimplified, but still ugly enough, statistical model of the relative costs of Rhode Island High Schools in 2006, given student populations, economies of scale and current outcomes (I had run this previously with 2003 to 2005 data, with similar results):

Admittedly this is no way to do “real” rigorous research. This is a single year of data on 30 high schools and too many covariates for the data. But, these data do reconcile with the model of the previous 3 years. I should also note that such analyses – those that try to pin down the relative efficiency of school district performance and/or cost – are generally unstable/unreliable. All that fun stuff aside, we have this crude model that tells us that at constant outcomes, per pupil costs are higher in higher poverty high schools and in high schools with more special education children. And the model can produce for us an indicator of the extent to which each high school spends more or less than expected for the outcomes it receives – that is, the relative efficiency of the school. For the other stat geeks who might be reading, in this particular case, the estimates from the Stochastic Frontier Model and from the OLS regression model were identical.

Here’s the “relative efficiency” of Rhode Island High Schools with respect to cost, from lower to higher poverty high schools.

As it turns out, the relative efficiency of Central Falls HS stacks up pretty well with other Rhode Island High Schools. That is, the actual spending per pupil in Central Falls is not far off from the predicted amount to achieve their current outcomes, with their current population.

Here’s the above data, rescaled (from the OLS version of the model), and with the relative efficiency measures sorted from lowest to highest per pupil spending. The “0” line is where a school spends exactly what it is predicted to spend to achieve its current outcome levels, given its current students. Central Falls actually spends less than it is predicted to spend given its students and current outcome levels. Many other high schools spend far more than expected, given their students and current outcomes.

This seems like a fairer comparison than simply casting stones at Central Falls teachers for their miserable test scores.

This is not to excuse low performance or to simply set a lower bar for this school because it serves a very high need population. But it is to point out that given their resources and their kids, they are doing as well as can be expected and better than many other Rhode Island high schools.

This analysis is far from definitive, but is illustrative. If it turns out, through more rigorous multi-year analyses, that Central Falls is efficiently producing its current (miserably low) level of outcomes at its current (relatively inadequate) spending level (at least by comparison with all other Rhode Island High Schools), then one answer here might actually be that Central Falls needs more resources to achieve better outcomes. Why is there no talk of this possibility?

The figures above suggest that Central Falls is doing as well with what it has as any other Rhode Island High School, after accounting for student needs. Where is the outcry over the amount Westerly High is spending to achieve its current outcomes (far less efficient than even Central Falls, in this quick analysis)? Yes, Westerly gets better outcomes, but with a much less needy student population (under 20% free or reduced lunch compared to over 80%) and nearly $1,000 per pupil more in spending.

=====

Here’s the position of Central Falls based on my previous 2004-05 analysis. The overall cost inefficiencies are higher because the high schools were included in a model with middle and elementary schools (but with a dummy variable identifying them as high schools). This model included math outcomes only (language arts outcomes were non-significant). Again, Central Falls is not the standout inefficient school (higher up in the graph is less efficient).

Common Standards and the Capacity to Achieve Them

It would appear that the Common Standards movement has picked up some momentum this week, with the administration’s pitch that Title I aid should be tied to states adopting common college readiness standards. This is all good talk, but standards alone – on paper and/or in state policies or proclamations – don’t achieve themselves. It is inappropriate for state policymakers, federal policymakers, pundits or the general public to simply assume that local public school districts all have sufficient resources to achieve any reasonable common standards.

Perhaps if those standards are set obscenely low they will be broadly attainable at current state and local spending levels. Even then, there will be significant inequities in the ease with which those standards are attained.

Noticeably absent in the current policy conversations is any discussion of the relative capacity of state education systems and local school systems  to achieve any reasonable common standards. It would be far more logical for the federal government to tie Title I funding not to some vacuous statement of endorsement of toothless common standards, but rather to a guarantee that the state will ensure that all local public school districts (and charter schools) have sufficient financial resources to achieve common standards – whatever they are.  In this paper, I, along with Lori Taylor, explain how we approach the measurement of cost and its implications for common standards.

To see just how far our nation has to go in order to move toward common capacity to achieve common standards, let’s take a look at some national maps. Let’s start with a map of the projected relative state and local revenue per pupil levels across states, corrected for a variety of “cost” factors (regional wage variation, economies of scale, population density, poverty):

After correcting for a variety of factors, some states like Tennessee, Mississippi, Utah and Oklahoma simply spend far less than most others on schools – only slightly more than half as much as some states.

Here’s a different view, down to the district level, based on an alternative set of cost adjustments. This second map shows that not only are some states much lower spending overall, but within those states, after adjusting for various cost differences, there also exist significant differences in spending (in this case, the map uses current operating expenditures per pupil with Title I funding). Again, Tennessee and Mississippi have overall very low spending. So do many areas of eastern central Washington, much of California, and major urban centers in Texas. Estimates are not provided for non-unified districts (the large expanses of white background).

So, by this point, you’re probably saying – yeah… but money doesn’t really matter that much. It’s how you use it. Maybe Tennessee, for example, is just really, really efficient at producing great outcomes on little expenditure.

Let’s now take a look at state assessment outcomes by districts, nationally. In this map, I’ve taken the proficiency levels for each district, based on the 3-year data set compiled by the New America Foundation (thanks, NAF), and I’ve expressed them as standard deviations from the national mean proficiency rate. Blue areas are those with relatively high proficiency rates and brown areas have relatively low proficiency rates.

Wow, Tennessee does do great, despite its low spending! So does Oklahoma. These are model states, right? Low spending, yet really high performance on their own state tests! Check out Missouri. What’s going on there? Well, as it turns out, Tennessee is doing great on its own self-validation exercise – state tests – because it has really easy state tests – or, in other words, really low proficiency cut-points for its state tests. This is the game that states have been playing since the adoption of NCLB. We don’t have to spend much, or actually fix our education system, as long as we set low enough standards to make it look like we’re awesome. This is well documented in a series of NCES reports which map state cut-points to NAEP cut-points. By the way – Missouri has a very hard test (in contrast with Kansas, right next door, which has relatively easy tests).

Finally, here is a scatterplot of the relationship between an overall index of the relative equity and adequacy of state and local revenues per pupil, and state mean 4th and 8th grade math and reading NAEP assessments for 2007. The funding equity/adequacy ratings are based on national, district level data from 2005 to 2007, and account for a) relative effort by states to fund schools (% of gross state product), b) shares of children in the public school system (and the ratio of family income of those not in the system to those in the system), c) predicted state and local revenue level at average poverty, and d) extent to which funding is targeted based on poverty differences across districts.

If we really plan to get serious about Common Standards, then states like Louisiana, Tennessee, Alabama and Oklahoma are going to need to step things up a bit. Notably, low fiscal capacity states like Mississippi and Alabama will need significant federal assistance to pull this off. But, Tennessee and Louisiana are two states which spend less by choice – having among the lowest “effort” among states (% of Gross State Product allocated to schools).

Alternatively, we could just set standards as low as Tennessee standards, spend as little as Tennessee and pat ourselves on the back for a job well done. I don’t believe that this is the intent of the common standards movement, but I may be wrong. Nonetheless, if we continue to throw around the rhetoric of common standards without ever discussing the capacity to achieve them, then we should not expect much to ever come of this movement. Without sufficient capacity, there can be no substantive reform.

For more on whether school finance reforms actually can help, see: https://schoolfinance101.wordpress.com/2009/12/14/finance_reforms/

Stossel & Coulson Misinformation on Private vs. Public School “Costs”

Last summer, I had an interesting exchange with Andrew Coulson regarding the issue of private school costs. That discussion can be found here: https://schoolfinance101.wordpress.com/category/private-school-costs/

I had the displeasure this evening, while channel surfing, of catching a few minutes of John Stossel’s latest episode on the failures of the public education system and the low-cost wonders of private education markets. Here’s a link that summarizes some of the content of Stossel’s latest: http://www.washingtonexaminer.com/opinion/columns/John-Stossel--84692012.html

I happen to be a strong supporter and admirer of private schools, having worked as a middle school science teacher for years in one of NYC’s elite private independent day schools. In my ideal world, I would provide every child the opportunity to attend such a school. And that would be an expensive proposition!

I’ve realized over time, having studied teacher characteristics and finances of various types of private and public schools, that what you get on the private marketplace for education is simply a wider range than what you get in the more regulated public system. That just makes sense. Less regulation broadens the range of options – at both ends.

Private schools do not, by any stretch of the imagination, spend uniformly less than public schools per child. In fact, private independent day schools (a non-trivial segment of the private school marketplace) typically spend much more per pupil than public school districts operating in the same labor market. The school in NYC where I used to work continues to charge, in tuition alone, nearly double the average expenditure (operating expense per pupil) of NYC public schools. Further, it is especially important to understand that private school tuition covers only a portion of actual cost.

How can this happen on the open marketplace for education? Shouldn’t market competition drive these prices into line – bring them down well below public bureaucracy spending on education – and still yield better quality? Or perhaps the tuition these schools charge and the amount they spend to operate actually represent the competitive price of providing the excellent education many parents demand. Perhaps better quality (hard to compare), but at a higher, not lower, cost!

Quite simply, when it comes to public or private education, you get what you pay for. Many private schools spend far more than the public schools in the same labor market – many spend roughly the same and some do spend less. And the quality of that schooling varies accordingly. I document this thoroughly in this lengthy report: http://www.greatlakescenter.org/docs/Policy_Briefs/Baker_PvtFinance.pdf

Here are a few highlight figures from the report:

Finally, and perhaps most interesting:

The punchline here is, as I’ve said above – You get what you pay for.  And on the private marketplace for education, high quality comes at high cost – much higher than the average public school cost.

  • Private independent day schools which provide small class sizes with highly academically qualified teachers spend well above nearby public schools.
  • Catholic schools, where they report their finances (not the crude survey summary data of tuition and expense compiled annually by the National Catholic Education Association), spend marginally less than nearby public schools (but charge much lower tuition than cost), perform about the same if given the same kids, and have comparably qualified teachers in terms of academic preparation. Note that Catholic schools, in trying to operate on a shoestring, have been financially failing at an alarming rate. That is how markets work when you try too hard to price your product below the cost of maintaining quality (a more friendly spin being that the social service mission of urban Catholic schools has outpaced church philanthropy). I discuss this extensively in the report.
  • Conservative Christian schools (to the extent they can be lumped together) operate at much lower per-pupil spending than traditional public schools, have lower outcomes given the same students, and have disturbingly academically weak teaching staffs, based on national survey data.

Again, you get what you pay for. The open market place for private schooling is simply more diverse than the more regulated public marketplace, on price to consumers, average overall spending and ultimately on quality.

It is entirely inappropriate to argue that the public would be better served by wholesale shifting of current public school students into only those private schools which presently do spend less than the public schools.  It is particularly twisted to suggest that for the price of Conservative Christian or Catholic education, we can provide kids with the equivalent of private independent school education. While pundits like Stossel or Coulson may not make this claim in this particular form, they imply as much by lumping all of these schools together as “private” and proclaiming that “private schools simply do more with less.” The reality – Some do more with more. Some do less with less. And yes, there are exceptions that do more with less, and less with more – just as in any analysis which involves thousands of points.

Private Public Schools – OK Idea – Bad Calculations

I read with curiosity today the Fordham Institute’s new report on “Private Public” schools – elementary schools where fewer than 5% of children qualify for free or reduced lunch, and middle or secondary schools where fewer than 3% qualify. Not a bad idea on their part, but some of the numbers just didn’t match up with my reasonably sound knowledge of the NCES Common Core Public School Universe data.

For example, on Page 6, State Findings, the report indicates 70 private public schools in Illinois, and on page 7, the report indicates 402 such schools in New Jersey. These are two states where I have used the most recent available 3 years of NCES CCD data quite often. While the New Jersey numbers seem close to reasonable, the Illinois numbers are undoubtedly low – and quite honestly, wrong.

First, here’s my tally of total schools by grade level in Illinois and New Jersey:

           |                 schlevel08
state_name | 1-Primary   2-Middle     3-High    4-Other |     Total
-----------+--------------------------------------------+----------
  Illinois |     2,562        777        788        203 |     4,330
New Jersey |     1,547        453        425        158 |     2,583
-----------+--------------------------------------------+----------
     Total |     4,109      1,230      1,213        361 |     6,913


Next, here’s my tally for total schools by grade level with less than 5% free or reduced lunch in 2007-08:

. tab state  schlevel08 if  pct_freereduced08<.05

           |                 schlevel08
state_name | 1-Primary   2-Middle     3-High    4-Other |     Total
-----------+--------------------------------------------+----------
  Illinois |       281         89         65          4 |       439
New Jersey |       273         98         94          7 |       472
-----------+--------------------------------------------+----------
     Total |       554        187        159         11 |       911

And again in 2006-07:

. tab state  schlevel08 if  pct_freereduced07<.05

           |                 schlevel08
state_name | 1-Primary   2-Middle     3-High    4-Other |     Total
-----------+--------------------------------------------+----------
  Illinois |       302         90         60          3 |       455
New Jersey |       194         74         73          6 |       347
-----------+--------------------------------------------+----------
     Total |       496        164        133          9 |       802

In either year, and many other years, Illinois has far more than 70 elementary schools alone (280 to 300) under 5% free or reduced price lunch. The New Jersey numbers, while close, can’t be reconciled that easily either.

Fordham includes a footnote which suggests that schools reporting no children qualifying for free or reduced lunch were dropped on an assumption of non-reporting/non-participation. This is likely an incorrect assumption and one that should be checked across multiple years.

For example, here is a map of district level low income rates (background shading, based on state data source) and schools reporting “0” Free Lunch children (to NCES CCD) in North Shore areas near Chicago. Note that most of the “0” flags by schools are in very low poverty districts. These likely are “Private Public” schools by the Fordham definition, but were inappropriately excluded by their count method:

Here are some New Jersey “0” values for free or reduced lunch, against a backdrop of median family income. The “0” value schools here are in Franklin Lakes and Saddle River. I suspect that those “0” values for kids in poverty are real – not errors in the data. There were likely better ways to handle these values than to simply exclude all “0” values – for example, checking across multiple years, or identifying “0” value schools in districts that show higher versus lower U.S. Census poverty rates (which are not subject to district level reporting). “0” values in low poverty districts are more likely to be correct, whereas “0” values for schools in high poverty districts are more likely reporting error.

  • Note: I just ran a quick test of this latter approach using Census Small Area Income and Poverty Estimates. About 198 of 341 Illinois “0” value schools are in districts with less than 5% poverty. About 303 of those 341 are in districts with less than 10% poverty – using Census poverty tabulations. These schools likely should have been included as “Private Publics” rather than excluded outright. (A rough sketch of this check appears below.)

===== On a related note:

I should also point out that Fordham is among those organizations that have frequently pointed the finger at school districts as the primary cause of persistent inequities, rather than at state education systems. Fordham does not point out in their report that, almost invariably, these private public schools are clustered in private public school districts. The low poverty schools are in low poverty districts, often immediately adjacent to high poverty schools in high poverty districts. That is, the inequity the authors reveal in this report is largely a between-district, not within-district, inequity – one that cannot be resolved by reshuffling resources within districts, as many of their previous reports have argued (most notably Fund the Child).

Here’s a fun map of New Jersey Private Public Schools in the Newark metro area:

Data for NJ and IL available on request.

Today’s fun maps: NYC charter school free lunch rates

Just for fun, here are a few maps of New York City traditional public, special public and public charter schools. Charter schools are indicated with an asterisk. School level rates of children qualifying for free lunch are indicated by circle color.  Deep red circles have free lunch shares over 83.6%. Blue circles have very low free lunch shares. Free lunch shares and school locations (lat / lon) are from the National Center for Education Statistics Common Core – Public School Universe 2007-08. Note that as with my previous NJ charter slides, NYC charter schools tend to serve somewhat lower shares of children qualifying for free lunch than are served by many of the surrounding traditional public schools.

Another “You Cannot Be Serious!”

Saw this today:

http://www.washingtonpost.com/wp-dyn/content/article/2010/01/29/AR2010012903405.html

Huffman opines:

I’m picking on New Jersey not because it has the worst plan (it doesn’t) but because it so perfectly embodies the old way of applying for federal education funding — lots of promises and ideas; little chance of change on the ground.

By contrast, Louisiana submitted a clear, concise, actionable plan to reform a large swath of its public schools.

The beauty of Louisiana’s reform model lies in its simplicity. The state has taken critical baseline steps, it proposes expanding projects that have shown promising results, and it has ensured that participating school districts will actually do the things that are in the application.

Louisiana already built and uses a data system that ties students’ test scores to the teachers who taught them and to the universities and programs that trained the teachers. In its application, Louisiana proposes expanding the use of data and using test-score results to count for 50 percent of teacher evaluations and to help drive decisions of hiring, retaining, and promoting teachers and principals.

Thankfully, because I have little time this morning, I’ve already addressed this issue in at least two posts.

I discuss Louisiana specifically here:

https://schoolfinance101.wordpress.com/2009/12/18/disg-race-to-the-top/

And the issue of whether state data systems alone can save a state that has generally abandoned its public education system here:

https://schoolfinance101.wordpress.com/2009/12/15/why-do-states-with-best-data-systems/

One might make the simple argument that New Jersey’s old way of doing things – including sufficient financial support for schools and wider participation in the public education system – actually works, at least when compared to many other states, and certainly when compared to Louisiana. That said, Louisiana is in far greater need of stimulating improvement – but until Louisiana actually makes a substantial state commitment to its public K-12 and higher education systems, that’s not likely to happen.

You cannot be serious bonus clip on link above!

NCTQ Teacher Policy Ratings: Where’s the quality?

First, to the media – the National Council on Teacher Quality ratings are NOT ratings of actual differences in teacher quality across states. They are ratings of steps that supposedly can be taken in state policy in order to improve teacher quality. Here, the blame goes on the media spin, not on NCTQ.

NCTQ does make some reasonable attempts to explain the research basis for their policy elements.  However, NCTQ fails miserably at understanding the importance of context within which policies are applied. For example, under AREA 2, NCTQ cites the importance of increasing numbers of teachers from more competitive colleges, and cites expanding the teacher pool as a way to accomplish this, through policies such as alternative certification. My own work a few years back on charter school hiring in states with more and less relaxed teacher certification requirements provides some support for this notion. But, my research also shows that in some cases, expanding the pool weakens, on average, the academic credentials of teachers. Some states and some regions of the country simply don’t have more competitive colleges and universities.

Like many rating/grading systems that strongly favor deregulatory policies (and the power of state data systems), the NCTQ policy ratings favor those states that in fact have the weakest overall public education systems, including – of all things – the academically weakest teachers. NCTQ only handed out Cs and Ds for grades (and a few Fs). A quick tally based on my prior analyses of Schools and Staffing Survey data finds that 6 of the 8 states that got a C (the high grade) fall in the bottom half of states in the percentage of teachers who attended highly or most competitive colleges (a factor acknowledged by NCTQ as important, and as one that would supposedly improve as a function of expanding the teacher pool). Louisiana, Alabama and Arkansas are all in the bottom 10. Most of these states also fall in the bottom half of states – and 3 in the bottom 10 – for the change in the percent of teachers (2003-04 to 2007-08) who attended highly or most competitive colleges. None of the states that received the high grade were even in the top 20 in the change in percent of teachers from highly or most competitive colleges.

You know – it’s possible that teacher salaries might also be a factor here (there’s some pretty good research on this – see link), and a limiting condition might actually be the available funding for schools, which is sadly lacking in many of these states. So too might the supply of high quality public colleges and universities for preparing teachers. States like Louisiana have been taking the axe to their public higher education systems of late. Deregulatory strategies cannot trump these conditions and, in fact, may worsen teacher quality and ultimately school quality.

Increased regulatory strategies like improved data for teacher evaluation systems (also advocated by NCTQ, and quite reasonably so) are simply window dressing for states that are choosing to avoid the more difficult and more expensive problems facing their public education systems.

On numerous occasions on this blog, I’ve discussed the systemic failures of the public education systems in states like Louisiana – their failure to serve even 80% of school-aged children – or their failure to provide reasonable overall funding or target any funding to higher need districts (across most of these states).

So, if the Teacher Quality Policy ratings have little to do with actual teacher academic preparation in a state, or overall quality of the state’s education system, then what do they tell us? Apparently not much!

Education Week Does it Again: Please STOP!

Education Week has again posted the problematic QUALITY COUNTS indicator system including grades for school finance across the states. And again, Education Week has paid little attention to producing high quality indicators for measuring …quality? Why doesn’t that surprise me? But, they’ve made my life easier because I can simply refer you to my critique of last year’s Quality Counts School Finance Indicators:

https://schoolfinance101.wordpress.com/2009/01/08/education-week-quality-lacks/

Here are a few quick summary points on issues that occur year to year:

  • Ed Week uses “range” measures and “coefficient of variation” measures in its equity analysis – measures which capture overall variations and high to low variations in current expenditures across school districts. The way that Education Week calculates these measures actually penalizes states which target funds to higher need school districts, including higher poverty school districts or very small remote districts. That is, if a state actually makes efforts to accommodate cost differences across districts, they get a lower equity grade from Education Week. THAT’S JUST WRONG! Education Week uses some “cost adjustments” including a regional wage index and a nominal (and completely arbitrary) poverty adjustment. But, states like New Jersey actually provide more poverty-based support than the Ed Week adjustment, resulting in a reduction in the Ed Week equity measures. Ed Week makes no adjustment for costs associated with economies of scale or population density – major factors affecting spending variation across school districts within states.
  • Ed Week continues to use peculiar (though traditional) school finance measures like the McLoone Index to evaluate the share of children within the state who are in districts near the median spending level. This was originally conceived as a within-state relative adequacy measure. But, without appropriate consideration for needs or costs, a state can score well on the Ed Week McLoone index by simply having all of its low income children clustered together in one or a handful of districts that spend at the edge of the lower half of the distribution. (A rough sketch of how both the coefficient of variation and the McLoone index are computed appears after this list.)

Education Week staffers – Please Stop! Quality Counts is very unhelpful because of the extent to which it misinforms. There may be, and in fact are, some good and useful indicators in the report, but there are at least equal numbers of indicators that are entirely misleading. One cancels out the other.

These indicators can have a serious negative policy impact because of the way in which and extent to which they misinform. Drawing from a forthcoming technical report (referring to both Ed Week and Ed Trust indicators):

To illustrate the potential negative impact of these two reports: in 2003, in the context of state school finance litigation in Kansas, attorneys defending the State submitted in defense of the school funding formula both the Education Trust finding that higher poverty districts had higher revenue per pupil and the Education Week finding that Kansas showed a good McLoone index. The state’s attorneys and local news outlets did not understand why Kansas received good ratings on these indices, nor did they care, as long as those indices were from highly publicized, publicly recognized sources. Plaintiffs pointed out that the Education Trust finding was not a function of systematic poverty related support, but rather a function of small rural school support which left out the poorer urban and large town districts, and that the “good” McLoone index was a function of having nearly half of the state’s children and nearly all of the state’s poor minority children attending six districts with below average revenues. These points were difficult to make in the face of media accolades for the state’s supposed achievements regarding school funding equity and adequacy. The district court and eventually the Supreme Court of Kansas declared the state school finance system unconstitutional, but not without at least a few vocal critics chastising the judges who would give the legislature a failing grade for a school finance system that had received a grade of “B” from a leading national media outlet.

Update – Do School Finance Reforms Matter?

Here’s an excerpt from a forthcoming article on whether school finance reforms have made any difference for students. The article is partly in response to claims by Eric Hanushek and Alfred Lindseth that school finance reforms have resulted in massive increases in funding to public schools which have not helped and may have in fact harmed children. My forthcoming work on this topic is co-authored with Kevin Welner of U. of Colorado.

=====

In terms of quality and scope, the most useful single study of judicially induced state finance reform was published by Card and Payne in 2002. They found that court declarations of unconstitutionality in the 1980s increased the relative funding provided to low-income districts. And they found that these school finance reforms had, in turn, significant equity effects on academic outcomes:

Using micro samples of SAT scores from this same period, we then test whether changes in spending inequality affect the gap in achievement between different family background groups. We find evidence that equalization of spending leads to a narrowing of test score outcomes across family background groups. (p. 49)

To evaluate distributional changes in school finance, Card and Payne estimated the partial correlations between current expenditures per pupil and median family income, conditional on other factors influencing demand for public schooling across districts within states and over time. Card and Payne then measured the differences in the change in income-associated spending distribution between states where school funding systems had been overturned, upheld, or where no court decision had been rendered. Importantly, they also evaluated whether structural changes to funding formulas (that is, the actual reforms) were associated with changes to the income-spending relationship, conditional on the presence of court rulings.

To make the final link between income-spending relationships and outcome gaps, Card and Payne evaluated changes in gaps in SAT scores among individual SAT test-takers categorized by family background characteristics.[1] Put in terms of our Figure 1, Card and Payne (2002) appear to have taken the greatest care in a multi-year, cross-state study, to establish appropriate linkages between litigation, reforms by type, changes in the distribution of funding, and related changes in the distribution of outcomes.

Notwithstanding the generally acknowledged importance of this study,[2] Hanushek and Lindseth (2009) never mention it in their book, including the chapter in which they conclude that school finance reforms have no positive effects.

This omission – as well as the other omissions noted below – is telling of a larger point. The development of, and reliance upon, a research base should depend on relatively objective criteria. Readers depend on authors of literature reviews to come forward with the best and most applicable research bearing on the issues under consideration. While Hanushek and Lindseth might argue that this particular omission is because Card and Payne (2002) are speaking to equity (not adequacy) litigation, we have already described how the line between equity and adequacy is not so simple. Moreover, the research that Hanushek and Lindseth do choose to include goes far beyond that directly focused on adequacy – including the Cato study of a Kansas City desegregation order discussed below.

Another key study not mentioned by Hanushek and Lindseth (2009) concerned the effects of reforms implemented under the Kansas court’s pre-ruling in 1992 (Deke, 2003). The reforms leveled up funding in low-property-wealth school districts, and Deke found as follows:

Using panel models that, if biased, are likely biased downward, I have a conservative estimate of the impact of a 20% increase in spending on the probability of going on to postsecondary education. The regression results show that such a spending increase raises that probability by approximately 5% (p. 275).

The Kansas reforms addressed by Deke (2003) came as a result of a judicial pre-order, advising the legislature that if the pending suit made it to trial, the judge would declare the school finance system unconstitutional (Baker and Green, 2006).

Hanushek and Lindseth (2009) also omitted from their discussion two additional studies, both peer-reviewed, that explore the effects of Michigan’s school finance reforms, known as “Proposal A,” implemented in the mid-1990s. Michigan’s reforms were implemented without a court ruling or a high level of litigation threat, but the reforms were nonetheless comparable in many ways to reforms implemented following judicial rulings[3] (see Leuven et al., 2007; and Papke, 2001). In the first study, Papke (2001) finds:

Focusing on pass rates for fourth-grade and seventh grade math tests (the most complete and consistent data available for Michigan), I find that increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance. (Papke, 2001, p. 821.)

Leuven and colleagues (2007) find no positive effects of two specific increases in funding targeted to schools with elevated at-risk populations, a convenient conclusion for Hanushek and Lindseth to have included.

A third Michigan study (available online since 2003 as a working paper from Princeton University, and now accepted for publication in Education Finance and Policy, a peer-reviewed journal) directly estimates the relationship between implemented reforms and subsequent outcomes (Roy, 2003). Roy, whose work was not cited by Hanushek and Lindseth, finds:

Proposal A was quite successful in reducing inter-district spending disparities. There were also significant gains in achievement in the poorest districts, as measured by success in state tests. However, as yet these improvements do not show up in nationwide tests like NAEP and ACT. (Roy, 2003, p. 1.)

Most recently, a study by Choudhary (2009) “estimate[s] the causal effect of increased spending on 4th and 7th grade math scores for two test measures—a scale score and a percent satisfactory measure” (p. 1). She “find[s] positive effects of increased spending on 4th grade test scores. A 60% percent increase in spending increases the percent satisfactory score by one standard deviation” (p. 1).

Perhaps because there was no judicial order involved in Michigan, researchers were able to avoid the tendency to focus on or classify the judicial order. Moreover, single-state studies generally avoid such problems because there is little statistical purpose in classifying litigation. Importantly, each of these studies focuses instead on measures of the changing distribution and level of spending (characteristics of the reforms themselves) and resulting changes in the distribution and level of outcomes. Each takes a different approach, but attempts to appropriately align their measures of spending change and outcome change, adhering to principles laid out in our Figure 1.

Other high-quality but non-peer reviewed empirical estimates of the effects of specific school finance reforms linked to court orders have been published for Vermont and Massachusetts. For example, Downes (2004), in an evaluation of Vermont school finance reforms that were ordered in 1997 and implemented in 1998, found as follows:

All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post–Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance. (p. 312)

Hanushek and Lindseth (2009) never acknowledge this positive finding (although they do briefly cite the Downes evaluation, for a different point). Again, one might attribute this omission to the argument that the Vermont reforms were equity reforms, not adequacy reforms. However, similar to the 1992 Kansas reforms, the overall effect of the Vermont Act 60 reforms was to level up low-wealth districts and increase state school spending dramatically, thus addressing both adequacy and equity.

For Massachusetts, two independent sets of authors (in addition to Hanushek and Lindseth) have found positive reform effects. Most recently — after the Hanushek and Lindseth book was written — Downes, Zabel and Ansel (2009) found:

The achievement gap notwithstanding, this research provides new evidence that the state’s investment has had a clear and significant impact. Specifically, some of the research findings show how education reform has been successful in raising the achievement of students in the previously low-spending districts. Quite simply, this comprehensive analysis documents that without Ed Reform the achievement gap would be larger than it is today. (p. 5)

Previously, Guryan (2003) found:

Using state aid formulas as instruments, I find that increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students. The magnitudes imply a $1,000 increase in per-pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures. (p. 1)

Although Hanushek and Lindseth concede that Massachusetts reforms appear successful,[4] they failed to cite Guryan’s NBER working paper, the inclusion of which would have (like most other omitted studies) weakened their overall conclusions about the non-impact of these reforms.

Turning to New Jersey, two recent (though not yet peer-reviewed) studies find positive effects of that state’s finance reforms. Alexandra Resch (2008), in a study published as a dissertation for the economics department at the University of Michigan, found evidence suggesting that New Jersey Abbott districts “directed the added resources largely to instructional personnel” (p. 1) such as additional teachers and support staff. She also concluded that this increase in funding and spending improved the achievement of students in the affected school districts. Looking at the statewide 11th grade assessment (“the only test that spans the policy change”), she found “that the policy improves test scores for minority students in the affected districts by one-fifth to one-quarter of a standard deviation” (p. 1).

The second recent study was originally presented at a 2007 conference at Columbia University, and a revised, peer-reviewed version was recently published by the Campaign for Educational Equity at Teachers College, Columbia University (Goertz and Weiss, 2009). This paper offered descriptive evidence that reveals some positive test results of recent New Jersey school finance reforms:

State Assessments: In 1999 the gap between the Abbott districts and all other districts in the state was over 30 points. By 2007 the gap was down to 19 points, a reduction of 11 points or 0.39 standard deviation units. The gap between the Abbott districts and the high-wealth districts fell from 35 to 22 points. Meanwhile performance in the low-, middle-, and high-wealth districts essentially remained parallel during this eight-year period (Figure 3, p. 23).

NAEP: The NAEP results confirm the changes we saw using state assessment data. NAEP scores in fourth-grade reading and mathematics in central cities rose 21 and 22 points, respectively between the mid-1990s and 2007, a rate that was faster than the urban fringe in both subjects and the state as a whole in reading (p. 26).

The Goertz and Weiss paper (which was, as designed and intended by the paper’s authors, the statistically least rigorous analysis of the ones presented here) does receive mention from Hanushek and Lindseth multiple times, but only in an effort to discredit and minimize its findings.

Card, D. and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

Choudhary, L. (2009). Education Inputs, Student Performance and School Finance Reform in Michigan. Economics of Education Review, 28(1), 90-98.

Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284.

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (ed), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA: MassINC.

Goertz, M., and Weiss, M. (2009). Assessing Success in School Finance Litigation: The Case of New Jersey. New York City: The Campaign for Educational Equity, Teachers College, Columbia University.

Guryan, J. (2003). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

Leuven, E., Lindahl, M., Oosterbeek, H., and Webbink, D. (2007). The Effect of Extra Funding for Disadvantaged Pupils on Achievement. The Review of Economics and Statistics, 89(4), 721-736.

Resch, A. M. (2008). Three Essays on Resources in Education (dissertation). Ann Arbor: University of Michigan, Department of Economics. Retrieved October 28, 2009, from http://deepblue.lib.umich.edu/bitstream/2027.42/61592/1/aresch_1.pdf

Roy, J. (2003). Impact of School Finance Reform on Resource Equalization and Academic Performance: Evidence from Michigan. Princeton University, Education Research Section Working Paper No. 8. Retrieved October 23, 2009 from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=630121 (Forthcoming in Education Finance and Policy.)


[1] Card and Payne provide substantial detail on their methodological attempts to negate the usual role of selection bias in SAT test-taking patterns. They also explain that their preference was to measure more directly the effects of income-related changes in current spending per pupil on income-related changes in SAT performance, but that the income measures in their SAT database were unreliable and could not be corroborated by other sources. As such, Card and Payne used combinations of parent education levels to proxy for income and socio-economic differences between SAT test takers.

[2] As one indication of its prominence among researchers, as of the writing of this article, Google Scholar identified 153 citations to this article.

[3] There is little reason to assume that the presence of judicial order would necessarily make otherwise similar reforms less (or more) effective, though constraints surrounding judicial remedies may.

[4] Hanushek and Lindseth attribute the success of the Massachusetts reforms not to spending, but to the fact that the “remedial steps passed by the legislature also included a vigorous regime of academic standards, a high-stakes graduation test, and strict accountability measures of a kind that have run into resistance in other states, particularly from teachers unions” (p. 169). That is, it was not the funding that mattered in Massachusetts, but rather it was the accountability reforms that accompanied the funding.

Smarter School Leaders: Enough to reverse the trend?

http://www.nytimes.com/2009/12/05/opinion/05herbert.html?_r=1

This recent New York Times article highlights a new doctoral program for educational leaders that is a joint venture of Harvard Graduate School of Education, Kennedy School of Government and Harvard Business School. An interesting approach indeed and one that will hopefully generate some top quality leaders for public schools and school districts. But, there are about 100,000 public schools out there, spread across 16,000 or so school districts and public charter schools. In the best of cases, each of these schools and districts would get the best and brightest possible leader. My guess, however, is that this new Harvard program will barely make a dent in our national needs.

Perhaps the new Harvard program can serve as a model for making a bigger and better dent. Now, when I say that, I should clarify that I’m not taking the pop-policy position that this program is a model simply because it involves a business school and public policy school and the education school, but rather because it involves a GOOD business school, HIGH QUALITY public policy school and TOP NOTCH education school. There are as many, if not more, intellectually vacuous b-school programs as comparably vacuous ed-school programs. You see, it’s not about b-school versus ed-school. It’s about high quality schools with highly self-selective pools of degree-seekers and top notch faculty deciding to play a more significant role in public school leadership. However, it’s going to be an uphill battle!

A few years back, Michelle Young, Terry Orr and I explored changing patterns of degree production in educational administration. With other colleagues, I explored the characteristics of faculty in educational administration programs, their pipeline and their qualifications. More recently, I’ve been exploring the effects of the changing principal preparation pipeline on schools in states like Missouri. AND IT’S NOT A PRETTY PICTURE!

Michelle, Terry and I found in our degree production study that:

“The largest number and greatest increase were among master’s degrees. In 2003, there were 15,720 master’s degrees conferred in educational leadership, a 90 percent increase since 1993.”

And:

“Even more striking are the increases in master’s degree granting programs at Comprehensive II and Liberal Arts II institutions. Such program increases reflect a dramatic growth in the availability of programs in local and regional institutions.”

And further, that:

“The percentage of all master’s degrees produced by higher status institutions, the Research I through Doctoral II institutions dropped from 42 percent in 1993 to 36 percent in 2003.”

That is, master’s degree production in particular has mushroomed over the past decade and a half, and many of the new master’s degrees are from institutions that previously had minimal involvement in educational administration and are generally considered lower status institutions.

The figure below shows the top Educational Administration master’s-granting institutions in 1990 and then again for the period from 2002 to 2005, based on data from my study with Michelle Young and Terry Orr. The data are from the National Center for Education Statistics, Integrated Postsecondary Education Data System – Degree Completion files. In 1990, Harvard made the list. But by the later period (and perhaps even worse by now), the list had changed – a lot. The list now includes mass-producers of graduate degrees like Nova Southeastern University and William Woods (Missouri) pumping out about 500 master’s degrees per year in educational administration (and related degree codes). Other standout newcomers include Lindenwood University (also Missouri), National Louis University (Illinois) and St. Peters College (New Jersey).

From 2002 to 2005, Harvard continued production at its 1990 levels, like many major research universities – but that left it 68th in production, right behind Mid-America Nazarene University in Kansas (their radio jingle still sticks in my head from my Kansas years… MNU’s, that is, not Harvard’s… I doubt Harvard has a radio jingle).

If the trends in master’s degree production weren’t bad enough, similar if not more disturbing trends have occurred in the production of doctoral degrees in educational administration. In 1990, Harvard reported about 40 doctoral degrees in Educational Administration and Nova Southeastern about 100. Bad enough already. By 2005, Harvard was no longer listing or reporting doctoral degrees granted under program codes for Educational Administration, and the biggest producers nationally were Nova Southeastern (368), Argosy University – Sarasota (196), and St. Louis University (62). Even if these programs were/are credible, managing quality control for 200 to 400 doctoral candidates per year seems problematic at best. Simply finding, enrolling and retaining 200 to 400 high quality candidates willing to pursue this type of degree seems a bit of a stretch! How many applied? How many, if any, were rejected?

The damage done by these institutions and the diversified production of educational leaders is astounding in some states. In 1999, only a few principals of Missouri public schools held graduate degrees from the state’s emerging degree-mills. By 2006, 185 held their master’s degrees from Lindenwood University and 205 from William Woods, out of a data set of just over 2,000 completely matched records over time. Nearly 400 of 2,000 Missouri principals – nearly 20% – held degrees from institutions which are arguably hardly qualified to grant them.

Principals who attended these graduate programs are substantially more likely to have attended the least competitive undergraduate colleges. For William Woods University, 80% of master’s degree recipients who became Missouri principals attended undergraduate colleges in the bottom 3 (of 6) categories of competitiveness (based on Barron’s Guide ratings), compared to 68% of principals statewide.

Further, among teachers hired into schools headed by these principals, the share who attended the least competitive colleges has grown dramatically – from 65% to 75% (bottom two categories of Barron’s ratings) in 7 years – faster than for other schools statewide.

This shift would be inconsequential were it not for strong and consistent evidence from a multitude of studies that the academic caliber of the teacher workforce is highly relevant to student success. While many sources highlight this issue (see, for example, Baker & Cooper, 2005), Loeb and colleagues provide a particularly striking example in their work on New York City. They report that:

“ . . . almost half of the teachers in the most effective quintile (based on student outcomes) graduated from a college ranked competitive or higher by Barron’s, compared to only ten percent of the teachers in the least effective quintile.”(p. 23)

This is a serious issue, and one that state policymakers seem unwilling to address. National accrediting agencies are comparably unwilling and/or incapable of addressing this educational leadership brain drain.

A graduate program in educational leadership or any field is only as good as the quality of its students and faculty, but criteria for program accreditation pay little attention to either the academic quality of students or qualifications of faculty.

Altering the quality of school leadership requires greater involvement of leading public and private universities, pursuing endeavors like the new Harvard program. But equally important, altering the quality of school leadership requires that state policymakers step up and shut down institutions that by the quality of their average student and qualifications of their faculty have no business preparing school leaders.

While this argument might easily be construed as academic elitism, it is important to acknowledge that it relates to the preparation of leaders for academic institutions – namely, public schools. It is difficult to conceive of a rational argument for ignoring the relevance of academic credentials for individuals wishing to lead academic institutions.

Relevant research readings:

Baker, B., & Cooper, B. (2005). Do principals with stronger academic backgrounds hire better teachers? Policy implications for improving high-poverty schools. Educational Administration Quarterly, 41(3), 413-448.

Baker, B.D., Orr, M.T., & Young, M.D. (2007). Academic Drift, Institutional Production and Professional Distribution of Graduate Degrees in Educational Administration. Educational Administration Quarterly, 43(3), 279-318.

Baker, B.D., Wolf-Wendel, L.E., & Twombly, S.B. (2007). Exploring the Faculty Pipeline in Educational Administration: Evidence from the Survey of Earned Doctorates 1990 to 2000. Educational Administration Quarterly, 43(2), 189-220.