School Finance 101: Gaming Adequacy by Creating a Veneer of Empirical Validity

This post comes from a work in progress… and addresses the games states play to validate their choices to spend less than might actually be needed to achieve desired outcome standards. It will be followed by another post reviewing three major smokescreens commonly used to argue that none of this matters anyway.

=====

Over the past two decades in particular, states and advocacy groups have attempted with greater frequency to define the amount of funding necessary for achieving adequate educational outcomes. One might characterize the period as the rise of empiricism in school finance, which coincided with a shift in litigation strategies from emphasis on funding equity to emphasis on funding adequacy – specifically, whether funding was adequate either to provide specific programs and services or to achieve specific measured educational outcomes. In some cases, states have adopted their empirical strategy in response to judicial orders that the legislature comply with a state constitutional mandate for the provision of an adequate education. In other cases, states have proactively set out to validate spending targets they know they can already meet (or have already met), in order to claim political victory on school finance reform.

Prior to this new “empirical era,” total state budgets were set based on the political preferences of governors and legislators regarding state tax policy and the revenues expected to be produced by the state tax system. Revenue projections, based on politically palatable tax policy, divided by the number of children to be served, generated the average per-pupil amount of available aid. And then the tug of war over shifting distributions toward one constituency, and thus away from another, ensued. The biggest difference between this approach and current approaches, if any, is that now state policymakers are more likely to attempt to justify that the amount backed into via the same steps is, in fact, an empirically valid estimate of the funding needed for children to achieve adequate outcomes.

Baker, Taylor and Vedlitz (2005) provide an explanation of early gaming of estimates of the costs of providing an adequate education in Illinois and Ohio in the 1990s.

Augenblick and Colleagues provide multiple cost estimates for Illinois based on different outcome standards, using single or multiple years of data and including some or all outcome standards. The higher of the two figures in Table 5 represents the average expenditures of Illinois school districts which, using 1999-2000 data, had 83% of students meeting or exceeding the standard for improvement over time. The lower of the two figures is based on the average expenditure of districts which, using 2000 data only, had 67% of pupils meet or exceed the standards, and 50% meeting standards on all tests.

Similar issues exist in a series of successful schools cost estimates produced in Ohio a year earlier. In Ohio, however, estimates were derived and proposed amidst the political process, with various constituents picking and choosing their data years and outcome measures to yield the desired result. Two Ohio estimates are provided in the table, but multiple estimates were actually prepared based on different subsets of districts meeting different outcome standards. The Governor’s office chose 43 districts meeting 20 of 27 1999 standards, the Senate selected 122 districts meeting 17 of 18 1996 standards, the House chose 45 districts meeting all 18 original standards in 1999, and the House again in an amended bill used 127 districts meeting 17 of 18 1996 standards in 1996 and 20 of 27 standards in 1999. (Baker, Taylor, & Vedlitz, 2005, p. 15)

Put simply, legislators in Ohio backed into outcome standards to identify that subset of school districts that on average were spending what the state was willing to spend within its current budget.
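The mechanics of the successful-schools approach, and of the gaming, are simple to sketch: the “adequate” cost estimate is just the average spending of whichever districts clear whatever outcome bar one chooses. A minimal sketch, using entirely hypothetical district data, shows how moving the bar moves the answer:

```python
# Hypothetical district data: (per-pupil spending, % of students meeting standards).
districts = [
    (7200, 55), (7800, 62), (8200, 68), (8900, 74),
    (9600, 81), (10400, 86), (11500, 90), (12800, 93),
]

def successful_schools_estimate(districts, cutoff):
    """Average spending of districts at or above the chosen outcome cutoff."""
    qualifying = [spend for spend, pct in districts if pct >= cutoff]
    return sum(qualifying) / len(qualifying)

# Moving the outcome bar moves the "empirically valid" cost estimate.
low_bar = successful_schools_estimate(districts, 60)
high_bar = successful_schools_estimate(districts, 85)
```

Choose the bar after looking at the spending distribution, and the “empirical” estimate lands wherever the budget needs it to.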

New York’s Numbers Game

More recent school finance reforms in New York State reveal that similar games persist. In response to the court order in Campaign for Fiscal Equity v. State, the legislature adopted a foundation aid formula, to be phased in from 2007 to 2011, in which the basic funding level would be set as follows:

The Foundation Amount is the cost of providing general education services. It is measured by determining instructional costs of districts that are performing well. (NYSED, Primer on State Aid, 2011-12)

The state defined “performing well” as a standard of 80% of children scoring proficient or higher on state assessments, a performance level marginally lower than the statewide mean at the time.

In constructing their baseline cost estimates, state officials adopted a handful of additional steps to ensure a politically palatable, low basic cost estimate. First, state officials chose to consider only the average spending of those districts that were both “performing well” and in the lower half of spending among those performing well. By taking this step, nearly all districts in the higher cost regions of the state are excluded and thus have limited effect on the basic cost estimate. Figure 1 shows that across regions, about 60 to 80% of districts meet the “successful” standard. In Western New York and the Finger Lakes region about 73% of districts are both “successful” and low spending. But, while 75 to 83% of Hudson Valley and Long Island districts are “successful,” only 20 to 25% are in the lower half of spending (even after applying the state’s regional cost adjustment, which is clearly inadequate).

Thus, basic costs for districts statewide are measured largely against the average spending of districts lying somewhere in the triangle between Ithaca, Buffalo and Syracuse.  Spending behavior of these districts has little relevance to costs of providing adequate education in and around New York City.
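New York’s two-step filter is easy to illustrate. A sketch with hypothetical districts (regions are real names, but the spending and proficiency numbers are placeholders, not actual data) shows how the “lower half of spending” screen quietly removes the high-cost downstate regions from the calculation:

```python
# Hypothetical districts: (region, per-pupil spending, % proficient).
districts = [
    ("Finger Lakes", 9000, 84), ("Finger Lakes", 9400, 88),
    ("Western NY", 9200, 82), ("Western NY", 9800, 86),
    ("Hudson Valley", 13900, 90), ("Hudson Valley", 15200, 93),
    ("Long Island", 14500, 92), ("Long Island", 15800, 95),
]

# Step 1: keep only "successful" districts (>= 80% proficient).
successful = [d for d in districts if d[2] >= 80]

# Step 2: keep only the lower-spending half of those successful districts.
successful.sort(key=lambda d: d[1])
lower_half = successful[: len(successful) // 2]

basic_cost = sum(d[1] for d in lower_half) / len(lower_half)
regions_counted = {d[0] for d in lower_half}  # downstate regions drop out
```

Every district here is “successful,” yet the basic cost ends up being the average of upstate spending alone.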

Figure 1


Another step in the process further deflates basic cost estimates. Instead of adopting a comprehensive measure of annual operating expenditures, the state chose a pruned down “general instructional spending” figure.  In particular, the pruned general instructional spending figure is substantively lower than the state’s approved operating expense figure for downstate districts, as shown in Figure 2.

Figure 2


The combination of a) setting a low outcome bar, b) excluding districts in higher cost regions of the state, and c) selecting a partial spending figure rather than a more comprehensive one guarantees a more politically palatable minimum cost estimate, while still providing a veneer of empirical validity.

Despite taking such care to generate a low estimate of adequate spending undergirding the state foundation aid formula, in recent years the state has failed to come even close to funding the targets established by the formula – providing less than half of the target levels of aid required for many of the state’s highest need districts.

Rhode Island’s Numbers Game

Perhaps most ludicrous of all is Rhode Island public officials’ attempt to validate empirically the spending levels selected for recent school finance reforms. Rhode Island’s school finance reforms gained significant attention among policy think tanks as a model of proactive political collaboration leading to progressive, empirically based but elegantly simple reform (Wong, 2013). As described in official documents, the basic funding level for the Rhode Island formula is set as follows:

(1) The core instruction amount shall be an amount equal to a statewide per pupil core instruction amount as established by the department of elementary and secondary education, derived from the average of northeast regional expenditure data for the states of Rhode Island, Massachusetts, Connecticut, and New Hampshire from the National Center for Education Statistics (NCES) that will adequately fund the student instructional needs as described in the basic education program and multiplied by the district average daily membership as defined in section 16-7-22. (RIDE, 2010)

As articulated by State Education Commissioner Deborah Gist:

“Our core instructional amount was based on national research, using data from the NCES, is sufficient to fund the requirements of the Rhode Island Basic Education Program, and it in no way focused on states with low per-pupil expenditures. In fact, we looked particularly carefully at our neighboring states, which have some of the highest per-pupil expenditures in the nation, and we included only those states that have an organizational structure and staffing patterns similar to ours.” (Gist, 2010)

Several points here are worthy of note.

  • Like New York officials, Rhode Island officials chose to focus on a reduced spending figure – core instructional spending – rather than a complete current operating spending figure.
  • Average core spending of other states hardly constitutes “national research,” and average spending in other states, drawn from national data sources, is hardly indicative of what might be required to achieve Rhode Island’s required outcomes – unless the state’s outcomes are also contingent on standards set in other states.
  • The data used to set funding targets for school year 2010-11 and beyond come from several years prior.
  • New Hampshire is not a neighboring state of Rhode Island.

Table 1 shows the effect of including New Hampshire among Rhode Island’s “neighbors” when calculating the basic spending levels. Spending in New Hampshire is substantively lower than in Massachusetts or Connecticut, and thus brings down the average. Notably, spending in Vermont, which is much higher than in New Hampshire, is not included.

Table 1


Eventually, in accordance with their “analyses,” Rhode Island officials proposed a foundation level for 2010-11 and beyond set at $8,295 (RIDE, 2010; Wong, 2013). Notably, however, the average spending in Connecticut, Massachusetts, and New Hampshire that most closely approximates that figure comes from 2006-07. Further, the 2007-08 Rhode Island average core instructional spending per pupil was already over $8,500, and a more comprehensive measure of current operating spending exceeded $13,000 per pupil.
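The effect of including New Hampshire in the “neighbor” average is plain arithmetic. A sketch with hypothetical per-pupil core spending figures (placeholders, not the actual NCES values used by RIDE):

```python
# Hypothetical per-pupil "core instruction" spending figures (placeholders,
# not the actual NCES values).
core_spending = {
    "Rhode Island": 8600,
    "Massachusetts": 9200,
    "Connecticut": 9500,
    "New Hampshire": 7400,
}

def regional_basis(states):
    """Average core spending across the chosen set of states."""
    return sum(core_spending[s] for s in states) / len(states)

with_nh = regional_basis(["Rhode Island", "Massachusetts", "Connecticut", "New Hampshire"])
without_nh = regional_basis(["Rhode Island", "Massachusetts", "Connecticut"])
# Folding in lower-spending New Hampshire pulls the basis down.
```

Choosing which states count as “neighbors” is choosing the foundation level.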

References

Baker, B. D. (2012). Revisiting the Age-Old Question: Does Money Matter in Education?. Albert Shanker Institute.

Baker, B. D. (2009). Private schooling in the US: Expenditures, supply, and policy implications. Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit.

Baker, B. D., & Corcoran, S. P. (2012). The Stealth Inequities of School Funding: How State and Local School Finance Systems Perpetuate Inequitable Student Spending. Center for American Progress.

Baker, B., & Green, P. (2008). Conceptions of equity and adequacy in school finance. Handbook of research in education finance and policy, 203-221.

Baker, B. D., Libby, K., & Wiley, K. (2012). Spending by the Major Charter Management Organizations: Comparing Charter School and Local Public District Financial Resources in New York, Ohio, and Texas. National Education Policy Center.

Baker, B. D., Sciarra, D. G., & Farrie, D. (2012). Is School Funding Fair?: A National Report Card. Education Law Center. http://schoolfundingfairness.org/National_Report_Card_2012.pdf

Baker, B. D., Sciarra, D. G., & Farrie, D. (2010). Is School Funding Fair?: A National Report Card. Education Law Center. http://schoolfundingfairness.org/National_Report_Card.pdf

Baker, B. D., Taylor, L., & Vedlitz, A. (2005). Measuring educational adequacy in public schools (Report prepared for the Texas Legislature Joint Committee on Public School Finance, The Texas School Finance Project).

Baker, B., & Welner, K. G. (2012). Evidence and Rigor Scrutinizing the Rhetorical Embrace of Evidence-Based Decision Making. Educational Researcher, 41(3), 98-101.

Baker, B.D. & Welner, K.G. (2011a). Productivity Research, the U.S. Department of Education, and High-Quality Evidence. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/productivity-research.

Baker, B. D., & Welner, K. G. (2011b). School Finance and Courts: Does Reform Matter, and How Can We Tell?. Teachers College Record, 113(11), 2374-2414.

Baker, B., & Welner, K. G. (2010). Premature celebrations: The persistence of inter-district funding disparities. education policy analysis archives, 18, 9.

Bifulco, R. (2005) District-Level Black-White Funding Disparities in the United States 1987 to 2002. Journal of Education Finance 31 (2) 172-194.

Buras, K. L. (2011). Race, charter schools, and conscious capitalism: On the spatial politics of whiteness as property (and the unconscionable assault on black New Orleans). Harvard Educational Review, 81(2), 296-331.

Clune, W. H. (1994). The shift from equity to adequacy in school finance. Educational Policy, 8(4), 376-394.

Cuomo, A (2011) State of the State. Albany, NY. http://www.governor.ny.gov/sl2/stateofthestate2011transcript

Deslatte, A. (2011, October 11). Scott: Anthropology and journalism don’t pay, and neither do capes. Orlando, FL: Orlando Sentinel.

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (Ed.), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

Duncan, A. (November 17, 2010). The New Normal: Doing More with Less — Secretary Arne Duncan’s Remarks at the American Enterprise Institute. Washington, DC. http://www.ed.gov/news/speeches/new-normal-doing-more-less-secretary-arne-duncans-remarks-american-enterprise-institut

Duncombe, W.D., and Johnston, J. (2004). Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

Freeman, J. (2011) New Jersey’s ‘Failed Experiment’ The new governor is on a mission to make his state competitive again in attracting people and capital. New York, Wall Street Journal. http://online.wsj.com/article/SB10001424052702303348504575184120546772244.html

Gates, W. (2011) Flip the Curve: Student Achievement vs. School Budgets. Huffington Post. http://www.huffingtonpost.com/bill-gates/bill-gates-school-performance_b_829771.html

Gist, D. (2010) National Journal. R.I. Formula Funds Children, Not Systems. http://education.nationaljournal.com/2010/06/a-funding-formula-for-success.php

Imazeki, J., and Reschovsky, A. (2004). Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

McClure, P., Wiener, R., Roza, M., and Hill, M. (2008). Ensuring equal opportunity in public education: How local school district funding policies hurt disadvantaged students and what federal policy can do about it. Washington, DC: Center for American Progress. Retrieved December 20, 2009 from http://www.americanprogress.org/issues/2008/06/pdf/comparability.pdf

Public Impact; The University of Dayton, School of Education and Allied Professions; and Thomas B. Fordham Institute. (2008, March). Fund the Child: Bringing Equity, Autonomy and Portability to Ohio School Finance How sound an investment? Washington, DC: Thomas B. Fordham Institute. Retrieved December 20, 2009 from http://www.edexcellence.net/doc/fund_the_child_ohio_031208.pdf

New York State Education Department (2011). Fiscal Analysis & Research Unit. Primer on State Aid 2011-2012. http://www.oms.nysed.gov/faru/PDFDocuments/Primer11-12D.pdf

New York State Education Department (2011). Fiscal Analysis & Research Unit. Successful Schools Analysis Technical Report. http://www.oms.nysed.gov/faru/documents/technical_final.doc

Oliff, P., Mai, C., Leachman, M. (2012) New School Year Brings More Cuts in State Funding for Schools. Washington, DC: Center on Budget and Policy Priorities. http://www.cbpp.org/cms/?fa=view&id=3825  Accessed July 23, 2013

RIDE (Rhode Island Department of Education) Division of School Finance (2010) http://www.ride.ri.gov/Finance/Funding/FundingFormula/Docs/H8094Aaa_FINAL_6_10_10.pdf

Roza, M. (2006) “How Districts Short Change Low Income and Minority Students,” in Funding Gaps 2006. Washington, DC: The Education Trust.

Rubenstein, R., Schwartz, A. E., Stiefel, L., and Bel Hadj Amor, H. (2007). From districts to schools: The distribution of resources across schools in big city school districts. Economics of Education Review, 26(5), 532-545.

Stiefel, L., Rubenstein, R., and Berne, R. (1998). Intra-District Equity in Four Large Cities: Data, Methods and Results. Journal of Education Finance, 23(4), 447-467.

U.S. Department of Education, For Each and Every Child—A Strategy for Education Equity and Excellence, Washington, D.C., 2013. http://www2.ed.gov/about/bdscomm/list/eec/equity-excellence-commission-report.pdf

Wong, K. K. (2013). The Design of the Rhode Island School Funding Formula: Developing New Strategies on Equity and Accountability. Peabody Journal of Education, 88(1), 37-47.

An Illustrative Case of the Numbskullery of Evaluating Teacher Preparation by Student Growth Scores

Assumption:  A good teacher preparation program is one that produces teachers whose students achieve high test score gains

Relay Graduate School of Education is housed in North Star Academy in Newark, and its course modules are largely provided by relatively inexperienced “champion” teachers within its own network (and from the school itself). The program is designed to train its own future teachers [and others in network] – and to actually credential them (and grant them graduate degrees) in the specific methods used in their school(s).

Put simply, Relay GSE uses relatively inexperienced teachers to grant degrees to their own new colleagues, where those colleagues may be required by the school to gain those credentials in order to retain employment. No conflict of interest here? But I digress. Back to the point.

Their modules, as shown on the Relay website, are, in their best light, little more than mindless professional development for classroom management and reading inspirational books by school founders, discussed with “champion” teachers. Hardly the stuff of legitimate graduate work in any field. But again, I digress.

Relay GSE will likely place a significant number of its graduates in its own school (or in network).

North Star Academy has pretty good growth scores, by the (bogus) New Jersey growth metric.

Therefore, not only is North Star Academy totally awesome, but Relay GSE must be an outstanding  teacher preparation institution! It’s just that simple. They must be offering that secret sauce of teaching pedagogy which we should all be looking to as a model. Right?

Setting aside that the New Jersey growth scores themselves are suspect, and that the endeavor of linking teacher preparation program effectiveness to such measures is completely invalid, the current approach fails to recognize that North Star Academy actually retains less than 50% of any given 5th grade cohort through 12th grade, and far fewer than that for black boys. The school loses the vast majority of black boys, and for the few who remain, their growth scores – likely influenced as much by dwindling peer group composition among those left behind as by “teacher” effects – are pretty good.

But is a school really successful if 50 enter 5th grade, 1/3 are gone by 8th grade and only a handful ever graduate?

Is this any indication of the quality of teaching, or pedagogy involved?  I won’t go so far as to suggest that what I personally might perceive as offensive, demeaning pedagogy is driving these attrition rates (okay… maybe I just did).

But, at the very least, I might argue that a school that loses over half its kids from grade 5 to 12 is a failing school, not an outstanding one. Whether that has any implications for labeling their teachers as “failing” and their preparation programs as “failing” is another question entirely.

It is quite simply completely and utterly ridiculous to suggest that Relay GSE is an outstanding graduate school of education as a function of measured test score gains of the few students who might stick around to take the tests in subsequent years.

No secret sauce here… just a boatload of bogus policy assumptions creating perverse incentives and taking our education system even further in the wrong direction.

Notably, this does not prove it’s a bad or awful grad school of education either (see their videos, and read the reports here, for evidence on that).

My point here is that this particular case – or what it has the potential to be – is wonderfully (in a twisted way) illustrative of the numbskullery that pervades public education policy from k-12 school accountability metrics to proposals for “improving” teacher preparation.

This foolishness must stop.

A Poverty of Thinking about Poverty Measures in New Jersey School Finance

Cross Posted at http://njedpolicy.wordpress.com/2013/07/18/a-poverty-of-thinking-about-poverty-measures-in-new-jersey-school-finance/

Link to PDF of Policy Brief: Poverty_Counts_July_2013

Bruce D. Baker, Rutgers University, Graduate School of Education


Introduction

Every few years or so, in nearly any state but especially in those where leadership is actively seeking ways to reduce financial support to local public school districts serving lower income children,[1] one can expect the re-emergence of politically induced media outrage over rampant fraud in the National School Lunch Program. The usual course of events is as follows:

  1. Manufacture some scandalous but largely anecdotal manifesto about how local district officials are egregiously mislabeling children as low income in order to hoard obscene sums of state aid.
  2. Manufacture other claims that poverty really doesn’t matter anyway and certainly these poverty measures have little or nothing to do with determining whether children are likely to do well in school.[2]
  3. Assign a task force composed mainly of lay people with little or no expertise in education policy, finance or specifically the measurement of poverty, to swallow whole the manufactured evidence and generate politically convenient policy recommendations.

During my years in Kansas, on faculty at the University of Kansas, similar debates occurred with regularity. At one point, the legislature established an “At Risk Council” whose charge was to evaluate alternative proxies for determining student need, to be used in the state aid formula. Former education Commissioner Andy Tompkins was assigned to chair the task force, which eventually concluded:

The Council continues to believe that the best state proxy for identifying at-risk students is poverty, whether that be measured by free or free and reduced price lunches.[3]

Nonetheless, Kansas legislators continued to seek, and eventually adopted, alternative measures that would drive additional funding to lower poverty suburban districts, and thus away from higher poverty districts, under the auspices of special needs.[4]

In 2011, the New Jersey State Auditor released a report blasting rampant fraud in the school lunch program.[5] In 2012, a task force composed primarily of lay persons was formed to evaluate whether the state aid formula should continue to drive funding to local public school districts on the basis of these obviously fraudulent and overstated counts of children in need. But little seems to have come thus far of last year’s efforts to raise suspicion over the implications of supposed rampant fraud in the free and reduced lunch program for the equity and adequacy of the state aid formula.

Thus, here we go again. This month, the New Jersey auditor released yet another scathing report of rampant fraud, instigated by local school officials, in the National School Lunch Program. Immediately, that report was cast as having significant implications for how school funding is allocated.[6]

This year’s report again audited a select number of applications for the school lunch program, from 15 school districts, finding cases of misreported income, often by school officials themselves. Such fraud, if indeed validly characterized in the auditor’s report, is certainly wrong and should be handled appropriately. But the implications of the auditor’s findings for using subsidized lunch as a measure for driving state aid are negligible, other than the fact that the state should continue regular auditing.

Income Measures & School Funding Formulas

The basic assumption behind targeting additional resources to higher poverty schools and districts is that high need districts can leverage the additional resources to implement strategies that help to improve various outcomes for children at risk.  Some share of the additional resources is needed in higher poverty settings simply to provide for “real resource” equity – or to pay the wage premium required to recruit and retain teachers into higher poverty settings. Further, resource intensive strategies such as reduced class sizes in the early grades, intensive tutoring and extended learning time programs may significantly improve outcomes of low income students.

When seeking a measure for differentiating between higher and lower need settings, the idea is to find that indicator or measure that seems to best capture the likelihood that children will struggle in school – that they will enter kindergarten less prepared and have access to fewer out of school resources during their time in school (including limited summer learning opportunities).

A variety of socioeconomic indicators might be considered. But often, the information most readily available is counts of kids from low income families, as identified through the National School Lunch Program income criteria. And, as a measure of convenience, it tends to work quite well. I compare this measure below with Census poverty measures – counts of children in families living within school district boundaries whose incomes fall below the much lower 100% poverty threshold – which have some advantages but also some major shortcomings.

To determine whether school lunch counts are useful for guiding school finance policies, one must look more broadly at the validity of these measures when cast at the school district level, statewide. Small scale audits of individual applications are of marginal use in this regard. The simplest validity checks on the usefulness of subsidized lunch measures as a student need proxy for state aid are as follows:

Is the Poverty Measure Correlated with Other Poverty Measures?

It is indeed desirable to find some measure on which to base funding allocations that can’t be gamed, or manipulated by those who stand to receive the additional funding. But that’s not always feasible (or cost effective). And, even if a count method does involve local district officials gathering data, it can, and should still be audited.[7]

One reasonable way to evaluate district collected data on children qualifying for free or reduced lunch is to evaluate the relationship between the free/reduced lunch concentrations and census poverty estimates based on resident populations.

In Figure 1 we see that Census poverty rates tend to range from 0 to about 45%, while free/reduced rates – which count children in families under a much higher income threshold – range up to about 100%. In fact, as I’ve noticed in many analyses, the free/reduced lunch data tend to get messy above 80%, suggesting that this is the range within which local administrators may be maxing out their ability to get parents to comply and file paperwork. Here we see that even though poverty rates keep climbing, free/reduced rates seem to level off. Arguably, if anything is going on here, it is that very high poverty districts like Camden and Trenton – which fall “below the curve” – are under-reporting their free/reduced rates, with some possibility of marginal over-reporting in Elizabeth.

Overall, however, census poverty explains nearly 90% of the variation in free/reduced rates.

In other words, free/reduced lunch makes a pretty good proxy.
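The check here is nothing more exotic than the R-squared from a simple bivariate fit of free/reduced rates on Census poverty rates. A sketch using hypothetical district-level rates (not the actual New Jersey data):

```python
# Hypothetical district-level rates (fractions), for illustration only.
census_poverty = [0.05, 0.08, 0.12, 0.18, 0.25, 0.32, 0.40, 0.44]
free_reduced = [0.12, 0.20, 0.30, 0.42, 0.55, 0.68, 0.80, 0.83]

def r_squared(x, y):
    """Share of variation in y explained by x in a simple linear fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

r2 = r_squared(census_poverty, free_reduced)  # near 1 when the proxy tracks poverty closely
```

A high R-squared across districts is what justifies treating the convenient measure as a proxy for the stricter one.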

Figure 1. Relationship between Census Poverty 2010 and District Free/Reduced Lunch 2011

Slide1

In Figure 2, I’ve tried to better tease out the districts that may be under- or over-reporting, expressing both measures in natural logarithm form to clean up the non-linear relationship. Here, we see that the relationship remains very strong and still slightly curved.

If there were districts substantially over-reporting free/reduced lunch, they would appear to pop above the outer/upper edge of the curve. That is, their reported rates would be higher than predicted based on the alternative measure. On the other hand, there are a number of districts that are relatively low in poverty but report disproportionately low free/reduced lunch rates – that is, under-reporting.

Figure 2. Logged Relationship (natural log) between Census Poverty and Free/Reduced Lunch

Slide2

In general, these figures show that free/reduced lunch rates are a reasonable proxy for district poverty rates. These figures do not indicate substantial, systematic (beyond predicted, based on resident child poverty rates) mis-classification.
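The same fit can be used to flag candidate mis-reporters: predict each district’s free/reduced rate from its resident poverty rate, and see who sits farthest from the line. A sketch with hypothetical districts (letters, not real district names):

```python
# Hypothetical (census poverty rate, reported free/reduced rate) by district.
districts = {
    "A": (0.10, 0.26),
    "B": (0.20, 0.46),
    "C": (0.30, 0.66),
    "D": (0.40, 0.60),  # well below the fitted line
}

xs = [p for p, _ in districts.values()]
ys = [f for _, f in districts.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Residual = reported minus predicted; a large negative value suggests under-reporting.
residuals = {name: f - (intercept + slope * p) for name, (p, f) in districts.items()}
most_under = min(residuals, key=residuals.get)
```

Districts popping far above the line would be the over-reporting candidates; far below, the under-reporters.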

Is the Poverty Measure Correlated with Student Outcomes?

The “big question” is which version of the measure better captures differences in student outcomes – that is, which more accurately predicts educational disadvantage. This is straightforward enough to check as well. The first figure below shows the relationship between free/reduced lunch rates and proficiency rates on state assessments in 2011.

Figure 3 shows that % free/reduced lunch alone explains about 81% of the variation in proficiency rates across districts.  So, it’s a pretty reasonable proxy of educational disadvantage.

 Figure 3. Free/Reduced Lunch & Proficiency in 2011

Slide3

I have some concerns about the extent to which this relationship erodes at and approaching free/reduced rates above 80%. Is it really that Camden and Trenton perform that poorly compared to Union and Elizabeth, despite serving even less poor populations? Or might the story be more complex than this?

Figure 4 which shows the relationship between Census Poverty and proficiency sheds some additional light on this issue.

Figure 4. Census Poverty and Proficiency

Slide4

Figure 4 suggests that Camden and Trenton are actually a) higher poverty than Elizabeth (and Camden higher than Union) and b) perform more or less where they are expected to [somewhat below, as opposed to well below]. This is an interesting contrast that adds some support to my speculation above that these very high poverty cities may in fact be understating their poverty rates in their free/reduced lunch data. Indeed, there may be some overstating in Union and Elizabeth, but neither popped substantially above the curve in the previous charts.

Census poverty rates, while capturing a unique story of difference between Camden and Trenton vs. Union and Elizabeth, do slightly less well at explaining variations in proficiency rates, making the free/reduced count preferable in this regard.

Additional Policy Considerations

Given all of this, there are a few additional considerations when pondering which measure to actually use in state school finance policy.

More Stringent Count Methods require Larger Weights

First, if we choose to use a more stringent income threshold for poverty, like the census poverty measure, we would need to assign the appropriate weight to drive the appropriate amount of funding to high need districts. Simply changing our method of counting kids in poverty doesn’t change the needs of Camden or Trenton. It merely recasts those needs with an alternative measure. More stringent measures require larger weights, an issue that has been explored empirically.[8]

This applies to the choice of using free lunch (130% income threshold) as opposed to free or reduced lunch. Using free lunch only might permit better differentiation between high poverty districts, but a higher weight would then be required to drive sufficient funds to those districts. That is, shifting to this weight should not drive less total targeted aid, but rather, should target that aid more accurately.
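The arithmetic behind “more stringent count methods require larger weights” can be sketched directly. Only the 130% free-lunch income threshold below comes from the NSLP itself; every other number is made up for illustration:

```python
# Hypothetical district numbers.
base = 10000              # base per-pupil funding amount
free_reduced_count = 500  # broader count (free OR reduced lunch)
free_only_count = 300     # stricter count (free lunch only, 130% threshold)

# Suppose a 0.3 weight on the broader count drives the "right" amount of aid.
target_aid = 0.3 * free_reduced_count * base

# Switching to the stricter count, the weight must rise to hold aid constant.
required_weight = target_aid / (free_only_count * base)
```

Changing the count method without recalibrating the weight quietly cuts targeted aid; the needs of the district haven’t changed, only the yardstick.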

Problems with Residential/Geography Based Measures in New Jersey

Census poverty measures are limited in their usefulness in the current New Jersey policy context, because they are based on location of residence and linked to geographic boundaries of school districts. New Jersey has significant numbers of non-unified, regional secondary school districts for which poverty estimates may be imprecise or inaccurate.

Further expansion of charter schools and inter-district choice programs complicates use of measures based on place of residence. Funding to schools must be sensitive to the demographics of students enrolled in those schools.  It would be entirely inappropriate, for example, to require a sending district like Newark or Camden to pay charter or other district tuition on the basis of their own average resident poverty rate if the charter school or receiving district is not taking a comparable share of children in poverty.

As a result, free or free and reduced price lunch measures likely remain preferable.


[1] The assertion here that New Jersey officials are actively seeking a rationale for reducing state aid to higher poverty districts is justified here (https://schoolfinance101.wordpress.com/2012/03/02/amazing-graph-proves-poverty-doesnt-matter/), where State Education Commissioner Cerf presents data to assert that poverty may not have a strong influence on student outcomes; here (https://schoolfinance101.wordpress.com/2012/12/18/twisted-truths-dubious-policies-comments-on-the-njdoecerf-school-funding-report/), where the Commissioner asserts that “dollarizing” student needs simply doesn’t work; and, most notably, here (https://schoolfinance101.wordpress.com/2013/03/02/civics-101-school-finance-formulas-the-limits-of-executive-authority/), where I explain how state leaders have already, against the authority of the school funding statute itself, chosen to calculate district aid on the basis of “average daily attendance” rather than fall enrollment counts, leading to substantive, disproportionate reductions of aid to higher poverty districts.

[4] http://skyways.lib.ks.us/ksleg/KLRD/Publications/2013Briefs/2013/I-1-SchoolFinance.pdf (specifically adding a weight for non-low-income, non-proficient students)

[7] Preferably in a  more thorough and responsible way than checking a smattering of individual families’ forms for those who fall closest to the income threshold, necessarily ignoring those who fall just the other side of the threshold but didn’t file.

[8] Duncombe, W., & Yinger, J. (2005). How much more does a disadvantaged student cost?. Economics of Education Review, 24(5), 513-532.

Thinking (& Writing) About Education Research & Policy Implications

Education reporters out there… here are a few thoughts for you as you embark on whatever may be your next article pertaining to an education research study.

FIRST, do a Google Scholar search (easiest lit search around!) on the topic in question to see what other peer reviewed and non-peer reviewed work has been written on the same topic. More specifically, if you are reporting on a “work in progress” or a non-peer reviewed recent release, compare a) the methods used and b) the phrasing of major conclusions to those in the peer reviewed work. While peer review isn’t the be-all and end-all for research quality, methods do tend to get refined in the process, and junky methods often (though not always) get filtered out or substantively improved (it’s all relative)! More complicated methods aren’t always better. Good authors can explain more complicated methods in reasonable terms.

These next two are perhaps even more important… and require somewhat less technical background…

SECOND, stop, take a breath and revisit your basic knowledge of how schools work – how they are set up, how classrooms are organized, how kids and teachers are sorted across classrooms, schools, neighborhoods, etc. Ponder how those classrooms may differ from one school to the next, one town or city to the next. Scribble out pictures of “how schools work” – how a child’s day, week and year, inside and outside of school, is organized… AND THEN, ONLY THEN, start pondering the possible implications of the study.

THIRD, while pondering the implications of the study, make yourself a list of major current policy agendas and ask yourself – what the heck might any of this mean, when it comes to, say, studies of the effectiveness of charter schools? The effect of charter expansion? Or the usefulness of test-score based measures for evaluating teacher effectiveness?

One recent example that comes to mind is the reporting on a report (well, actually a series of them) from the Hamilton Project. Specifically, The Boston Globe covered the portion of the report in which one of the report’s authors, Michael Greenstone, indicated that:

High-income families have always invested more in education, but they now spend seven times more a year on average than a low-income family, up from four times in the 1970s, according to the report, coauthored by MIT economics professor Michael Greenstone. These families now spend as much as $9,000 annually on private tutoring, SAT prep courses, computers, and other activities, compared with about $1,300 for low-income families. (cited from the Boston Globe)

The (rather unfulfilling) policy implications punchline(s) from the Boston Globe article were:

For example, said Greenstone, simplifying financial aid applications and providing low-income families help in filling them out could increase college enrollment by about 8 percentage points at a cost of less than $100 a student.

Another recent study found that mailing high-achieving, low-income students personalized information on their college options nudged students to apply to better schools.

Surely, a sevenfold difference in private contributions to children’s learning between richer and poorer families has broader implications than this? Right?

Actually, this kind of disparity, and knowing how richer and poorer kids and their schools are organized, has potential ripple effect implications across nearly everything we study in education policy research.   Think about this – just a little bit – from a very basic and practical standpoint.

Wealthier families are adding up to $9k annually to the educational expenditures on their children, compared to $1.3k for less wealthy families.  So, even if these two groups lived in similar towns and attended “equally” funded schools, we’d have a substantial disparity in the financial inputs to their education. Now, if all of this additional spending is pointless, and, for example, doesn’t in any way contribute to improved test scores, then perhaps it’s a non-issue when we consider other implications for popular policy research. But, to the extent that this personal expenditure matters at all, then it has important ripple effects across numerous types of studies, pertaining to current favored policy topics.
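Using the figures quoted above, the back-of-the-napkin arithmetic looks like this (the $12,000 school spending level is a hypothetical of my own, chosen only to illustrate the point about “equally” funded schools):

```python
# Family investment figures as quoted from the report above; the
# per-pupil school spending figure is my own hypothetical.
school_spending = 12_000

rich_private = 9_000   # annual family investment, high-income
poor_private = 1_300   # annual family investment, low-income

print(rich_private / poor_private)           # ~6.9: the "seven times" gap

# Even with identical school funding, total educational inputs diverge:
rich_total = school_spending + rich_private  # 21,000
poor_total = school_spending + poor_private  # 13,300
print(rich_total / poor_total)               # ~1.58: a ~58% gap in total inputs
```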

For example, if teachers are going to be evaluated on the basis of student test score gains, and those tests are to be given annually, wouldn’t it be better to be the teacher of kids whose parents are spending more (assuming they are choosing wisely) on after school, weekend… and especially SUMMER academic opportunities? Seriously – first consider (jot it down/back of the napkin) how many hours per day, over a 185 day school year, a kid has contact with her algebra teacher. Then add up the hours for a typical Kumon program after school or on weekends. Add in all of those summer days, and potential access to a plethora of interesting summer academic & enrichment programs. Forty-five minutes a day for 185 days is a relatively small portion of a child’s life over the course of a year. It doesn’t take any heavy statistical lifting to figure that out. Just step back and think about how kids’ lives and schools are organized.
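To make the napkin math concrete (the 45 minutes and 185 days come from the text above; the 16 waking hours per day is my own rough assumption):

```python
# Annual contact hours with a single subject-area teacher.
minutes_per_day = 45
school_days = 185
contact_hours = minutes_per_day / 60 * school_days   # 138.75 hours

# Rough denominator: a child's waking hours in a year (assumed 16/day).
waking_hours = 16 * 365                              # 5,840 hours

print(contact_hours)                                 # 138.75
print(round(contact_hours / waking_hours * 100, 1))  # ~2.4% of waking life
```

Roughly 2–3% of a child’s waking year with that one teacher – everything else is peers, family and out-of-school investments.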

And is that 45 minutes a day in a class of 35 (dividing the teacher’s attention by 35) really equivalent to 45 minutes in a class of 16? And which kid is more likely in which class? (depends somewhat on state context).

There’s already a substantial body of literature documenting substantial differences in summer achievement growth by income status. Quite honestly, if our best value-added measures and growth percentile measures aren’t picking up such large, non-random, non-school investments in student learning – if these investments don’t affect the model results – it may just be because the models and measures on which they are based are crap.

It turns out that this differential investment by parents in out of school opportunities not only compromises how we think about per pupil spending differences across children, but it also may blow a pretty big hole in how we interpret a whole lot of other policy research & policy recommendations.

A second example, which I have discussed previously, is reporting on the much discussed CREDO studies of charter school “effects” on student achievement gains.   These studies really require that we ponder how school systems work and how kids sort (as well as how we measure who is similar to one another). Otherwise, we miss some really, really, important points.

First, I’ve explained previously through pictures that studies characterized as “randomized” lottery studies of charter schools really aren’t randomized, which can easily be seen by sketching out where in the process the randomization (the lottery) occurs. A true randomized study would take a representative population and randomly put half in a charter school and half in a control (whatever that may be) school. Like this:

[Figure: randomized design]

A lottery study, by contrast, starts with a sample of those who entered the lottery, which may or may not be representative of the total population – though in theory they were/are all similarly motivated to enter a lottery. And it’s only the lottery that’s randomized – not the peer group into which the kids fall when they finally end up at their assigned school. Like this:

[Figure: pseudo-randomized (lottery) design]
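The distinction can also be illustrated with a toy simulation (everything here is invented: “motivation” stands in for whatever unobserved traits drive lottery entry, and the threshold for applying is an arbitrary assumption):

```python
import random
random.seed(0)

# Illustrative population: each family has a "motivation" score that,
# by assumption, also correlates with out-of-school support.
population = [random.gauss(0, 1) for _ in range(10_000)]

# True randomization: split the WHOLE population at random.
shuffled = population[:]
random.shuffle(shuffled)
treat_true = shuffled[: len(shuffled) // 2]

# Lottery study: only motivated families apply (here, those above an
# arbitrary threshold), and the lottery randomizes among THEM only.
applicants = [m for m in population if m > 0.5]
random.shuffle(applicants)
treat_lottery = applicants[: len(applicants) // 2]

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(population), 2))     # ~0
print(round(mean(treat_true), 2))     # ~0: representative of everyone
print(round(mean(treat_lottery), 2))  # well above 0: representative of applicants only
```

The lottery winners and losers look alike, but neither group looks like the broader population – and neither does the peer group they land in.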

So who cares, if they are supposedly otherwise similar kids (though, as I’ve noted, the measures are often insufficient for defining them as such)? Well, let’s ponder again how schools work and how we evaluate the “effects” of a school on a kid. What’s in a school, after all?

Bricks & mortar, materials, supplies and equipment, yes.

Teachers, yes.

Other school staff, check.

And other kids! Check!

The “effect” of a school as measured in most studies of this type are the “effect” on measured test score changes during a given time period, of all of this stuff – and for that matter – any and all outside of school stuff that goes on during this same time period. And that includes the peer group. And a substantial body of research supports that peer groups matter for student outcomes.

The average current achievement level of the peers affects individual student’s outcomes.[1]

In other words, cream-skimming and/or selective attrition, to the extent it exists and to the extent it affects peer groups, matters (on both the up side and the down side)[2] in this type of study, which treats any and all school-conflated factors as contributing to measured school effects.

This is not a condemnation of the CREDO method, but rather a limitation (though I might condemn the extent to which they ignore and obfuscate this point). It’s really hard to sort out the peer effect from the teacher or school effect. They’re all conflated. And guess what… all of this then relates back to those huge differences in what kids’ families spend on their outside-of-school education! It would certainly be a huge stretch to suggest that positive effects found for a charter school, or charter schooling, using this method tell us anything about the relative effectiveness of charter versus “other” school teachers.

Then there’s the issue of how these CREDO-type studies frequently address (read: brush off) the issue of cream-skimming. First, many use measures insufficient to actually capture cream-skimming (treating all special education kids, or all “low income” kids, as equal, when they’re not, and when they may not be randomly sorted as either individuals or peers).

Second, they often set up a deceptive comparison… say… for example… showing that kids who entered charter middle schools from district elementary schools are representative of the total population of their cohort from the district elementary schools. The casual reader then assumes that this means that if the charter applicant and matriculated kids were representative of the populations of the sending schools, then so too must be the kids in the “control” group – district middle schools.

But wait a second, those aren’t the only two pipelines, or options out of feeder schools. Rather, a more complete picture might look like this…

[Figure: feeder school pathways]

Among kids in those feeder, urban (perhaps) neighborhood elementary schools, when middle school comes along, some may go to district magnet schools that have selective admissions (and thus selective peer groups), some may go to private schools and some may in fact move out to the suburbs. And then there are those who go to the district, “regular” schools – the likely “control” group in CREDO like studies. Do we really think that the kids who sort through each of these various pipelines are similar to one another? Or might comparing against a “feeder” group that sorts in many directions be a little deceptive, at least if it’s done without any acknowledgement of the various directions into which kids sort, and the uneven distribution that may (likely) result from that sorting?

To tie this all together, it’s also certainly likely that family contributions to outside-of-school education vary across these pathways.

So… draw some pictures. Ponder how the system works. Think broadly. Step back & revisit to see if anything might be missing.  Step outside the immediate implications provided by study authors and ask the bigger questions. And with each new study that comes along, don’t forget entirely all those that came before it!


[1] Hanushek, E. A., Kain, J. F., Markman, J. M., & Rivkin, S. G. (2003). Does peer ability affect student achievement?. Journal of Applied Econometrics, 18(5), 527-544.

The results indicate that peer achievement has a positive effect on achievement growth. Moreover, students throughout the school test score distribution appear to benefit from higher achieving schoolmates.

Hoxby, C. M., & Weingarth, G. (2005). Taking race out of the equation: School reassignment and the structure of peer effects. Working paper.

We find support for the Boutique and Focus models of peer effects, as well as for a generic monotonicity property by which a higher achieving peer is better for a student’s own achievement all else equal.

Burke, M. A., & Sass, T. R. (2013). Classroom peer effects and student achievement. Journal of Labor Economics, 31(1), 51-82.

…we find that peer effects depend on an individual student’s own ability and on the ability level of the peers under consideration, results that suggest Pareto‐improving redistributions of students across classrooms and/or schools. Estimated peer effects tend to be smaller when teacher fixed effects are included than when they are omitted, a result that suggests co‐movement of peer and teacher quality effects within a student over time. We also find that peer effects tend to be stronger at the classroom level than at the grade level.

[2] Dills, A. K. (2005). Does cream-skimming curdle the milk? A study of peer effects. Economics of Education Review, 24(1), 19-28.

The determinants of education quality remain a puzzle in much of the literature. In particular, no one has been able to isolate the effect of the quality of a student’s peers on achievement. I identify this by considering the introduction of a magnet school into a school district. The magnet school selects high quality students from throughout the school district, generating plausibly exogenous variation in the quality of classmates remaining to those students in the regular schools. I find that the loss of high ability peers lowers the performance of low-scoring students remaining in regular schools.

Stop School Funding Ignorance Now! A Philadelphia Story

On a daily basis, I continue to be befuddled by the ignorant bluster, intellectual laziness, and mathematical and financial ineptitude of those who most loudly opine on how to fix America’s supposedly dreadful public education system. Common examples that irk me include taking numbers out of context to make them seem shocking, like this Newark example (some additional context), or the repeated misrepresentation of per pupil spending in New York State.

And then there are those times, when a loudmouthed pundit simply chooses to ignore reality altogether – and frame the problem as it exists only in their own cloistered world or own head. That brings me to this tweet:

Perhaps I’m misinterpreting, but Andy Smarick appears in this tweet to place the blame for the financial distress of Philadelphia’s schools squarely, if not entirely, on the city school district itself. In fact, he suggests that someone has been “propping up” the district, and that because the district fails – as, in his telling, all “urban” districts do – it must be replaced by an assortment of private providers. See this post for more insights into Smarick’s “solution” to this “problem” that Philly schools have clearly created on their own.

To callously assert that the problems faced by Philly schools are primarily, if not entirely, a function of local mismanagement – and that someone somewhere has actually been trying to “prop” the district up – displays a baffling degree of willful ignorance. Save for another day a discussion of the fact that over the past 10 years the city has in fact adopted many of the strategies that Smarick himself endorses (privatized management, charter expansion, etc.).

One might argue that to a significant extent, through the state’s dysfunctional and inequitable approach to providing financial support for local public districts, Pennsylvania has for some time (but for a brief period of temporary reforms) actually been trying to put an end to Philly schools. And it appears that they may be achieving their goals. To summarize:

  1. Pennsylvania has among the least equitable state school finance systems in the country, and Philly bears the brunt of that system.
  2. Pennsylvania’s school finance system is actually designed in ways that divert needed funding away from higher need districts like Philadelphia.
  3. And Pennsylvania’s school finance system has created numerous perverse incentives regarding charter school funding, also to Philly’s disadvantage. (see here also)

I would be remiss if I didn’t actually include data or a graph in this post, beyond the citations to sources above that include plenty.  So here it is – the distribution of state and local revenues for districts in the Philly metro area from 2005 to 2011, with respect to child poverty.

[Figure: state and local revenue relative to metro average, by child poverty, Philadelphia metro districts, 2005–2011]

A district with average state and local revenue for the metro area would fall on the 1.0 line. The sizes of the shapes represent the size of the districts in terms of enrollment. Circles are for 2005, triangles for 2007 and so on (see key). The vertical position of larger shapes is measured from their center. Notably, Philly hangs at marginally above 80% of metro average funding. Yes… following the Rendell formula reforms, Philly’s position started to improve slightly, but it has since fallen back and never really made sufficient progress. Way up in that upper left hand corner is Lower Merion School District, perhaps the most affluent suburb of Philly. They’re doin’ just fine!

What we also notice here is that Philly’s indicator is, year after year, moving to the right in our picture. Some of this is a poverty measurement issue, but some of it is real (to be parsed more carefully at a later point). Philly school aged children are getting poorer. They were never compensated with sufficient additional resources to begin with and those resources are now in decline.

I’ve explained previously that cost pressures in education are primarily local/regional. Education is a labor intensive industry. Salaries must be competitive on the local/regional labor market to recruit and retain quality teachers. And for children to have access to higher education, they must be able to compete with peers in their region.

And within any region, children with greater needs and schools serving higher concentrations of children with greater needs require more resources – more resources to recruit and retain even comparable numbers of comparable teachers – and more resources to provide smaller class sizes and more individual attention.

Put simply – Philly needs far more than its surrounding districts but has, year after year, had far less.

More information on how and why money matters can be found here:

http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

As far back as I’ve been running the numbers with both national and state data sources, Philly has been among the most screwed urban public districts in the nation. Philly has never been “propped up.”

End the district? Because it’s clearly the right thing to do for these kids? Because we’ve propped them up year after year… and they just keep blowing it – acting inefficiently – in the interest of adults not kids – as all “urban” districts do? Are you freakin’ kidding me? Wake the hell up. Look at some damned data and evaluate the problem a little more carefully before you make such absurd declarations.

For those who wish to levy similar accusations against Chicago….

[Figure: the same graph for the Chicago metro area]

Those BIG shapes there… which, like Philly, fall below the “average” line and have much higher child poverty than other districts in their metro? Yeah… that’s Chicago. As I’ve noted in numerous previous posts on this blog (just search for “Chicago” or Illinois), Chicago and Philly are consistently among the most screwed major urban districts – operating in states with the least equitable state school finance systems. The links above to reports slamming PA (first two bullets) provide similar tales of inequity in Illinois.

UPDATE:

Clearly, Andy Smarick cares little that he lacks even the most basic understanding of the financial plight of Philadelphia public schools.  The tweets keep coming… and remain as wrong as ever… simply … factually… wrong! There is just no excuse for this kind of BS.

As for the presumptive solution here – that the “failed urban” district should/can be replaced with a portfolio of charter operators that will necessarily be more effective – consider again that Philly has been dabbling for over a decade with resource-free attempts at portfolio-izing the district. Consider also that even where charters – at small market share (http://shankerblog.org/?p=8609) – do appear relatively effective, there remain substantive differences in their student populations, and in many cases substantive differences in their access to resources.

There are no miracles, regardless of the type of provider. Here’s one particularly relevant post on the non-reformy lessons of KIPP: https://schoolfinance101.wordpress.com/2013/03/01/the-non-reformy-lessons-of-kipp/ & here’s a more cynical post regarding NJ charters, and Uncommon schools in particular:

https://schoolfinance101.wordpress.com/2013/07/14/newark-charter-update-a-few-new-graphs-musings/

In other words, if the urban school district has supposedly proven, with unlimited resources, that it cannot succeed, and if charters have largely proven a break-even endeavor in their urban contexts, then they too are equal failures. Only in Smarick’s wild imagination is the solution so simple and clear, yet so potentially dangerous if blindly accepted as public policy.

https://schoolfinance101.wordpress.com/2013/04/08/the-disturbing-language-and-shallow-logic-of-ed-reform-comments-on-relinquishment-sector-agnosticism/

This level of fact-free schlock and feeble minded policy advocacy must stop. Civil discourse? Sorry. I just can’t. This stuff is just too dumb for words! It’s irresponsible, ill-informed, reckless and more.

The Glaring Hypocrisy of the NCTQ Teacher Prep Institution Ratings

I’ve already written about this topic in the past.

But, given that NCTQ has just come out with their really, really big new ratings of teacher preparation institutions – with the primary objective of declaring teacher prep by traditional colleges and universities in the U.S. a massive failure – I figured I should once again revisit why the NCTQ ratings are, in general, methodologically inept and vacuous, and, more specifically, wholly inconsistent with NCTQ’s own primary emphasis: that teacher quality and qualifications matter perhaps more than anything else in schools and classrooms.

The debate among scholars and practitioners in education as to whether a good teacher is more important than a good curriculum, or vice versa, is never-ending. Most of us engaged in this debate lean one way or the other. Disclosure: I lean in favor of the “good teacher” perspective. Those with labor economics backgrounds or interests tend to lean toward the importance of the good teacher, while those with more traditional “education” training lean toward the importance of curriculum. I’m grossly oversimplifying here (and perhaps opening a can of worms that need not be opened). Clearly, both matter.

I would argue that NCTQ has historically leaned toward the idea that the “good teacher” trumps all – but for their apparently newly acquired love of the Common Core Standards.

Now here’s the thing – if the content area expertise of elementary and secondary classroom teachers and the selectivity and rigor of their preparation matters most of all – how is it that at the college and university level, faculty substantive expertise (including involvement in rigorous research pertaining to the learning sciences, and specifically pertaining to content areas) is completely irrelevant to the quality of institutions that prepare teachers? That just doesn’t make sense.

Here’s a snapshot of the data collection framework used by NCTQ to rate teacher preparation institutions:

[Figure: NCTQ data collection framework]

Seemingly most important of all is whether the teacher preparation institution teaches teachers how to teach/adopt the Common Core Standards.  The vast majority of this information seems to be derived from documents such as syllabi and course catalogs.  In fact, the majority of items in this framework are about curriculum as represented in whatever documents they decided to/were able to collect and how they then chose to interpret those documents.

ABSOLUTELY NOWHERE IN THE DATA FRAMEWORK ABOVE, OR IN THEIR ENTIRE METHODOLOGY DOCUMENT, IS THERE ANY REFERENCE TO FACULTY TRAINING OR EXPERTISE (INCLUDING RESEARCH CONTRIBUTIONS TO THE SCIENCE OF TEACHING AND LEARNING).

Culling key words from syllabi and catalogs is no way to determine the quality of teacher preparation institutions, any more than one can evaluate the quality of a high school by looking at its list of graduation requirements and the courses theoretically offered by virtue of their existence in a course catalog.

A heads up for future NCTQ reports: it is also not particularly useful to try to rank teacher preparation institutions by the test scores of the students of their graduates.

Yeah… it’s relatively convenient. Yeah… it allows NCTQ to subjectively tweak their ratings for their own political purposes. But it’s not only a largely pointless endeavor – it’s one that runs in complete contrast with what NCTQ claims is of central importance to improving the quality of our supposedly dreadful teacher preparation pipeline. It’s certainly easy enough to game this goofy methodology, if we wanted to bother, by inserting “Common Core” everywhere that NCTQ’s minimally trained minions might search.

There are numerous issues regarding teacher preparation that legitimately require our attention. I’ve pointed out previously that credential production for teachers is adrift.

I’ve pointed out in research a number of years back that ed schools are actually in an awkward position when it comes to recruiting faculty and building a team of faculty that bring to the table the diverse set of skills and expertise needed to provide teachers with balanced, rigorous preparation.  The faculty pipeline for teacher preparation is bifurcated between research and practice orientations and many preparation programs are imbalanced in one direction or the other, with the standards of their institutions shaping their preferences and practices in ways that don’t always support better teacher preparation.

These are complex issues that my colleagues and I at the University of Kansas (back in 2005) and many others have addressed and continue to address. They need real attention.

The new NCTQ report offers minimal guidance and a whole lot of misguided hype.

Related Articles

Wolf-Wendel, L., Baker, B.D., Twombly, S., Tollefson, N., & Mahlios, M. (2006). Who’s Teaching the Teachers? Evidence from the National Survey of Postsecondary Faculty and Survey of Earned Doctorates. American Journal of Education, 112(2), 273-300.

Baker, B.D., Wolf-Wendel, L.E., & Twombly, S.B. (2007). Exploring the Faculty Pipeline in Educational Administration: Evidence from the Survey of Earned Doctorates 1990 to 2000. Educational Administration Quarterly, 43(2), 189-220.

Revisiting the Chetty, Rockoff & Friedman Molehill

My kids and I don’t watch enough Phineas and Ferb anymore. Awesome show. I was reminded just yesterday of this great device!

[Image: Doofenshmirtz with the Mountain-Out-Of-A-Molehill-inator]

This… is the Mountain-Out-Of-A-Molehill-INATOR! The name is rather self-explanatory – but here’s the official explanation anyway:

The Mountain-out-of-a-molehill-inator turns molehills into big mountains. It uses energy pellets to do so. It was created because all his life he was told “Don’t make mountains out of molehills”.

Now, I don’t mean to belittle the famed Chetty, Rockoff and Friedman study from a while back, which was quite the hit among policy wonks. As I explained in both my first, and second posts on this study, it’s a heck of a study, with lots of interesting stuff… and one hell of a data set!

What irked me then, and has all along, is the spin that was put on the study – and that the spin was not just a matter of interpretation by politicos and the media, but was being fed by the study’s authors.

I figured that would eventually die down. I figured eventually cooler heads would prevail. But alas, I was wrong. Worst of all, we still have at least some of the study’s authors prancing around like Doofenshmirtz (pictured above) with their very own Mountain-out-of-a-molehill-inator!

So what the heck am I talking about? This! is what I’m talking about. This graph provides the basis for the oft-repeated claim that having a good teacher generates $266k in additional income for a classroom full of kids over their lifetime. $266k – that’s a heck of a lot of money! We must get all kids in classrooms with these amazing teachers!

[Figure: earnings-by-age graph from the presentation]

This graph comes from a presentation given the other day to the New Jersey State Board of Education, in an effort to urge them to continue moving forward using Student Growth Percentiles as a substantial share of high stakes teacher evaluation (yes… to be used in part for dismissing the “bad” teachers, and retaining the “good” ones).

This graph shows us that the $266k figure actually comes from a differential of about $250! CHECK OUT THE VERTICAL AXIS ON THIS GRAPH! First of all, the authors chose to graph only one age (28) at which there even was a statistically significant difference in the earnings of children with super awesome versus only average teachers! The full range on the vertical axis GOES ONLY FROM $20,400 TO $21,200! And the trendline goes from $20,600 to $21,200 – a total vertical range of about $600. Yeah… that’s a molehill… about 2.9%. The difference from the top to the average (albeit amidst a rather uncertain scatter) is only about $250. The authors wouldn’t have generated quite the same buzz by pointing out that they found a wage differential of this magnitude – statistically significant or not – in a data set of this magnitude.

Here’s further explanation of their Mountain-out-of-a-molehill-inator calculation:

Rockoffs_Mole_Hill_2

That’s right… just point the Mountain-Out-Of-A-Molehill-Inator at the graph above, and all of a sudden that rather small differential occurring at a single age (displayed as a huge effect by stretching the heck out of the Y axis) becomes $266k.
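The arithmetic behind the inator is easy to reproduce. Here’s a back-of-envelope sketch using assumed values – a $250 annual differential, a class of 28 students, and a 38-year working life – chosen to hit the headline number; the study’s actual model is a discounted lifetime-earnings calculation, not this naive multiplication:

```python
# Back-of-envelope sketch of the molehill-to-mountain arithmetic.
# All numbers are assumptions for illustration, not the study's exact model.

annual_gain_per_student = 250   # rough differential visible at age 28 on the graph
students_per_class = 28         # assumed classroom size
working_years = 38              # assumed career length, roughly ages 27-65

# Naive, undiscounted extrapolation: assume the age-28 gap persists every year
# for every student in the room.
classroom_lifetime_gain = annual_gain_per_student * students_per_class * working_years
print(f"${classroom_lifetime_gain:,}")  # $266,000
```

Small per-person differential, times a classroom, times a lifetime: that is the entire trick.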

Heck, why not multiply times a whole freakin’ village! Or why not the entire enrollment of NYC schools (context for the study). What if every kid in NYC for 10 straight years had awesome rather than sucky teachers? How much more would they earn over a lifetime?

I was somewhat forgiving of this playful spin the first time around, when they first released the paper. These are the kind of things authors do to playfully explain the magnitude of their results.  It’s one thing when this occurs as playful explanation in an academic context. It’s yet another when this is presented as a serious policy consideration to naive state policymakers – a result that somehow might plausibly occur if those policymakers move boldly forward in adopting a substantively different measure of teacher effectiveness to be used for firing all of the bad teachers.

What really are the implications of this study for practice – for human resource policy in local public (or private) schools? Well, not much! A study like this can be used to guide simulations of what might theoretically happen if we had 10,000 teachers and were able to identify, with slightly better than even odds, the “really good” teachers – keep them, and fire the rest (knowing that we have high odds of wrongly firing many good teachers… but accepting this on the basis that we are at least slightly more likely to be right than wrong in identifying future higher vs. lower value-added producers). As I noted in my previous post, this type of small-margin-of-difference finding in big data really isn’t helpful for making determinations about individual teachers in the real world. Yeah… it works great in big-data simulations based on big-data findings, but that’s about it.
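To see why slightly-better-than-even odds are so troubling at the individual level, here’s a hypothetical simulation (the 55% classification accuracy and the 50/50 split of “good” teachers are my assumptions for illustration, not findings from the study):

```python
# Hypothetical simulation: classify teachers as "good" or "bad" with slightly
# better-than-even accuracy, then count how many good teachers get wrongly fired.
import random

random.seed(0)
N = 10_000                # simulated teachers
accuracy = 0.55           # assumed odds of classifying any one teacher correctly
truly_good = [random.random() < 0.5 for _ in range(N)]  # assume half are "good"

wrongly_fired = 0         # good teachers labeled bad (and dismissed)
correctly_kept = 0        # good teachers labeled good (and retained)
for good in truly_good:
    labeled_good = good if random.random() < accuracy else not good
    if good and not labeled_good:
        wrongly_fired += 1
    elif good and labeled_good:
        correctly_kept += 1

# With 55% accuracy, roughly 45% of the genuinely good teachers are dismissed.
print(wrongly_fired, correctly_kept)
```

The simulation works fine as a simulation; the problem is that each of those wrongly fired teachers is a real person in the real-world version.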

Indeed it’s an interesting study, but to suggest that it has important immediate implications for school and district level human resource management is not only naive but reckless and irresponsible, and it must stop.

The Disturbing Inequities of the New Normal

I wrote a post a while back providing an overview of the basics of state school finance formulas, reforms and why they matter. I revisit this post having now conducted a more extensive analysis of the retreat from school funding equity over the period from 2005 through 2011 (the most recent available federal school finance data). Let’s begin with a review of my previous post.

School Funding Formula Basics

Modern state school finance formulas – aid distribution formulas – typically strive (but fail) to achieve two simultaneous objectives: 1) accounting for differences in the costs of achieving equal educational opportunity across schools and districts, and 2) accounting for differences in the ability of local public school districts to cover those costs. Local district ability to raise revenues might be a function of either or both local taxable property wealth and the incomes of local property owners, thus their ability to pay taxes on their properties.

Figure 1 presents a hypothetical example of the distribution of state and local revenue per pupil across school districts, sorted by poverty concentration. The hypothetical relies on the simplified assumption that districts with weaker local revenue raising capacity also tend to be higher in poverty concentration. While that’s not uniformly true, there is often at least some correlation between the two [it serves to make this hypothetical a bit more straightforward]. Accepting this oversimplified characterization, Figure 1 shows that the typical low poverty and high local fiscal capacity district would likely raise the vast majority of the cost of providing its children with equal educational opportunity through local tax dollars. There may be some small share of state general aid assuming that the total cost of providing equal educational opportunity exceeds the local resources raised with a fair tax rate.

Figure 1

 

This pattern is usually arrived at (if it is arrived at) through some overly complicated formula requiring multiple inefficiently and illogically laid out spreadsheets of calculations and based on measures for which each state chooses its own, completely distinct and unrecognizable nomenclature. A short version might go as follows:

Step 1 – determine target funding level (need & cost adjusted foundation level) per pupil for each district

Target Funding per Pupil = Foundation Level x Student Need Adjustments x Geographic Cost Adjustments

Here, the foundation level is some specified per pupil dollar amount; student need adjustments include adjustments for individual student educational needs, as for children with limited English language proficiency and children with one or more disabilities, as well as collective characteristics of the student population such as poverty, homelessness and/or mobility/transiency rates; and geographic cost adjustments refer to geographic variations in competitive wages and factors such as economies of scale and population sparsity.

Step 2 – determine the share of target funding to be raised by local communities

State Aid per Pupil = Target Funding per Pupil – Local Fair Share

Yep. That’s it. Student needs and costs are accommodated in Step 1, and differences in local wealth and/or capacity to pay are accommodated in Step 2! Now convert that into about 2,000+ separate calculations and create incomprehensible names for each measure (like calling a weight on “low income students” a “student success factor”) and you’ve got a state school finance formula.
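The two steps above can be sketched in a few lines; every figure below is hypothetical:

```python
# A minimal sketch of the two-step foundation aid calculation described above.
# All dollar amounts and weights are hypothetical.

def target_funding_per_pupil(foundation, need_adjustment, geographic_adjustment):
    # Step 1: need- and cost-adjusted foundation level
    return foundation * need_adjustment * geographic_adjustment

def state_aid_per_pupil(target, local_fair_share):
    # Step 2: the state covers whatever the local fair share does not
    return max(target - local_fair_share, 0.0)  # aid never goes negative

# A hypothetical high-poverty, low-wealth district
target = target_funding_per_pupil(foundation=10_000,
                                  need_adjustment=1.30,      # e.g., poverty weight
                                  geographic_adjustment=1.05)
aid = state_aid_per_pupil(target, local_fair_share=4_000)
print(round(target), round(aid))  # 13650 9650
```

A wealthier district with a local fair share above the target would simply receive no foundation aid (or only some minimum flat amount, depending on the state).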

But I digress.

Implicit in the design of state school finance systems is the assumption that money may be leveraged to improve both the measured and unmeasured outcomes of children.  That is, money matters to the quality of schooling that can be provided in general, and money matters toward the provision of special services for children with greater educational needs. In short, money can be an equalizer of educational opportunity.

In a typical foundation aid formula, it is implied that a foundation level of “X” should be sufficient for producing a given level of student outcomes in an average school district. It is then assumed that if one wishes to produce a higher level of outcomes, the foundation level should be increased. In short, it costs more to achieve higher outcomes[1] and the foundation level in a state school finance formula is the tool used for determining the overall level of support to be provided.

Further, it is assumed that resource levels may be adjusted in order to permit districts in different parts of the state to recruit and retain teachers of comparable quality. That is, the wages paid to teachers affect who will be willing to work in any given school. In other words, teacher wages affect teacher quality and in turn they affect school quality and student outcomes. This is plain common sense, and this teacher wage effect operates at two levels. First, in general, teacher wages must be sufficiently competitive with other career opportunities for similarly educated individuals. The overall competitiveness of teacher wages affects the overall academic quality of those who choose to enter teaching.[2] Second, the relative wages for teachers across local public school districts determine the distribution of teaching quality.[3] Districts with more favorable working conditions (more desirable facilities, fewer low income and minority students) can pay a lower wage and attract the same teacher. Wages matter, therefore, money matters.

Finally, those student need adjustments in state school finance formulas assume that the additional resources can be leveraged to improve outcomes for low income students, or students with limited English language proficiency. First, note that some share of the additional resources is needed in higher poverty settings simply to provide for “real resource” equity – or to pay the wage premium for doing the more complicated job. Second, resource intensive strategies such as reduced class sizes in the early grades, high quality (using qualified teaching staff)[4] early childhood programs, intensive tutoring and extended learning time programs may significantly improve outcomes of low income students. And these strategies all come with significant additional costs (even when adopted under the veil of “no excuses charterdom“).

But, because providing more money to support public schools often means raising more tax dollars, and because providing supplemental resources to children whose own communities may lack local revenue raising capacity often means more aggressive redistribution of state tax revenues, whether and how money matters in education is often hotly contested politically.

School finance is a political minefield, which is arguably why so many pundits have tried to distract from school finance issues by advancing ludicrous arguments that education equity and overall quality can be improved by altering teacher labor markets via statistical deselection without ever addressing funding deficiencies and wage disparities or by expanding charter schooling and ignoring the role of philanthropic contributions (while counting on them).  Unfortunately for those political pundits, school finance is a minefield they must eventually walk through if they ever expect to make real progress in resolving quality or equity concerns.

How and Why Money Matters

In a recent report titled Revisiting the Age Old Question: Does Money Matter in Education?[5] I review the controversy over whether, how and why money matters in education, evaluating the current political rhetoric in light of decades of empirical research.  I ask three questions, and summarize the response to those questions as follows:

Does money matter? Yes. On average, aggregate measures of per pupil spending are positively associated with improved or higher student outcomes. In some studies, the size of this effect is larger than in others and, in some cases, additional funding appears to matter more for some students than others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent – in other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.

Do schooling resources that cost money matter? Yes. Schooling resources which cost money, including class size reduction or higher teacher salaries, are positively associated with student outcomes. Again, in some cases, those effects are larger than others and there is also variation by student population and other contextual variables. On the whole, however, the things that cost money benefit students, and there is scarce evidence that there are more cost-effective alternatives.

Do state school finance reforms matter? Yes. Sustained improvements to the level and distribution of funding across local public school districts can lead to improvements in the level and distribution of student outcomes. While money alone may not be the answer, more equitable and adequate allocation of financial inputs to schooling provide a necessary underlying condition for improving the equity and adequacy of outcomes. The available evidence suggests that appropriate combinations of more adequate funding with more accountability for its use may be most promising.

While there may in fact be better and more efficient ways to leverage the education dollar toward improved student outcomes, we do know the following:

  • Many of the ways in which schools currently spend money do improve student outcomes.
  • When schools have more money, they have greater opportunity to spend productively. When they don’t, they can’t.
  • Arguments that across-the-board budget cuts will not hurt outcomes are completely unfounded.

In short, money matters, resources that cost money matter and more equitable distribution of school funding can improve outcomes. Policymakers would be well-advised to rely on high-quality research to guide the critical choices they make regarding school finance.

Regarding the politicized rhetoric around money and schools, which has become only more bombastic and less accurate in recent years, I explain the following:

Given the preponderance of evidence that resources do matter and that state school finance reforms can effect changes in student outcomes, it seems somewhat surprising that not only has doubt persisted, but the rhetoric of doubt seems to have escalated. In many cases, there is no longer just doubt, but rather direct assertions that schools can do more than they are currently doing with less than they presently spend; that money is not a necessary underlying condition for school improvement; and, in the most extreme cases, that cuts to funding might actually stimulate improvements that past funding increases have failed to accomplish.

To be blunt, money does matter. Schools and districts with more money clearly have greater ability to provide higher-quality, broader, and deeper educational opportunities to the children they serve. Furthermore, in the absence of money, or in the aftermath of deep cuts to existing funding, schools are unable to do many of the things they need to do in order to maintain quality educational opportunities. Without funding, the efficiency tradeoffs and innovations being broadly endorsed are suspect. One cannot trade off spending money on class size reductions against increasing teacher salaries to improve teacher quality if funding is not there for either – if class sizes are already large and teacher salaries non-competitive. While these are not the conditions faced by all districts, they are faced by many.

It is certainly reasonable to acknowledge that money, by itself, is not a comprehensive solution for improving school quality. Clearly, money can be spent poorly and have limited influence on school quality. Or, money can be spent well and have substantive positive influence. But money that’s not there can’t do either. The available evidence leaves little doubt: Sufficient financial resources are a necessary underlying condition for providing quality education.

There certainly exists no evidence that equitable and adequate outcomes are more easily attainable where funding is neither equitable nor adequate. There exists no evidence that more adequate outcomes will be attained with less adequate funding. Both of these contentions are unfounded and quite honestly, completely absurd.

Evaluating the Retreat from Equity

Now let’s take a look at what has happened in several states in recent years. Let’s start with a quick look at the framework I use for characterizing state school finance systems, as developed for the report Is School Funding Fair?

Slide1

In Is School Funding Fair?, we estimate a regression model to identify the slope of the relationship between poverty concentrations and state and local revenue, controlling for population density, district size and variation in competitive wages. We then characterize states as higher and/or lower spending and progressive or regressive. As explained above, the rationale for a progressive system is that progressively distributed revenues/expenditures provide the opportunity to leverage the additional resources to provide smaller class sizes, supplemental services and/or compensation differentials to recruit and retain teachers, aiding in the closing of achievement gaps between higher and lower poverty settings.
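For readers who want the mechanics, here is a simplified sketch of that kind of fairness regression using fabricated district data (the covariates and coefficients are stand-ins, not the report’s actual model or estimates):

```python
# Simplified sketch: state & local revenue per pupil regressed on district
# poverty, controlling for (hypothetical) density and wage covariates.
# A negative poverty coefficient indicates a regressive system.
import numpy as np

rng = np.random.default_rng(42)
n = 200
poverty = rng.uniform(0.02, 0.40, n)   # district poverty rates
density = rng.uniform(0.1, 5.0, n)     # population density (stand-in units)
wages = rng.uniform(0.9, 1.1, n)       # competitive wage index (stand-in)

# Fabricated revenue data, regressive by construction (-4,000 per unit poverty)
revenue = (12_000 - 4_000 * poverty + 300 * density
           + 1_000 * wages + rng.normal(0, 200, n))

# Ordinary least squares via the normal equations
X = np.column_stack([np.ones(n), poverty, density, wages])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(round(float(coef[1])))  # recovers a slope near -4000: regressive
```

In the real analysis the sign and steepness of that poverty slope, within each state, is what separates the progressive systems from the regressive ones.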

In my most recent post, I showed the rather dramatic retreat from equity in New Jersey over a fairly short period of time, in both state and local revenues and expenditures. Here it is again.

Slide2

Slide3

Here are the effects in a handful of other states. These graphs, like the New Jersey graphs, use state and local revenues per pupil from the Census Fiscal Survey of Local Governments (F-33). Unlike the School Funding Fairness Report, these are simply best fit lines of the relationship between Census Poverty rates and state and local spending, for all districts enrolling over 2,000 pupils. No inflation adjustment is used, nor is there adjustment for within state competitive wage variation. That will come in a future post when we’ve completed our annual funding fairness analysis.

Slide4

Slide6

Slide8

Slide10

Slide12

Slide16

 

Slide18

 


[1] Duncombe, W. and Yinger, J.M. (1999). Performance Standards and Education Cost Indexes: You Can’t Have One Without the Other. In H.F. Ladd, R. Chalk, and J.S. Hansen (Eds.), Equity and Adequacy in Education Finance: Issues and Perspectives (pp.260-97). Washington, DC: National Academy Press.

[2] Allegretto, S.A., Corcoran, S.P., Mishel, L.R. (2008) The Teaching Penalty: Teacher Pay Losing Ground. Washington, DC: Economic Policy Institute. Murnane, R.J., Olsen, R. (1989) The Effects of Salaries and Opportunity Costs on Length of Stay in Teaching: Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352. Figlio, D.N. (2002) Can Public Schools Buy Better-Qualified Teachers? Industrial and Labor Relations Review 55, 686-699. Figlio, D.N. (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271. Ferguson, R. (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation 28 (2) 465-498. Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408. Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics, April, 49-71.

[3] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144. Lankford, H., Loeb., S., Wyckoff, J. (2002) Teacher Sorting and the Plight of Urban Schools. Educational Evaluation and Policy Analysis 24 (1) 37-62. Clotfelter, C., Ladd, H.F., Vigdor, J. (2011) Teacher Mobility, School Segregation and Pay Based Policies to Level the Playing Field. Education Finance and Policy , Vol.6, No.3, Pages 399–438. Clotfelter, Charles T., Elizabeth Glennie, Helen F. Ladd, and Jacob L. Vigdor. 2008. Would higher salaries keep teachers in high-poverty schools? Evidence from a policy intervention in North Carolina. Journal of Public Economics 92: 1352–70.

[5] Baker, B.D. (2012) Revisiting the Age Old Question: Does Money Matter in Education. Shanker Institute. http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Follow-up: Title I Funding DOES NOT Make Rich States Richer!

In one of my earliest posts, I took on a myth created and shared by many DC Think Tanks that the Title I funding formula inappropriately favors “rich states” and school districts in urban areas.

This myth has its origins in a handful of policy papers and poorly constructed analyses, some of which eventually made it into print – albeit in law review journals that tend to be light on reviewing quantitative evidence.

Today, after many conversations over the years, Lori Taylor of Texas A&M, Jay Chambers, Jesse Levin and Charles Blankenship of the American Institutes for Research and I finally published our article in the journal Education Finance and Policy in which we critique the arguments that Title I is making rich states richer. In short, much of the confusion boils down to the mis-measurement of income and poverty, an issue I’ve discussed extensively on this blog.

The assertion from prior reports is that the Title I aid formula includes a number of critical flaws that ultimately lead to providing disproportionate funding to states that are relatively high income and can spend more than other states to begin with, and to school districts in urban and suburban areas, shorting the rural districts which on their face may appear to have comparable or even higher poverty in some cases. We summarize this literature as follows:

Because Title I provides the largest share of direct federal education funding to states and local districts, Title I funds are a likely target for political tug-of-war during re-authorization. In recent years questions have been raised about whether Title I funding in particular is appropriately targeted to those districts, schools, and children that need it most. Deliberations have focused on perceived flaws in the design of the Title I funding formulas (Carey & Roza, 2008; Liu, 2007, 2008; Miller, 2009; Miller & Brown, 2010a,2010b). Critics argue that Title I funding favors wealthy states and larger urban districts, to the detriment of very poor states and rural areas, in part because parts of the formula described above are driven by state’s own spending levels and because rich states are able to spend more, thus gain more Title I funding (Liu, 2008, 2007; Miller, 2009; Miller & Brown, 2010a, 2010b).[1] Specifically, Liu (2007, 2008) provided analyses that suggest that lower poverty states and urban districts receive disproportionate share of Title I funding per poor child and asserted that (1) “By allocating aid to states in proportion to state per-pupil expenditures, Title I reinforces vast spending inequalities between states to the detriment of poor children in high-poverty jurisdictions,” and (2) “small or mid-sized districts that serve half or more of all poor children in areas of high poverty receive less aid than larger districts with comparable poverty” (Liu, 2008, p. 973).

But, as I’ve discussed previously on this blog, there are two issues that need to be considered when comparing the distribution of Title I dollars across local public school districts. In this previous post, I was able to crudely tackle those issues. That is, first, one must consider how the Title I dollar varies in value from one state to another, one region to another, across rural and urban settings, and so on. Education being a labor intensive industry, accounting for variation in school labor costs is critical for determining the fairness of the distribution of funding. In this previous post, I used the Education Comparable Wage Index developed by Lori Taylor for the National Center for Education Statistics.  Lori has been kind enough to update this index on her own through 2011 and post it on the Texas A&M web site. The second step I took in my earlier post was to adjust poverty rates for each state by an index created by Trudi Renwick of the Census Bureau. After adjusting for both the value of the Title I dollar and for Renwick’s state level poverty adjustments, I found that the Title I distributions really weren’t that awful – and certainly didn’t systematically reward rich states.

Thanks to the brilliance of Lori, Jay, Jesse and Charles (and some others providing supporting roles) we are now able to take this analysis a step (or more) further and re-evaluate Title I distributions down to the school district level to determine not only at large scale whether rich states are rewarded over poor ones, but whether the formula also advantages urban versus rural areas, and so on. Let’s take a quick walk through the two adjustments.  First, we have Lori’s updated Education Comparable Wage Index, which uses Census Data to estimate how much the wages for non-educators vary across labor markets nationally. That variation looks something like this:

Figure 1. National ECWI

Slide3

This index can be used to adjust the value of the Title I dollar.

Next, we have our poverty adjustment factor, which is arrived at through a few steps, also using Census Data. This process starts with a similar wage index (details in the full article) which is intended to capture differences in wages across locales and regions that are largely driven by differences in underlying costs of living… but in many cases tend to be less extreme than cost of living differences (because, in many cases, high costs are accompanied by desirable amenities).  We use this index to create an adjusted income threshold for poverty for each labor market nationwide. Then, we re-calculate the number of children in families below and above this adjusted income threshold, and compare our new poverty rate to the original poverty rate. This gives us a poverty adjustment factor – a multiplier that lets us adjust the poverty rate in a given area from its original level to the poverty rate that would exist at the adjusted income threshold. Here’s what that poverty adjustment factor looks like nationally.
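The mechanics can be sketched as follows, with hypothetical family incomes and a made-up wage index (the article’s actual procedure works from Census microdata across every labor market):

```python
# Simplified sketch of the poverty adjustment factor: raise the poverty income
# threshold by a local wage/cost index, recount poor families, and take the
# ratio of the adjusted poverty rate to the original rate.

def poverty_adjustment_factor(family_incomes, base_threshold, wage_index):
    adjusted_threshold = base_threshold * wage_index
    n = len(family_incomes)
    original_rate = sum(i < base_threshold for i in family_incomes) / n
    adjusted_rate = sum(i < adjusted_threshold for i in family_incomes) / n
    return adjusted_rate / original_rate

# Hypothetical high-cost metro: a wage index of 1.3 pulls more families
# under the (now higher) poverty line.
incomes = [15_000, 22_000, 26_000, 30_000, 45_000, 60_000, 80_000, 120_000]
factor = poverty_adjustment_factor(incomes, base_threshold=24_000, wage_index=1.3)
print(factor)  # 2.0 - twice as many families fall below the adjusted threshold
```

In a low-cost rural labor market the index runs below 1.0, so the same procedure produces a downward adjustment instead.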

Figure 2. Poverty Adjustment Factor

Slide4

So, taking into account regional wage/cost variation, poverty rates in urban and northeastern areas require an upward adjustment on the order of 25 to 55% in some cases, whereas in areas such as northwest Kansas, poverty rates actually require substantial downward adjustment.

We can probably see where this is headed at this point. But let’s go there anyway… since that is the main point here. Let’s start with this graph of Title I allocations per child in poverty by locale and by region, applying only the first adjustment for the value of the Title I dollar (updated ECWI). Metropolitan areas are areas around a core with population of at least 50k and micropolitan areas are areas around a core of 10k to 50k.

Figure 3. Applying the Dollar Value Adjustment Only (ECWI)

Slide5

In the left half of the figure we have “unadjusted” allocations and in the right we have adjusted allocations. Northeastern metropolitan districts have, in unadjusted dollars, over $1,800 per poverty pupil. This would appear to be the highest of any group. But even after applying only the first adjustment, this figure drops to $1,500 and is lower than most micropolitan and rural districts. Even this first step sheds significant doubt on the original assertion (which in some cases, did use a regional cost adjustment).
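That first adjustment is simply a division by the regional wage index. For example, with a hypothetical index of 1.20 for a northeastern metro (the figure uses the actual updated ECWI values):

```python
# Adjusting the value of the Title I dollar by a regional wage index.
# The index value here is a hypothetical stand-in, not the actual ECWI.

unadjusted = 1_800   # Title I dollars per poverty pupil (NE metro, unadjusted)
ecwi = 1.20          # assumed comparable wage index for the region
adjusted = unadjusted / ecwi
print(round(adjusted))  # 1500 - the same dollars buy less in a high-wage market
```

The second adjustment (to the poverty counts themselves) then shrinks the denominator’s complement in the opposite direction, which is why the gap widens further in Figure 4.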

Figure 4 takes the next step of applying adjustments to poverty rates, in order to better capture just how many children live in families below a more locally [labor market] reasonable income level. Here, we see that once we have made both adjustments, metropolitan districts generally are being significantly shortchanged relative to their micropolitan and rural peers. In fact, rural and micropolitan districts in central (plains) states are receiving in some cases twice as much (or more) per poverty pupil in Title I aid as are metropolitan residents.

Figure 4. Applying the Dollar Value and Poverty Adjustment

Slide7

In short, Title I funding DOES NOT ADVANTAGE WEALTHY, NORTHEASTERN, METROPOLITAN AREAS!  That is, not when one more accurately measures both the value of the education dollar and the expected numbers of children in need.

Now, back to the Title I formula. We discuss in our article that the Title I formula does indeed include factors that are, on their face, illogical and seemingly unfair. Why, after all, would policy drive more need-based funding to those who can and choose to spend more on their own (the spending factor)? The formula also includes political giveaways like the small state minimum. But these giveaways don’t amount to much (because small states are, well, small…).  It would certainly make sense to replace the illogical factors that currently drive Title I funding with the more logical factors addressed herein. But it is important to understand that doing so will drive MORE, not less, funding to metropolitan areas and states with higher average income. Empirically, it’s the right thing to do.

A few closing points are in order. First, it’s also important to understand that Title I alone cannot resolve the persistent disparities in state school finance systems. The Title I effect on funding fairness remains relatively small. Here it is in 2010.

Figure 5. Title I Effect on Funding Fairness

Slide1

So, no matter what we do, Title I will not solve our biggest funding equity issues. That remains largely a state problem.

Finally, it’s also worth considering how similar adjustments might apply across federal benefit programs.  Consider, for example, this interactive map of the current geographic distribution of federal benefits.

Selected References

Carey, K., & Roza, M. (2008). School funding’s tragic flaw. Seattle, WA: Center on Reinventing Public Education.

Liu, G. (2008). Improving Title I funding equity across states, districts and schools. Iowa Law Review, 93, 973-1014.

Miller, R. (2009). Secret recipes revealed: Demystifying the Title I, Part A funding formulas. Washington, DC: Center for American Progress.

Miller, R. T., & Brown, C. G. (2010a). Bitter pill, better formula: Toward a single, fair, and equitable formula for ESEA Title I, Part A. Washington, DC: Center for American Progress.

Miller, R. T., & Brown, C. G. (2010b). Spoonful of sugar: An equity fund to facilitate a single, fair, and equitable formula for ESEA Title I, Part A. Washington, DC: Center for American Progress.

Renwick, T. (2009). Alternative geographic adjustments of U.S. poverty thresholds: Impact on state poverty rates. Washington, DC: U.S. Census Bureau.

Renwick, T. (2011, January). Geographic adjustments of supplemental poverty measure thresholds: Using the American Community Survey five-year data on housing costs. Washington, DC: U.S. Census Bureau.


[1] Additional criticisms of Title I funding point to the fact that three of the four formulas used to allocate dollars do not take into account state fiscal effort (the level of state and local revenue dedicated to providing public education) and state-minimum provisions guarantee relatively large allocations to states with small populations (see Miller, 2009).

I don’t know anything about them, but they suck! Reformy thoughts on Ed Schools

It all started here, when Ben Riley of NSVF suggested that comments from Finnish Ed Guru Pasi Sahlberg (hero of the anti-reformers) regarding teacher preparation in Finland (and elsewhere) meant that the U.S. really needed to start shutting down teacher preparation programs.

Ben Riley’s main takeaway from Sahlberg’s post was that the U.S. should have about the same number of ed schools as Finland…. ? (or at least he lacked clarity on the point… So Sherman Dorn set him straight on the basic math):

A point on which Riley capitulated. So, now we’ve got that straight. The U.S. could indeed reduce the number of teacher preparation programs. But Finland’s total of 8 simply doesn’t scale to the U.S. population. Rather, we might need about 500 relatively highly regulated programs, largely housed in research universities and/or professional teaching colleges.

A bit of a sidebar here… Sherman Dorn is also pointing out that the Sahlberg article actually speaks of a system which maintains a strong role for the country’s research universities.

That is, not increased reliance on for-profit institutions, or quasi-academic, non-research-based startups like Relay GSE (which emphasize sit-down-and-shut-up classroom management) that rely almost exclusively on relatively inexperienced current teachers who themselves hold only a master’s degree (many from non-competitive programs – Relay Faculty/Relay NCATE App 9-2012) to deliver their certification programs.

Then the conversation enters new territory. So, what’s been going on in teacher preparation in the U.S.? Where have many of the emerging graduate degrees and credentials been coming from in education?

To which Ben Riley issues the incoherent response:

So, rather confidently as purveyors of decisive reformy thought tend to do, Ben Riley submits that he knows for sure that the system as a whole and invariably is still crappy… and uses the term “ecosystem” to sound informed/thoughtful.

But this is actually really funny, because the whole point of analogizing such systems to natural ecosystems is to understand their diversity and interconnectedness. Yet everything that follows conveys that Ben Riley neither understands that nor believes it is important.

So, I figure I’ll jump in (after standing by for a while) and post a link to my slides on changes in the pattern of production of education credentials over the past 20 years:

And why not throw in some citations to published research while I’m at it.

Skipping ahead here… because we somehow went on another tangent about Finland… I ask Ben Riley whether he believes this system that he knows for sure is crappy is crappier than it was 20 years ago.

I dare suggest that history matters. Context matters… and to know where we are headed, we might want to look first at where we’ve been. After all, crappiness requires context – either in terms of time, or in terms of some relevant peer group, or both. To know crappy, one must have some idea of what’s not crappy.

And here’s where the conversation just gets stupid and offensive, and so absurdly anti-intellectual that it is perhaps revealing of deeper problems with education in America.

Amazingly, Riley’s response is that it’s just crappy. Damn… that’s just brilliant!  I push to clarify… Doesn’t history matter? Shouldn’t we understand where we’ve been to figure out where we’re headed? The trends are rather striking. Yes, we’ve criticized teacher preparation in the U.S. for decades… but it certainly seems to be coming to a head of late. But what’s changed so dramatically? This post tells an interesting story!

So, asking again about whether history matters… (and yeah… putting it bluntly & chastising Ben Riley… who I feel at this point deserves a jab or two…)

[Note – My original post erred in attributing to Ben Riley a response denying this statement – a “nope, it does not.” However, the message here still stands. Throughout this conversation, Ben Riley displayed complete disregard for the history or context of “ed schools” or their “ecosystem,” responding instead with grossly misinformed, fact-challenged generalizations.]

Apparently, this was not worthy of a response? Do history and context matter? Or can we just call the current system crappy without any regard for either?

Perhaps this complete and utter disregard for intellectual inquiry into how, why, or even whether there are problems – this disregard for history and misunderstanding of complexity and “ecosystems” – is indicative of the failures of Yale Law School? After all, Yale Law has recently given us this (John King) and this (Neerav Kingsland [whom I like and respect, but…]) (and much more to be discussed later). Is there some funky mind-numbing (anti-critical-thinking) Koolaid being passed around in New Haven?

And perhaps it is indicative of the core problem of the modern education reform movement – be it the emphasis on misuse of measures in teacher evaluation (or rating ed schools), the desire to rapidly expand and deregulate charter schooling, or the crusade against ed schools as if they are some stagnant monolithic entity. Our willful ignorance of context and complete disregard for history is leading down a questionable path – well, actually several at once.

We concluded the conversation after one last side trip to Finland. I pointed out that various systemic complexities make it wrongheaded to focus solely or even primarily on teacher preparation institutions (without consideration for earnings competitiveness, etc.).

And I’m met with the classic “all of the good countries out there” that obviously beat us into the ground on international assessments do it differently… from us… and of course… the same as each other… you know… like they all have only 8 prep institutions regardless of total population, and only take the top 2% of HS graduates into teaching… and that top 2% goes into teaching regardless of expected earnings. And the programs all get accredited and rated and/or shut down based on whether they contribute positively to the country’s PISA ranking. And while their institutions are called universities… and have instructors called professors… who appear to be engaged in research… really, they’re more like entrepreneurial start-ups that are totally different from university-based Ed Schools in the U.S.? Yeah… okay… whatever. What a load of crap!

My final response:

I’m sick of data-free, research-void conversations with those who claim so belligerently to know all of the problems and have all of the answers. In other words, I know a crappy argument when I see one, and this was surely a crappy argument!

Related Research

Baker, B.D., Orr, M.T., & Young, M.D. (2007). Academic Drift, Institutional Production and Professional Distribution of Graduate Degrees in Educational Administration. Educational Administration Quarterly, 43(3), 279-318.

Baker, B.D., & Fuller, E. The Declining Academic Quality of School Principals and Why it May Matter. Baker.Fuller.PrincipalQuality.Mo.Wi_Jan7

Baker, B.D., Wolf-Wendel, L.E., & Twombly, S.B. (2007). Exploring the Faculty Pipeline in Educational Administration: Evidence from the Survey of Earned Doctorates 1990 to 2000. Educational Administration Quarterly, 43(2), 189-220.

Wolf-Wendel, L., Baker, B.D., Twombly, S., Tollefson, N., & Mahlios, M. (2006). Who’s Teaching the Teachers? Evidence from the National Survey of Postsecondary Faculty and Survey of Earned Doctorates. American Journal of Education, 112(2), 273-300.