
On School Finance Equity & Money Matters: A Primer

Conceptions of Equity, Equal Opportunity and Adequacy

State school finance reforms across the nation have focused on simultaneously achieving equal educational opportunity and educational adequacy. While achieving and maintaining educational adequacy requires a school finance system that consistently and equitably supports a certain level of educational outcomes, it is important to maintain equal educational opportunity even where the funding provided falls below adequacy thresholds. That is, whatever the outcome currently attained across the system, that outcome should be equally attainable regardless of where a child resides or attends school and regardless of his or her background.

Conceptions of school finance equity and adequacy have evolved over the years. Presently, the central assumption is that state finance systems should be designed to provide children, regardless of where they live and attend school, with equal opportunity to achieve some constitutionally adequate level of outcomes.[i] Much is embedded in this statement and it is helpful to unpack it, one layer at a time.

The main concerns of advocates, policymakers, academics and state courts from the 1960s through the 1980s were to a) reduce the overall variation in per-pupil spending across local public school districts; and b) disrupt the extent to which that spending variation was related to differences in taxable property wealth across districts. That is, the goal was to achieve more equal dollar inputs – or nominal spending equity – coupled with fiscal neutrality – or reducing the correlation between local school resources and local property wealth. While modern goals of providing equal opportunity and achieving educational adequacy are more complex and loftier than mere spending equity or fiscal neutrality, achieving the more basic goals remains relevant and still elusive in many states.
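To make these two yardsticks concrete, here is a minimal sketch – with entirely hypothetical district data – of how nominal spending equity and fiscal neutrality are commonly quantified: the dispersion of per-pupil spending, the correlation of spending with property wealth, and the wealth elasticity of spending.

```python
import numpy as np

# Hypothetical per-pupil spending and taxable property wealth per pupil
# for five districts (illustrative numbers, not real data).
spending = np.array([9500.0, 10200.0, 11800.0, 13500.0, 15200.0])
wealth = np.array([250e3, 400e3, 600e3, 900e3, 1.4e6])

# Nominal spending equity: dispersion in spending, e.g., the coefficient
# of variation (0 = perfectly equal dollar inputs).
cv = spending.std() / spending.mean()

# Fiscal neutrality: how tightly spending tracks local property wealth,
# via simple correlation and the wealth elasticity of spending (the slope
# of log spending on log wealth; near 0 in a fiscally neutral system).
correlation = np.corrcoef(wealth, spending)[0, 1]
elasticity, _ = np.polyfit(np.log(wealth), np.log(spending), 1)

print(f"coefficient of variation: {cv:.3f}")
print(f"wealth-spending correlation: {correlation:.2f}")
print(f"wealth elasticity of spending: {elasticity:.2f}")
```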

An alternative to nominal spending equity is to look at the real resources provided across children and school districts: the programs and services, staffing, materials, supplies and equipment, and educational facilities provided. (Still, the emphasis is on equal provision of these inputs.)[ii] Providing real resource equity may, in fact, require that per-pupil spending not be perfectly equal if, for example, resources such as similarly qualified teachers come at a higher price (competitive wage) in one region than in another. Real resource parity is more meaningful than mere dollar equity. Further, if one knows how the prices of real resources differ, one can better compare the value of the school dollar from one location to the next.
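As a toy example of the price-adjustment point, one can deflate nominal per-pupil spending by a regional comparable-wage index to approximate the real resources a dollar buys; the district figures and index values below are invented.

```python
# Comparing the "real" value of the education dollar across two labor
# markets using an illustrative comparable-wage index (1.0 = average).
districts = {
    "Metro District": {"nominal": 14000, "wage_index": 1.20},
    "Rural District": {"nominal": 11500, "wage_index": 0.90},
}

for name, d in districts.items():
    # Deflating by the wage index approximates purchasing power over real
    # resources (mostly staff), rather than nominal dollars.
    adjusted = d["nominal"] / d["wage_index"]
    print(f"{name}: nominal ${d['nominal']:,} -> adjusted ${adjusted:,.0f}")

# Here the nominally lower-spending rural district actually commands more
# real resources per pupil than the metro district.
```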

Modern conceptions of equal educational opportunity and educational adequacy shift emphasis away from schooling inputs and onto schooling outcomes – more specifically, onto equal opportunity to achieve some level of educational outcomes. References to broad outcome standards in the school finance context often emanate from the seven standards[iii] articulated in Rose v. Council for Better Education,[iv] a 1989 Kentucky school funding adequacy case that scholars argue marked the turning point from equity toward adequacy in school finance legal theory.[v] There are two separable but often integrated goals here – equal opportunity and educational adequacy. The first goal is achieved where all students are provided the real resources to have equal opportunities to achieve some common level of educational outcomes. Because children come to school with varied backgrounds and needs, striving for common goals requires moving beyond mere equitable provision of real resources. For example, children with disabilities and children with limited English language proficiency may require specialized resources (personnel), programs, materials, supplies, and equipment. Schools and districts serving larger shares of these children may require substantively more funding to provide these resources. Further, where poverty is highly concentrated, smaller class sizes and other resource-intensive interventions may be required to strive for those outcomes commonly achieved by the state's average child.

Meanwhile, conceptions of educational adequacy require that policymakers determine the desired level of outcome to be achieved. Essentially, adequacy conceptions attach a “level” of outcome expectation to the equal educational opportunity concept. Broad adequacy goals are often framed by judicial interpretation of state constitutions. It may well be that the outcomes achieved by the average child are deemed to be sufficient. But it may also be the case that the preferences of policymakers or a specific legal mandate are somewhat higher (or lower) than the outcomes achieved by the average child. The current buzz phrase is that schools should ensure that children are “college ready.” [vi]

One final distinction, pertaining to both equal educational opportunity and adequacy goals, is between striving to achieve equal or adequate outcomes and providing the resources that give children equal opportunity – regardless of their backgrounds or where they live – to achieve those outcomes. Achieving equal outcomes is statistically unlikely at best, and of suspect policy relevance, given that perfect equality of outcomes requires leveling down (actual outcomes) as much as leveling up. The goal of school finance policy in particular is to provide resources that offset pre-existing inequalities, so that no child has a systematically greater chance than any other of achieving the desired outcome levels.

[i] Baker, B. D., & Green, P. C. (2009). Conceptions, measurement and application of educational adequacy standards. In D. N. Plank (Ed.), AERA Handbook on Education Policy. New York: Routledge.

Baker, B., & Green, P. (2014). Conceptions of equity and adequacy in school finance. Handbook of research in education finance and policy, 203-221.


[ii] While often treated as a newer approach to equity analysis than measuring pure fiscal inputs, equity evaluations of real resources actually pre-date modern school finance equity analysis; they were used, for example, to evaluate the uniformity of segregated black and white schools operating in the pre-Brown, "separate but equal" era.

Baker, B. D., & Green, P. C. (2009). Does increased state involvement in public schooling necessarily increase equality of educational opportunity? The Rising State: How State Power is Transforming Our Nation’s Schools, 133.

[iii] As per the court's declaration: "an efficient system of education must have as its goal to provide each and every child with at least the seven following capacities: (i) sufficient oral and written communication skills to enable students to function in a complex and rapidly changing civilization; (ii) sufficient knowledge of economic, social, and political systems to enable the student to make informed choices; (iii) sufficient understanding of governmental processes to enable the student to understand the issues that affect his or her community, state, and nation; (iv) sufficient self-knowledge and knowledge of his or her mental and physical wellness; (v) sufficient grounding in the arts to enable each student to appreciate his or her cultural and historical heritage; (vi) sufficient training or preparation for advanced training in either academic or vocational fields so as to enable each child to choose and pursue life work intelligently; and (vii) sufficient levels of academic or vocational skills to enable public school students to compete favorably with their counterparts in surrounding states, in academics or in the job market."

Rose v. Council for Better Educ., Inc., 790 S.W.2d 186, 212 (Ky. 1989).

http://law-apache.uky.edu/wordpress/wp-content/uploads/2012/06/Thro-II.pdf

[iv] Rose v. Council for Better Educ., Inc., 790 S.W.2d 186 (Ky. 1989).

[v] Clune, W. H. (1994). The shift from equity to adequacy in school finance. Educational Policy, 8(4), 376-394.

[vi] http://www.parcconline.org/pennsylvania

School Finance Reforms & Student Outcomes

A growing body of evidence indicates that substantive and sustained state school finance reforms matter for improving both the level and distribution of short-term and long-run student outcomes. A few studies have attempted to tackle school finance reforms broadly, applying multi-state analyses over time. Card and Payne (2002) found "evidence that equalization of spending levels leads to a narrowing of test score outcomes across family background groups." (p. 49)[i] Most recently, Jackson, Johnson & Persico (2015) evaluated the long-term outcomes of children exposed to court-ordered school finance reforms, finding that "a 10 percent increase in per-pupil spending each year for all twelve years of public school leads to 0.27 more completed years of education, 7.25 percent higher wages, and a 3.67 percentage-point reduction in the annual incidence of adult poverty; effects are much more pronounced for children from low-income families." (p. 1)[ii]

Numerous other researchers have explored the effects of specific state school finance reforms over time, applying a variety of statistical methods to evaluate how changes in the level and targeting of funding affect the outcomes of students directly touched by those funding changes. Figlio (2004) argues that the influence of state school finance reforms on student outcomes is perhaps better measured within states over time, noting that national studies of the type attempted by Card and Payne confront problems of a) the enormous diversity in the nature of state aid reform plans, and b) the paucity of national-level student performance data.[iii]

Several such studies provide compelling evidence of the potential positive effects of school finance reforms. Studies of Michigan school finance reforms in the 1990s have shown positive effects on student performance in both the previously lowest-spending districts[iv] and previously lower-performing districts.[v] Similarly, a study of Kansas school finance reforms in the 1990s, which also involved primarily a leveling up of low-spending districts, found that a 20 percent increase in spending was associated with a 5 percent increase in the likelihood of students going on to postsecondary education.[vi]

Three studies of Massachusetts school finance reforms from the 1990s find similar results. The first, by Thomas Downes and colleagues, found that the combination of funding and accountability reforms "has been successful in raising the achievement of students in the previously low-spending districts." (p. 5)[vii] The second found that "increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students."[viii] The most recent of the three, published in 2014 in the Journal of Education Finance, found that "changes in the state education aid following the education reform resulted in significantly higher student performance." (p. 297)[ix] Such findings have been replicated in other states, including Vermont.[x]

Still, the role of money in improving student outcomes is often contested. Baker (2012) traces the evolution of assertions that money is unimportant for improving student outcomes, pointing out that these assertions emanate in part from misrepresentations of the work of Coleman and colleagues in the 1960s, which found that school factors seemed less associated with differences in student outcomes than did family factors. This was not to suggest, however, that school factors were entirely unimportant, and more recent re-analyses of the Coleman data, using more advanced statistical techniques than were available at the time, clarify the relevance of schooling resources.[xi]

Hanushek (1986) ushered in the modern era of the "money doesn't matter" argument with a study in which he tallied studies reporting positive and negative correlations between spending measures and student outcome measures, proclaiming as his major finding:

“There appears to be no strong or systematic relationship between school expenditures and student performance.” (p. 1162)[xii]

Baker (2012) summarized re-analyses of the studies tallied by Hanushek, in which authors applied quality standards to determine study inclusion and found that more of the higher quality studies yielded positive findings with respect to the relationship between schooling resources and student outcomes.[xiii] While Hanushek's characterization above continues to permeate policy discourse over school funding, often used as evidence that "money doesn't matter," it is critically important to understand that this statement is merely one of uncertainty about the direct correlation between spending measures and outcome measures, based on studies prior to 1986. Neither this statement nor the crude tally behind it ever provided any basis for assuming with certainty that money doesn't matter.

A separate body of literature challenges the assertion that state school finance reforms in general, and court-ordered reforms in particular, have positive effects. Baker and Welner (2011) explain that much of this literature relies on anecdotal characterizations of lagging student outcome growth following court-ordered infusions of new funding. Hanushek and Lindseth (2009) provide one example of this anecdote-driven approach in a book chapter which seeks to prove that court-ordered school funding reforms in New Jersey, Wyoming, Kentucky, and Massachusetts resulted in few or no measurable improvements. However, these conclusions are based on little more than a series of descriptive graphs of student achievement on the National Assessment of Educational Progress in 1992 and 2007, and an undocumented assertion that, during that period, each of the four states infused substantial additional funds into public education in response to judicial orders. That is, the authors merely assert that these states experienced large infusions of funding, focused on low income and minority students, within the time period identified. They necessarily assume that similar changes did not occur in all the other states that serve as the comparison basis. Yet they validate neither assertion.

Baker and Welner (2011) explain that Hanushek and Lindseth failed to measure whether substantive changes had occurred to the level or distribution of school funding, as well as when and for how long. In New Jersey, for example, the infusion of funding occurred from 1998 to 2003 (or 2005); thus Hanushek and Lindseth's window includes six years on the front end in which little change occurred. Kentucky reforms had largely faded by the mid-to-late 1990s, yet Hanushek and Lindseth measure post-reform effects in 2007. Further, in New Jersey, funding was infused into approximately 30 specific districts, but Hanushek and Lindseth explore overall changes in outcomes among low-income and minority children using NAEP data, where some of these children attended the districts receiving additional support but many did not.[xiv] Finally, the authors concede that Massachusetts did, in fact, experience substantive achievement gains, but attribute those gains to changes in accountability policies rather than funding.

In an equally problematic analysis, Neymotin (2010) set out to show that court-ordered infusions of funding in Kansas following Montoy v. Kansas led to no substantive improvements in student outcomes. However, Neymotin evaluated changes in school funding from 1997 to 2006, yet the first additional funding infused following the January 2005 Supreme Court decision arrived in the 2005-06 school year – the end point of Neymotin's outcome data.[xv] Finally, Greene and Trivitt (2008) present a study in which they claim to show that court-ordered school finance reforms led to no substantive improvements in student outcomes. However, the authors test only whether the presence of a court order is associated with changes in outcomes; they never once measure whether substantive school finance reforms followed the court order, yet they still conclude that court-ordered funding increases had no effect.[xvi]

To summarize, there exist no methodologically competent analyses yielding convincing evidence that significant and sustained funding increases provide no educational benefits, and relatively few that fail to show decisively positive effects.[xvii] On balance, it is safe to say that a sizeable and growing body of rigorous empirical literature validates that state school finance reforms can have substantive, positive effects on student outcomes, including reductions in outcome disparities and increases in overall outcome levels.[xviii]

Schooling Resources & Student Outcomes

The premise that money matters for improving school quality is grounded in the assumption that having more money provides schools and districts the opportunity to improve the qualities and quantities of real resources. The primary resources involved in the production of schooling outcomes are human resources – the quantities and qualities of teachers, administrators, support staff and other staff in schools. Quantities of school staff are reflected in pupil-to-teacher ratios and average class sizes. Reducing class sizes or overall pupil-to-staff ratios requires additional staff, and thus additional money, assuming wages and benefits for the additional staff remain constant. Qualities of school staff depend in part on the compensation available to recruit and retain them – specifically salaries and benefits, in addition to working conditions. Notably, working conditions may be reflected in part through measures of workload, like average class sizes, as well as the composition of the student population.
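A back-of-the-envelope sketch of the staffing arithmetic noted above; the enrollment, class sizes, and compensation figures are invented for illustration.

```python
import math

# Illustrative cost of reducing class size from 25 to 20 in a school of
# 1,000 students, holding average compensation per teacher constant.
students = 1000
avg_compensation = 75000  # assumed average salary + benefits per teacher

current_teachers = math.ceil(students / 25)  # 40 teachers
target_teachers = math.ceil(students / 20)   # 50 teachers
added_cost = (target_teachers - current_teachers) * avg_compensation

print(f"additional teachers needed: {target_teachers - current_teachers}")
print(f"added annual cost: ${added_cost:,} "
      f"(${added_cost / students:,.0f} per pupil)")
```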

A substantial body of literature has accumulated to validate the conclusion that both teachers' overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olsen (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[xix] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[xx] Loeb and Page (2000) tackled the specific issue of relative pay noted above. They showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[xxi]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[xxii]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[xxiii]

Others have argued that the dominant structure of teacher compensation, which ties salary growth to years of experience and degrees obtained despite weak correlations between those measures and student achievement gains, creates inefficiencies that negate the overall relationship between school spending and school quality.[xxiv] This argument is built on the assertion that existing funds could instead be used to compensate teachers according to (measures of) their effectiveness, while dismissing high-cost "ineffective" teachers and replacing them with better ones using existing resources, thus achieving better outcomes with the same or less money.[xxv]

This argument depends on three assumptions. First, that adopting a pay-for-performance rather than a step-and-lane salary model would dramatically improve performance at the same or lower expense. Second, that shedding the "bottom 5%" of teachers according to statistical estimates of their "effectiveness" can lead to dramatic improvements at equal or lower expense. Third and finally, both the incentive-pay argument and the deselection argument depend on sufficiently accurate and precise measures of teaching effectiveness, across settings and children.

Existing studies of pay-for-performance compensation models fail to provide empirical support for this argument – either that these alternatives can substantially boost outcomes, or that they can do so at equal or lower total salary expense.[xxvi] Simulations purporting to validate the long-run benefits of deselecting "bad" teachers depend on the average pool of replacements lining up to take those jobs being substantively better than those who were let go (average replacing "bad"). Simulations promoting the benefits of "bad teacher" deselection assume this to be true, without empirical basis, and without consideration for potential labor market consequences of the deselection policy itself.[xxvii] Finally, existing measures of teacher "effectiveness" fall well short of these demands.[xxviii]

Most importantly, arguments about the structure of teacher compensation miss the bigger point – the average level of compensation matters with respect to the average quality of the teacher labor force. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it’s less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high need settings. In other words, resources used for teacher quality matter.

Ample research indicates that children in smaller classes achieve better outcomes, both academic and otherwise, and that class size reduction can be an effective strategy for closing racial or socio-economic achievement gaps.[xxix] While it's certainly plausible that other uses of the same money might be equally or even more effective, there is little evidence to support this. For example, while we are quite confident that higher teacher salaries may lead to increases in the quality of applicants to the teaching profession and increases in student outcomes, we do not know whether the same money spent toward salary increases would achieve better or worse outcomes than if it were spent toward class size reduction. Some have raised concerns that large-scale class size reductions can lead to unintended labor market consequences that offset some of the gains attributable to class size reduction (such as the inability to recruit enough fully qualified teachers). For example, studies of California's statewide class size reduction initiative suggest that as districts across the socioeconomic spectrum reduced class sizes, fewer high quality teachers were available in high poverty settings.[xxx]

Many have argued over time for more precise cost-benefit analysis of the tradeoffs between applying funding to class size reduction versus increased compensation.[xxxi] Still, the preponderance of existing evidence suggests that the additional resources expended on class size reductions do result in positive effects. Both reductions to class sizes and improvements to competitive wages can yield improved outcomes, but the efficiency gains of choosing one strategy over the other are unclear, and local public school districts rarely have complete flexibility to make such tradeoffs.[xxxii] Class size reduction may be constrained by available classrooms. Smaller class sizes and reduced total student loads are also a relevant working condition, simultaneously influencing teacher recruitment and retention.[xxxiii] That is, providing smaller classes may partly offset the need for higher wages in recruiting or retaining teachers. High poverty schools require a both/and rather than an either/or strategy when it comes to smaller classes and competitive wages.

As discussed above, achieving equal educational opportunity requires leveraging additional real resources – lower class sizes and more intensive support services – in high need settings. Merely achieving equal qualities of real resources, including equally qualified teachers, likely requires higher competitive wages, not merely equal pay in a given labor market. As such, higher need settings may require substantially greater financial inputs than lower need settings. Lacking sufficient financial inputs to do both, districts must choose one or the other. In some cases, higher need districts may lack sufficient resources to do either.

Notes

[i] Card, D., and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

[ii] Jackson, C. K., Johnson, R., & Persico, C. (2014). The Effect of School Finance Reforms on the Distribution of Spending, Academic Achievement, and Adult Outcomes (No. w20118). National Bureau of Economic Research.

Jackson, C. K., Johnson, R., & Persico, C. (2015). The Effects of School Spending on Educational and Economic Outcomes: Evidence from School Finance Reforms (No. w20847). National Bureau of Economic Research.

[iii] Figlio, D. N. (2004). Funding and Accountability: Some Conceptual and Technical Issues in State Aid Reform. In J. Yinger (Ed.), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity (pp. 87-111). Cambridge, MA: MIT Press.

[iv] Roy, J. (2011). Impact of school finance reform on resource equalization and academic performance: Evidence from Michigan. Education Finance and Policy, 6(2), 137-167.

Roy (2011) published an analysis of the effects of Michigan’s 1990s school finance reforms which led to a significant leveling up for previously low-spending districts. Roy, whose analyses measure both whether the policy resulted in changes in funding and who was affected, found that “Proposal A was quite successful in reducing interdistrict spending disparities. There was also a significant positive effect on student performance in the lowest-spending districts as measured in state tests.” (p. 137)

[v] Papke, L. (2005). The effects of spending on test pass rates: evidence from Michigan. Journal of Public Economics, 89(5-6). 821-839.

Hyman, J. (2013). Does Money Matter in the Long Run? Effects of School Spending on Educational Attainment. http://www-personal.umich.edu/~jmhyman/Hyman_JMP.pdf.

Papke (2005), also evaluating Michigan school finance reforms from the 1990s, found that "increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance." (p. 821)

Most recently, Hyman (2013) also found positive effects of Michigan school finance reforms in the 1990s, but raised some concerns regarding the distribution of those effects. Hyman found that much of the increase was targeted to schools serving fewer low income children. But the study did find that students exposed to an additional "12% more spending per year during grades four through seven experienced a 3.9 percentage point increase in the probability of enrolling in college, and a 2.5 percentage point increase in the probability of earning a degree." (p. 1)

[vi] Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284. (p. 275)

[vii] Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA. MassINC.

[viii] Guryan, J. (2001). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

“The magnitudes imply a $1,000 increase in per-pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures.” (p. 1)

[ix] Nguyen-Hoang, P., & Yinger, J. (2014). Education Finance Reform, Local Behavior, and Student Performance in Massachusetts. Journal of Education Finance, 39(4), 297-322.

[x] Downes had conducted earlier studies of Vermont school finance reforms in the late 1990s (Act 60). In a 2004 book chapter, Downes noted “All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post-Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance.” (p. 312)

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (Ed.), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

[xi] Konstantopolous, S., Borman, G. (2011) Family Background and School Effects on Student Achievement: A Multilevel Analysis of the Coleman Data. Teachers College Record. 113 (1) 97-132

Borman, G.D., Dowling, M. (2010) Schools and Inequality: A Multilevel Analysis of Coleman’s Equality of Educational Opportunity Data. Teachers College Record. 112 (5) 1201-1246

[xii] Hanushek, E.A. (1986) Economics of Schooling: Production and Efficiency in Public Schools. Journal of Economic Literature 24 (3) 1141-1177. A few years later, Hanushek paraphrased this conclusion in another widely cited article as "Variations in school expenditures are not systematically related to variations in student performance."

Hanushek, E.A. (1989) The impact of differential expenditures on school performance. Educational Researcher. 18 (4) 45-62

Hanushek describes the collection of studies relating spending and outcomes as follows:

“The studies are almost evenly divided between studies of individual student performance and aggregate performance in schools or districts. Ninety-six of the 147 studies measure output by score on some standardized test. Approximately 40 percent are based upon variations in performance within single districts while the remainder look across districts. Three-fifths look at secondary performance (grades 7-12) with the rest concentrating on elementary student performance.” (fn #25)

[xiii] Baker, B. D. (2012). Revisiting the Age-Old Question: Does Money Matter in Education?. Albert Shanker Institute.

Relevant re-analyses include:

Greenwald, R., Hedges, L., Laine, R. (1996) The Effect of School Resources on Student Achievement. Review of Educational Research 66 (3) 361-396

Wenglinsky, H. (1997) How Money Matters: The effect of school district spending on academic achievement. Sociology of Education 70 (3) 221-237

[xiv] Hanushek (2006) goes so far as to title a concurrently produced volume on the same topic "How School Finance Lawsuits Exploit Judges' Good Intentions and Harm Our Children." [emphasis added] The premise that additional funding for schools – often leveraged toward class size reduction, additional course offerings, or increased teacher salaries – causes harm to children is, on its face, absurd. The book which implies as much in its title never once validates that such reforms ever cause observable harm. Rather, the title is little more than a manipulative attempt to instill fear of pending harm in the mind of the uncritical spectator. The book also includes two examples of a type of analysis that occurred with some frequency in the mid-2000s, also with the intent of showing that school funding doesn't matter. These studies would cherry-pick anecdotal information on a) poorly funded schools that have high outcomes and/or b) well-funded schools that have low outcomes (see Evers & Clopton, 2006; Walberg, 2006).

[xv] Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Hanushek, E. A., and Lindseth, A. (2009). Schoolhouses, Courthouses and Statehouses. Princeton, NJ: Princeton University Press. See also: http://edpro.stanford.edu/Hanushek/admin/pages/files/uploads/06_EduO_Hanushek_g.pdf

Hanushek, E. A. (ed.). (2006). Courting failure: How school finance lawsuits exploit judges’ good intentions and harm our children (No. 551). Hoover Press.

Evers, W. M., and Clopton, P. (2006). “High-Spending, Low-Performing School Districts,” in Courting Failure: How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm our Children (Eric A. Hanushek, ed.) (pp. 103-194). Palo Alto, CA: Hoover Press.

Walberg, H. (2006) High Poverty, High Performance Schools, Districts and States. in Courting Failure: How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm our Children (Eric A. Hanushek, ed.) (pp. 79-102). Palo Alto, CA: Hoover Press.


[xvi] Greene, J. P., & Trivitt, J. R. (2008). Can Judges Improve Academic Achievement? Peabody Journal of Education, 83(2), 224-237.

Neymotin, F. (2010) The Relationship between School Funding and Student Achievement in Kansas Public Schools. Journal of Education Finance 36 (1) 88-108.

[xvii] Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

[xviii] Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Two reports from Cato Institute are illustrative (Ciotti, 1998, Coate & VanDerHoff, 1999).

Ciotti, P. (1998). Money and School Performance: Lessons from the Kansas City Desegregations Experience. Cato Policy Analysis #298.

Coate, D. & VanDerHoff, J. (1999). Public School Spending and Student Achievement: The Case of New Jersey. Cato Journal, 19(1), 85-99.

[xix] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of state in teaching. Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[xx] David N. Figlio (2002) "Can Public Schools Buy Better-Qualified Teachers?" Industrial and Labor Relations Review 55, 686-699. David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55, 267-271. Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation 28 (2) 465-498.

[xxi] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[xxii] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics. April, 49-71

See also:

Downes, T. A., & Figlio, D. N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[xxiii] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

[xxiv] Hanushek, E. A. (2011). The economic value of higher teacher quality. Economics of Education Review, 30(3), 466-479.

[xxv] Hanushek, E. A. (2009). Teacher deselection. Creating a new teaching profession, 168, 172-173.

[xxvi] Springer, M. G., Ballou, D., Hamilton, L., Le, V. N., Lockwood, J. R., McCaffrey, D. F., … & Stecher, B. M. (2011). Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching (POINT). Society for Research on Educational Effectiveness.

Yuan, K., Le, V. N., McCaffrey, D. F., Marsh, J. A., Hamilton, L. S., Stecher, B. M., & Springer, M. G. (2012). Incentive Pay Programs Do Not Affect Teacher Motivation or Reported Practices Results From Three Randomized Studies. Educational Evaluation and Policy Analysis, 0162373712462625.

Goodman, S. F., & Turner, L. J. (2013). The design of teacher incentive pay and educational outcomes: Evidence from the New York City bonus program. Journal of Labor Economics, 31(2), 409-420.

Goodman, S., & Turner, L. (2011). Does Whole-School Performance Pay Improve Student Learning? Evidence from the New York City Schools. Education Next, 11(2), 67-71.

[xxvii] Baker, B. D., Oluwole, J. O., & Green III, P. C. (2013). The Legal Consequences of Mandating High Stakes Decisions Based on Low Quality Information: Teacher Evaluation in the Race-to-the-Top Era. education policy analysis archives, 21(5), n5.

[xxviii] Baker, B. D., Oluwole, J. O., & Green III, P. C. (2013). The Legal Consequences of Mandating High Stakes Decisions Based on Low Quality Information: Teacher Evaluation in the Race-to-the-Top Era. education policy analysis archives, 21(5), n5.

[xxix] See http://www2.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf;

Jeremy D. Finn and Charles M. Achilles, "Tennessee's Class Size Study: Findings, Implications, Misconceptions," Educational Evaluation and Policy Analysis 21, no. 2 (Summer 1999): 97-109;

Jeremy Finn et al., "The Enduring Effects of Small Classes," Teachers College Record 103, no. 2 (April 2001): 145–183; http://www.tcrecord.org/pdf/10725.pdf;

Alan Krueger, "Would Smaller Class Sizes Help Close the Black-White Achievement Gap?" Working Paper #451 (Princeton, NJ: Industrial Relations Section, Department of Economics, Princeton University, 2001) http://www.irs.princeton.edu/pubs/working_papers.html;

Henry M. Levin, “The Public Returns to Public Educational Investments in African American Males,” Dijon Conference, University of Bourgogne, France. May 2006. http://www.u-bourgogne.fr/colloque-iredu/posterscom/communications/LEVIN.pdf;

Spyros Konstantopoulos and Vicki Chun, "What Are the Long-Term Effects of Small Classes on the Achievement Gap? Evidence from the Lasting Benefits Study," American Journal of Education 116, no. 1 (November 2009): 125-154.

[xxx] Jepsen, C., Rivkin, S. (2002) What is the Tradeoff Between Smaller Classes and Teacher Quality? NBER Working Paper # 9205, Cambridge, MA. http://www.nber.org/papers/w9205

“The results show that, all else equal, smaller classes raise third-grade mathematics and reading achievement, particularly for lower-income students. However, the expansion of the teaching force required to staff the additional classrooms appears to have led to a deterioration in average teacher quality in schools serving a predominantly black student body. This deterioration partially or, in some cases, fully offset the benefits of smaller classes, demonstrating the importance of considering all implications of any policy change.” p. 1

For further discussion of the complexities of evaluating class size reduction in a dynamic policy context, see:

David Sims, “A Strategic Response to Class Size Reduction: Combination Classes and Student Achievement in California,” Journal of Policy Analysis and Management, 27(3) (2008): 457–478

David Sims, “Crowding Peter to Educate Paul: Lessons from a Class Size Reduction Externality,” Economics of Education Review, 28 (2009): 465–473.

Matthew M. Chingos, “The Impact of a Universal Class-Size Reduction Policy: Evidence from Florida’s Statewide Mandate,” Program on Education Policy and Governance Working Paper 10-03 (2010).

[xxxi] Ehrenberg, R.G., Brewer, D., Gamoran, A., Willms, J.D. (2001) Class Size and Student Achievement. Psychological Science in the Public Interest 2 (1) 1-30

[xxxii] Baker, B., & Welner, K. G. (2012). Evidence and rigor scrutinizing the rhetorical embrace of evidence-based decision making. Educational Researcher, 41(3), 98-101.

[xxxiii] Loeb, S., Darling-Hammond, L., & Luczak, J. (2005). How teaching conditions predict teacher turnover in California schools. Peabody Journal of Education, 80(3), 44-70.

Isenberg, E. P. (2010). The Effect of Class Size on Teacher Attrition: Evidence from Class Size Reduction Policies in New York State. US Census Bureau Center for Economic Studies Paper No. CES-WP-10-05.

Angry Andy’s Failing Schools & the Finger of Blame

NY Governor Andrew Cuomo’s office has released a report in which it identifies what it refers to in bold type on the cover as “Failing Schools.”

Report here: https://www.governor.ny.gov/sites/governor.ny.gov/files/atoms/files/NYSFailingSchoolsReport.pdf

Presumably, these are the very schools on which Angry Andy would like to impose death penalties – or so he has opined in the past.

The report identifies 17 districts in particular that are home to failing schools. The point of the report is to assert that the incompetent bureaucrats, highly paid administrators and lazy teachers in these schools simply aren't getting the job done and must be punished or relieved of their duties. Angry Andy has repeatedly and vociferously asserted that he and his less rabid predecessors have poured obscene sums of funding into these districts for decades. Thus, it's their fault – certainly not his – that they stink!


I have addressed over and over again on this blog the plight of high need, specifically small city school districts under Governor Cuomo.

  1. On how New York State crafted a low-ball estimate of what districts needed to achieve adequate outcomes and then still completely failed to fund it.
  2. On how New York State maintains one of the least equitable state school finance systems in the nation.
  3. On how New York State’s systemic, persistent underfunding of high need districts has led to significant increases in the numbers of children attending school in excessively large classes.
  4. On how New York State officials crafted a completely bogus, racially and economically disparate school classification scheme in order to justify intervening in the very schools they have most deprived over time.

I have also written reports on New York State’s underfunding of the school finance formula – a formula adopted to comply with a prior court order in CFE v. State.

  1. Statewide Policy Brief with NYC Supplement: BBaker.NYPolicyBrief_NYC
  2. 50 Biggest Funding Gaps Supplement: 50 Biggest Aid Gaps 2013-14_15_FINAL

Among my reports is one in which I identified the 50 districts with the biggest state aid shortfalls with respect to what the state itself says these districts require to provide a sound basic (constitutional standard) education. Districts across NY state have funding gaps for a variety of reasons, but I have shown in the past that it is generally districts with greater needs – high poverty concentrations, more children with limited English language proficiency, and more minority children – which tend to have larger funding gaps.

I have also pointed out very recently on this blog that some high need upstate cities in NY have had persistently inequitable/inadequate funding for decades, including this one from Angry Andy’s hit list.

[Figure: persistent funding gaps for one high need upstate district on the list]

Personally, even I was shocked to see the relationship between my 50 most underfunded districts list and Angry Andy’s 17 districts that suck.

NY State has over 650 school districts, many of which may be showing relatively low test scores for a variety of reasons, including and especially because they serve high concentrations of needy students.

Based on my updated 2015 runs (final adopted budget) of the 50 biggest state aid shortfalls, 12 of Angry Andy’s sucky 17 were among the districts with the 50 largest state aid shortfalls.

Yeah… that’s right… 12 of 17 had really big funding shortfalls.

5 of the top 10 biggest funding shortfall districts are on Angry Andy’s list. Yeah… the list of schools that have supposedly been subjected to obscene amounts of support and additional funding, but due only to their own ineptitude, have failed.

So how big are those funding shortfalls? How much state aid is supposed to be allocated to these districts to provide a sound basic education? Here are a few cuts at the numbers. First, here are the failing 17, by their state aid gap rank for 2014 and 2015. Included also are their state aid gaps per Total Aidable Foundation Pupil Unit (TAFPU). Note that their gaps per actual warm body – enrolled pupil – are larger (TAFPU includes some additional “weighted” pupils).

But even with this conservative figure, Hempstead’s gap – the amount of state aid they are not getting with respect to their calculated target – is over $6,000 per pupil. Yes – OVER $6,000 PER PUPIL!  (where’s that NY lottery guy when you need him?). Note that the apparent reduction in gaps from 2014 to 2015 occurs due to a manipulation by the state of funding targets and required local contributions – with a smaller share of that reduction actually coming from new state aid.
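For readers who want to see the arithmetic, here is a minimal sketch – with invented figures, not Hempstead's actual numbers – of why a gap per TAFPU understates the gap per enrolled pupil.

```python
# Hypothetical district: the foundation formula sets an aid target; the
# gap is the target minus the aid actually received.
target_aid = 120_000_000  # formula-calculated state aid target ($)
actual_aid = 75_000_000   # state aid actually received ($)
enrollment = 7_000        # actual enrolled pupils ("warm bodies")
tafpu = 8_400             # weighted pupil count (TAFPU > enrollment)

gap = target_aid - actual_aid
print(f"gap per TAFPU:          ${gap / tafpu:,.0f}")
print(f"gap per enrolled pupil: ${gap / enrollment:,.0f}")  # necessarily larger
```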

[Table: the failing 17 by state aid gap rank, 2014 and 2015]

All of these are high need districts, having Pupil Need Index values well above 1.5.

Here’s what it looks like in graph form, with local contribution, actual state aid and the gap identified.

[Figure: local contribution, actual state aid, and aid gap by district]

In some cases, the actual state aid received is not a whole lot more than the gap. All of Angry Andy’s failing districts have substantial shortfalls from their funding targets – targets that were specifically identified as the funding needed to achieve desired outcome levels.

Notably, as I’ve explained in the past – the outcome levels used for determining those funding targets were much lower than the outcome levels expected under the state’s current testing and accountability system.

Even then, the state’s approach to estimating the cost of achieving those (much lower) outcomes results in a low-ball manipulated number. (I actually have a book chapter that explains this as an exemplar of classic school finance manipulation)

So, where should that finger of blame point here? 

Or is this just how things work these days – slash the funding of the highest need districts – call them failing – close their schools – give their property and their teacher’s jobs to someone else – and claim victory – leaving others, years down the line to clean up your mess?

Angry Andy – this is your mess. Now do the right thing and fix it!


Disclaimer: Yes, I spent all day Monday this week testifying at trial about the funding shortfalls for New York State districts, specifically Small City districts with a pending lawsuit against the state. My opinions are the same here as they were there, and have been for several years as reflected in numerous published sources. That’s because my opinions here merely reflect the factual status of the state school finance system in New York, as represented by the state’s own formula calculations and data.


Families for Excellent Schools’ Totally Bogus Analysis of NYC Schools

Families for Excellent Schools of New York – the Don’t Steal Possible folks – has just released an impossibly stupid analysis in which they claim that New York City is simply throwing money at failure, spending double on failing schools what it spends on totally awesome ones (if it really has any awesome ones). A link to their press release is here:

http://www.familiesforexcellentschools.org/news/press-release-cost-failure

And what is their astounding new evidence validating that NYC is stealing possible by throwing money at failing schools? Well, they ever so carefully identified the 50 worst and 50 totally awesomest schools in the city, and then took the average of their per-pupil budgets to show that the worst schools substantially outspend the awesomest ones. Thus – money doesn’t matter – especially when in the hands of schools under the governance of their nemesis Mayor BDB and his possible-thieving lackeys.

Oh, where to even begin on this analysis. Let’s peel it all back a little, one layer at a time. Let’s begin with the fact that New York City, a while back under their favored Mayor Bloomberg, adopted something called Fair Student Funding. That formula was designed to drive systematically more money to schools serving higher need populations, including schools with higher shares of children with disabilities and higher shares of low income children. In other words, we would expect the schools with the largest per pupil budgets in the city to be the ones serving the highest need student populations.
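A minimal sketch of how a weighted student funding formula of this general kind allocates school budgets; the base amount and weights below are invented for illustration, not the actual Fair Student Funding parameters.

```python
BASE = 4000  # hypothetical base allocation per pupil ($)
WEIGHTS = {"poverty": 0.4, "disability": 1.0, "ell": 0.4}  # illustrative

def school_budget(enrollment: int, need_shares: dict) -> float:
    """Base dollars per weighted pupil, where pupil weights rise with need."""
    weighted_pupils = enrollment * (1 + sum(
        WEIGHTS[need] * share for need, share in need_shares.items()))
    return BASE * weighted_pupils

# By design, a higher-need school ends up with a larger per-pupil budget.
low = school_budget(500, {"poverty": 0.3, "disability": 0.1, "ell": 0.05})
high = school_budget(500, {"poverty": 0.9, "disability": 0.3, "ell": 0.25})
print(f"lower-need school:  ${low / 500:,.0f} per pupil")
print(f"higher-need school: ${high / 500:,.0f} per pupil")
```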

At the same time, we might expect that schools serving the neediest pupils might have lower average performance, even with the additional resources to leverage. The Families for Excellent Schools analysis fails to take into account any of this, using the worst possible outcome measures and failing entirely to consider differences in needs and related costs when evaluating the spending measures. Thus, they draw bold conclusions like the following:

At the middle school level, the bottom 50 schools received an average $30,256 per pupil, compared with $16,277 at the top 50 middle schools.

I happen to have a rich data set on NYC schools from 2008 to 2010, including charter schools, which has been used in this report and in a forthcoming peer-reviewed article in Education Finance and Policy. The spending measure in particular is documented in detail in this report.

While these data are a few years old now, they are still useful for illustrating the utter ridiculousness of the FES analysis and for showing how a far more appropriate analysis yields far more logical findings.

First, let’s look at how per pupil spending was distributed across NYC middle schools back in 2010.

Figure 1 – Spending and Special Education (NYC Middle Schools 2010)


Hmmm… so schools that spend up near that $30k mark tend to have 30% or so special ed, compared to, oh, about 10% in schools spending under $15k.  That might explain some of their findings (uh… or nearly all!)

Figure 2 – Spending and % Free Lunch (NYC Middle Schools 2010)


Hmm… so, schools that are low spending tend to have relatively low shares of low income children, as well as very few special education children.

Indeed, it’s true that schools with more low income children and more children with disabilities actually have larger per pupil budgets. Reports by the City’s Independent Budget Office have found similarly, but have pointed out that the Fair Student Funding formula never really reached full implementation.

And what about unadjusted performance measures and student outcomes? Well, as one might expect, proficiency rates do tend to be lower in schools with higher shares of low income children and children with disabilities.

Figure 3 – % Special Education and Math Grade 8 Proficiency (NYC Middle Schools 2010)


Figure 4 – % Low Income and Math Grade 8 Proficiency (NYC Middle Schools 2010)


And thus, by logical extension, the higher spending schools which serve needier populations also have the lower outcomes. Which tells us freakin nothing!

So what’s the next step here? Well, first of all, what we really want to know is not what the average performance LEVEL (proficiency rate, or mean scale scores) of a school is, but whether the school is producing achievement gains for its students. So my data set also includes a measure of school value-added, constructed from the teacher value added reports released a few years back (wherein a school’s value added is the mean of its teachers’ value added). New York City’s value added model accounts (at least partly) for student characteristics, making for fairer comparisons of what schools themselves contribute to student outcomes. For example:

Figure 5 – % Special Education and School Value Added (NYC Middle Schools 2010)


Figure 6 – % Low Income and School Value Added (NYC Middle Schools 2010)


As we can see, value added outcomes of students are much less biased by the student population characteristics of the school.
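For clarity, the aggregation step described above – a school's value added as the mean of its teachers' value-added estimates – is simple; here is a sketch with made-up numbers.

```python
import pandas as pd

# Hypothetical teacher-level value-added estimates (illustrative only).
teacher_va = pd.DataFrame({
    "school":  ["A", "A", "A", "B", "B", "C"],
    "teacher": ["t1", "t2", "t3", "t4", "t5", "t6"],
    "va":      [0.10, -0.05, 0.15, -0.20, -0.10, 0.05],
})

# The school-level measure is just the mean of its teachers' estimates.
school_va = teacher_va.groupby("school")["va"].mean()
print(school_va)
```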

But there’s one more step to take here. If we really want to know whether there’s any relationship between school spending and student outcomes – or reveal the god-awful FES finding that more spending actually harms students (when in district schools), lowering their outcomes and making them into failures – we need to run a somewhat more complicated model to tease out that relationship.

We can run that model in either of two directions – asking:

  • Given their student populations and scale of operation, do schools with higher per pupil budgets produce higher (or lower) student achievement gains? In other words, in the current system, is higher spending associated with greater achievement gains? This is a Production Function.

Value Added Outcomes = f(Students, Scale, Spending)

  • Given the outcomes currently achieved across schools, given their student populations, what is a school expected to spend (or need to spend) to achieve those outcomes? In other words, in the current system, does it cost more to achieve higher outcomes? This is a Cost Function.

Spending = f(Value Added Outcomes, Students, Scale, Inefficiency)

So, with my approximately 260 regular middle schools per year from 2008 to 2010, I run a few models, where the goal is simply to determine whether there really exists a massive negative relationship between spending and outcomes, as Families for Crappy Analysis would imply, or whether, in fact, that relationship works in the opposite (and more likely) direction. I use a modeling approach similar to that used here.
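To give a flavor of the two model directions, here is a simplified sketch on simulated data – plain OLS standing in for the stochastic frontier estimators actually used, with all variables and coefficients invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 260  # roughly the number of regular middle schools per year

# Simulated school-level data (illustrative only, not the actual dataset).
df = pd.DataFrame({
    "pct_sped": rng.uniform(0.05, 0.35, n),           # % special education
    "pct_frl": rng.uniform(0.30, 1.00, n),            # % free/reduced lunch
    "log_enroll": np.log(rng.uniform(300, 1200, n)),  # scale of operation
    "log_spend": np.log(rng.uniform(12000, 30000, n)),
})
df["value_added"] = (0.3 * df["log_spend"] - 0.2 * df["pct_sped"]
                     - 0.3 * df["pct_frl"] + rng.normal(0, 0.1, n))

# Production-function direction: conditional on student needs and scale,
# is higher spending associated with higher value added?
production = smf.ols(
    "value_added ~ log_spend + pct_sped + pct_frl + log_enroll", data=df).fit()

# Cost-function direction: what spending level is associated with a given
# value-added level, conditional on needs and scale?
cost = smf.ols(
    "log_spend ~ value_added + pct_sped + pct_frl + log_enroll", data=df).fit()

print(production.params["log_spend"])  # positive => spending-outcome link
print(cost.params["value_added"])      # positive => higher outcomes cost more
```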

Figure 7 – Stochastic Frontier Production Model (NYC Middle Schools 2008-2010)


In plain language, higher spending is associated with higher value-added!

Figure 8 – Stochastic Frontier Cost Model (NYC Middle Schools 2008-2010)


In plain language, achieving higher value added comes at a higher per pupil cost (spending, less inefficiency).

Shockingly – well, not really – I find that in each case there exists a statistically significant positive relationship between per pupil spending and the value added a school’s teachers contribute. In other words, when you do a more reasonable analysis of these data, rather than some total BS tabulation like that of Families for Agenda-Driven Schlock, you get a reasonable result.

Now, I have one more piece to add to this puzzle. The hacks behind this joke of an analysis propose, as their policy solution, further expansion of charter schooling – shifting more resources over to charters and away from these overfunded failing district schools.

The irony here is that New York City charter schools have achieved the “successes” they have at least in part a) while spending far more per pupil than otherwise similar district schools and b) while serving less needy student populations.

Let’s look at where those charters lie when placed into the graphs above on spending and student needs.

Figure 9 – Special Education Shares and per Pupil Spending (NYC Middle Schools 2010)


Figure 10 – Low Income Shares and per Pupil Spending (NYC Middle Schools 2010)


So, what they are really saying with this junk analysis is that city leaders should take money away from children with special needs in district-run schools – the schools that actually serve such children – because those children aren’t “proficient” on state tests. And that money should instead be diverted to schools that serve even fewer of those children than the average district school, while already significantly outspending them. Well, that makes sense.

So the story line goes:

Money matters for us, but not for you.

Money spent on us, by us, is good.

On you, by you is bad!

Really? Really? I just can’t take this anymore.

For a more thorough summary of real research on this topic, see: http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf

Note: All spending and student population measures are thoroughly documented in this report.

 

Ed Writers – Try looking beyond propaganda & press releases for success stories

UPDATED 2/5/2015

I enter into this blog post knowing full well that this is a lose-lose deal.  Rating and comparing school quality, effectiveness or efficiency with existing publicly available data is, well, difficult if not impossible. But I’m going there in this post.

Why? Well, one reason I’m going there is that I’m sick of getting e-mail and phone inquiries, one after another, about the same charter schools – and only charter schools – asking how and why they are creating miracle outcomes. I try to explain that there may be more to the story. The reporter then says that the charter school’s data person says I’m wrong – validating their miracle outcomes (despite their own data not being publicly available or replicable, and often with reference to awesome outcomes reported in popularly cited studies of totally different charter schools).

But we may be having our conversation about the wrong schools to begin with.  The whole conversation starts perhaps with a call from the school’s own PR lackey to the local paper, along with a self-congratulatory press release, or alternatively, from the local news outlet itself following up on preconceived notions of which schools are doing miracle work (for a slow news day).  It’s not just that it seems always to be about charter schools, but that it seems to be about the same charter schools every time.

If I wanted my graduate students to figure out what makes successful schools  tick, I’d want them to use a more thoughtful and rigorous selection strategy to identify those schools – rather than merely responding to press releases or preconceived notions.

What if, instead, we started with a statistical analysis of all schools and, from there, figured out which schools actually do beat expectations? Which schools achieve greater gains than would be expected, given the students they serve and the resources they have available? There may indeed be some charter schools in this mix. I’d be surprised if there weren’t. They may or may not be the usual suspects. But there may also be some traditional district schools in this mix. They (the under-the-radar charters and district schools) just may not be puttin’ out those press releases or have PR lackeys hooked in with local media.

To begin with, let me clarify these terms – quality, effectiveness and efficiency – and explain how they have different meanings for different constituents – specifically for parent consumers versus policy makers.

First and foremost, when we think of schools we must think of all of the stuff that goes into them and the community that surrounds them – the qualities of the employees who work there, the children who attend and the families who interact with the school, the facilities, and the local taxpayer support (or not) for the schools. It’s a package deal. When a family chooses where to live or where to send their child to school, they are choosing not only the teachers, but also the building and the peer group.

  • Quality (unconditional) – we might broadly think of quality as the full package of what a school has to offer – including all of that stuff listed above, and how that stuff ultimately relates to how many kids go to college and where, what kinds of test scores kids get along the way (to the extent that they have any predictive value), what kinds of programs and services are offered, and so on. But, as we know, quality in this broad sense is highly related to community wealth, income and education levels, and the support provided for local schools. This is quality in an “unconditional” sense. “Best High School” ratings like those in the popular monthly magazines found in dentists’ offices in the ‘burbs are classic unconditional rankings. Numbers of kids taking AP courses, average SAT scores, and numbers of kids attending selective colleges are common measures, and whether these outcomes are a function of the families and communities, or of anything special the school might do, is of minimal consequence.
  • Effectiveness – One might consider “effectiveness” to be a conditional measure of quality – or at least I will frame it that way here. Effectiveness measures attempt to sort out whether, and to what extent, actual differences between schools contribute to those outcomes listed above. That is, if two schools served similar student populations, would they achieve different measured results? These are “conditional” comparisons – estimates of a school’s “effectiveness” take into consideration differences in the children who attend it. These measures are of greater interest to policymakers. We want to know not only whether a school has high test scores, or shows strong growth, but also whether it does so while serving student populations similar to other schools’. We want to know this in part so that we can draw inferences about whether the methods used by the school might be transferable. But these measures are still only partly conditional. It may be that one school is more effective with certain children because it has access to more resources – it has smaller class sizes, more specialized teachers, or has been able to recruit and retain a stronger team of teachers and administrators by paying more competitive wages. The school may be more “effective” because it has the resources to be more effective.
  • Efficiency – Efficiency measures take the effectiveness measures one step further – considering not only if schools are able to produce comparable outcomes for comparable children, but also if they are able to do so with comparable resources. These measures are conditional on both student characteristics AND resources, and should provide us with a better picture of whether schools, given who they serve and the aggregate resources they have, are generating greater or lesser growth in student outcomes (based on the limited available measures).

At best – at best – at best – much like estimating teacher/classroom influences on student achievement growth – estimating school relative efficiency is imprecise and as much art as science (see: http://cepa.stanford.edu/sites/default/files/2002316.pdf#page=19). As I often say, the art of working with existing data (publicly available or not) is the art of doing the “least bad analysis” possible.

So, all of that said, I’ve taken it on myself here to gather up data on school characteristics from 2010 to 2014 in New Jersey, including the state’s student growth percentile data, as well as statewide staffing file data used to construct measures of school aggregate resources. These are updated versions of the models I used in this post: https://njedpolicy.wordpress.com/2014/10/31/research-note-on-student-growth-the-productivity-of-new-jersey-charter-schools/. Code is provided below.

Because there are geographic differences in economic, demographic and other environmental conditions, I compare schools to all schools serving similar grade ranges in the same county, conditioning on demographics and resource levels. Yes, the data are less precise than I’d like. But they are equally imprecise for everyone and publicly available (no one got to submit their own super secret version of the data).

First, here’s a quick look at the models (each also contains a dummy variable for each county and for each year of background data):

[images: model output]
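In code, those models boil down to something like the following sketch (file and column names are hypothetical; the growth percentile outcome and the county/year dummies mirror the description above, and the residuals feed the rankings discussed below):

```python
# A minimal sketch of one of the growth models: school growth percentiles
# regressed on student demographics and resource measures, with county and
# year fixed effects. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

nj = pd.read_csv("nj_schools_2010_2014.csv")  # hypothetical assembled data set

model = smf.ols(
    "median_sgp_math ~ pct_frl + pct_ell + pct_iep + log_enrollment"
    " + salary_competitiveness + pupil_teacher_ratio"
    " + C(county) + C(year)",
    data=nj,
).fit()

# A school's residual is its growth above or below what its students,
# resources, county and year would predict. Averaging residuals across
# the eight subject/grade models yields the beats-expectations ranking.
nj["residual"] = model.resid
top50 = nj.sort_values("residual", ascending=False).head(50)
print(top50[["school_name", "residual"]])
```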

As I’ve shown previously, these various factors explain a lot of the variation in school-level growth measures, even as state officials continue to live in denial (and construct consequential policies on that denial). Student population characteristics and resources are both associated with overall growth, explaining nearly 50% of the variation in some cases.

Across all 8 models, I can calculate which schools most consistently showed greater or lesser “growth” on state assessments than predicted, given their students and resource levels. While I could have applied trickier statistical models – stochastic frontier, etc. – these really don’t change the rankings that much. That’s the approach I’ve used to generate the following list of the “Top 50” productively efficient schools in New Jersey. Notably absent here are any schools that serve only upper grades and thus have no growth percentile measures to model.

UPDATED LIST OF SCHOOLS! Updated NJ Rankings

Updated, updated Top 50 (not much change)

NJ Top 50

Now… is this list really all that meaningful? I’m not sure I’d go that far. Ratings are certainly somewhat sensitive to model specification, and seem to shift from math to language arts and from year to year. There may indeed be some totally screwy results in these runs – as often happens when we try to take a model that relies on patterns across thousands of data points and use it to characterize the position of any one point.

However, it’s at least more defensible than relying on press releases and preconceived notions.  AND, at the very least it’s a whole lot more interesting than hearing the same old story.

At least according to this list, if you’re looking for an interesting charter school to visit, check out Discovery. I know nothing about it… but its numbers POP here. If you’re looking for another school in Newark, how about Hawthorne Ave, which seems to beat most Newark charter and district schools?

And to those out in the schools? Please don’t make too much of this list. It is what it is – based on narrowly defined outcome measures and excessively crude population measures.

Bonus Charts!

Schools within the City of Newark

[chart]

Charter Schools Statewide

[chart]

=====================

Geek stuff:

1) model output Stata output

2) data set building part I Step 1-Staffing Files

3) data set building part II Step 2-School Level Variables

4) data set building part III Step 3-School Resource Aggregation

More Money, More Money, More Money? Have we really ever tried sustained, targeted school funding for America’s neediest children?

I’m no longer surprised these days by the belligerent wrongness of rhetoric around school funding equity and adequacy. Arguably, much of the supporting rationale for the current (and other recent) education reforms is built on the house-of-cards claim that when it comes to financing our public school systems equitably and adequately – especially those serving our neediest children – we’ve been there and done that. In fact, we’ve been there and done that for decades. It just never ends. For example, as recently summarized in a regional New York State news outlet:

Cuomo said more money isn’t necessarily the answer.

“We’ve been putting more money into failing schools for decades,” he said. “Over the last 10 years, 250,000 children went through those failing schools, and New York government did nothing.”

http://poststar.com/news/local/teacher-evaluations-are-baloney-cuomo-says/article_f51aefda-a1c0-11e4-8d43-1fe6f62ba3e6.html

Yes – that’s it – for decades New York State has simply been pouring more and more money into failing schools – all of its “failing” schools – and all for naught.

Similarly, data-free ideology tells us that the Commonwealth of Pennsylvania has made a fiscal experiment of Philadelphia, pouring massive sums of funding into that city’s schools, well, again, nearly forever! And to higher amounts than, well, really any other district in or near the Commonwealth! Or so said edu-pundit Andrew Smarick on Twitter just over a year ago. Here are a few gems from his Twitter rant on Philly schools:

Philly’s district = terrible for decades, families left, as a result it’s bankrupt. Gotten huge state funding for yrs to prop it up.

I know Philly gets among (if not THE) highest levels of funding from the state. I also know it’s been losing thousands of students.

As per the usual course, we are now also told that the exorbitant – highest in the country, in fact – spending of Camden public schools and their persistent failure are yet another (anecdotal) proof positive that throwing money down the rat hole of government schools serving high-poverty neighborhoods is, well, pointless. Or so says a new Reason Foundation documentary using Camden as the anecdote du jour:

 In Camden, per pupil spending was more than $25,000 in 2013, making it one of the highest spending districts in the nation. http://reason.com/reasontv/2015/01/26/money-didnt-fix-camdens-failing-schools

Well, actually, the only national database of per-pupil spending available even at this date goes up only to 2011-12. So I’m not quite sure what they’re talking about. In 2012, taking census fiscal survey data (http://www.census.gov/govs/school/) divided by the regional cost index (http://bush.tamu.edu/research/faculty/taylor_CWI/), Camden comes in ranked 1,128th of about 13,376 districts with complete data. So yes, it’s relatively high – top 10%. But the quote above is totally made up (since the data aren’t even available to support it) and, well, exaggerated at best.
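For what it’s worth, the ranking exercise is simple enough to reproduce from the two public sources linked above. Here’s a sketch (file and column names are hypothetical):

```python
# Rough reconstruction of the ranking exercise described above.
# File and column names are hypothetical; sources are the census fiscal
# survey and the Taylor comparable wage index linked in the text.
import pandas as pd

f33 = pd.read_csv("census_f33_2012.csv")    # district per-pupil current spending
cwi = pd.read_csv("taylor_cwi_2012.csv")    # regional comparable wage index

df = f33.merge(cwi, on="district_id")
df["adj_spending_pp"] = df["spending_pp"] / df["cwi"]     # cost-adjusted spending
df["rank"] = df["adj_spending_pp"].rank(ascending=False)  # 1 = highest spender

camden = df.loc[df["district_name"] == "Camden City"]
print(camden[["adj_spending_pp", "rank"]])  # e.g., ~1,128th of ~13,376 districts
```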

Nor is it necessarily useful to try to use such bombastic (and made up) anecdotes to “prove” a point that’s been repeatedly rebutted by actual research!

In addition to the studies noted at the link above, I’ve addressed in two lengthy academic articles this propensity to drive policy rhetoric with half-true (at the very best) anecdotes (hoping no one will ever actually fact check). For years, Newark (in the state of New Jersey) and Kansas City, Missouri (as a result of desegregation litigation from the 1980s through the early 1990s) were frequently used as proof that money doesn’t matter. See these two articles for the thorough debunking:

  • Green III, P. C., & Baker, B. D. (2006). Urban Legends, Desegregation and School Finance: Did Kansas City Really Prove That Money Doesn’t Matter. Mich. J. Race & L., 12, 57.
  • Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Something I find particularly baffling is the tendency of those crafting these anecdotes to not even bother to check whether, to what extent, and for how long funding was actually poured into these districts – or any high-need districts. Have we really been there and done that? Have state governments across this great land already done all they can to provide fiscal sustenance to those with the greatest educational needs? And all for naught? Thus, bring on the cost-cutting reforms?

Well, even the most fractionally-witted reader knows that deep disparities still exist in state school finance systems.

What is less well known – or less frequently illustrated – is that in many cases, not much has changed in as much as twenty years!

And what is also less well known is that even in those cases where funding was targeted to areas of greater need, that funding never really reached the levels that would have been needed to make substantive progress on closing achievement gaps – even with surrounding districts’ children. And that funding, while coming in occasional bursts in select locations, was never sustained even as long as it would take a single cohort of children to pass through a K-12 system.

We do have some evidence from the peer-reviewed literature on what it might actually take to provide children in higher-poverty settings with equal opportunity to achieve outcomes comparable to their more economically advantaged peers. William Duncombe and John Yinger (2005) find that as we move from a school or district with 0% low-income children to one with 100%, the cost per pupil of achieving a common outcome target doubles, or more than doubles when using Census poverty rates as the measure.[1] In other words, it would take a substantial infusion of funding to have any real shot at closing achievement gaps.
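To put rough numbers on that finding, here’s a toy illustration – stylized arithmetic, not Duncombe and Yinger’s actual model: if the cost of reaching a common outcome target doubles as low-income enrollment moves from 0% to 100%, a simple linear poverty weight implies the following.

```python
# A toy illustration of the Duncombe & Yinger (2005) result, stylized:
# with a poverty weight of 1.0, the per-pupil cost of hitting a common
# outcome target doubles as low-income share moves from 0% to 100%.
def cost_per_pupil(base_cost: float, low_income_share: float,
                   poverty_weight: float = 1.0) -> float:
    """Base cost scaled up linearly with the low-income enrollment share."""
    return base_cost * (1 + poverty_weight * low_income_share)

base = 10_000  # hypothetical base cost for a 0% low-income district
for share in (0.0, 0.25, 0.50, 0.75, 1.00):
    print(f"{share:.0%} low income -> ${cost_per_pupil(base, share):,.0f}")
```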

Let’s take a look at a) some persistently financially deprived settings over time and b) some settings where we have supposedly been there and done that (with money). For the following figures, I use the strategy that I’ve used previously – comparing the relative poverty and relative per pupil revenues (and spending) of school districts – compared to all other districts in the same labor market. Here’s my explanation for using this approach, from a recent Center for American Progress report:

It is important to understand that the value of any given level of education funding, in any given location, is relative. That is, it matters less whether a district spends $10,000 per pupil or $20,000 per pupil than how that funding compares to other districts operating in the same regional labor market—and, for that matter, how that money relates to other conditions in the regional labor market.

The first reason relative funding matters is that schooling is labor intensive. The quality of schooling depends largely on the ability of schools or districts to recruit and retain quality employees. The largest share of school districts’ annual operating budgets is tied up in the salaries and wages of teachers and other school workers. The ability to recruit and retain teachers in a school district in any given labor market depends on the wage a district can pay to teachers relative to other surrounding schools or districts and relative to nonteaching alternatives in the same labor market.[2] The second reason is that graduates’ access to opportunities beyond high school is largely relative and regional. The ability of graduates of one school district to gain access to higher education or the labor force depends on the regional pool in which the graduate must compete.[3]
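Computationally, the relative measures in the figures that follow boil down to simple ratios. A minimal sketch (column names are hypothetical):

```python
# A minimal sketch of the relative measures used in the figures below.
# Column names are hypothetical; each district's poverty rate and
# state-plus-local revenue per pupil are expressed as ratios to the
# average across all districts in the same labor market and year.
import pandas as pd

districts = pd.read_csv("district_finance_panel.csv")  # hypothetical panel

lm_means = districts.groupby(["labor_market", "year"])[
    ["poverty_rate", "revenue_pp"]
].transform("mean")

districts["rel_poverty"] = districts["poverty_rate"] / lm_means["poverty_rate"]
districts["rel_revenue"] = districts["revenue_pp"] / lm_means["revenue_pp"]

# A district at rel_revenue = 1.0 sits at its labor market average;
# Kansas City in the 2000s hovers just above 1.0 on revenue while
# sitting above 2.5 on relative poverty (per the discussion below).
```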

First, let’s look at longitudinal trends for some of the very districts I identified as suffering severe financial disadvantage in my Center for American Progress report:

[charts: relative poverty and relative per-pupil revenue trends for the districts identified in that report]

[poured money in for decades, really?]

[note that this time period starts over a decade after the economic decline chronicled in the song]

Note that in some cases, relative poverty declines as the poverty rates of surrounding districts increase.

Next, let’s take a look at some of those districts that supposedly provide proof positive – irrefutable evidence – that decades worth of highest in the nation spending – was all for naught. One need not even address the “naught” part if the “all for” part was, well, untrue.

[chart]

First, here’s Kansas City – where funding was indeed elevated in the early 1990s, scaling up beginning around 1989. We discuss this case extensively in this article.

  • Green III, P. C., & Baker, B. D. (2006). Urban Legends, Desegregation and School Finance: Did Kansas City Really Prove That Money Doesn’t Matter. Mich. J. Race & L., 12, 57.

However, it is important to note that this funding tapered off rapidly after 1995 and never really rebounded, except for a temporary spike around 2009, when the district’s boundaries were reduced (the remaining majority-white corner of the district annexed itself to the neighboring Independence school district). Throughout the 2000s, but for this temporary blip, Kansas City’s state and local revenues per pupil have hovered marginally above labor market averages, while the district’s poverty rate has hovered at more than 2.5 times the labor market average rate.

How about Newark and, now, Camden? While these cities are clearly better off than Philly, Bridgeport or Utica, it’s not as though there has really been a sustained, massive infusion of funding over time. Both start the period with funding near labor market average levels and with double (Newark) to more than triple (Camden) the labor market poverty rates. Funding escalates from the late 1990s in Newark, and the mid-2000s in Camden, to about a 34% (Newark) and temporarily over 50% (Camden) margin over labor market averages, coming to rest around 1/3 above labor market averages for both. But given their relative poverty and estimates of the costs associated with poverty, even these margins fall well short. Further, they are only sustained for about 4 years (Camden) to almost ten years (at smaller margins, in Newark).

[charts: Newark and Camden relative funding and poverty trends]

Should we expect some bang for this buck? Yes. And the broader literature on the topic certainly suggests that such bang exists.

But the point of this post is that we have to first look at how hard we really have tried thus far, and to acknowledge those places where we haven’t tried at all.

Yeah, we’ve put some effort – though really not all that much, given the extreme needs – into places like Newark and Camden.

But other places, like:

Philadelphia

Allentown

Reading

Utica, NY

Chicago

Waukegan, IL

Bridgeport, CT

New Britain, CT

We’ve never really tried. These districts and the children they serve have never – in the past 20 years – been given a fair shot. And even more depressing is that while policymakers and the fabricati (hack pundits who simply make shit up if it serves their purpose) that (dis)inform them have perpetuated these myths and lies, things have only gotten worse.

=======

NOTES

[1] Duncombe, W., & Yinger, J. (2005). How much more does a disadvantaged student cost?. Economics of Education Review, 24(5), 513-532.

[2] Bruce D. Baker, “Revisiting the Age-Old Question: Does Money Matter in Education?” (Washington: Albert

Shanker Institute, 2012), available at http://www.shankerinstitute.org/images/doesmoneymatter_final.pdf.

[3] Bruce D. Baker and Preston C. Green III as well as William Koski and Rob Reich explain that to a large extent, education operates as a positional good, whereby the advantages obtained by some necessarily translate to disadvantages for others. For example, Baker and Green explain that, “In a system where children are guaranteed only minimally adequate K–12 education, but where many receive far superior opportunities, those with only minimally adequate education will have limited opportunities in higher education or the workplace.” Bruce D. Baker and Preston C. Green, “Conceptions of Equity and Adequacy in School Finance.” In Helen F. Ladd and Edward B. Fiske, eds., Handbook of Research in Education Finance and Policy (New York: Routledge, 2008), p. 203–221; Koski and Rob Reich, “When “Adequate” Isn’t: The Retreat From Equity in Educational Law and Policy and Why It Matters,” Emory Law Review 56 (3) (2006): 545–618, available at http://www.law.emory.edu/fileadmin/journals/elj/56/3/Koski___Reich.pdf.

How and Why Money Matters in Schools (one more time – updated)

This post is taken from a forthcoming report in which I summarize literature related to state school finance reforms and explore relationships between changing distributions of funding and distributions of tangible classroom level resources. The newly released Jackson, Johnson and Persico NBER paper speaks to similar issues and is included in the discussion that follows:

==========

In a comprehensive review of literature addressing the question “Does Money Matter in Education?” in 2012, Baker concluded:

To be blunt, money does matter. Schools and districts with more money clearly have greater ability to provide higher-quality, broader, and deeper educational opportunities to the children they serve. Furthermore, in the absence of money, or in the aftermath of deep cuts to existing funding, schools are unable to do many of the things they need to do in order to maintain quality educational opportunities. Without funding, efficiency tradeoffs and innovations being broadly endorsed are suspect. One cannot tradeoff spending money on class size reductions against increasing teacher salaries to improve teacher quality if funding is not there for either – if class sizes are already large and teacher salaries non-competitive. While these are not the conditions faced by all districts, they are faced by many.

Building on the findings and justifications provided by Baker (2012), we offer Figure 4 as a simple model of the relationship of schooling resources to children’s measurable school achievement outcomes. First, the fiscal capacity of states – their wealth and income – does affect their ability to finance public education systems. But, as we have shown in related research, on which we expand herein, the effort put forth in state and local tax policy plays an equal role.

Figure 4

[diagram: how schools work]

The amount of state and local revenue raised drives the majority of current spending of local public school districts, because federal aid constitutes such a relatively small share. Further, the amount of money a district is able to spend on current operations determines the staffing ratios, class sizes and wages a local public school district is able to pay. Indeed, there are tradeoffs to be made between staffing ratios and wage levels. Finally, as noted above, a sizable body of research illustrates the connection between staffing qualities and quantities and student outcomes.
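The staffing-versus-wages tradeoff is easy to see with a little arithmetic. A toy illustration (invented numbers, assuming a fixed share of per-pupil spending goes to teacher compensation):

```python
# A toy illustration of the staffing/wage tradeoff (invented numbers):
# with per-pupil spending fixed and a fixed share of it going to teacher
# compensation, average pay and class size move together -- so cutting
# class sizes without new money means cutting what you can pay teachers.
def affordable_compensation(spending_pp: float, class_size: float,
                            teacher_comp_share: float = 0.40) -> float:
    """Salary-plus-benefits a district can offer the teacher of one class."""
    return spending_pp * class_size * teacher_comp_share

for size in (18, 22, 26):
    print(f"class size {size}: ${affordable_compensation(12_000, size):,.0f}")
# class size 18: $86,400 | 22: $105,600 | 26: $124,800 (illustrative only)
```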

The connections laid out in this model seem rather obvious. How much you raise dictates how much you can spend. How much you spend – in a labor-intensive industry – dictates how many individuals you can employ, the wage you can pay them, and in turn the quality of individuals you can recruit and retain. But in this modern era of resource-free school “reforms,” the connections between revenue, spending and real, tangible resources are often ignored, or worse, argued to be irrelevant. A common theme advanced in modern political discourse is that all schools and districts already have more than enough money to get the job done. They simply need to use it more wisely and adjust to the “new normal.”[i]

But, on closer inspection of the levels of funding available across states and local public school districts within states, this argument rings hollow. To illustrate, we spend a significant portion of this report statistically documenting these connections. First, we take a quick look at existing literature on the relevance of state school finance systems, and reform of those systems for improving the level and distribution of student outcomes, and literature on the importance of class sizes and teacher wages for improving school quality as measured by student outcomes.

Equitable and Adequate Funding

There exists an increasing body of evidence that substantive and sustained state school finance reforms matter for improving both the level and distribution of short-term and long-run student outcomes. A few studies have attempted to tackle school finance reforms broadly applying multi-state analyses over time. Card and Payne (2002) found “evidence that equalization of spending levels leads to a narrowing of test score outcomes across family background groups.”[ii] (p. 49) Most recently, Jackson, Johnson & Persico (2015) evaluated long-term outcomes of children exposed to court-ordered school finance reforms, finding that “a 10 percent increase in per-pupil spending each year for all twelve years of public school leads to 0.27 more completed years of education, 7.25 percent higher wages, and a 3.67 percentage-point reduction in the annual incidence of adult poverty; effects are much more pronounced for children from low-income families.”(p. 1) [iii]

Numerous other researchers have explored the effects of specific state school finance reforms over time. [iv] Several such studies provide compelling evidence of the potential positive effects of school finance reforms. Studies of Michigan school finance reforms in the 1990s have shown positive effects on student performance in both the previously lowest spending districts, [v] and previously lower performing districts. [vi] Similarly, a study of Kansas school finance reforms in the 1990s, which also involved primarily a leveling up of low-spending districts, found that a 20 percent increase in spending was associated with a 5 percent increase in the likelihood of students going on to postsecondary education.[vii]

Three studies of Massachusetts school finance reforms from the 1990s find similar results. The first, by Thomas Downes and colleagues found that the combination of funding and accountability reforms “has been successful in raising the achievement of students in the previously low-spending districts.” (p. 5)[viii] The second found that “increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students.”[ix] The most recent of the three, published in 2014 in the Journal of Education Finance, found that “changes in the state education aid following the education reform resulted in significantly higher student performance.”(p. 297)[x] Such findings have been replicated in other states, including Vermont. [xi]

On balance, it is safe to say that a sizeable and growing body of rigorous empirical literature validates that state school finance reforms can have substantive, positive effects on student outcomes, including reductions in outcome disparities or increases in overall outcome levels.[xii]

Class Sizes and Teacher Salaries

The premise that money matters for improving school quality is grounded in the assumption that having more money provides schools and districts the opportunity to improve the qualities and quantities of real resources. Jackson, Johnson and Persico (2015) explain that the spending increases they found to be associated with long term benefits “were associated with sizable improvements in measured school quality, including reductions in student-to-teacher ratios, increases in teacher salaries, and longer school years.” (p. 1)

The primary resources involved in the production of schooling outcomes are human resources – or quantities and qualities of teachers, administrators, support and other staff in schools. Quantities of school staff are reflected in pupil to teacher ratios and average class sizes. Reduction of class sizes or reductions of overall pupil to staff ratios require additional staff, thus additional money, assuming the wages and benefits for additional staff remain constant. Qualities of school staff depend in part on the compensation available to recruit and retain them – specifically salaries and benefits, in addition to working conditions. Notably, working conditions may be reflected in part through measures of workload, like average class sizes, as well as the composition of the student population.

A substantial body of literature has accumulated to validate the conclusion that both teachers’ overall wages and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olsen (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career,[xiii] while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers.[xiv] In addition, more recent studies have tackled the specific issues of relative pay noted above. Loeb and Page showed that:

“Once we adjust for labor market factors, we estimate that raising teacher wages by 10 percent reduces high school dropout rates by 3 percent to 4 percent. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities.”[xv]

In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes.

Research on the flip side of this issue – evaluating spending constraints or reductions – reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, David Figlio and Kim Rueben (2001) note that, “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”[xvi]

Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich, Pas and Yinger (2008) “find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county.”[xvii]

In addition, ample research indicates that children in smaller classes achieve better outcomes, both academic and otherwise, and that class size reduction can be an effective strategy for closing racial or socio-economic achievement gaps.[xviii] While it’s certainly plausible that other uses of the same money might be equally or even more effective, there is little evidence to support this. For example, while we are quite confident that higher teacher salaries may lead to increases in the quality of applicants to the teaching profession and increases in student outcomes, we do not know whether the same money spent toward salary increases would achieve better or worse outcomes if it were spent toward class size reduction. Indeed, some have raised concerns that large-scale class size reductions can lead to unintended labor market consequences that offset some of the gains attributable to class size reduction (such as the inability to recruit enough fully qualified teachers).[xix] And many, over time, have argued the need for more precise cost/benefit analysis.[xx] Still, the preponderance of existing evidence suggests that the additional resources expended on class size reductions do result in positive effects.

Both reductions to class sizes and improvements to competitive wages can yield improved outcomes, but the efficiency gains of choosing one strategy over the other are unclear, and local public school districts rarely have complete flexibility to make tradeoffs.[xxi] Class size reduction may be constrained by available classrooms. Smaller class sizes and reduced total student loads are a relevant working condition simultaneously influencing teacher recruitment and retention.[xxii] That is, providing smaller classes may partly offset the need for higher wages for recruiting or retaining teachers. High poverty schools require a both/and rather than either/or strategy when it comes to smaller classes and competitive wages.

[i] Baker, B., & Welner, K. G. (2012). Evidence and rigor scrutinizing the rhetorical embrace of evidence-based decision making. Educational Researcher, 41(3), 98-101.

[ii] Card, D., and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

[iii] Jackson, C. K., Johnson, R., & Persico, C. (2014). The Effect of School Finance Reforms on the Distribution of Spending, Academic Achievement, and Adult Outcomes (No. w20118). National Bureau of Economic Research.

Jackson, C. K., Johnson, R., & Persico, C. (2015). The Effects of School Spending on Educational and Economic Outcomes: Evidence from School Finance Reforms (No. w 20847) National Bureau of Economic Research.

[iv] Figlio (2004) explains that the influence of state school finance reforms on student outcomes is perhaps better measured within states over time, explaining that national studies of the type attempted by Card and Payne confront problems of a) the enormous diversity in the nature of state aid reform plans, and b) the paucity of national level student performance data.

Figlio, D. N. (2004) Funding and Accountability: Some Conceptual and Technical Issues in State Aid Reform. In Yinger, J. (Ed.) p. 87-111 Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. MIT Press.

[v] Roy, J. (2011). Impact of school finance reform on resource equalization and academic performance: Evidence from Michigan. Education Finance and Policy, 6(2), 137-167.

Roy (2011) published an analysis of the effects of Michigan’s 1990s school finance reforms which led to a significant leveling up for previously low-spending districts. Roy, whose analyses measure both whether the policy resulted in changes in funding and who was affected, found that “Proposal A was quite successful in reducing interdistrict spending disparities. There was also a significant positive effect on student performance in the lowest-spending districts as measured in state tests.” (p. 137)

[vi] Papke, L. (2005). The effects of spending on test pass rates: evidence from Michigan. Journal of Public Economics, 89(5-6). 821-839.

Hyman, J. (2013). Does Money Matter in the Long Run? Effects of School Spending on Educational Attainment. http://www-personal.umich.edu/~jmhyman/Hyman_JMP.pdf.

Papke (2001), also evaluating Michigan school finance reforms from the 1990s, found that “increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance.” (p. 821)

Most recently, Hyman (2013) also found positive effects of Michigan school finance reforms in the 1990s, but raised some concerns regarding the distribution of those effects. Hyman found that much of the increase was targeted to schools serving fewer low income children. But, the study did find that students exposed to an additional “12%, more spending per year during grades four through seven experienced a 3.9 percentage point increase in the probability of enrolling in college, and a 2.5 percentage point increase in the probability of earning a degree.” (p. 1)

[vii] Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284. (p. 275)

[viii] Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA. MassINC.

[ix] Guryan, J. (2001). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

“The magnitudes imply a $1,000 increase in per-pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures.” (p. 1)

[x] Nguyen-Hoang, P., & Yinger, J. (2014). Education Finance Reform, Local Behavior, and Student Performance in Massachusetts. Journal of Education Finance, 39(4), 297-322.

[xi] Downes had conducted earlier studies of Vermont school finance reforms in the late 1990s (Act 60). In a 2004 book chapter, Downes noted “All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post-Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance.” (p. 312)

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (Ed.), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

[xii] Indeed, this point is not without some controversy, much of which is readily discarded. Second-hand references to dreadful failures following massive infusions of new funding can often be traced to methodologically inept, anecdotal tales of desegregation litigation in Kansas City, Missouri, or court-ordered financing of urban districts in New Jersey.

Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Two reports from Cato Institute are illustrative (Ciotti, 1998, Coate & VanDerHoff, 1999).

Ciotti, P. (1998). Money and School Performance: Lessons from the Kansas City Desegregations Experience. Cato Policy Analysis #298.

Coate, D. & VanDerHoff, J. (1999). Public School Spending and Student Achievement: The Case of New Jersey. Cato Journal, 19(1), 85-99.

Hanushek and Lindseth (2009) provide a similar anecdote-driven approach in which they dedicate a chapter of a book to proving that court-ordered school funding reforms in New Jersey, Wyoming, Kentucky, and Massachusetts resulted in few or no measurable improvements. However, these conclusions are based on little more than a series of graphs of student achievement on the National Assessment of Educational Progress in 1992 and 2007 and an untested assertion that, during that period, each of the four states infused substantial additional funds into public education in response to judicial orders. That is, the authors merely assert that these states experienced large infusions of funding, focused on low income and minority students, within the time period identified. They necessarily assume that, in all other states which serve as a comparison basis, similar changes did not occur. Yet they validate neither assertion. Baker and Welner (2011) explain that Hanushek and Lindseth failed to even measure whether substantive changes had occurred to the level or distribution of school funding as well as when and for how long. In New Jersey, for example, infusion of funding occurred from 1998 to 2003 (or 2005), thus Hanushek and Lindseth’s window includes 6 years on the front end where little change occurred (When?). Kentucky reforms had largely faded by the mid to late 1990s, yet Hanushek and Lindseth measure post reform effects in 2007 (When?). Further, in New Jersey, funding was infused into approximately 30 specific districts, but Hanushek and Lindseth explore overall changes to outcomes among low-income children and minorities using NAEP data, where some of these children attend the districts receiving additional support but many did not (Who?). In short the slipshod comparisons made by Hanushek and Lindseth provide no reasonable basis for asserting either the success or failures of state school finance reforms. Hanushek (2006) goes so far as to title the book “How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm Our Children.” The premise that additional funding for schools often leveraged toward class size reduction, additional course offerings or increased teacher salaries, causes harm to children is, on its face, absurd. And the book which implies as much in its title never once validates that such reforms ever do cause harm. Rather, the title is little more than a manipulative attempt to convince the non-critical spectator who never gets past the book’s cover to fear that school finance reforms might somehow harm children. The book also includes two examples of a type of analysis that occurred with some frequency in the mid-2000s which also had the intent of showing that school funding doesn’t matter. These studies would cherry pick anecdotal information on either or both a) poorly funded schools that have high outcomes or b) well-funded schools that have low outcomes (see Evers & Clopton, 2006, Walberg, 2006).

In an equally problematic analysis, Neymotin (2010) set out to show that massive court-ordered infusions of funding in Kansas following Montoy v. Kansas led to no substantive improvements in student outcomes. However, Neymotin evaluated changes in school funding from 1997 to 2006, while the first additional funding following the January 2005 Supreme Court decision arrived in the 2005-06 school year – the end point of Neymotin’s outcome data.

Baker, B. D., & Welner, K. G. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

Hanushek, E. A., and Lindseth, A. (2009). Schoolhouses, Courthouses and Statehouses. Princeton, N.J.: Princeton University Press., See also: http://edpro.stanford.edu/Hanushek/admin/pages/files/uploads/06_EduO_Hanushek_g.pdf

Hanushek, E. A. (ed.). (2006). Courting failure: How school finance lawsuits exploit judges’ good intentions and harm our children (No. 551). Hoover Press.

Evers, W. M., and Clopton, P. (2006). “High-Spending, Low-Performing School Districts,” in Courting Failure: How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm our Children (Eric A. Hanushek, ed.) (pp. 103-194). Palo Alto, CA: Hoover Press.

Walberg, H. (2006) High Poverty, High Performance Schools, Districts and States. in Courting Failure: How School Finance Lawsuits Exploit Judges’ Good Intentions and Harm our Children (Eric A. Hanushek, ed.) (pp. 79-102). Palo Alto, CA: Hoover Press.


Greene and Trivitt (2008) present a study in which they claim to show that court-ordered school finance reforms led to no substantive improvements in student outcomes. However, the authors test only whether the presence of a court order is associated with changes in outcomes; they never measure whether substantive school finance reforms followed the court order, yet they still conclude that court-ordered funding increases had no effect.

Greene, J. P., & Trivitt, J. R. (2008). Can Judges Improve Academic Achievement? Peabody Journal of Education, 83(2), 224-237.

Neymotin, F. (2010) The Relationship between School Funding and Student Achievement in Kansas Public Schools. Journal of Education Finance 36 (1) 88-108.

[xiii] Richard J. Murnane and Randall Olsen (1989) The effects of salaries and opportunity costs on length of state in teaching. Evidence from Michigan. Review of Economics and Statistics 71 (2) 347-352

[xiv] David N. Figlio (2002) Can Public Schools Buy Better-Qualified Teachers?” Industrial and Labor Relations Review 55, 686-699. David N. Figlio (1997) Teacher Salaries and Teacher Quality. Economics Letters 55 267-271. Ronald Ferguson (1991) Paying for Public Education: New Evidence on How and Why Money Matters. Harvard Journal on Legislation. 28 (2) 465-498.

[xv] Loeb, S., Page, M. (2000) Examining the Link Between Teacher Wages and Student Outcomes: The Importance of Alternative Labor Market Opportunities and Non-Pecuniary Variation. Review of Economics and Statistics 82 (3) 393-408

[xvi] Figlio, D.N., Rueben, K. (2001) Tax Limits and the Qualifications of New Teachers. Journal of Public Economics. April, 49-71

See also:

Downes, T. A., Figlio, D. N. (1999) Do Tax and Expenditure Limits Provide a Free Lunch? Evidence on the Link Between Limits and Public Sector Service Quality. National Tax Journal 52 (1) 113-128

[xvii] Ondrich, J., Pas, E., Yinger, J. (2008) The Determinants of Teacher Attrition in Upstate New York. Public Finance Review 36 (1) 112-144

[xviii] See http://www2.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf;

Jeremy D. Finn and Charles M. Achilles, “Tennessee’s Class Size Study: Findings, Implications, Misconceptions,” Educational Evaluation and Policy Analysis, 21, no. 2 (Summer 1999): 97-109;

Jeremy Finn et al., “The Enduring Effects of Small Classes,” Teachers College Record, 103, no. 2, (April 2001): 145–183; http://www.tcrecord.org/pdf/10725.pdf;

Alan Krueger, “Would Smaller Class Sizes Help Close the Black-White Achievement Gap.” Working Paper #451 (Princeton, NJ: Industrial Relations Section, Department of Economics, Princeton University, 2001) http://www.irs.princeton.edu/pubs/working_papers.html;

Henry M. Levin, “The Public Returns to Public Educational Investments in African American Males,” Dijon Conference, University of Bourgogne, France. May 2006. http://www.u-bourgogne.fr/colloque-iredu/posterscom/communications/LEVIN.pdf;

Spyros Konstantopoulos and Vicki Chun, “What Are the Long-Term Effects of Small Classes on the Achievement Gap? Evidence from the Lasting Benefits Study,” American Journal of Education 116, no. 1 (November 2009): 125-154.

[xix] Jepsen, C., Rivkin, S. (2002) What is the Tradeoff Between Smaller Classes and Teacher Quality? NBER Working Paper # 9205, Cambridge, MA. http://www.nber.org/papers/w9205

“The results show that, all else equal, smaller classes raise third-grade mathematics and reading achievement, particularly for lower-income students. However, the expansion of the teaching force required to staff the additional classrooms appears to have led to a deterioration in average teacher quality in schools serving a predominantly black student body. This deterioration partially or, in some cases, fully offset the benefits of smaller classes, demonstrating the importance of considering all implications of any policy change.” p. 1

For further discussion of the complexities of evaluating class size reduction in a dynamic policy context, see:

David Sims, “A Strategic Response to Class Size Reduction: Combination Classes and Student Achievement in California,” Journal of Policy Analysis and Management, 27(3) (2008): 457–478

David Sims, “Crowding Peter to Educate Paul: Lessons from a Class Size Reduction Externality,” Economics of Education Review, 28 (2009): 465–473.

Matthew M. Chingos, “The Impact of a Universal Class-Size Reduction Policy: Evidence from Florida’s Statewide Mandate,” Program on Education Policy and Governance Working Paper 10-03 (2010).

[xx] Ehrenberg, R.G., Brewer, D., Gamoran, A., Willms, J.D. (2001) Class Size and Student Achievement. Psychological Science in the Public Interest 2 (1) 1-30

[xxi] Baker, B., & Welner, K. G. (2012). Evidence and rigor scrutinizing the rhetorical embrace of evidence-based decision making. Educational Researcher, 41(3), 98-101.

[xxii] Loeb, S., Darling-Hammond, L., & Luczak, J. (2005). How teaching conditions predict teacher turnover in California schools. Peabody Journal of Education, 80(3), 44-70.

Isenberg, E. P. (2010). The Effect of Class Size on Teacher Attrition: Evidence from Class Size Reduction Policies in New York State. US Census Bureau Center for Economic Studies Paper No. CES-WP-10-05.

The Subgroup Scam & Testing Everyone Every Year

This post is a follow-up to my previous post, in which I discussed the misguided arguments for maintaining a system of annual standardized testing of all students.

In my post, I skipped over one argument that seems to be pretty common among the beltway pundits. I skipped this argument largely because the point is moot if we plan on using testing data appropriately to begin with. My point in the previous post was about tests, testing data and how to use them appropriately. But just as the beltway pundit crowd so dreadfully misunderstands tests and testing data, they also dreadfully misunderstand demography and geography and the intersection of the two. A related example of the complete lack of demographic “data sense” in the current policy discourse is addressed in my recent post on “suburban poverty.”

Among other issues I addressed in my previous post, the beltway crowd is up in arms that if we don’t test every kid every year, we’ll never have sufficient “n” – sample sizes – well, actually “N,” subpopulation sizes [since this is about testing everyone] – to really know how “subgroups” of students are performing and, more importantly, to apply those results to a school’s accountability rating! And that, of course, is critical to the use of testing for the protection of children’s civil rights. But of course, this argument assumes many things.

For example, the pundits over at Bellwether explain:

Arne Duncan has estimated that hundreds of thousands of students were invisible to state accountability systems because of n-size issues. CAP has praised states in the past for lowering their n-sizes, but their plan to have fewer students “count” toward a school’s accountability rating would mean less attention on important subgroups of students. [ http://blog.bellwethereducation.org/grade-span-accountability-is-a-bad-idea-just-ask-cap-and-the-aft/ ]

There are so many layers of problems in this explanation it’s hard to know where to even begin. In this post, I critique the following three assumptions underlying this claim of urgency for retaining annual testing of everyone:

  • First, that testing everyone every year actually solves the problem of having sufficient numbers of children in each subgroup, in each school and district, to be able to make meaningful comparisons among them.
  • Second, that the subgroup classifications we use for “testing-based-accountability” purposes are, in fact, meaningful distinctions – meaningful ways to characterize student populations and measure differences among them.
  • Third, that the measures we are constructing of student outcomes for making comparisons between these subgroups are somehow meaningful and useful for characterizing school performance. In other words, that we aren’t violating the basic rules I set forth in my previous post by, for example, imposing penalties/sanctions on schools merely for exhibiting the presence of a difference in proficiency rates or average scores between group A and group B.

Each of these assumptions is suspect!

Let’s break it down.

Population UNiverse Data Breeds IrratioNal ExuberaNce over “N”

As outlined so eloquently by ArNe DuNcaN (as characterized in the Bellwether blog), one impetus for maintaining testing everyone every year is so that poor and minority children don’t end up being “invisible” when it comes to rating school performance!

We must know the gaps between black and white where black children attend majority white schools, and vice versa.

We must know how poor children are performing in rich schools, and vice versa.

If we don’t test all children every year, we’ll miss those ten black kids in the white school, and those ten Hispanic kids in the black school!

We might even overlook the vast differences in proficiency rates between those ten Asian kids and those ten black kids in the predominantly white school!

Leaving such gaping holes in our ability to label, takeover, close, reconstitute local public schools is entirely unacceptable!

First, most kids don’t attend racially and economically diverse schools. The differences across our educational system are mainly between schools and districts, not within them. As such, within-school subpopulation sizes are nearly always insufficient. Second, the common form of these measures – differences in average proficiency between these small groups (where even measurable) – is utterly meaningless as a “school accountability” index (more on this later). Using them for this purpose is reckless and irresponsible.

American public schools remain highly segregated. Elementary schools often enroll about 300 to 500 children in grades K to 6 (with variations, of course). So that’s just under 60 kids per grade level who might fall into a tested subpopulation; ten kids are about 17% of a grade. But in highly segregated metropolitan areas, many schools are either black or white (greater than 85% one or the other). For illustrative purposes, let’s use 85% as the point at which we are unlikely to have a sufficient subpopulation of any subgroup making up the other 15% (among tested students), even when testing everyone. Figure 1 shows the percentages of statewide white students in these racially diverse northeastern states attending schools that are over 85% white, and the percentages of statewide black students attending those same over-85%-white schools.

Figure 1. Whites and Blacks in White Schools


In New Jersey, a population-dense and racially/ethnically diverse state, nearly 1/3 of white students attend schools that are over 85% white. Only about 2% of the state’s black students attend these “white” schools, and 15 to 20% of black students in New Jersey attend schools that are over 85% black. While Pennsylvania has a larger share of black students attending “white” schools – about 7% of black students statewide – nearly 2/3 of Pennsylvania’s white students attend “white” schools.

In other words, even when testing everyone, lots of schools will have no measures of subgroup gaps to count either for or against them, because those schools are so highly segregated that they include no subpopulations of sufficient “N.” Thus, we are relegated to deciding which among the integrated/diverse schools to slap with sanctions!

New Jersey, in its waiver-era, nouveau-Duncanian accountability policy, uses “achievement gaps” between subgroups and low subgroup performance as bases for state intervention – labeling schools deemed problematic in this regard as “focus schools.”

But most focus school labels are the product of erroneous classification, driven by subpopulation size thresholds that are met only through aggregation.

As I’ve shown in previous blog posts, the schools assigned this distinction are nearly all middle schools in racially diverse school districts. Why is that? Why aren’t the elementary schools also labeled focus schools? Because the elementary schools simply don’t have enough students in the subgroup to count. But when those students come together – perhaps from two or three elementary schools of about 400 kids each into a middle school of 800 to 1,200 kids – all of a sudden there are enough to count. Nothing has really changed except that minimum subpopulation size thresholds have been met. The fact that the gaps are measured – and used to enforce sanctions on middle schools – is merely a function of aggregation. It’s meaningless, ignorant and data abuse.
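Here’s a minimal sketch of that aggregation anomaly (the feeder-school counts and the minimum-N threshold are invented for illustration):

```python
# Three hypothetical elementary schools each fall below a minimum subgroup-
# size threshold, but the middle school they feed crosses it -- with no
# change whatsoever in the underlying students or their outcomes.

min_n = 30                            # illustrative threshold; states vary
feeder_subgroup_counts = [12, 9, 14]  # subgroup members per elementary school

for i, n in enumerate(feeder_subgroup_counts, start=1):
    print(f"Elementary {i}: subgroup n = {n}, gap reported? {n >= min_n}")

middle_school_n = sum(feeder_subgroup_counts)
print(f"Middle school: subgroup n = {middle_school_n}, gap reported? {middle_school_n >= min_n}")
# Same kids, same gaps -- only the middle school is exposed to sanctions.
```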

I’ve actually (half-jokingly – only half, mind you) recommended to my students who are middle school administrators that they thwart this anomaly of aggregation by working with district leadership to reorganize their middle schools into many separate, co-located schools, shall we say. Give them different names. Apply for a charter for one corner of the building. Sort kids randomly across these new “schools” within the original school and voilà… no more subgroups to count in any one school! No more focus school status!

Testing every kid every year, in my view, actually exacerbates the subgroup comparison problem because it generates a false sense of confidence in “evaluating everyone,” on which policymakers then rely in making completely fallacious judgments about schools.

The Subgroups Aren’t Meaningful Distinctions for Informing Policy or Judging Schools

I’ve addressed poverty achievement gaps on a number of occasions, including how and why our measures of poverty are often insufficient – especially where those measures are constructed by ramming arbitrary cut-points through varied income distributions across settings. More broadly, I’ve addressed on numerous occasions problems with the ways in which we measure poverty in education policy analysis.

Racial classifications only seem more straightforward, since they are clearly groups (categorical variables), not some arbitrary cut-point rammed through a continuous distribution. But that’s not in fact true. Racial classifications used in education policy, as in NCLB, are equally arbitrary clusterings of economically, socially and educationally diverse racial and ethnic sub-sub-populations.

Current policies, including those used for assigning sanctions to schools in states like New Jersey, rest on evaluating gaps between Blacks, Whites, Hispanics and Asians, assuming these distinctions to be educationally, economically and socially important. Back in 2000, a colleague (Lisa), her doctoral student (Christine) and I, after a long conversation over lunch, decided to do a little exploring into this topic – asking ourselves: isn’t it quite possible that the differences among sub-sub-populations are actually greater than the differences between the aggregate populations? And further, what’s the sense behind these aggregate groupings anyway?

We specifically explored the composition of the “Asian” and “Hispanic” classifications, where “Asian” (including “Pacific Islander”) included everything from Samoan to Sri Lankan, Japanese, Korean, Hmong, Laotian, etc. It was hard, even from the most American-centric (Vermont-raised white guy) point of view, to presume any coherence to this classification. Using data from the National Education Longitudinal Study, which followed the 8th grade class of 1988, we found:

In the case of both the Hispanic and Asian/Pacific Island aggregate groups there are substantial, though not always statistically significant, academic performance differences among ethnic subgroups, with a range of math performance among Hispanic subgroups of 10.7 points (mean score = 34.4) between Cuban and Puerto Rican students and a range of math performance among Asian/Pacific Island students of 15.3 points (mean score = 41.0) between West Asian and Pacific Island students.

http://epx.sagepub.com/content/14/4/511.short

Why does this matter? Who cares? So there are some differences among the subgroups, which are sometimes even bigger than the differences between the aggregate groups. Well, it certainly matters if we are using, say, test score gaps between Hispanic and White students to decide which school should be subjected to sanctions – especially if one school’s Hispanic population is predominantly Cuban and middle class and another school’s is predominantly recent Mexican immigrant or lower income Puerto Rican families.

The implication of the sanctions imposed on those serving one type of “Hispanic” immigrant versus another is: Why can’t you turn your Hispanics around like they did? This is offensive on so many freakin’ levels! What about the district serving an “Asian” population dominated by middle class Filipinos versus one serving more affluent Koreans? Do we say: what’s wrong with your Asians, and why aren’t they performing like their Asians? That’s what current policy does, with no regard for the nuance of racial/ethnic classification (national origin, generational status, etc.) or the organization of communities with respect to patterns of immigration.

Here are a few snapshots of New Jersey.

First, using U.S. Census American Community Survey data, here are the aggregate and breakout populations for large New Jersey cities (and some combinations of cities).

Jersey City has a sizable Hispanic population share. But notably, even using these still relatively coarse-grained categories, the area in and around Jersey City has a Hispanic population that is mostly “Other,” somewhat more Cuban, and far less Mexican and Puerto Rican than other cities in New Jersey. Jersey City’s Asian population is substantially Filipino as well as Indian, differentiating it from other parts of the state.

Figure 2. Aggregate Group Distribution and Breakout of Hispanic and Asian Populations of Families with School-Aged Children


Figure 3 shows the average income differences across aggregate groups relative to whites in the same city/area. Not surprisingly, Blacks and Hispanics tend to have lower family income than whites, while Asians have incomes similar to or higher than those of whites.

Figure 3. Differences in Income Relative to Whites for Aggregate Groups


But these aggregations can be really deceiving. For example, Figure 4 shows that while Mexican and Puerto Rican family income does tend to be lower than white income, the income of families of school-aged children of Cuban national origin is comparable to white income – and in Trenton, higher.

Figure 4. Differences in Income Relative to Whites for Disaggregated Hispanic Groups


There exists similar variation across Asian sub-subgroups.

Figure 5. Differences in Income Relative to Whites for Disaggregated Asian Groups


Gaps in Proficiency Rates within Schools Aren’t Meaningful!

Finally, even if the group classifications were meaningful, and even if testing everyone every year created sufficient “N” for comparison purposes, one is left to ask… are we using these data in meaningful, informative, appropriate ways to begin with? What the heck does a subgroup proficiency gap even mean in terms of school performance? Well, here’s what Matt Di Carlo over at Shanker Blog has had to say on this topic:

Gaps based on proficiency rates are not fit for use in most any context. When you measure gaps using proficiency rates, you are basically converting test scores into “yes or no” outcomes for two groups (in the case of income, groups also defined by conversion into “yes/no” outcomes), and then comparing those groups. There is no need to get into the technical details (see, for example, Ho 2008 for details), but the problem here is even worse than it is for overall, non-gap measures such as schoolwide proficiency rates. Proficiency-based gaps, particularly over time, are so prone to distortion that they are virtually guaranteed to be misleading, and should be used with extreme caution in any context, to say nothing of using them in a formal, high-stakes accountability system. There are alternative ways to measure gaps using proficiency- and other cutpoint-based rates (e.g., Ho and Reardon 2012), but I haven’t yet seen any states using them.

http://shankerblog.org/?p=10879

In other words, I’m not the only curmudgeon who thinks this whole endeavor is folly. And if the endeavor is folly, then so too are arguments favoring the maintenance of data to support that endeavor.
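For anyone who wants to see that distortion mechanically, here’s a small simulation sketch. The score distributions and cut scores below are invented, but the pattern holds generally: hold the underlying score gap between two groups fixed, and the “proficiency gap” swings around purely as a function of where the cut score sits.

```python
# Fixed mean score gap between two groups; the "proficiency-rate gap"
# nonetheless varies dramatically with the (arbitrary) cut score.
import statistics
from random import gauss, seed

seed(1)
group_a = [gauss(0.5, 1.0) for _ in range(10_000)]  # higher-scoring group
group_b = [gauss(0.0, 1.0) for _ in range(10_000)]  # lower-scoring group

def proficiency_rate(scores, cut):
    return sum(s >= cut for s in scores) / len(scores)

print(f"Mean score gap (fixed): {statistics.mean(group_a) - statistics.mean(group_b):.2f}")
for cut in (-2.0, -1.0, 0.25, 1.0, 2.0):
    gap = proficiency_rate(group_a, cut) - proficiency_rate(group_b, cut)
    print(f"cut score {cut:+.2f} -> proficiency-rate 'gap' = {gap:.3f}")
# Same kids, same score gap; the reported "gap" ranges from trivial to large
# depending entirely on where the cut score is placed.
```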

To Summarize

The bottom line here is that:

  • Even when you test every kid every year, you don’t have sufficient subpopulation sizes within individual institutions to make meaningful distinctions, because of the way our schools are organized and segregated.
  • The subgroup distinctions – be they differences in test scores above/below arbitrary income thresholds, or between highly aggregated racial/ethnic classifications – aren’t particularly meaningful to begin with.
  • As it is, subgroup data are being used to calculate junk measures, which are then used entirely inappropriately (as if there were an appropriate use to begin with?).

So, as it stands, policymakers are taking meaningless measures of outcome differences between arbitrary groupings of student populations and using those measures to make high stakes determinations about the jobs of teachers and principals.

While there are indeed racial disparities within diverse, tracked high schools and middle schools in America that should not be overlooked, the most substantial racial disparities across our educational system are those between schools – and more importantly – between districts serving substantively different student populations by race, ethnicity, national origin and economic status. These differences can and should be captured via appropriate stratified sampling methods, and more fine grained characterizations of our diverse student population.

But more importantly, when collected, this information should be used appropriately!

Cutting through the Stupid in the Debate over Annual Testing

I’ve hesitated thus far to enter into the big debate over the usefulness (or not) of annual testing. It continues to blow my mind that many engaged on the pro-annual-testing side of the debate see the annual testing of all children in all grades as the one and only method of achieving all of the things testing, in their view, is intended to achieve.

The presumption is that a single method of testing – testing everyone every year in every subject – is the appropriate, indeed the only, method to accomplish all of these tasks simultaneously:

We can’t possibly make sure no child is left behind if we don’t test them all every year.

And we can’t possibly point the finger of blame for a child being left behind if we don’t test them all every year, and link those testing data to their teachers and schools!

And we can’t possibly ensure that all children are “college ready” unless we can show that each and every one of them receives, near the end of high school, a score on a common assessment (PARCC or Smarter Balanced) that is reasonably predictive of achieving a combined score of 1,550 or higher on the SAT!

Further, the presumption is that this entire system must be built on a set of common national standards if ever we are to make valid comparisons of the quality of schooling from Tennessee to Massachusetts, or of the effectiveness of individual teachers from the Bayou to Battle Creek.

The counter-argument at this point seems to favor the complete abandonment of yearly assessment and common standards altogether, reverting to a hodgepodge of state and local curricula, standards and assessments.

Missed in most of the conversation are the valid, relevant uses of student assessment, and the different purposes of – and approaches to – testing, measurement, and large- and small-scale assessment in our schooling system.

Mixed in with this discussion of late is whether annual testing enhances the civil rights of children, or erodes them.

Here’s my quick run-down on a) the purposes of testing in schools, b) how to implement testing to best address those purposes, c) the right and wrong uses of testing with respect to civil rights concerns, and d) the role of common standards in all of this.

Purposes of Testing (measuring student achievement) in our Public Schools

While there are potentially many more purposes of assessment in school settings, I boil it down here to:

  1. testing for diagnostic and instructional purposes
  2. testing for system monitoring purposes (e.g. accountability)

I focus on this distinction because these two major purposes of testing are best achieved by very different approaches to and uses of testing.

Testing for diagnostic & instructional purposes (Individual)

When it comes to diagnostic testing – testing to enhance the instruction of individual children and groups of children, the dynamic teacher/student interaction – we want to implement that testing in a way that allows children to move at their own pace, receive immediate feedback, and provide timely, relevant information to teachers on what kids know, what they don’t know, what they’re struggling with, etc. This is fine-grained information, speaking to specific knowledge and skills children are developing on a day-to-day basis (not from April of one year to April of the next, with feedback the following October).

The logical implementation approach here, given the technologies of testing today, is to have kids engage in assessment along the way through asynchronous, computer-adaptive testing. Not all the kids in a big room of computers taking the same item bank on a given day, but kids progressing through relevant, timely computer-adaptive assessments (a few minutes here and there) that provide immediate diagnostic feedback to teachers. Plenty of schools already do this kind of stuff, whether effectively or not.
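To make the mechanics concrete, here’s a deliberately toy sketch of the adaptive loop itself. Real computer-adaptive engines rely on item response theory models and calibrated item banks; everything below (the item difficulties, the crude update rule) is invented for illustration only:

```python
# A toy computer-adaptive loop: serve the item nearest the current ability
# estimate, update the estimate from the response, repeat.

item_bank = {f"item_{i}": d / 2.0 for i, d in enumerate(range(-6, 7))}  # difficulties -3.0..+3.0

def next_item(ability, unused):
    # the unused item whose difficulty best matches the current estimate
    return min(unused, key=lambda item: abs(item_bank[item] - ability))

ability = 0.0
unused = set(item_bank)
for _ in range(5):
    item = next_item(ability, unused)
    unused.discard(item)
    correct = True  # stand-in for the student's actual response
    ability += 0.5 if correct else -0.5  # crude update; IRT scoring would do better
    print(f"served {item} (difficulty {item_bank[item]:+.1f}), estimate -> {ability:+.1f}")
```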

To be clear – I’M NOT TALKING ABOUT THIS BEING THE PRIMARY INSTRUCTIONAL MODEL ITSELF! – OR DOMINANT DAY TO DAY CLASSROOM ACTIVITY. I’m talking about this being an available tool, used appropriately [sparsely] to help teachers figure out what kids are getting and what they are not (recognizing that teachers have many other tools at their disposal… like actually asking questions and listening to kids, or reading what the kids write. Further, there’s plenty that simply can’t be evaluated effectively by a computer!).

This information should NOT be used for “accountability” purposes. It should NOT be mined/aggregated/modeled to determine at a high level whether institutions or individuals are “doing their jobs,” or for closing schools and firing teachers. That’s not to say, however, that there might not be some use for institutions (schools, districts) mining these data to determine how student progress is being made on certain concepts/skills across schools, in order to identify strengths and weaknesses – in other words, for thoughtful, data-informed management. Current annual assessments aren’t particularly useful for “data informed” leadership either. But this stuff could be, given the right modeling tools.

This is the approach we use to ensure that no child is left behind. By the time annual, uniform, standardized assessment data are returned to the front office in relatively meaningless aggregate scores six months down the road, those kids have already been left behind, and the information provided isn’t even sufficiently fine-grained to help them catch up.

Testing for accountability/System Monitoring (Institutional)

When it comes to testing for system monitoring, where we are looking at institutions and systems rather than individuals, immediate feedback is less important. Time intervals can be longer, because institutional change occurs over the long haul, not from just this year to next. Further, we want our sampling – our measurements – to be as minimally intrusive as possible, both in terms of the number of times we take those measurements and in terms of the number of measurements we take at any one time. In part, we want measurement for accountability purposes to be non-intrusive so that teachers, local administrators, and especially the kids can get on with their day – with their learning and development of knowledge and skills.

So, when it comes to system monitoring, the most appropriate approach is to use a sampling scheme that is minimally sufficient to capture, at a point in time, achievement levels of kids in any given school or district (institution). You don’t have to test every kid in a school to know how kids in that school are doing. You don’t have to have any one kid take an entire test if you creatively distribute relevant test items across appropriately sampled kids. Using sampling methods like those used in the National Assessment of Educational Progress can go a long way toward reducing the intrusiveness of testing while providing potentially more valid estimates of institutional performance (how well schools and districts are doing).
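Here’s a stripped-down sketch of that idea – a matrix-sampling estimate of a school mean, with invented numbers and a deliberately naive estimator (NAEP’s actual machinery involves IRT scaling and plausible values, none of which is attempted here):

```python
# Matrix sampling in miniature: no student takes the whole test; item blocks
# are spread across a random sample of students, and the school-level average
# is estimated from the pieces.
import random
random.seed(42)

n_students = 500
true_scores = [random.gauss(250, 35) for _ in range(n_students)]  # latent scale scores

sample = random.sample(range(n_students), 60)  # test only 60 of 500 kids
blocks = [sample[i::3] for i in range(3)]      # each sampled kid gets 1 of 3 item blocks

# Each block yields a noisy measurement for the kids assigned to it.
block_means = []
for block in blocks:
    measurements = [true_scores[i] + random.gauss(0, 10) for i in block]
    block_means.append(sum(measurements) / len(measurements))

estimate = sum(block_means) / len(block_means)
actual = sum(true_scores) / n_students
print(f"Estimated school mean: {estimate:.1f}  (actual: {actual:.1f})")
```

Each kid sees only a third of the items, only 12% of kids are tested at all, and the school-level estimate still lands close to the truth.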

If we want to know the physical health of a school’s student population, we don’t make them walk around all day with rectal thermometers hangin’ out (or perhaps these days, with a temporal scan duct-taped to their heads). Rather, we might appropriately sample, in time, and across children.

This testing process could be done annually, resulting in annual reports on school performance. These annually collected data, if sampled appropriately (using relevant statistical imputation methods), could also be used to estimate gains achieved by children attending specific schools. I would assert that even annual universe data (all kids tested every year) are of minimal value for assigning useful, reliable, or valid “effect” measures to individual teachers.

Here’s the really important part, which also relates to my thermometer example above. The testing measures themselves ARE NOT THE ACTIONABLE INFORMATION. Testing provides information on symptoms, not causes or underlying processes. It is pure folly to look at low test scores for a given institution and follow up with an action plan to “improve test scores” – or to close the school if/when test scores don’t improve – without ever taking stock of the potential causes behind the low scores. TEST SCORES ARE SYMPTOMS, NOT CAUSES, NOT ACTIONABLE IN AND OF THEMSELVES.

Where testing for system monitoring purposes reveals gaps between groups of students, or low performance in specific sets of schools, our first course of action should be to dig into underlying processes and inputs. Do these low performing schools have equitable resources to meet their children’s needs? If we find that they don’t – that these lower performing schools serve far more children with greater educational needs, have burgeoning class sizes, and offer non-competitive teacher compensation – then we’ve got something actionable: resource disparities to address, at least as a first course of action.

Further, testing data of this type, or of the diagnostic type, are ALWAYS UNCERTAIN – that is, the difference between the 49th and 51st percentile may not be a difference at all. So we shouldn’t call it one! We shouldn’t draw lines in this sand, or apply bold, disruptive consequences to distinctions that may in fact be statistically meaningless!
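To put a number on that uncertainty, here’s a minimal sketch using a simple normal-approximation confidence interval. The proficiency rates and cohort size are hypothetical:

```python
# With a typical tested-grade cohort, the difference between two schools near
# the middle of the pack is often well inside the margin of error.
import math

def proficiency_ci(rate, n, z=1.96):
    """95% normal-approximation confidence interval for a proficiency rate."""
    se = math.sqrt(rate * (1 - rate) / n)
    return rate - z * se, rate + z * se

school_a = proficiency_ci(rate=0.51, n=60)  # a "51st percentile" school, 60 tested kids
school_b = proficiency_ci(rate=0.49, n=60)  # a "49th percentile" school

print(f"School A: {school_a[0]:.3f} to {school_a[1]:.3f}")
print(f"School B: {school_b[0]:.3f} to {school_b[1]:.3f}")
# The intervals overlap almost entirely: no defensible basis for treating
# these two schools differently, let alone sanctioning one of them.
```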

Testing as a Civil Rights Issue?

How does all of this relate to the recent discussion of whether the presence of annual testing enhances or erodes children’s civil rights, particularly those of disadvantaged minority groups? Well, it all depends on how that testing is used. Used correctly, implemented appropriately, testing for system monitoring purposes is vital to the protection of civil rights. Used inappropriately, as has often been the case, testing can violate children’s civil rights.

The good:

As someone who engages in expert witness work evaluating the equity and adequacy of state education systems, I find testing information useful in exploring disparities in children’s outcomes that may raise civil rights concerns.

But again, as noted above, the key here is to recognize that testing outcomes are potential indicators of input or opportunity disparities. Testing outcomes themselves are NOT the disparities of interest to which policy leverage can be directly applied. That’s just dumb. One does not fix achievement gaps by setting the goal “fix that achievement gap!”

That said, without testing, we might no longer have available reliable and valid evidence that those gaps persist.

The bad:

There are certainly cases where the common misuses of testing raise serious civil rights concerns. For example:

  • Applying strict cut scores at the individual level (high stakes exams) that sort and exclude children disproportionately by race and income, while never addressing input/opportunity disparities that might be the cause of disparate outcomes.
  • Applying strict cut scores at the institutional level to lay blame on teachers and their institutions for the disproportionate failure of low income and minority children, while never addressing input/opportunity disparities that might be the cause of disparate outcomes.

Sadly, I’d say that these two abuses of testing data are far more common than the appropriate uses I outline above. For the past decade and a half – escalating in recent years – we have made policy determinations on test scores alone, taking action on test scores alone, never using those test scores to explore underlying causes. In the process, we have disproportionately limited the high school graduation and college matriculation options of poor and minority children, and have disproportionately closed the schools of poor and minority children based on symptoms, not causes.

The bad has far outweighed the good in existing policy uses of testing data!

On Common Standards

Finally, about those common standards. For me, the greatest potential virtue of common standards across states, accompanied by a least-intrusive system for assessing those standards (as addressed above), is that we might finally get a better handle on the relative adequacy of resources available to children across states. We might then be able to impose some pressure on states that have arguably thrown their entire public school systems under the bus to invest sufficiently to achieve those standards. For years, for example, Tennessee has spent next to nothing on its public schools while setting outcome standards low enough that all still appeared just fine (unless, of course, you look at NAEP instead of pass rates on the state’s own tests). Yes, this is hugely wishful thinking!

But, without common standards, we can’t even begin to measure the costs of achieving those common standards across settings.

Research Note: Resource Equity & Student Sorting Across Newark District & Charter Schools


Bruce D. Baker

PDF: BBaker.NJCharters.2015

Executive Summary

In this brief, I present preliminary findings from a larger, national analysis of newly released federal data, a primary objective of which is to evaluate the extent to which those data yield findings consistent with those arrived at using state-level data sources. Specifically, I explore variations in student characteristics and resources across schools in Newark, NJ.

I begin by reflecting on my most recent policy brief on charter and district school performance outcomes – growth percentile data from 2012 and 2013 – noting that, on average, Newark charter schools remain relatively average in student achievement gains given their student populations. But as noted on previous occasions, Newark charter school student populations are anything but average.

Next, I use longitudinal data from the NCES Common Core of Data public school universe (the source of underlying demographic data for the newly released federal data) to characterize changes in Newark charter market share (the share of children served in charter schools) and the share of low income children served in Newark charter schools.

Next, I explore what the newly released (albeit already dated) federal data say about Newark charter school demographics, compared to district schools serving similar grade distributions.

Next, I explore resource distributions and teacher characteristics across Newark schools, charter and district. The question at hand is whether, across district and charter schools, those schools serving needier and more costly student populations also have more (or fewer) resources with which to serve those children – and further, whether resource levels are similar among schools serving similar student populations.

Forthcoming analyses of charter schools in New York City found that those schools tended to serve less needy populations (than district schools) and were able to do so with substantially more resources than district schools serving similar populations. Because the share of children in the district served by charters remained small, their disruptive effect on equity remained small. By contrast, in Houston, charter schools both served more comparable student populations and did so, on average, with more comparable resource levels, resulting in less disruption of equity. In each case, however, the more interesting story was the extent of variation among charter schools, both in students served and in resource levels.

Here, I explore similar questions in the City of Newark, first with the newly released federal data and then with the most recent four years of available state data (2010 to 2014).

Conclusions & Policy Implications

To summarize:

  • Recently released federal data, confirmed by more recent state data, indicate that student population differences between Newark district and charter schools persist.
    • Newark charter schools continue to serve smaller shares of children qualified for free lunch, children with limited English language proficiency, and children with disabilities than do district schools serving similar grade ranges.
    • While charter school market share has remained relatively small (through 2013), the effect of charters underserving lower income students on district school enrollments has remained relatively modest.
  • Charter school total staffing expenditures, whether reported in federal data or compiled from state data, appear to fall in line with student needs in charter schools.
    • Charter schools serve less needy populations and do so with relatively low total salary expense per pupil.
    • But, there exists significant variation in resources among charter schools, with some outspending otherwise similar district schools and others significantly underspending otherwise similar district schools.
  • Charter school wage competitiveness varies widely, with some charters paying substantially more than district schools for teachers of specific experience and degree levels. But these wages do not, as of yet, substantially influence total staffing costs.
  • Charter schools have very high concentrations of 1st and 2nd year teachers, which lowers their total staffing expenditure per pupil but only to the point where those staffing expenditures are in line with expectations (not lower, as one might expect for schools with so many novice teachers).

Finally, comparisons between the newly released federal data collection and updated state data sources appear both relatively stable over time and relatively consistent across sources, even as the charter sector rapidly grows and evolves and as the district continuously morphs.

Two issues require consideration by policymakers and local officials if reliance on charter schooling, and its expansion, are to play a significant role in the future of schooling in Newark. The first is active management of the potentially deleterious effects of student sorting on district schools – that is, as market share increases and charters continue to enroll (or keep) fewer of the lowest income children, district schools may be more adversely affected.

An appropriately designed centralized enrollment system can partially mitigate these issues. But (at least) two factors can offset the potential benefits of such a system. First, individual choices of differently motivated and differently informed parents influence who signs up to attend which schools, leading to uneven distribution of initial selections. Second, centralized enrollment affects only how students are sorted on entry; it does not control who stays or leaves a given school.

Perhaps more importantly, however, it may be the case that some charter schools are simply not cut out to best serve some students (as with the district’s own Magnet schools). It would likely be a bad policy choice to create a centralized enrollment system that requires schools to serve children they are ill-equipped to serve.

The second issue requiring consideration is whether the staffing and expenditure structure of charter schools is sustainable and/or efficient. As I’ve shown in my previous report, charter schools are a relative break-even on state achievement growth outcomes, given their resource levels and student characteristics.[1] But, the current staffing expenditure levels (which are merely average, not low) of charters in Newark depend on maintaining a very inexperienced workforce. Again, current novice teacher concentrations may be a function of recent enrollment growth.

As growth slows, these schools will either have to a) shed more experienced teachers to maintain their low-expense staffing, b) lower their wages, potentially compromising the quality of recruits, c) reduce staffing ratios, potentially compromising program quality, or d) increase their spending levels. If charter operators choose “a” – relying on high attrition – it remains questionable whether the supply of new teachers, even from alternative pathways, would be sufficient to maintain the present model at much larger scale.

[1] https://njedpolicy.files.wordpress.com/2014/10/research-note-on-productive-efficiency.pdf

“Urban” Poverty and “Racial” Achievement Gaps are so Yesterday? Not!

I’ve been meaning to write about this issue for some time. But every time I get around to it, it seems no longer timely. It’s also not a hugely popular issue, nor one that’s very sexy or controversial from a policy standpoint. But it lingers and re-emerges every now and then.

Well, thanks to this article in The Atlantic, it’s timely again! Once again we are presented with the assertion that suburban poverty is the major policy concern of today – implicitly overshadowing “urban” poverty. Urban poverty is so yesterday! And again, we are presented with a picture of “suburban poverty” flourishing amid abandoned and decaying suburban sprawl (2,500 to 3,000 sq ft single family homes built on ¼ acre lots, perhaps in the 1990s, on cul-de-sacs, etc.).[1] This picture is deceptive at best, and amazingly, this portrayal is not uncommon.

In this post, I address two examples of what I consider statistical smoke and mirrors (in one case coupled with false imagery) used in recent years to re-frame debates over economic and educational “equality” – toward a “post-urban” and “post-racial” domestic policy agenda.

Two oft repeated claims in recent years are that:

  1. suburban poverty now exceeds urban poverty, in total numbers and thus as a policy concern; and
  2. income achievement gaps far exceed racial achievement gaps on standardized tests.

Thus, we must appropriately refocus the policy agenda and related resources toward the suburbs and toward mitigating (and, for that matter, measuring and evaluating) income achievement gaps rather than racial achievement gaps. These claims may seem innocuous enough, but they have significant implications for policy design, including the targeting and allocation of precious resources (state and federal aid to schools), and how we view (and respond to) the sorting and reshuffling of student populations by race and income status.

Suburban Poverty?

Assertions of suburban poverty overtaking urban poverty as a central policy concern are often validated by one form or another of this graph:

Figure 1. The “Suburban Poverty” Validation Graph


A version of this graph is used in the Atlantic article to make the claim that we now have three million more poor Americans living in suburbs than in cities! And thus, suburban poverty now rules the day!

It may be true that the aggregate number of individuals in poverty in areas classified by the Census Bureau as “suburbs” exceeds the aggregate number of individuals in poverty living in areas classified as “urban.” But, it’s still the case that the RATE of poverty in urban areas, as classified by the Census Bureau, continues to exceed the RATE of poverty in suburban areas.
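The arithmetic behind that count-versus-rate distinction is trivial, which is exactly the point. Here’s a sketch, with invented population bases attached to rates like the 2013 figures cited below:

```python
# Counts versus rates: a larger suburban population base can hold more poor
# people in total even when the urban poverty RATE is much higher. The rates
# echo the 2013 figures cited below; the population bases are invented.

urban_pop, urban_rate = 80_000_000, 0.191
suburban_pop, suburban_rate = 160_000_000, 0.111

print(f"Urban poor:    {urban_pop * urban_rate:,.0f} ({urban_rate:.1%} of {urban_pop:,})")
print(f"Suburban poor: {suburban_pop * suburban_rate:,.0f} ({suburban_rate:.1%} of {suburban_pop:,})")
# ~15.3 million urban poor vs. ~17.8 million suburban poor: a bigger count,
# produced by a much lower rate applied to a much bigger base.
```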

Figure 2 displays urban and suburban poverty rates for the families of children between the ages of 5 and 17. Notably, just as with the overall population, in the past few years, the total number of children living in poverty in areas classified as “suburbs” has surpassed the total number in areas classified as “urban.”

By 2013, poverty rates for the total population were 19.1% in urban areas compared to 11.1% in suburbs, with change patterns over time mirroring those for children (Figure 2) but at somewhat lower levels. [In fact, these data suggest that “suburban” poverty never actually surpassed urban poverty, even in aggregate numbers.] Long term trends appear at the end of this post.

Figure 2. Rate of Poverty among children in Suburban and Urban Areas (American Community Survey)


An even more fine-grained perspective is presented for school enrollment data, nationally, in Figure 3, which breaks areas out into more refined categories:

Figure 3. School Enrollments of Children from Low Income Families by Urban Centric Locale Code


Clearly, as indicated in Figure 3, poverty in the suburbs is not the “big” problem. Suburban schools, ON AVERAGE, remain the lowest in poverty concentration (with the exception of rural districts on the fringes of metropolitan areas). Large city schools have by far the highest concentrations of low income children.

What’s a suburb anyway?

Note that the “suburban” distinction itself, which dictates how these tallies play out, is a Census distinction that may or may not align well with the general public’s view of “suburbia.” Suburbs range from the inner urban fringe immediately adjacent to large cities to outermost exurbia, where sprawl dominates.

Most older inner-urban-ring suburbs certainly are not dominated by sprawling 2,500 to 3,000 sq ft homes on cul-de-sacs. To clarify, for New Jersey folks: East Orange and Irvington are “suburbs” by Census designation. For Chicago readers, all of those small segregated enclaves on the south side are suburbs. And in St. Louis, Ferguson, Normandy, Wellston, Riverview Gardens, etc. are suburbs, at least in a Census Bureau technical, geospatial sense.

I’ll set aside for another day the measurement of poverty and how appropriate corrections may alter these findings!

Let’s take a stroll through a handful of our nation’s major metropolitan areas. Again, I’m going to jump to school data, because that’s what I had readily available for mapping. First, let’s look at school-level concentrations of low income children in the Chicago metro area, without any artificial boundaries drawn to identify the city or its suburbs. High poverty schools are concentrated in some areas, low poverty ones in others. The contrasts are pretty striking. We can guess what’s a suburb and what’s a city, but the official classifications might not line up so well with our guesses.

Figure 4. Chicago School Poverty without Boundaries


Now let’s throw in some boundaries and identifiers of “suburbs” versus “urban” or central city areas. Are we really to assume that poverty in aging south side suburbs is the form of “suburban” poverty being characterized in popular media? Long-term under-employed white couples with 2.5 children going upside-down on the mortgages of over-sized homes built on undersized lots? Uh, no! I suspect (actually, I’ve got a whole bunch of related data on this) that the “poverty” we are seeing here, in what are classified as “suburbs,” is more what the average Joe would characterize as “urban” poverty (where “urban” is often code for “black”).

Figure 5. Chicago with Boundaries


Now let’s look at the Philly metro area over the past 10 years (of available data). In Figure 6, the upper left shows school-level low income concentrations in 2001, and the upper right, in 2011. Yes, we do see increases in low income concentrations in many schools. But what we especially see are increases from high to very high poverty in the urban core and in two outlying “small cities” (Allentown and Reading, to the northwest). Yes, we do see some moderate increases in poverty rates among previously lower poverty suburban schools. But as the graph in the lower half of the figure shows, urban poverty rates remain much higher, and suburban poverty very low – on average.

Figure 6. Philly Metro School Low Income Concentrations over Time


And what do the neighborhoods in these inner-urban-fringe “suburbs” actually look like – and how do they compare with media portrayals of “suburbs” versus “urban” areas? Much of the public confusion here is fueled by perceptions of “urban” meaning high-rise, very high density low income housing projects and “suburban” meaning sprawling cul-de-sacs. But here are a few visuals I gathered from street-level view on Google Maps, showing areas classified as “urban” versus nearby areas classified as “suburban.”

Figure 7. Newark vs. East Orange


Figure 8. Chester vs. Philadelphia


Figure 9. St. Louis vs. University City


In other words, there’s a fine line – if any, and a few blocks’ walking distance – between the high poverty settings of major cities and the high poverty areas formally classified as suburbs.

While I don’t mean to totally blow off concerns over rising poverty in the “suburbs,” I certainly do mean to cast doubt – serious doubt – on how suburban poverty has been portrayed in the media and in popular think-tanky reports (and web sites), and on the policy implications of that messaging. Put bluntly:

  • Poverty rates remain much higher in urban areas than in “suburbs”
  • That which we are now counting as “suburban” poverty is in many if not most cases more like the “urban” poverty that has plagued large and mid-sized city neighborhoods for decades
  • What we are characterizing as suburban poverty – in media imagery – isn’t the same thing we are counting!

Income Achievement Gaps?

This issue is inextricably linked to the previous one, in that child poverty and all that flows from it have, historically AND TO THIS DAY, strong racial correlates. The assertion that income achievement gaps now dwarf racial achievement gaps is most often rooted in this finding from Sean Reardon of Stanford University [from a chapter in an exceptional book titled Whither Opportunity?]:

“The achievement gap between children from high- and low-income families is roughly 30 to 40 percent larger among children born in 2001 than among those born twenty-five years earlier.” Further, “the income achievement gap (defined here as the average achievement difference between a child from a family at the 90th percentile of the family income distribution and a child from a family at the 10th percentile) is now nearly twice as large as the black-white achievement gap.”[2]

And thus, the argument goes, income achievement gaps are of greater policy importance. Except that the income achievement gap measured here is based on setting arbitrary and rather extreme cut-points along the income distribution. Reardon’s comparison of a 90th-versus-10th percentile income achievement gap to the black-white achievement gap is deceptive, and largely inappropriate. One would certainly expect the achievement gap between children from the highest and lowest income groups to be larger than the achievement gap between black and white children if the income gap between black and white families is nowhere near as large as the 90/10 income gap. Further, achievement gaps at the income extremes would be expected to grow faster to the extent that income gaps at those extremes are growing faster.
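To see how mechanical that comparison is, here’s a toy sketch in which every family shares one and the same (invented) income-to-achievement relationship. The incomes and the log-linear link are hypothetical; only the logic matters:

```python
# Give everyone the SAME hypothetical income-to-achievement relationship, and
# the wider income contrast mechanically yields the bigger achievement gap.
import math

def expected_achievement(income, slope=0.4):
    # one shared, invented log-linear link between income and achievement (SD units)
    return slope * math.log(income)

p90_income, p10_income = 160_000, 16_000     # a wide 90/10 income spread
white_income, black_income = 75_000, 45_000  # a much narrower racial income gap

gap_90_10 = expected_achievement(p90_income) - expected_achievement(p10_income)
gap_bw = expected_achievement(white_income) - expected_achievement(black_income)

print(f"90/10 achievement gap:       {gap_90_10:.2f} SD")
print(f"Black-white achievement gap: {gap_bw:.2f} SD")
# The 90/10 gap dwarfs the black-white gap purely because the income contrast
# is wider -- with no difference at all in the underlying relationship.
```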

Figure 10 shows that the income gap at these poles is clearly much larger, and growing faster, than the income gap between blacks and whites.

Figure 10. Black White Income Gaps vs. 90/10 Income Gaps


Not surprisingly, income gaps matter for explaining achievement gaps – where income gaps are larger, so too tend to be outcome gaps. The same is true at the intersection of income and race: where the income gap between blacks and whites is larger, the racial achievement gap is larger. For example, Figure 11 shows that in 2011, states with larger income gaps between families of black and white children had larger test score gaps between those children. States with very large income gaps between black and white families, like Connecticut, Wisconsin and Pennsylvania, have large achievement gaps, and states with smaller income gaps, like West Virginia, have smaller achievement gaps between black and white children.

Figure 11. Black White Income Gaps Determine Black White Achievement Gaps


Notably, Sean Reardon has written extensively over time about racial achievement gaps. But it is this recent work that seems to have caught media attention and has been used to urge a shift in emphasis from racial to income achievement gaps. This may not be Reardon’s intent, but the inappropriate comparison of gaps made above lends itself to this argument.

Policy Implications

But why does any of this matter? We know there are achievement gaps in relation to income. We know there are income gaps by race. We know there’s poverty in urban areas and the suburbs, and we know that poverty is merely a blunt classification of income status. So why can’t we just leverage policies to tackle income inequality and all that flows from it, while being urbanicity-neutral and race-blind in the process?

Good policy design requires more nuance than this. A substantial body of literature validates that race in-and-of-itself matters (for educational and economic outcomes), above and beyond income variation that may be associated with race. Further, when it comes to leveraging resources, context matters, including the intersections of poverty, race and population density (determined by distribution of housing stock). For example, my own research and that of others has found that racial composition and population density independently influence the costs of achieving common outcome goals.[3]

Developing effective and efficiently targeted policies for achieving more equitable educational outcomes – and ultimately more equitable economic outcomes – requires that we understand what we mean when we say “suburban” or “urban,” that we not simply rely on media images (or the sloppy research claims behind them), and, where necessary, that we construct classifications that appropriately characterize what we mean to characterize.

The current policy agenda and media interest around suburban poverty, in particular, are largely misinformed by a complete failure to even bother to understand the operational definition of “suburb” (or, for that matter, the measurement of “poverty”). Thoughtful policy design requires that we better understand how income and poverty measures relate to the context in which they are measured.[4] Perhaps most importantly, we should always carefully scrutinize anything presented as some sort of sea change in the demographic, economic or policy landscape. Most large scale change of this sort occurs slowly, not quickly. It’s boring (for many) and requires patient, long term observation. That’s just the way it is.

Notes: 

[1] Notably, a recent report from the Center for American Progress paints a somewhat more accurate view, at least in terms of the housing stock, age and location where “suburban poverty” has been on the rise.

[2] http://t.www.skylinecollege.edu/sparkpoint/about/documents/reardonwhitheropportunity-chapter5.pdf

http://nppsd.fesdev.org/vimages/shared/vnews/stories/525d81ba96ee9/SI%20-%20The%20Widening%20Income%20Achievement%20Gap.pdf

[3] Baker, B. D. (2011). Exploring the sensitivity of education costs to racial composition of schools and race-neutral alternative measures: A cost function application to Missouri. Peabody Journal of Education, 86(1), 58-83.

[4] Baker, B. D., Taylor, L., Levin, J., Chambers, J., & Blankenship, C. (2013). Adjusted Poverty Measures and the Distribution of Title I Aid: Does Title I Really Make the Rich States Richer? Education Finance and Policy, 8(3), 394-417.

Long Term Trends

[Figures: long-term trends in urban and suburban poverty rates]