At the Intersection of Money & Reform Part III: On Cost Functions & the Increased Costs of Higher Outcomes


In my 2012 report Does Money Matter in Education?, I addressed the education production function literature, which seeks to establish a direct link between the resources spent on schools and districts and the outcomes achieved by students. Production function studies examine how variation in resources across schools and settings is associated with variation in outcomes across those settings, and whether changes in resources lead to changes in the level or distribution of outcomes.

I have written previously on this blog about the usefulness of education cost functions.

The Education Cost Function

The education cost function is the conceptual flip side of the education production function. Like production function research, cost function research seeks to identify the link between spending variation and outcome variation, cross-sectionally and longitudinally. The goal of the education cost function is to discern the levels of spending associated with efficiently producing specific outcome levels (the “cost” per se) across varied geographic contexts and schools serving varied student populations. Most published studies applying cost function methodology use multiple years of district-level data, within a specific state context, and focus on the relationship between cross-district (over time) variations in spending and outcome levels, considering student characteristics, contextual characteristics such as economies of scale, and labor cost variation. Districts are the unit of analysis because they are the governing unit charged with producing outcomes, raising and receiving the revenues, and allocating the financial and human resources for doing so. Some cost function studies evaluate whether varied expenditures are associated with varied levels of outcomes, all else being equal, while other cost function studies evaluate whether varied expenditures are associated with varied growth in outcomes.

The existing body of cost function research has produced the following (in some cases obvious) findings:

  1. The per-pupil costs of achieving higher-outcome goals tend to be higher, across the board, than the costs of achieving lower-outcome goals, all else being equal.[1]
  2. The per-pupil costs of achieving any given level of outcomes are particularly sensitive to student population characteristics. In particular, as concentrated poverty increases, the costs of achieving any given level of outcomes increase significantly.[2]
  3. The per-pupil costs of achieving any given level of outcomes are sensitive to district structural characteristics, most notably, economies of scale.[3]

Researchers have found cost functions of particular value for evaluating the different costs of achieving specific outcome goals across settings and children. In a review of cost analysis methods in education, Downes (2004) explains: “Given the econometric advances of the last decade, the cost-function approach is the most likely to give accurate estimates of the within-state variation in the spending needed to attain the state’s chosen standard, if the data are available and of a high quality” (p. 9).[4]

Addressing the critics

This body of literature also has its detractors, including, most notably, Robert Costrell, Eric Hanushek and Susanna Loeb (CHL), who, in a 2008 article, assert that cost functions are invalid for estimating costs associated with specific outcome levels. They argue that one cannot possibly identify the efficient spending level associated with achieving any desired outcome level by evaluating the spending behavior of existing schools and districts, whose spending is largely inefficient (because, as discussed above, district expenditures are largely tied up in labor agreements that, according to these authors, are in no way linked to the production of student outcomes). If all schools and districts suffer such inefficiencies, then one cannot possibly discern underlying minimum costs by studying those institutions. However, CHL’s argument rests on the assumption that desired outcomes could be achieved while spending substantially less, and entirely differently, than any existing school or district spends, all else being equal. Evidence to this effect is sparse to nonexistent.[5]

Authors of cost function research counter that the goal of cost modeling is more modest than exact prediction of minimum cost, and that much can be learned by better understanding the distribution of spending and outcomes across existing schools and districts, and the varied efficiency with which those schools and districts achieve current outcomes.[6] That is, the goal of the cost model is to identify, among existing “outcome producing units” (districts or schools), the more (and less) efficient spending levels associated with given outcomes, where the more efficient spending levels associated with any given outcome provide a real-world approximation of the minimum costs of achieving that outcome.

CHL’s empirical critique of education cost function research centers on a falsification test, applying findings from a California study by Jennifer Imazeki (2008).[7] CHL’s critique was published in a non-peer-reviewed special issue of the Peabody Journal of Education, based on testimony provided in the state of Missouri and funded by the conservative Missouri-based Show-Me Institute.[8] The critique asserts that if, as it would appear conceptually, the cost function is merely the flip side of the production function, then the magnitude of the spending-to-outcomes relationship should be identical between the two. But in Imazeki’s attempt to reconcile cost and production functions using California data, the spending levels implied by the two approaches diverged dramatically. CHL use this finding to assert the failure of cost functions as a method and, more generally, the uncertainty of the spending-to-outcomes relationship.

Duncombe and Yinger (2011), however, explain the fallacy of this falsification test in a non-peer-reviewed special issue of the same journal.[9] They explain that while the cost and production functions are loosely flip sides of the same equation, they are not exactly so. Production models are estimated using some outcome measure as the dependent variable (that which is predicted by the equation). In an education production function studying the effect of spending on outcomes, the outcome measure is predicted as a function of (a) a measure of relevant per-pupil spending; (b) characteristics of the student population served; and (c) contextual factors that might affect the value of the dollar toward achieving outcomes (economies of scale, regional wage variation).

Outcomes = f(Spending, Students, Context)

The cost model starts out similarly, switching the position of the spending and outcomes measures, and predicting spending levels as a function of outcomes, students and context factors.

Spending = f(Outcomes, Students, Context)

If it were this simple, then one would expect the statistical relationship between outcomes and spending to be the same from one equation to the next. But there is an additional piece to the cost function that adds important precision to the estimation of the input-to-outcome relationship. The above equation is a spending function, whereas the cost function attempts to distill “cost” from spending by addressing the share of spending that may be “inefficient.” That is:

Cost = Spending – Inefficiency, or Spending = Cost + Inefficiency

That is, some of the variation in spending does not lead to variation in the outcome measure. While we don’t know exactly which spending is inefficient (which dollars are being spent in ways that don’t improve outcomes), Duncombe and Yinger suggest that we do know some indirect predictors of the likelihood that school districts spend more than would be needed to minimally achieve current outcomes, and that one can include in the cost model characteristics of districts that explain a portion of the inefficient spending. This can be done when the spending measure is the dependent variable, as in the cost function, but not when spending is an independent variable, as in the production function.[10]

Spending = f(Outcomes, Students, Context, Inefficiency Factors)

When inefficiency factors are accounted for in the spending function, the relationship between outcomes and spending more accurately represents a relationship between outcomes and costs. This relationship would be expected to be different from the relationship between spending and outcomes (without addressing inefficiency) in a typical production function.
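To make this logic concrete, consider a stylized simulation. Nothing below is drawn from the cited studies: the district characteristics, coefficients, and the “fiscal capacity” inefficiency factor are all invented for illustration. Synthetic districts spend the “true” cost of their outcomes plus some inefficiency that is partly predicted by an observable factor; because that factor also correlates with outcomes, a spending function that ignores it misstates the outcome-to-cost relationship, while adding the inefficiency control recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # synthetic "districts" (all values invented for illustration)

# District characteristics
poverty = rng.uniform(0.0, 0.6, n)         # concentrated-poverty share
log_enroll = rng.normal(7.5, 1.0, n)       # log enrollment (scale)
fiscal_capacity = rng.normal(0.0, 1.0, n)  # proxy for local wealth/slack

# Outcomes partly driven by fiscal capacity, so inefficiency and
# outcomes end up correlated across districts
outcome = 0.5 * fiscal_capacity + rng.normal(0.0, np.sqrt(0.75), n)

# "True" log cost: higher outcomes and higher poverty cost more;
# larger districts enjoy modest economies of scale
log_cost = 9.0 + 0.30 * outcome + 0.80 * poverty - 0.05 * log_enroll

# Inefficiency: spending above cost, partly predicted by an observable
# factor (districts with slack fiscal capacity overspend), partly noise
ineff_factor = np.maximum(fiscal_capacity, 0.0)
inefficiency = 0.20 * ineff_factor + rng.exponential(0.05, n)

log_spend = log_cost + inefficiency + rng.normal(0.0, 0.02, n)

def ols_slopes(y, cols):
    """OLS via least squares; returns slope estimates (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# Spending function: no inefficiency controls
b_naive = ols_slopes(log_spend, [outcome, poverty, log_enroll])

# Cost function: same model plus the observable inefficiency factor
b_cost = ols_slopes(log_spend, [outcome, poverty, log_enroll, ineff_factor])

print(f"outcome coefficient, spending function: {b_naive[0]:.3f}")
print(f"outcome coefficient, cost function:     {b_cost[0]:.3f} (true = 0.30)")
```

In this toy setup, the uncontrolled spending function attributes some of the inefficient overspending to outcomes and so overstates the outcome coefficient, while the model with the inefficiency control recovers the true cost slope. This is only a sketch of the omitted-variable logic behind Duncombe and Yinger’s argument, not a reproduction of their estimation strategy, which relies on richer specifications and instruments.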

In Summary

In summary, while education cost function research is not designed to test specifically whether and to what extent money matters, the sizeable body of cost function literature does suggest that achieving higher educational outcomes, all else being equal, costs more than achieving lower educational outcomes. Further, achieving common outcome goals in settings with concentrated child poverty, children for whom English is a second language, and children with disabilities costs more than achieving those same goals with less needy student populations. Cost models provide some insight into how much more money is required in different settings and with different children to achieve measured outcome goals. Such estimates are of particular interest at a time when more and more states are migrating toward common standards frameworks and common assessments while still providing their schools and districts with vastly different resources. Cost modeling may provide insights into just how much more funding is required for all children to have an equal opportunity to achieve these common outcome goals.

Notes

[1]W. Duncombe and J. Yinger, “Financing Higher Student Performance Standards: The Case of New York State,” Economics of Education Review 19, no. 4 (2000): 363-386; A. Reschovsky and J. Imazeki, “Achieving Educational Adequacy through School Finance Reform,” Journal of Education Finance (2001): 373-396; J. Imazeki and A. Reschovsky, “Is No Child Left Behind an Un (or Under) Funded Federal Mandate? Evidence from Texas,” National Tax Journal (2004): 571-588; J. Imazeki and A. Reschovsky, “Does No Child Left Behind Place a Fiscal Burden on States? Evidence from Texas,” Education Finance and Policy 1, no. 2 (2006): 217-246; and J. Imazeki and A. Reschovsky, “Assessing the Use of Econometric Analysis in Estimating the Costs of Meeting State Education Accountability Standards: Lessons from Texas,” Peabody Journal of Education 80, no. 3 (2005): 96-125.

[2]T. A. Downes and T. F. Pogue, “Adjusting School Aid Formulas for the Higher Cost of Educating Disadvantaged Students,” National Tax Journal (1994): 89-110; W. Duncombe and J. Yinger, “School Finance Reform: Aid Formulas and Equity Objectives,” National Tax Journal (1998): 239-262; W. Duncombe and J. Yinger, “Why Is It So Hard to Help Central City Schools?,” Journal of Policy Analysis and Management 16, no. 1 (1997): 85-113; and W. Duncombe and J. Yinger, “How Much More Does a Disadvantaged Student Cost?,” Economics of Education Review 24, no. 5 (2005): 513-532.

[3]For a discussion, see B. D. Baker, “The Emerging Shape of Educational Adequacy: From Theoretical Assumptions to Empirical Evidence,” Journal of Education Finance (2005): 259-287. See also M. Andrews, W. Duncombe and J. Yinger, “Revisiting Economies of Size in American Education: Are We Any Closer to a Consensus?,” Economics of Education Review 21, no. 3 (2002): 245-262; W. Duncombe, J. Miner and J. Ruggiero, “Potential Cost Savings from School District Consolidation: A Case Study of New York,” Economics of Education Review 14, no. 3 (1995): 265-284; J. Imazeki and A. Reschovsky, “Financing Adequate Education in Rural Settings,” Journal of Education Finance (2003): 137-156; and T. J. Gronberg, D. W. Jansen and L. L. Taylor, “The Impact of Facilities on the Cost of Education,” National Tax Journal 64, no. 1 (2011): 193-218.

[4]T. Downes, What Is Adequate? Operationalizing the Concept of Adequacy for New York State (2004), http://www.albany.edu/edfin/Downes%20EFRC%20Symp%2004%20Single.pdf.

[5]For a recent discussion, see B. Baker and K. G. Welner, “Evidence and Rigor: Scrutinizing the Rhetorical Embrace of Evidence-Based Decision Making,” Educational Researcher 41, no. 3 (2012): 98-101. See also B. D. Baker, Revisiting the Age-Old Question: Does Money Matter in Education? (Albert Shanker Institute, 2012).

[6]See, for example, B. D. Baker, “Exploring the Sensitivity of Education Costs to Racial Composition of Schools and Race-Neutral Alternative Measures: A Cost Function Application to Missouri,” Peabody Journal of Education 86, no. 1 (2011): 58-83.

[7]Completed and released in 2006, eventually published as J. Imazeki, “Assessing the Costs of Adequacy in California Public Schools: A Cost Function Approach,” Education Finance and Policy 3, no. 1 (2008): 90-108.

[8]See the acknowledgements at http://files.eric.ed.gov/fulltext/ED508961.pdf. Final published version: R. Costrell, E. Hanushek and S. Loeb, “What Do Cost Functions Tell Us about the Cost of an Adequate Education?,” Peabody Journal of Education 83, no. 2 (2008): 198-223.

[9]W. Duncombe and J. Yinger, “Are Education Cost Functions Ready for Prime Time? An Examination of Their Validity and Reliability,” Peabody Journal of Education 86, no. 1 (2011): 28-57. See also W. Duncombe and J. M. Yinger, “A Comment on School District Level Production Functions Estimated Using Spending Data” (Maxwell School of Public Affairs, Syracuse University, 2007); and W. Duncombe and J. Yinger, “Making Do: State Constraints and Local Responses in California’s Education Finance System,” International Tax and Public Finance 18, no. 3 (2011): 337-368. For an alternative approach, see T. J. Gronberg, D. W. Jansen and L. L. Taylor, “The Adequacy of Educational Cost Functions: Lessons from Texas,” Peabody Journal of Education 86, no. 1 (2011): 3-27.

[10]W. Duncombe and J. Yinger, “Are Education Cost Functions Ready for Prime Time? An Examination of Their Validity and Reliability,” Peabody Journal of Education 86, no. 1 (2011): 28-57. See also W. Duncombe and J. M. Yinger, “A Comment on School District Level Production Functions Estimated Using Spending Data” (Maxwell School of Public Affairs, Syracuse University, 2007). For an alternative approach, see T. J. Gronberg, D. W. Jansen and L. L. Taylor, “The Adequacy of Educational Cost Functions: Lessons from Texas,” Peabody Journal of Education 86, no. 1 (2011): 3-27.