The Perils of Favoring Consistency over Validity: Are “bad” VAMs more “consistent” than better ones?

This is another stat-geeky researcher post, but I’ll try to tease out the practical implications. This post comes about partly, though not directly, in response to a new Brown Center/Brookings report on evaluating teacher evaluation systems. From that report, by an impressive team of authors, one can tease out two apparent preferences for evaluation systems, or more specifically for any statistical component of those systems that is based on student assessment scores.

  1. A preference to isolate, as precisely as statistically feasible, the influence of the teacher on student test score gains;
  2. A preference to have a statistical rating of teacher effectiveness that is relatively consistent from year to year (where the more consistent models still aren’t particularly consistent).

While there shouldn’t necessarily be a conflict between identifying the best model of teacher effects and having a model that is reliable over time, I would argue that the pressure to achieve the second objective above may lead researchers – especially those developing models for direct application in school districts – to make inappropriate decisions regarding the first objective. After all, one of the most common critiques leveled at those using value-added models to rate teacher effectiveness is the lack of consistency of the year-to-year ratings.

Further, even the Brown Center/Brookings report took a completely agnostic stance regarding the possibility that better and worse models exist, but played up the relative importance of consistency, or reliability, of the teacher’s persistent effect over time.

There are “better” and “worse” models

The reality is that there are better and worse value-added models (though even better ones remain problematic). Specifically, there are better and worse ways to handle certain problems that emerge from using value-added modeling to determine teacher effectiveness. One of the biggest issues is how well the model corrects for the non-random assignment of students to teachers across classrooms and schools. It is incredibly difficult to untangle teacher effects from peer group effects and/or any other factor within schooling at the classroom level (mix of students, lighting, heating, noise, class size). We can only better isolate the teacher effect from these other effects if each teacher is given the opportunity to work across varied settings and with varied students over time.

A fine example of taking an insufficient model (the LA Times/Buddin model) and raising it to a higher level with the same data is the set of alternative modeling exercises prepared by Derek Briggs & Ben Domingue of the University of Colorado. Among other things, Briggs/Domingue show that including classroom-level peer characteristics, in addition to student-level dummy variables for economic status and race, significantly reduces the extent to which teacher effectiveness ratings remain influenced by the non-random sorting of students across classrooms.

In our first stage we looked for empirical evidence that students and teachers are sorted into classrooms non-randomly on the basis of variables that are not being controlled for in Buddin’s value-added model. To do this, we investigated whether a student’s teacher in the future could have an effect on a student’s test performance in the past—something that is logically impossible and a sign that the model is flawed (has been misspecified). We found strong evidence that this is the case, especially for reading outcomes. If students are non-randomly assigned to teachers in ways that systemically advantage some teachers and disadvantage others (e.g., stronger students tending to be in certain teachers’ classrooms), then these advantages and disadvantages will show up whether one looks at past teachers, present teachers, or future teachers. That is, the model’s outputs result, at least in part, from this bias, in addition to the teacher effectiveness the model is hoping to capture.

Later:

The second stage of the sensitivity analysis was designed to illustrate the magnitude of this bias. To do this, we specified an alternate value-added model that, in addition to the variables Buddin used in his approach, controlled for (1) a longer history of a student’s test performance, (2) peer influence, and (3) school-level factors.

Clearly, it is important to include classroom-level and peer-group covariates to identify more precisely the “teacher effect,” and to remove the bias in teacher estimates that results from the non-random ways in which kids are sorted across schools and classrooms.
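To make the falsification idea concrete, here is a minimal sketch of a Briggs/Domingue-style test – not the authors’ actual code; the file and column names (student_panel.csv, score_g3, score_g4, frl, ell, teacher_g5) are all hypothetical:

```python
# Falsification-test sketch: regress a student's PRIOR gain on the identity of
# a FUTURE teacher. A future teacher cannot cause a past gain, so if the
# future-teacher dummies are jointly significant, the model is missing
# whatever drives non-random sorting. (Hypothetical column names throughout.)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_panel.csv")  # one row per student: scores, flags, teacher IDs
df["past_gain"] = df["score_g4"] - df["score_g3"]  # outcome that predates grade 5

# Full model adds grade 5 teacher dummies; restricted model leaves them out.
full = smf.ols("past_gain ~ C(teacher_g5) + score_g3 + frl + ell", data=df).fit()
restricted = smf.ols("past_gain ~ score_g3 + frl + ell", data=df).fit()

# Joint F-test of the future-teacher dummies: significance = misspecification.
f_stat, p_value, df_diff = full.compare_f_test(restricted)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

A significant joint F-statistic says next year’s teacher assignment “explains” a gain that was already in the books before those teachers ever met the students – exactly the logical impossibility the quoted passage below describes.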

Two levels of the non-random assignment problem

To clarify, there may be at least two levels to the non-random assignment problem, and both may be persistent problems over time for any given teacher or group of teachers under a single evaluation system. In other words: Persistent non-random assignment!

As I mentioned above, we can only untangle the classroom level effects, which include different mixes of students, class sizes and classroom settings, or even time of day a specific course is taught, if each teacher to be evaluated has the opportunity to teach different mixes of kids, in different classroom settings and at different times of day and so on. Otherwise, some teachers are subjected to persistently different teaching conditions.

Focusing specifically on the importance of students and peer effects, it is more likely than not that, rather than having totally different groups and types of kids year after year, some teachers:

  • persistently work with children coming from the most disadvantaged family/household background environments;
  • persistently take on the role of trying to serve the most disruptive children.

At the very least, statistical modeling efforts must attempt to correct for the first of these peer effects with comprehensive classroom-level measures of peer composition (and a longer trail of lagged test scores for each student). Briggs and Domingue showed that doing so made significant improvements to the LAT model – and that the original model contained substantial biases, failing the specific falsification tests used to identify them. Specifically, the effectiveness of a student’s subsequent teacher could be used to predict the effectiveness of their previous teacher. Briggs/Domingue note:

These results provide strong evidence that students are being sorted into grade 4 and grade 5 classrooms on the basis of variables that have not been included in the LAVAM (p. 11)

That is, a persistent pattern of non-random sorting which affects teachers’ effectiveness ratings. And, a persistent pattern of bias in those ratings that was significantly reduced by Briggs’ improved models.

At this point, you’re probably wondering why I keep harping on this term “persistent.”

Persistent Teacher Effect vs Persistent Model Bias?

So, back to the original point, and the conflict between those two objectives, reframed:

  1. Getting a model consistent enough to shut up those VAM naysayers;
  2. Estimating a statistically more valid VAM, by including appropriate levels of complexity (and accepting the reduced numbers of teachers who can be evaluated as data demands are increased).

Put this way, it’s a battle between REFORMY and RESEARCHY. Obviously, I favor the RESEARCHY perspective, mainly because it favors a BETTER MODEL! And a BETTER MODEL IS A FAIRER MODEL!  But sadly, I think that REFORMY will too often win this epic battle.

Now, about that word “persistent.” Ever since the Gates/Kane teaching effectiveness report, there has been new interest in identifying the “persistent effect of teachers” on student test score gains. That is, an obsession with focusing public attention on that tiny sapling of explained variation in test scores that persists from year to year, while making great effort to divert public attention away from the forest of variance explained by other factors. “Persistent” is also the term du jour for the Brown/Brookings report.

A huge leap in those reports is to expand the phrase “persistent effect” from the persistent classroom-level variance explained to the “persistent year to year contribution of teachers to student achievement” (p. 16, Brown/Brookings). It is assumed that any “persistent effect” estimated from any value-added model – regardless of the features of that model – represents a persistent “teacher effect.”

But the persistent effect likely contains two components – persistent teacher effect & persistent bias – and the balance of weight of those components depends largely on how well the model deals with non-random assignment. The “persistent teacher effect” may easily be dwarfed by the “persistent non-random assignment bias” in an insufficiently specified model (or one dependent on crappy data).

AND, the persistently crappy model – by failing to reduce the persistent bias – is actually quite likely to be much more stable over time.  In other words, if the model fails miserably at correcting for non-random assignment, a teacher who gets stuck with the most difficult kids year after year is much more likely to get a consistently bad rating. More effectively correct for non-random sorting, and the teacher’s rating likely jumps around at least a bit more from year to year.
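To see why, consider a toy simulation – invented numbers only, not any district’s actual model – in which each teacher’s assignment-driven bias persists across years while the yearly noise does not:

```python
# Toy simulation: invented numbers, not any district's actual model.
# Each teacher's sorting bias persists across years; yearly noise does not.
import numpy as np

rng = np.random.default_rng(101)
n = 1000
true_effect = rng.normal(0, 1.0, n)    # persistent teacher effect
sorting_bias = rng.normal(0, 2.0, n)   # persistent bias from who you're assigned
noise_y1, noise_y2 = rng.normal(0, 1.5, n), rng.normal(0, 1.5, n)

# "Crappy" model fails to remove the bias; "better" model (mostly) removes it.
crappy_y1 = true_effect + sorting_bias + noise_y1
crappy_y2 = true_effect + sorting_bias + noise_y2
better_y1 = true_effect + noise_y1
better_y2 = true_effect + noise_y2

def r(a, b): return round(float(np.corrcoef(a, b)[0, 1]), 2)

print("year-to-year stability, crappy model:", r(crappy_y1, crappy_y2))  # ~0.7
print("year-to-year stability, better model:", r(better_y1, better_y2))  # ~0.3
print("validity (r with true effect), crappy:", r(crappy_y1, true_effect))  # ~0.4
print("validity (r with true effect), better:", r(better_y1, true_effect))  # ~0.6
```

With these numbers, the biased ratings correlate with themselves at roughly 0.7 year over year, versus roughly 0.3 for the bias-free ratings – yet the bias-free ratings track the true teacher effect far better. Stability is simply not validity.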

And we all know that in the current conversations – model consistency trumps model validity. That must change! Above and beyond all of the MAJOR TECHNICAL AND PRACTICAL CONCERNS I’ve raised repeatedly in this blog, there exists little or no incentive, and little or no pressure from researchers (who should know better), for state policy makers or local public school districts to actually try to produce more valid measures of effectiveness. In fact, too many incentives and pressures exist to use bad measures rather than better ones.

NOTE:

The Brookings method for assessing the validity of comprehensive evaluations works best – or only works – with a more stable VAM. This means that their system provides an incentive for using a more stable model at the expense of accuracy. As a result, they’ve sort of built into their system – which is supposed to measure the accuracy of evaluations – an incentive for less accurate VAMs. It’s kind of a vicious circle.


Research Warning Label: Analysis contains inadequate measurement of student poverty

I’ll likely regret writing this post at some point. But this is a really, really important issue and one that undermines a very large number of prominent research studies on the effectiveness of various school reforms, especially when evaluated in high poverty contexts.

I blogged about this a few weeks back – the problems of poverty measurement in educational research. But this issue continues to come up in e-mails and other conversations. And it’s a critically important issue that so many researchers callously overlook. My sensitivity to this issue is heightened by the potential problems emergent from using bad poverty measurement in models to be used for rating and comparing teacher effectiveness.

Here, I pose a challenge to my research colleagues out there.

3 Reporting Rules for Studies/Models Using Crude Poverty Measures

Rule 1: Descriptive/Distribution Reporting of Poverty Measure

If using a single dummy variable to identify kids as qualifying for free or reduced price lunch, include sufficient descriptive statistics to show just how much or how little variance you are actually picking up with this measure. For example, if using this single “low income” indicator, report how many students qualify overall, and how many qualify within each nested group (classroom, school).
If, for example, you’ve got 70% or more of your sample identified with this single “low income” dummy variable, then you are assuming that 70% to be statistically equally poor. If 60% of the classrooms in your sample have 80% or more students who qualify, you are essentially classifying all of those classrooms as statistically similar. HAVE THE INTEGRITY TO POINT THAT OUT.
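Here is a minimal sketch of what that disclosure might look like, assuming a student-level file with hypothetical column names (classroom_id, frl):

```python
# Rule 1 sketch (hypothetical column names): how much variation does the
# single free/reduced-price lunch dummy actually capture?
import pandas as pd

df = pd.read_csv("students.csv")  # assumed columns: student_id, classroom_id, frl (0/1)

print("share of students flagged 'low income':", round(df["frl"].mean(), 3))

class_shares = df.groupby("classroom_id")["frl"].mean()
print("share of classrooms at 80%+ FRL:", round(float((class_shares >= 0.80).mean()), 3))
print(class_shares.describe())  # if the distribution piles up near 1.0, say so
```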

Remember, here’s the variance in % free or reduced lunch across Cleveland schools. Not very useful, is it?

Is Cleveland just a huge outlier?

Well, in Texas, in 2007:

93% of Dallas elementary schools had over 80% free + reduced lunch

84% of Houston elementary schools had over 80% free + reduced lunch

100% of San Antonio elementary schools had over 80% free + reduced lunch

As such, any analysis which uses only this measure to capture variations in economic status of students across schools within these districts should be interpreted with caution.

Rule 2: Reporting of Relationships between Variance in Poverty and Outcome Measures

If using a single dummy variable to identify kids as qualifying for free or reduced lunch, report the relationship between that variable and student outcome measures. We know from various studies that gradients of poverty and household resources have strong relationships with student outcome measures. If, at the classroom or school level, the percent of children who qualify for free or reduced lunch has only a modest to weak relationship with classroom or school level outcomes, chances are your poverty measure is junk. (That is, it is more likely that this finding reflects a flaw in the poverty measure – lack of variance – than that you are evaluating a system where the poverty-outcome relationship has been completely disrupted. Further, to be confident of the latter, we have to fix the former.)

In high poverty settings, your measure may be junk because the share of kids who qualify for free or reduced lunch only varies from about 70% or 80% up to 100%. That is, across nearly all classrooms, nearly all students come from families falling below 185% of the poverty income level. Much of the remaining variation between 80% and 100% is just reporting noise or error.

Any legitimate measure of child poverty or family income status, when aggregated to the classroom or school level, will likely be significantly and systematically related to differences in student outcomes. Report it! If it’s not, the measure is likely insufficient. HAVE THE INTEGRITY TO POINT THAT OUT.
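A minimal sketch of the Rule 2 check, again with hypothetical column names (score being whatever student outcome measure you use):

```python
# Rule 2 sketch: aggregate the poverty dummy to the classroom level and report
# its relationship to outcomes. A weak relationship in a high-poverty setting
# more likely indicts the measure than the poverty-outcome link itself.
import pandas as pd

df = pd.read_csv("students.csv")  # assumed columns: classroom_id, frl, score
cls = df.groupby("classroom_id").agg(pct_frl=("frl", "mean"),
                                     mean_score=("score", "mean"))
print("classroom-level r(%FRL, outcome):",
      round(cls["pct_frl"].corr(cls["mean_score"]), 3))
```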

EXAMPLE

The following two graphs show us how important it can be to explore using alternative poverty thresholds, such as looking at numbers of children falling below the 130% income threshold versus the 185% threshold in a high poverty setting. The goal is to find the measure that a) works better for picking up variation across school settings or classrooms and b) as a result, picks up poverty variation that may explain differences in student outcomes.

Figure 1 shows the relationship between school level % free OR reduced lunch and 8th grade math proficiency in Newark in 2009

While there appears to be a relationship, most schools fall above 80% free or reduced lunch and the relationship between this poverty measure and student outcomes seems surprisingly weak. On the one hand, we could draw the conclusion that all NPS schools are just so high in poverty that it really doesn’t matter (a ridiculous assertion, to say the least) – that all of the kids are poor, that these high poverty levels affect their outcomes similarly, and that the remaining variations are all about good and bad teaching, and charter versus traditional public schools.

Figure 2 shows the relationship between school level % free lunch only and 8th grade math proficiency in Newark in 2009

When we use a more sensitive measure, we nearly double the amount of variation we explain in student outcomes, and we severely undermine those conclusions above. From 40% to 80% free lunch there exists a pretty darn strong relationship with student outcomes. Above that, the relationship still erodes somewhat. But this too might be clarified by using an even stricter poverty threshold or a continuous measure of family income. CLEARLY, IT WOULD BE INSUFFICIENT TO USE THE FIRST MEASURE OF POVERTY – FREE + REDUCED – AS A CONTROL VARIABLE IN AN ANALYSIS OF NEWARK SCHOOLS, OR FOR THAT MATTER AN EVALUATION OF NEWARK TEACHERS.
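The Newark comparison amounts to running the same simple school-level regression with each poverty measure and comparing the variance explained. A sketch, assuming hypothetical columns for the two threshold measures:

```python
# Run the same school-level regression with each poverty measure and compare
# variance explained (R2). Column names are hypothetical stand-ins for the
# 185% (free OR reduced) and 130% (free only) threshold measures.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("newark_schools.csv")  # pct_frl_185, pct_free_130, pct_proficient

for measure in ["pct_frl_185", "pct_free_130"]:
    fit = smf.ols(f"pct_proficient ~ {measure}", data=schools).fit()
    print(f"{measure}: R2 = {fit.rsquared:.3f}, slope = {fit.params[measure]:.3f}")
```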

Rule 3: Reporting of Numbers/Shares of Cases Potentially Affected by Omitted Variables Bias (extent to which crude poverty measure compromises validity of model results)

Let’s say you or I have taken each of these first steps, but we decide to go ahead and conduct our analysis of charter school effectiveness, or ratings of individual teacher value-added anyway, using the single student-level dummy variable for “poorness” (based on free or reduced price lunch). After all, we’ve got to publish something, right? Now it is incumbent upon you (or me), the researcher, to appropriately represent the extent to which these data shortcomings may bias your (or my) analyses.

For example, in an analysis of teacher effects, it would be relevant to report the number and share of teachers with classrooms having 80% or more children who qualify. Why? Because you’ve chosen statistically to assume that every one of those classrooms full of children is statistically the same in terms of economic disadvantage – EVEN WHEN THEY ARE NOT! Those teachers with the lowest income children may be significantly disadvantaged by this “omitted variables” bias in the model.

Why not just report the overall correlation between effectiveness ratings and classroom-level % free or reduced lunch? Yeah… You’re banking on getting that low correlation between teacher effectiveness ratings and % low-income, so you can say your ratings aren’t biased by poverty. Not so fast. You’re likely wrong in making that assertion, given the data. Instead, what you’re showing is that your really crappy poverty measure simply failed to pick up real differences in economic status across classrooms, and thus failed to correct for differences in true economic status of students when determining teacher ratings. And then, your crappy poverty measure remained uncorrelated with the biased estimates it helped produce. Really helpful, eh?

Fess up to reality, and report the number of teachers across whom your model does not effectively control for economic status differences among students – all teachers with classrooms that are, say, 80% or more free or reduced price lunch. HAVE THE INTEGRITY TO POINT THAT OUT.
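A sketch of that disclosure, with the same hypothetical student-level columns as above:

```python
# Rule 3 sketch (hypothetical columns): count teachers whose classrooms the
# single FRL dummy treats as economically identical (80%+ qualifying).
import pandas as pd

df = pd.read_csv("students.csv")  # assumed columns: teacher_id, classroom_id, frl
rooms = df.groupby(["teacher_id", "classroom_id"])["frl"].mean().reset_index()

affected = rooms.loc[rooms["frl"] >= 0.80, "teacher_id"].nunique()
total = rooms["teacher_id"].nunique()
print(f"{affected} of {total} teachers ({affected / total:.0%}) teach classrooms "
      "the model cannot differentiate on economic status")
```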

Here are the factors in the NYC Value-added model. How many teachers have classrooms that are treated as statistically equivalent when they are not? Any teacher effectiveness model applied in a high poverty setting – like a large urban district – that relies solely on the single “low-income” dummy variable – is likely entirely invalid for making comparisons across very large shares of teachers included in the model.

EXAMPLE

So, could we really draw wrongheaded conclusions by using insensitive poverty measurement, and by not checking and fully reporting on distributions? Here’s one example of how we might make stupid assertions, using data from 2007 on schools in the Cleveland metro and in the City of Cleveland.

Figure 3 shows the relationship across all elementary schools in the metro and in Cleveland city between % free or reduced lunch and percent passing state assessments

Now, let’s assume that we are trying to figure out if, for some reason, Cleveland has been unusually successful at disrupting the relationship between % free or reduced lunch and student outcomes, and we wish to compare the relationship within Cleveland to the relationship across all schools surrounding Cleveland. If we didn’t do the visual above, we might miss something huge (actually, given the Cleveland quirk – 100% of schools at 100% free or reduced – we likely wouldn’t miss this, but in other less extreme cases we might). Here the pattern shows a very strong relationship between % free or reduced lunch and student outcomes across all schools, and absolutely no relationship between free or reduced lunch and outcomes in Cleveland – A freakin’ miracle! BUT IT’S ENTIRELY BECAUSE THERE’S NO FREAKIN’ VARIATION IN THE POVERTY MEASURE WITHIN CLEVELAND!
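The underlying check is trivial: compute the poverty-outcome correlation metro-wide and again within the city, alongside the within-city variation in the measure. A sketch with hypothetical columns:

```python
# Cleveland-style check (hypothetical columns): the poverty-outcome correlation
# metro-wide versus within the city alone. If the city has essentially no
# variation in % free or reduced lunch, the within-city correlation collapses
# toward zero: range restriction, not a miracle of disrupted poverty effects.
import pandas as pd

schools = pd.read_csv("metro_schools.csv")  # pct_frl, pct_passing, in_city (True/False)
print("metro-wide r:", round(schools["pct_frl"].corr(schools["pct_passing"]), 3))

city = schools[schools["in_city"]]
print("within-city r:", round(city["pct_frl"].corr(city["pct_passing"]), 3))
print("within-city std of %FRL:", round(city["pct_frl"].std(), 4))  # near zero = no signal
```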

We can easily use this same pattern to our advantage to show that the state of Ohio has made progress on the distribution of funding by poverty across schools, but that Cleveland and other cities have not followed through, and are the real problem. That is, that funding per pupil is more tightly related to poverty between districts than across schools within districts. States have fixed the between district problem, but cities have not fixed the within district problem. This is a common Center for American Progress and Ed Trust claim (which is completely unfounded).

Figure 4 shows the estimation of the within and between district funding-poverty relationships for the Cleveland area, in a (completely bogus) way that supports the CAP and Ed Trust claim.

Yes, Cleveland provides an absurd extreme. But this same problem occurs when comparing any city where variation in the poverty measure across schools ranges from 80% to 100% and where variation in the poverty measure across districts ranges from 0% to 100% (see the Newark example above).

No more excuses

The problem for researchers and evaluators is that states maintain multiple data systems that don’t always include the same gradients of data precision. We can find in STATE SCHOOL REPORTS – SCHOOL AGGREGATE DATA systems, information on the numbers and shares of school enrollment that are free lunch, reduced lunch, and sometimes other indicators such as homelessness. But, these data are not included in the STUDENT LEVEL DATA SYSTEM LINKED TO ASSESSMENT OUTCOMES. Instead, those data systems which must be used for value-added modeling or for measuring effectiveness of specific reforms, such as enrollment in charters, include only a handful of simple indicator variables about each student.

Therein lies the typical research excuse – one that I use as well. “It’s what we have! You can’t expect us to use something better if we don’t have it!” No, I can’t. No, we can’t expect you (or me) to use something better if we don’t have it. BUT WE CAN EXPECT AN HONEST REPRESENTATION OF THE SHORTCOMINGS OF THESE DATA. And those shortcomings are HUGE, and the stakes are HIGH, especially when we are using these data to compare teacher effectiveness and determine who should be fired, or when we are asserting that charter schools are more effective with low income students (if they aren’t actually serving the lower income students).

Readers: Please send along examples of recent prominent studies where the reported statistical model uses only a single indicator for free or reduced lunch to control for either or both a) differences across individual students and b) differences in peer groups, or classroom level effects.

WHAT ABOUT LOS ANGELES, WHERE THE LA TIMES MODEL USED ONLY A SINGLE DUMMY VARIABLE ON FREE+REDUCED LUNCH (actually, the technical report refers ambiguously to students qualified for Title I, with no definition of the variable at all! http://www.latimes.com/media/acrobat/2010-08/55538493.pdf)?

Well, the vast majority of Los Angeles elementary schools have over 80% children qualifying for free + reduced lunch, suggesting that this measure simply won’t capture relevant variation across settings. The majority of LA schools (and classrooms within them) will be treated as statistically equivalent in terms of poverty in a model which only identifies poverty  by free + reduced lunch. (data are from the NCES Common Core for 2008-09)

Simply adjusting the poverty threshold downward to the free lunch cutoff spreads the distribution – capturing considerably more variation across schools:

Still, the majority of LAUSD elementary schools are over 80% free lunch, indicating that even this measure is likely not sufficiently sensitive to underlying differences in poverty/economic status. Again, it is simply an irresponsible assertion to claim that these schools – which have over 80% of children falling below 130% or 185% of the poverty income level – are pretty much the same. Using a statistical model that claims to correct for economic status, but uses only this measure to do so, depends on that irresponsible assertion! At the very least, this is an assertion that requires considerably more investigation.

Blank Slate: Private School Leaders Step Up!

I’ve noted on several occasions on Twitter (@schlfinance101) and on my blog that I am actually a supporter of high quality private independent schools. In the 1990s, I was a middle school science teacher at The Fieldston School in Riverdale, NY.  That experience sticks with me to this day as I write about public education policy issues. In fact, Fieldston helped provide the financial support for the pursuit of my doctoral studies at Columbia University, and for that I thank them. Yes, they helped pay for the meaningless advanced degree that eventually led me to leave (so perhaps it was worth it for them?).  Being at a school that supported my own academic/intellectual endeavors was important for me, and I expect I’m not alone in that regard. In high school, I summered at Phillips Exeter Academy after my sophomore year. I attended an expensive, competitive small liberal arts college (Lafayette, in PA – more on that at a later point in time).  I’ve spent much of my time around private education, in particular, the more elite tiers of private education. I have no shame in those affiliations (generally speaking), and on some occasions, I’m actually proud of it.

I have a genuine appreciation for what these institutions can offer. I am by no stretch of the imagination a private-school-basher (as some would characterize anyone who dares point out that good private schools often spend much more per child than nearby public schools). Anything but. I am a realist. I am an analyst. I have written extensively about private school spending and characteristics in this report: http://nepc.colorado.edu/publication/private-schooling-US

That said, this blog post is intended to START a conversation. It is an invitation – specifically, an invitation to headmasters, deans and other administrators and board members at leading private independent schools around the country. You can e-mail me officially, by name and school affiliation, or you can, if you choose, remain anonymous, as long as you are willing to allow me to at least list your “title” and a brief descriptor of the school you represent (for example: Head of Upper School, Highly Selective Independent Day School in Northeastern City). There are two issues I invite you to address:

  1. What is your perspective on the importance of class size, either from the perspective of “effectiveness” (on student outcomes) or marketing? Do you feel that class size is important? Why? What drives your decisions about class size in your school? Feel free to stray outside these narrow questions.
  2. What are your thoughts on the recruitment, selection, retention, evaluation and compensation of teachers? (yeah… that’s a lot, but feel free to focus on one or two). What is your ideal approach to teacher evaluation?  What is the current approach in your school, and what are the strengths/weaknesses? Have you changed that approach over time? Who are the key players in the evaluation process and what are their roles? How are evaluations used (dismissal?).   How is compensation structured? Is it performance based and if so, by what types of measures?  Feel free to elaborate on other related issues not listed here.

You may use the comment section below, or you may e-mail me at educpolicy@gmail.com. If you post in the comments below, you must provide me with a valid e-mail for determining that you are, in fact, who you claim to be. Comments are held for approval. If you wish to remain anonymous, send an e-mail to the above address and provide me with the relevant Title and School Descriptor for how you wish to be identified (that is, not identified). Identify specifically which information in your e-mail you wish for me to post (and, more importantly, anything you want to say but don’t want posted).

Thanks!

Bruce D. Baker

============================

Ron Reynolds of The California Association of Private School Organizations Responds, with a focus on CLASS SIZE:

============================

Dr. Baker,

Sorry this took me so long, but work beckons…

In the interest of full disclosure, I am neither a headmaster, dean, administrator, nor board member of “a leading private independent school,” nor do my views necessarily reflect or represent those of any such persons.  I am the executive director of the California Association of Private School Organizations, a statewide association of private school administrative units and service agencies affiliated with the Council for American Private Education.  The views that follow are my own.

Setting aside the question of what criteria designate a “leading” private independent school, independent schools, whether “leading” or otherwise, comprise a relatively small, if remarkable segment of the nation’s private school universe.  Such schools (which I regard as schools affiliated with the National Association of Independent Schools) account for roughly 5 percent of all private schools in the United States, and 11 percent of all private school students enrolled in any of grades K-12.

Journalists frequently write of independent schools as if they were representative of the entire private school universe.  While misleading, the tendency is, to some extent, understandable, given that the National Association of Independent Schools collects and maintains an impressive array of data that is largely inaccessible, or nonexistent for the broader U.S. private school universe.

You, Professor Baker, are no stranger to this problem.  As Willie Sutton did with banks, so did you resort to using private school tax returns as your principal source of information for the paper referenced in your invitation (“Private Schooling in the U.S.: Expenditures, Supply, and Policy Implications”) because that’s where the data are.  While the use of figures contained in IRS Form 990 reflects an admirable degree of ingenuity, such creativity comes at the cost of generalizability – a lacuna which you, admirably, observed.  One piece of information I believe you failed to mention, however, is that private schools operating on a for-profit basis appear to be completely excluded from your analysis.  Such schools do not comprise an insignificant sub-group.  In the state of California, for example, for every independent school there are more than five private schools operating on a for-profit basis (though independent schools, in the aggregate, enroll a greater number of students).

You, of course, are in no way responsible for the absence of such data, and I, to the extent that I am a representative of the broader private school community, owe you and others a mea culpa.  Regrettably, the lack of such data also complicates determinations of class size.  In order to address the issue of class size from a broadly inclusive private school perspective, it is necessary to use student-to-teacher ratios as a proxy measure.  I am cognizant that such a proxy presents certain problems, just as you recognized that the use of IRS Form 990 data was less than ideal.

With the preceding caveat in mind, NCES data place the student-teacher ratio for all U.S. public schools in 2007-08 at 15.7.  Some readers will reflexively respond to this datum by saying, or thinking that such a figure is misleading.  After all, not all teachers included in the computation of the ratio are assigned to (regular) classrooms.  A great many, for example, work with children presenting various types of special needs.

While such a qualification undoubtedly possesses merit, it must also be noted that the reduction in class size attributable to the inclusion of special education teachers comes at a considerable cost.  The federal government, for example, currently allocates $11.3 billion (the vast majority of which flows to public schools) through the Individuals with Disabilities Education Act, to support the provision of special education and related services.  I’m not sure whether you, Professor Baker, included such funding in your computation of public school expenditures, but $11.3 billion would provide roughly enough tuition to nearly double the current national Catholic school enrollment.

All of the above is offered by way of cautionary preface to the qualification that any comparative discussion of class size is subject to various contextual and methodological considerations that can prove problematic.  That having been said…

While smaller class sizes, relative to public schools, have long been a hallmark of American private education, significant variability can be inferred to exist within the private school universe.  For example, the (FTE) student-teacher ratio for all California private schools in 2009-10 was 12.5.  Among independent schools, the figure was 9.4.  In for-profit private schools the ratio was 7.8, while among the state’s Catholic schools it was 18.7 – eclipsing the national public school ratio cited above, but falling short of California’s public school student-teacher ratio of 20.8.  (These ratios have been computed using California Department of Education data for 2009-10.)

Magnitude of enrollment appears to be positively correlated with student-teacher ratios.  (Yeah, I know.  D’uh!  But independent schools may present an exception to this observation which, if true, invites comment.)  Among all California private schools with a total enrollment in excess of 100 students, the student-teacher ratio was 13.9, while the ratio was 14.5 for schools with enrollments of more than 250, and 14.8 for schools with enrollments exceeding 500.

Religious orientation would also appear to be a significant factor.  Fully fifty percent of California’s total private school enrollment is located in religious schools with total enrollments exceeding 250.  In these schools, the student-teacher ratio is 16.1 – a figure that is higher than the NCES national public school figure cited above.  Obviously, the generally lower levels of tuition charged by schools whose religious mission includes making their educational program available to every family seeking access tends to reduce financial barriers to enrollment and contribute to presumptively larger class sizes.  Which points to the expectation that an inverse relationship exists between tuition and class size.  (Alas!  If only comprehensive tuition data were available.)

Independent schools, in which tuition is generally higher than that associated with most religious schools (though it must be noted that some religious schools are also classified as independent schools) serve to underscore the preceding observation.  Among California independent schools with enrollments in excess of 500 students, the overall student-teacher ratio is 9.9, fully a third lower than the figure for the remainder of all California private schools with similarly robust enrollments.

At this point, I envision you, Professor Baker, muttering: Now you know why I focused on independent schools in the first place!  So, allow me to tell you, at long last, what I think it is that is being offered/purchased for the money.

The premium paid by private school parents is part of a complex value proposition in which inducements to participation must outweigh associated sacrifices.  Several components of this value proposition involve class size.  As I see it, these components include the provision of an augmented curricular program, and increased access to instructional staff by both students and parents.

You, Professor Baker, taught at The Fieldston School.  A check of that school’s website reveals that Fieldston offers its lower school students a wood shop program, provides dance and visual arts classes to its middle school students, and affords its high school students the opportunity to study Greek, Latin, and/or Mandarin Chinese, in addition to French and Spanish.  Along with its additional courses, the school offers a specialized, pervasive approach to instruction that is expressed as follows:  “At every grade we teach common beliefs such as understanding multiple perspectives, seeing the world beyond the self, creativity and imagination, developing habits of justice, fairness, and empathy, respect for all people and points of view, and a critical approach to decision-making.”

I don’t think it’s much of a stretch to assume that most independent schools offer a more robust variety of classes across the curriculum, and particularly in the arts and humanities, than is generally the case in both public schools and other private schools.  The provision of an augmented curriculum is driven by a combination of factors that include various visions of what is entailed by a robust humanistic education that endeavors to shape the whole person, market demand, and resources.

Your research found that Jewish day schools tended to spend more, per pupil, than other categories of private schools.  These schools generally offer not only a full complement of secular studies courses, but instruction in Hebrew language, bible, rabbinic literature, Jewish history, customs and holidays, and Israel studies.

To some extent, then, I would argue that smaller class size in private schools is a by-product of a more robust prescribed curriculum, coupled with a great number of elective offerings.

I also believe that in exchange for the tuition premium paid by parents there exists an expectation of greater access – both by students and parents – to the instructional staff.  While many in the private school community often view teachers in religiously-oriented schools as members of a family – a voluntary community that coalesces around a shared faith and common core of values, the same is often true of independent school faculty members who identify deeply with the culture and vision associated with their particular school.

In both independent and religious private schools, the expectation of enhanced teacher availability, responsiveness, and commitment is often deeply engrained in the culture of the institution.  Private school parents often possess teachers’ phone numbers and e-mail addresses, and frequently contact them after school hours.  Teachers are expected to provide more robust and time consuming forms of student evaluation, ranging from extended homework assignment feedback, to participation in more frequent student and parent conferences, to in-depth written assessments of student portfolios and/or journals, participation in child study meetings, and extensive written documentation of student growth and academic progress.   Smaller class size is thus, to some extent, a by-product of enhanced labor-intensive expectations held by parents, those involved in school governance, and teachers, themselves.

Best,

Ron

Dr. Ron Reynolds

Executive Director

California Association of Private School Organizations

15500 Erwin St., #303

Van Nuys, CA 91411-1017

==================

My personal response on a few points above:

==================

Yes, a major issue in making comparisons between public and private schools is that private schools – because they are less regulated – are simply more varied. This point is, as Ron Reynolds notes, missed by most. As my report discusses, some private schools significantly outspend publics and some spend much less. Some have much smaller classes, and some much larger. Some pay their teachers much less, and some pay comparably (few private schools pay their teachers much more – the additional money is more often leveraged for broader/deeper curriculum).

Also, it is often the case that the biggest differences in private school class size are not so much a function of smaller elementary grade classes (they are smaller, but not usually half the size), but rather a function of private schools offering a diverse array of elective courses at the secondary level.

Now, from a personal perspective, I agree that the expectation of parental involvement/interaction is greater in the private school setting, especially in a private independent school like Fieldston. However, I would argue that there are some significant counterbalancing factors. For example, at Fieldston, my teaching time consisted of sixteen 45-minute periods per week – 4 sections, each meeting 4 times weekly. Each section typically had fewer than 20 students. On top of that I had 10 to 12 advisees – from among my total student load of fewer than 80. Maintaining active contact with the parents of 12 students, and being responsive to the needier parents from among the 80 students, is much less of a task than most public (or Catholic school) teachers would face if expectations were similar. It was far fewer students than I would have had if I were teaching middle school science in a public school with 6 classes, meeting every day of the week, and 25 kids per class. And there was a lot more time in my day to make contacts. These may be important structural issues to explore. But they all come back to pupil-to-teacher ratio.

Graph of pupil to teacher ratios over time: https://schoolfinance101.com/wp-content/uploads/2010/10/slide23.jpg

Bruce

==================

Dr. Baker,
“What is your perspective on the importance of class size, either from the perspective of “effectiveness” (on student outcomes) or marketing? Do you feel that class size is important? Why? What drives your decisions about class size in your school? Feel free to stray outside these narrow questions.”
STAR Prep Academy is a small school by design and we cap all classes at ten students.  We do this for the following reasons: 1) We believe quite strongly in differentiation.  With a smaller class we can use information about student interests and abilities to differentiate instruction within the class.  With a larger group, teachers have difficulty working on diverse projects that match student needs.  Furthermore, smaller class sizes reduce paperwork, total student load, time spent passing out papers, etc.  Many schools use small class size as a marketing tool, but if they do not actually utilize those small class sizes, it is just a number.
“What are your thoughts on the recruitment, selection, retention, evaluation and compensation of teachers? (yeah… that’s a lot, but feel free to focus on one or two). What is your ideal approach to teacher evaluation?  What is the current approach in your school, and what are the strengths/weaknesses? Have you changed that approach over time? Who are the key players in the evaluation process and what are their roles? How are evaluations used (dismissal?).   How is compensation structured? Is it performance based and if so, by what types of measures?  Feel free to elaborate on other related issues not listed here.”
Teacher evaluation should be done in a developmental manner, allowing veteran teachers an opportunity to share their knowledge base and guide their own development, while new teachers receive more formative guidelines.  While we do not currently use evaluations for compensation, we do consider this a key component for the future.  We would also add adjunct duties, participation in outside events and other criteria to the compensation model.  Annual raises, outside of COLA, do not seem to be appropriate within our environment.
Regards,
Zahir Robb
Head of School
STAR Prep Academy
10101 Jefferson Blvd.
Culver City, CA 90232
(310) 842-8808


ConnCan Cluelessness

Or is it just a school finance Conn-job, in a CAN?

In their response to my Think Tank Review of Spend Smart: Fix Our Broken School Funding System,  ConnCan asserts that I claim that Connecticut’s school finance formula is not broken.  (see: http://ht.ly/4BknI)

As I state in my report, it’s not that the formula is not problematic, but that ConnCan fails to make any reasonable case that it is – even though it is. Their analysis is simply too shoddy, weak, and incompetent to validate that the system is broken, or how it is broken. I explain:

There may in fact be legitimate concerns over the equity and adequacy of funding to Connecticut schools as a result of significant problems with the Education Cost Sharing Formula. However, the ConnCAN Spend Smart report provides little or no supporting evidence for their claim that the system is broken or how their proposals would be an effective solution if it indeed is in need of repair.

I actually show some of the problems in my brief, and have shown these problems in the past. My point in the critique is that ConnCan’s shoddy brief does little to help one understand the problems with the CT school finance system, and in fact provides multiple distractions and significant misinformation.

ConnCan also asserts that I claim that their proposal would harm low income children. Rather, I assert that ConnCan recommends only a relatively low weight for children qualifying for free or reduced price lunch and that they ignore entirely districts with high concentrations of LEP/ELL children.

ConnCan argues that I unfairly suggest that they oppose weighting for LEP/ELL children. While they do hold open the possibility that those children might receive supplemental funding in the future, they also suggest that they have done analysis already, or know of analysis, that indicates that it probably isn’t necessary. This suggestion is not backed by anything, and is completely irresponsible.

Here’s their footnote on this point:

“The formula could also hypothetically provide weights for other student needs, such as English Language Learner status. However, data shared by Connecticut State Department of Education with the State’s Ad Hoc Committee to Study Education Cost Sharing and School Choice show that the measure for free/reduced price lunch also captures most English language learners. In other words, there is a very strong correlation between English language learner concentration and poverty concentration in Connecticut. In addition, keeping the formula simple allows a more generous weight for students in poverty” (p. 7, FN 12).

And here’s my response to their footnote:

This finding is cited only ambiguously in a footnote to data shared by CTDOE. In some states, a strong relationship between the two measures might warrant collapsing supplemental aid for LEP and low-income children into one student-need factor—with sufficient additional support to meet the combination and concentration of needs. However, a quick check of the data in Connecticut shown in Figure 1 (below) reveals that several districts have disproportionately high LEP concentrations relative to their low-income concentrations—specifically Norwalk, Danbury, New London, Windham, Stamford and New Britain. (figure in review)

And:

The overall correlations between ELL concentrations and subsidized lunch rates are not sufficiently strong (only a 0.50 correlation in 2008-2009) to select a single factor for addressing both needs. Nor does the report offer any actual analysis in drawing this conclusion (see Table A1, Appendix). Table A1 in the Appendix to this review provides a quick check of the correlations between wealth measures, income measures and student populations for 2005 and 2009.

That nitpicking aside, my big concern with the ConnCan report in this regard is that they provide absolutely no support for any of their recommendations, and in some cases state as fact conclusions that turn out to be FLAT OUT WRONG.

I explain:

Further, some of the statements and recommendations made in the report, such as those pertaining to LEP/ELL children, are simply wrong. And these factual mistakes have significant consequences for the validity of the report’s recommendations. By combining the ELL mistake with the proposal that “money follow the child” (the weighted student funding formula), the report’s recommendations would apparently be a boon to advocates for charter expansion. However, the weighted funding formula is a tangential argument at best, not supported by any of the claims in the report, and one that seeks to divert significant resources from schools with the highest demonstrated needs.

Finally, regarding the issue of poverty and driving money to charters, ConnCAN seems to not fully understand how their own proposal works – which I guess doesn’t really surprise me. Let’s break it down:

  1. CT charters serve fewer free lunch kids (<130% poverty level) than their host districts, but serve relatively more free or reduced price lunch (<185% poverty level) kids (they have more of the less poor among the poor).
  2. Take any given sum of money and distribute it by free + reduced lunch counts, and charters make out better. In a zero-sum re-allocation, putting a smaller weight on the broader free or reduced lunch category, rather than a larger weight on free lunch alone, shifts some of that money to charters.
  3. CT charters have very few ELL/LEP kids, so they wouldn’t benefit from a weight on those kids.
  4. Arguing not to have a weight on ELL/LEP kids, and instead to reallocate that sum of money to the free or reduced price lunch weight, drives that money into charters – as well as into other districts with higher free and reduced price lunch shares but fewer ELL/LEP kids. THIS IS EXACTLY WHAT THEY ARGUE FOR! (see the footnote quoted above)

If we assume state finance formulas work within fixed budget constraints (which they do), this strategy, based on the lie that no ELL/LEP weight is needed, effectively robs the ELL/LEP populations to subsidize the less poor among the poor. This is a classic weight-shifting game, as illustrated below.
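To make that zero-sum arithmetic concrete, here’s a toy illustration – invented enrollment counts and weights, not Connecticut’s actual formula or data:

```python
# Toy zero-sum illustration: invented enrollment counts and weights, NOT
# Connecticut's actual formula or data. A fixed pot of need-based aid is
# distributed either by free-lunch counts (deeper poverty, larger weight)
# or by free-or-reduced counts (broader category, smaller weight).

schools = {
    "district": {"free": 800, "frl": 900},  # mostly free-lunch: deepest poverty
    "charter":  {"free": 300, "frl": 900},  # same FRL count, far fewer free-lunch
}

w_free = 0.90  # weight under the free-lunch-only scheme
total_free = sum(s["free"] for s in schools.values())
total_frl = sum(s["frl"] for s in schools.values())
w_frl = w_free * total_free / total_frl  # smaller weight, same total pot (zero-sum)

for name, s in schools.items():
    by_free = w_free * s["free"]   # aid if the weight rides on free lunch only
    by_frl = w_frl * s["frl"]      # aid if the weight rides on free-or-reduced
    print(f"{name}: by free-only = {by_free:.0f}, by free-or-reduced = {by_frl:.0f}")
# district: 720 -> 495 (loses aid); charter: 270 -> 495 (gains the same amount)
```

Same pot of money, same schools; the only change is which poverty category carries the weight, and the aid moves from the school serving the deepest poverty to the charter.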

For the complete review, see: http://nepc.colorado.edu/files/TTR-ConnCan-Baker-FINAL.pdf

Previous policy brief on CT School Finance & Money Follows the Child: CT and Money Follows the Child

Graph of the Day: Private School Day Tuition vs. Public School Expenditures (Boston Metro 2009)

I’ve written extensively in the past about private school tuition and expenditures.

Here is a link to a report on private school expenditures I produced in 2009. http://nepc.colorado.edu/publication/private-schooling-US

The graph below is actually stacked heavily in favor of showing that public schools have higher spending than private schools. Why? Because I am comparing private school tuition to public school total expenditures per pupil.

TUITION DOES NOT COVER TOTAL COSTS AND DOES NOT REPRESENT TOTAL SPENDING. The tuition figures included below include only DAY SCHOOL TUITION (or day component for boarding schools) which is only a share of current operating expenditures.

That out of the way, let’s take a look at the distribution of day tuition for the 57 private schools identified in 2009 by Boston Magazine as the “best” in the Boston Metro – broadly defining the Boston Metro (extending pretty far out). These schools collectively serve over 24,000 students. Let’s put their tuition into context by comparing it with the distribution of total per pupil expenditures as reported for public districts in the Boston Metro by the Massachusetts Department of Education.

Note that the vast majority of private independent schools had tuition in 2009 exceeding $30,000, yet few if any public districts spent anywhere near that much. A handful of Catholic private schools charged tuition under $10,000, whereas the majority of public districts in the Boston metro spent closer to $10,000 than to $20,000.

The average pupil-weighted total expenditure per pupil for public districts in the figure is $12,966 (with Boston as a standout, but under $20k)

The average pupil-weighted day tuition for private schools in the figure is $22,337 (but clearly bimodal)

A trip to the Reformy Education Research Association?

So, as I head off to AERA in New Orleans, I’ve been pondering what it would be like if there were a special education research conference for reformy types.  What would we find at the Reformy Education Research Association, RERA? How would the research be conducted or presented? What kinds of research thinking might we see?

Well, here are a few examples.

Reformy Study #1

First, here’s a table from the widely distributed paper by a team of renowned authors at the Forum on Understanding Core Knowledge in EDucation.

As you can see, the study endeavors to identify the determinants of school failure, in part, to identify those specific policies that must be changed in order to eliminate failing schools from our society. Failing schools are, after all, an abomination.  The researchers ranked New Jersey schools from highest to lowest proficiency rates and took the top and bottom 10%. They then mined the content of the negotiated contractual agreements for each district, looking for key elements of those contracts that might explain why some districts fail while others perform quite well (as good as Finland!). They also gathered basic demographic data on students, having been dinged by reviewer #3 (an outsider) on their proposal, in which they had not included such data. The authors note, however, that including these data did not alter their original conclusions or policy implications.

Conclusion: The cause of some schools failing and others succeeding is clearly the absence of regular use of clear metrics for teacher evaluation and the absence of mutual consent school assignment policies. It is also likely that basing salaries on experience or degree level adds to the dysfunction of low performing schools.

Policy recommendation: Immediately implement a new teacher evaluation system based 50% on student assessment data. Prohibit the use of experience or degree level as a basis for compensation.

Reformy Study #2

In this next study, authors from the Belltower Institute for Technology Education and Modern Enterprise explore the scalability of a nationally recognized model for charter schooling. Specifically, the goal of the study is to determine whether the model, which has received accolades in major newspapers and on network television (Reformy Nation) over the past year, might be a useful model for replacing entire urban school systems.  Table 2 below shows the characteristics of one successful charter school (sufficient data unavailable on the 3 less successful charters in the same network) operating the model, and the characteristics of the urban host district of that charter school. Deliberations are under way in that district to grant the charter operators full control of all schools in the district. Data in the table focus specifically on children in Grades 6 to 8, the only grades served by the charter.

Clearly, the charter not only outperforms the host district schools in grade 6, but by an even larger margin in grade 8, which can only be interpreted (emphasis in original manuscript) as the charter school adding more value to students with each year that they stay (setting aside the possibility that large shares of those students who are no longer in attendance by 8th grade may have been lower performers).

Again, original analyses included only student assessment scores, and no further information on student population characteristics. Amazingly, the original proposal got dinged by the same reviewer #3 as the study above, but reviewers #1 and #2 found the proposal to represent the highest standards of reformy rigor.

The authors continue to maintain that this information is unimportant, because the charter populations are necessarily representative of the host district, given that a lottery is used for admission to the charter. Nonetheless, the authors contend that the reported differences in student populations and cohort attrition are “trivial.”

Conclusion: Clearly, the charter school has proven that it is able to produce far better results than host district schools while serving the very same children (emphasis in original manuscript) as those served by host district schools, and by using its “no excuses” approach.  Further, children’s performance improves the longer they attend the charter school.

Policy recommendation:  Set in place a strategy to turn over all host district schools, across all grade levels to the charter operator.

Reformy Study #3

In the third and final paper, economists from the Measuring Yearly Advancements in Social Science project released preliminary findings from a massive privately funded study on teacher effectiveness. Specifically, the study endeavors to determine the correlates of effective teaching, in order to guide public school district personnel policies – specifically hiring, retention and compensation decisions. The study involved 22,543 teachers (326 of whom had complete data on all observations) across 6 cities (4 of which failed to provide sufficient data in time for this preliminary release).  Using two years of data on students assigned to each teacher (using only the 4th grade math assessment data, because correlations on language arts assessments were too unreliable to report), the study investigated which factors are most highly related to a TRUE measure of teaching effectiveness – where true “effectiveness” was defined as the contribution of Teacher X to achievement growth in 4th grade math on the STATE assessment for students S1 – Sy, linked to that teacher in the given year (Equation expressed in Appendix A, pages 69-74).  The same students were also given a second math assessment. School principals conducted observations 5 times during the year and filled out an extensive evaluation matrix based on teacher practices and student–teacher interactions. Students were also administered surveys, as were parents of those students, requesting extensive feedback regarding their perceptions of teacher quality. The correlations are shown in Table 3.

Conclusions & Implications: The strongest correlate of true teaching effectiveness was the estimate of teacher contribution to student achievement on the same test a year later. However, this correlation was only modest (.30). All other measures, including effectiveness measures based on alternative tests and student, parent and administrator perceptions of teacher effectiveness, were less correlated with the original value-added estimate, thus raising questions about the usefulness of any of these other measures. Because the value-added measure turns out to be the best predictor of itself in a subsequent year, this estimate alone trumps all others in terms of usefulness for making decisions regarding teacher retention (especially in times of staffing reduction), and should also be considered a primary factor in compensation decisions. Note that while it may appear that school administrators, students and their parents have highly consistent views regarding which teachers are more and less effective (note the higher correlations across administrator ratings of teachers and student and parent ratings), we consider these findings unimportant because none of these perception-based ratings were as correlated with the original value-added estimate as the value-added estimate was with itself (which, of course, is the TRUE measure of effectiveness).
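A quick way to see why “best predictor of itself” proves nothing: simulate it. The toy sketch below is mine, not anything from the study, and every number in it is invented. If a value-added estimate is mostly noise wrapped around some true teacher effect, then two years of the same noisy measure will correlate at about .3 with each other, while an equally informative measure built on a different test will correlate less with it, purely by construction.

```python
# Toy simulation (my own sketch; all parameters invented) of why a noisy
# measure "predicts itself" better than it predicts anything else.
import numpy as np

rng = np.random.default_rng(0)
n = 5000                                  # hypothetical teachers
true_effect = rng.normal(0, 1, n)         # the effect everyone is trying to measure

def noisy(signal, error_sd=1.5):
    """A measure = signal + a lot of measurement error."""
    return signal + rng.normal(0, error_sd, n)

vam_year1 = noisy(true_effect)            # VAM on the state test, year 1
vam_year2 = noisy(true_effect)            # same test, same noise structure, year 2
alt_test = noisy(0.8 * true_effect + 0.6 * rng.normal(0, 1, n))  # different test,
                                          # partly different construct

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"VAM yr1 vs VAM yr2:  {r(vam_year1, vam_year2):.2f}")   # ~.31: shared machinery
print(f"VAM yr1 vs alt test: {r(vam_year1, alt_test):.2f}")    # ~.25: lower by construction
```

Nothing in this simulation makes the year-2 VAM more “true” than the alternative test or the perception measures; it simply shares more machinery with year 1. Declaring the self-correlation the gold standard rewards the measure for its own noise.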

School Funding Deception Alert! (in a CAN)

I’ve noticed a pattern in a few recent school funding proposals, mostly shoddy, haphazard proposals developed on behalf of the CANs (ConnCAN & its close relatives), often with the “technical support” of Bryan Hassel of Public Impact. Let’s call it school finance reform in a CAN.

These new simplified school funding formula proposals, framed under the “money follows the child” ideology, are intended to make state school funding formulas more “transparent” and to allow for a more equitable and predictable flow of funding to charter schools or other non-district schools.

In each proposal (ConnCAN’s Spend Smart & The Tab, or Rhode Island’s new formula [albeit laced with other problems unique to RI; see post]), alongside a variety of other major overlooked factors and arbitrary, unfounded recommendations, sits a seemingly innocuous proposal regarding how to target aid for variations in student needs across districts.

As the authors of ConnCan’s recent Spend Smart brief explain (deeply embedded in a footnote), you really only need a single factor to get state aid targeted to the right schools, and that factor is the share of children qualifying for FREE OR REDUCED PRICE LUNCH. There’s no need for a separate factor for limited English proficient/English language learner populations, or anything else; it’s all pretty much correlated with free and reduced lunch. (Hassel’s previous report for ConnCan, The Tab, included a trivially small LEP/ELL weight instead of none at all.)

First, this assumption is patently wrong, and it is never actually validated by the authors of these proposals. But let’s set that aside for the moment. I’ll have a future post where I use actual data to show just how freakin’ wrong the assumption is.

But why would they propose this anyway? Well, it turns out to be really simple. If a state has a fixed sum of money to distribute (generally how it works), the CAN game is to figure out on what basis might charter schools get the maximum share of that money – regardless of who really needs it most. That is, what measures CAN they choose for weightings which will drive money to charters. Charter schools do tend to operate in poorer communities (relative to state averages), but a) serve the less poor among the poor, b) serve few or no LEP/ELL children, and c) incidentally, also serve few or no children with disabilities (as has been addressed on my blog regarding NY and NJ charter schools, and will be addressed soon regarding CT charters – numbers already run, charts forthcoming).  I’ll set aside c) for now.

So, the way to maximize charter funding is to give a single weight for children qualified for free OR REDUCED PRICE LUNCH, and to eliminate any weight for LEP/ELL children (or make it as small as possible). That way, charters will get the same weight for kids whose family income falls between 130% and 185% of the poverty level as neighborhood schools get for children below the 130% level (this distinction is NOT TRIVIAL), even though neighborhood schools have far more of the lower-income children. Any money that would have gone to LEP/ELL children can be rolled into a bigger weight for free/reduced lunch children, channeling a larger share of the total funding available to charter schools.
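Here’s a minimal sketch of that arithmetic. Every enrollment count, weight and base amount below is invented for illustration – none of it comes from any actual brief or formula – but the mechanics are the ones described above.

```python
# Hypothetical sketch of the weighting game described above. All counts,
# weights and the base amount are invented for illustration.

BASE = 11000  # assumed base per-pupil amount

def supplemental_aid(n_free, n_reduced, n_ell, frl_weight, ell_weight):
    """Supplemental dollars under a weighted formula. A single FRL weight
    treats free-lunch (<130% of poverty) and reduced-price (130-185%)
    children identically."""
    return BASE * (frl_weight * (n_free + n_reduced) + ell_weight * n_ell)

# District school: mostly free-lunch children, many ELL children.
# Charter: as many reduced-price as free-lunch children, no ELL children.
print(supplemental_aid(800, 100, 300, frl_weight=0.30, ell_weight=0.0))  # district: $2,970,000
print(supplemental_aid(300, 300,   0, frl_weight=0.30, ell_weight=0.0))  # charter:  $1,980,000

# Restore a separate ELL weight (trimming the FRL weight) and dollars shift
# back toward the district actually serving those children.
print(supplemental_aid(800, 100, 300, frl_weight=0.25, ell_weight=0.20))  # district: $3,135,000
print(supplemental_aid(300, 300,   0, frl_weight=0.25, ell_weight=0.20))  # charter:  $1,650,000
```

Under the single-weight version, the charter collects the full weight on 600 “FRL” children despite serving none of the lower-income (free-lunch) or ELL populations in anything like the district’s concentrations.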

While not specifically addressed in these proposals, one would imagine that the same pundits would also favor flat, lump-sum, or census-based funding for special education, not differentiated by disability type, such that every school or district gets a specific dollar amount for special education based on a fixed share of its enrollment – regardless of a) whether it serves any special education students at all, or b) whether it serves only students with mild specific learning disabilities, and none more severe. Watch out for this one as well!

===

Note: I’m sure that many will respond to this post by arguing that charters get severely shortchanged on state aid anyway, and that even if they make out okay on these adjustments, the lack of funding for such things as capital outlay and facilities more than offsets the difference. That’s a topic for another day. But, suffice it to say, existing comparisons like those made in the recent Ball State/Public Impact (imagine that) study are grossly oversimplified (as I explain regarding NYC schools, here: http://nepc.colorado.edu/publication/NYC-charter-disparities (page 23)). For example, typical crude comparisons never address whether having few or no special education children (relative to averages for district schools) results in cost reductions (per pupil) that might actually be greater than average annual facilities expenses per pupil.

===

Follow-up figure for those who asked:

Note that using only a weight on free or reduced lunch would drive the same amount of supplemental funding to Torrington as to Norwalk or Danbury, despite large differences in LEP/ELL populations. The same would be true for any charter school with a low-income population comparable to Danbury’s, Stamford’s or Norwalk’s, even if that school had no LEP/ELL children. There may be other valid differences that require additional attention, and even this graph is too crude to give us the full story. The bottom line is that one must at least evaluate the distributions of children by need categories across districts and settings before making such bold but oversimplified policy recommendations.

Here’s Rhode Island:

The issue here is similar, in that Central Falls in particular (imagine that) gets shafted by the failure to independently address differences in ELL/LEP concentration. While RI has few districts, and has a specific cluster of high-poverty districts, the rates of LEP/ELL children across those districts vary from 5% to 20%. But, as I’ve explained previously, the RI formula and the logic behind it have numerous other empirical and logical gaps. See: https://schoolfinance101.wordpress.com/2010/07/01/the-gist-twists-rhode-island-school-finance/

Distilling Rhetoric & Research on NY State Education Spending

This is another one of those mundane school finance formula posts. This one is focused on media and political spin in New York State around the recently adopted state budget and proposed school aid cuts.

Yesterday, I had the displeasure of reading a New York Post piece in which New York Governor Cuomo and the Post were validating how and why the proposed budget cuts would not and should not compromise the quality of New York State public schools. But this article, in both the Post’s explanations and especially those of the Governor’s spokesman, presents a massive distortion of how the proposed cuts would actually affect different types of districts across New York State.

Let’s break it down:

Political Spin

Here’s the public appeal, political spin on why cutting state aid to schools in New York really causes no harm:

The state’s student population dropped to 2.7 million from 2.8 million — or 4.6 percent — during that period.

And during that same span, the number of rank-and-file teachers grew to 214,000 from 194,957 — a 9.8 percent increase.

As a result, overall public-school expenditures more than doubled, from $26 billion to $58 billion statewide.

And:

“The huge growth in school bureaucracy and overhead is disturbing, especially since many schools are threatening to fire teachers,” Cuomo spokesman Josh Vlasto said. “School districts clearly have more than enough to do more with less.”

Read more: http://www.nypost.com/p/news/local/supervisor_bloat_hikes_overhead_gnbt3xbRu6hnqPRqrCTZvO#ixzz1IkEkATFa

Very simple: New York State school districts have added a whole bunch of administrators they don’t need – administrators who are obscenely overpaid and really just a massive waste. That is, if we accept the numbers reported above. But I won’t go after those numbers in this post, because the argument is flawed on so many other levels. I will say that it is a foolish stretch to argue that administrative bloat has caused a doubling of per pupil spending across New York school districts.

Essentially, the argument here is that since there is so much bloat and waste – REGARDLESS OF WHERE THAT BLOAT AND WASTE EXISTS – if we cut aid to districts, they can simply cut that bloat. Of course, that logic doesn’t work so well if the proposal is to cut aid from districts which are not the ones with the reported bloat.

Academic Analysis on Relative Efficiency and State Aid in New York

It is indeed interesting that the NY Post and Governor’s office have chosen to focus on spending increases since 1997.  Spending, teacher salaries and administrative salaries in many New York school districts did escalate over this period. But why? What’s going on there? In what districts and what parts of the state is spending increasing, and does state aid play any role in those increases? Perhaps most directly on point: are some of those increases in spending actually leading to inefficiency, and is there any component of state aid that might be encouraging inefficient spending in school districts? If that were the case, we’d probably want to look first at those state aid programs as a place to cut.

Here are some summaries of findings from studies on New York State’s STAR tax relief program, which provides a sizeable chunk of financial support in systematically larger amounts to affluent communities:

Eom:

We test this hypothesis by examining the introduction of New York State’s large state-subsidized property tax exemption program, which began in 1999. We find evidence that, all else constant, the exemptions have reduced efficiency in districts with larger exemptions, but the effects appear to diminish as taxpayers become accustomed to the exemptions.

http://bk21gspa.snu.ac.kr/datafile/downfile/%EC%97%84%ED%83%9C%ED%98%B8%28GSPA-SD%2907_1.6.8.pdf

Public Budgeting & Finance / Spring 2006

Eom & Killeen:

Similar to many property tax relief programs, New York State’s School Tax Relief (STAR) program has been shown to exacerbate school resource inequities across urban, suburban, and rural schools. STAR’s inherent conflict with the wealth equalization policies of New York State’s school finance system are highlighted in a manner that effectively penalizes large, urban school districts by not adjusting for factors likely to contribute to high property taxation. As a policy solution, this article presents results of a simulation that distributes property tax relief using an econometrically based cost index. The results substantially favor high-need urban and rural school districts.

http://eus.sagepub.com/content/40/1/36.full.pdf+html

Education and Urban Society November 2007

Rockoff:

I examine how a property tax relief program in New York State affected local educational spending. This program, which lowered the marginal cost of school expenditure to homeowners, had statistically and economically significant effects on local government behavior. A typical school district, which received 20% of its revenue through the program in the school year 2001-2002, raised expenditure by 4.1% and local property taxes by 6.8% in response. I then examine how the preferences of various groups of local taxpayers affect educational spending by identifying systematic variation across districts in the response to fiscal incentives. These results support the hypothesis that homeowners are more influential on local expenditure decisions than renters, owners of second homes, or owners of non-residential property.

http://www0.gsb.columbia.edu/faculty/jrockoff/papers/local_response_draft_january_10.pdf

Recap of the Research

So, let’s recap. What do we know about NY state aid and the potential link to the supposed inefficiencies to which the NY Post article and governor’s spokesman refer:

  1. That STAR aid in particular is allocated disproportionately to more affluent downstate school districts;
  2. That STAR aid, by reducing the price to local homeowners of raising an additional dollar in taxes for their schools, encouraged increased local spending on schools (a stylized sketch of this price effect follows below);
  3. That when the relative efficiency of school districts is measured in terms of increases in measured test scores per additional dollar spent, STAR aid appears to have encouraged less efficient spending, enabling affluent suburban districts to spend on things not directly associated with measured outcomes, but things those communities still desired for their schools;
  4. That STAR aid contributes to inequities across districts in a system that is already highly inequitable.
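Here is the stylized price-effect sketch promised in point 2. The elasticity and dollar figures are invented, chosen only so the output lands near the magnitudes Rockoff reports; this illustrates the mechanism, it does not re-estimate anything.

```python
# Stylized sketch (invented parameters) of the tax-price effect: when an
# exemption program picks up part of each marginal school dollar, the
# "price" homeowners face falls, and they rationally vote for more spending.

def tax_price(star_share):
    """Local cost of one more dollar of school spending when the state
    program effectively covers `star_share` of it."""
    return 1.0 - star_share

def chosen_spending(base_spending, star_share, price_elasticity=-0.2):
    """Crude demand response: spending rises as the tax price falls.
    The elasticity here is assumed, not estimated."""
    price_change = tax_price(star_share) - 1.0   # e.g., -0.20
    return base_spending * (1 + price_elasticity * price_change)

# A district receiving 20% of its revenue through the program (Rockoff's
# "typical" case), with an assumed elasticity of -0.2:
print(chosen_spending(15000, star_share=0.20))   # 15600.0, i.e. ~4% above baseline
```

Note that the subsidy does nothing to steer the extra spending toward measured outcomes, which is consistent with the efficiency findings above.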

What’s happening now?

As I have shown here, in recent years, STAR aid continues to be allocated inequitably, benefitting systematically wealthier districts.

https://schoolfinance101.wordpress.com/2011/02/04/where%E2%80%99s-the-pork-mitigating-the-damage-of-state-aid-cuts/

https://schoolfinance101.com/wp-content/uploads/2011/02/figure3.jpg

Funding inequities persist across New York state districts, with affluent suburban districts far outspending their poorer urban neighbors.

See www.schoolfundingfairness.org

But, the proposed funding cuts are not targeted at the districts which are most likely contributing to “inefficient” spending growth (if it is really inefficient).

The state aid cuts are not targeted to the state aid which seems to be stimulating less efficient spending and exacerbating inequity.

Rather, the proposed state aid cuts fall disproportionately on general foundation formula aid for those districts which have already been left in the dust by their more affluent neighbors. https://schoolfinance101.wordpress.com/2011/02/04/where%E2%80%99s-the-pork-mitigating-the-damage-of-state-aid-cuts/

https://schoolfinance101.com/wp-content/uploads/2011/02/figure5.jpg

How does that make sense?

Quite honestly, the argument made in the Post, and by the governor’s spokesperson is really obnoxious and misguided, given the distribution of the planned cuts.

Analogy for the day

Let’s say we have a state aid program for personal transportation and we have some really rich communities and some really poor communities.

Let’s assume no mass transportation exists.

Let’s say we (the state) decide to give individuals in the poor communities $200 per month to help them purchase, insure and maintain a personal vehicle – a freakin’ car… and a pretty damn cheap car at that, one that is minimally functional and questionably reliable. The $200 per month is pretty much all they’ve got. They’ve got few or no personal resources to contribute to an upgrade, and pretty much live month to month on maintenance and insurance.

We use another pot of aid to give $100 per month to residents of the rich community, who’ve already gone out and purchased Bentleys and Ferraris. They mostly use that money for occasional detailing of their vehicles, which they might otherwise forgo (perhaps not), or for an enhanced satellite radio subscription they might not have otherwise chosen, one that includes channels they never really expect to use. (Typically, they would have gotten the most expensive subscription anyway. As the truly rich like to point out, no one who’s truly rich would ever dare ask how much it costs to maintain a yacht.)

All of a sudden, the state budget is tight, and a new report from some think tank comes out showing that over the past 10 years, more and more NY residents are driving Ferraris and Bentleys, and more and more of them get their cars detailed on a monthly basis and have the most expensive satellite radio subscription. It’s an abomination. Therefore, cutting aid certainly causes no harm.

So policymakers pass their first on-time budget in years, cutting 10% of that $200 per month that currently supports basic car purchases in the poor communities! They ignore entirely that the $100 per month to the rich communities even exists.

Of course, once we’ve cut that money and ignored the other, what we now have is a set of poor communities that is less able to insure and maintain their economy vehicles. And about those Ferraris and Bentleys? We haven’t even touched their detailing subsidy.

Public Impact’s Persistent Pattern of Shoddy Analysis

Alternative title: Why Hassel with research, data and facts?

I was called upon this past week to review a new policy brief on reforming Connecticut’s education funding system – or Education Cost Sharing formula. The brief, titled Spend Smart: Fix Connecticut’s Broken School Funding System, seemed simple enough on its face, but as I looked deeper, it ended up being among the most offensively shallow and poorly documented reports I have ever seen.

Further, some elements of the report that were stated as fact, but entirely unsubstantiated, would actually lead to funding policies that significantly disadvantage some of the state’s highest-need children. Even worse, this brief was accompanied by submitted legislation that included these ill-conceived policies.

But this post is only partly about this new brief produced by ConnCan, with an eclectic mix of authors put forth in reformy manifesto style. Nearly every attempt to ground “facts” in the brief was tied to previous ConnCan briefs, which themselves included little or no substantiation.

The common denominator in this brief and those on which it relies, as well as the accompanying legislation, appears to be Bryan Hassel of Public Impact. Hassel has also played a role in previous haphazard, manifesto-like school funding reports, including Fund the Child. Bryan Hassel has also been mentioned as the outside expert to advocate on behalf of ConnCan for school funding reform in Connecticut, including testifying in favor of the proposed legislation. See: http://blog.ctnews.com/kantrowitz/2009/12/03/1208/, or the ConnCan tweet:

Brian Hassel, co-dir. Public Impact: SB 1195 would “catapult Connecticut into a national model for schools” #edreform #getsmartct

http://twitter.com/#!/conncan/status/51061576467361792

Tangentially, Bryan Hassel and Public Impact were also involved in the production of the deeply problematic analysis of charter school funding disparities released last year, which I critique in part, in my recent work on New York City charter schools.

There comes a point where I encounter enough different reports linked to a single organization and author, reports so shockingly bad, that I simply can’t hold back anymore.

The following three examples, all connected back to Public Impact and Bryan Hassel, provide evidence of the utter methodological incompetence of this organization and its complete disregard for a) existing rigorous research, b) legitimate analytical methods and data, and, perhaps most disturbingly, c) the significant adverse consequences of performing shoddy analysis and making bold but haphazard policy recommendations.

Below are three of my related critiques of policy “research” (used as loosely as I can imagine) with ties to Public Impact and Bryan Hassel. I offer these critiques in particular to any policy makers who might believe it reasonable to rely on this junk, or the organization that produces it.

Example 1: Public Impact and ConnCan’s Funding Reform Proposals

http://nepc.colorado.edu/files/TTR-ConnCan-Baker-FINAL.pdf

Here are just a few examples from my review of Spend Smart. The Spend Smart brief essentially argues that the Connecticut finance system is broken (it may well be, and I think it is), and that it should be fixed with a simple school funding formula with a single weight on children qualified for free or reduced price lunch.

This particular brief stated a number of supposed “facts” about the status of the current system, few or none of which could be substantiated with information provided, and some which were clearly unchecked and simply wrong, with significant consequences.

Here are some quoted claims from the brief and a tracing of the factual basis for those claims:

Claim 2: “Moreover, our current system was designed to direct 33 percent more dollars to students in towns with high poverty, but actually provides only 11.5 percent more funding for these students.” (Page 2)

Claim 2 posits that the current ECS formula leads to an average of 11.5% additional funding per low-income child across Connecticut school districts. That claim is cited to a previous ConnCan report, The Tab, authored by Bryan Hassel of Public Impact (specifically page 18 of The Tab). Page 18 of The Tab attributes the claim, in Footnote 18, to “Authors’ analysis using 2007-08 data from the State Department of Education.  See Appendix for Details.” However, the appendix of the report provides no such justification and no further reference to the 11.5 figure. Rather, the appendix provides only listings of data sources supposedly used, with no explanation of how those sources might have been used.[i]

Claim 5: “For example, students at Connecticut’s charter schools are funded at only 75 cents on the dollar compared with traditional public schools.” (page 3)

Claim 5 is perhaps most perplexing, and like Claim 1, an example of the evidentiary black hole. The claim that Connecticut charter schools receive, on average, about 75% of state average funding is cited to a previous ConnCan report [not a Hassel/Public Impact product] titled Connecticut’s Charter School Law and Race to the Top. [ii] This ConnCan report was previously reviewed by Robert Bifulco for NEPC, who explained:[iii]

“The brief provides no indication of how it was determined that charter schools end up with only 75% of per-pupil funding that districts receive, or how, if at all, this comparison accounts for in-kind services or differences in service responsibilities.” [p. 3, Bifulco Critique]

And finally, for now:

Claim 6:“The formula could also hypothetically provide weights for other student needs, such as English Language Learner status. However, data shared by Connecticut State Department of Education with the State’s Ad Hoc Committee to Study Education Cost Sharing and School Choice show that the measure for free/reduced price lunch also captures most English language learners. In other words, there is a very strong correlation between English language learner concentration and poverty concentration in Connecticut. In addition, keeping the formula simple allows a more generous weight for students in poverty.” (p. 7, FN 12)

Claim 6 is particularly disconcerting, both because it includes a statistical finding which is never validated and because it is used to inform a policy solution which would produce substantial inequities harmful to a specific student population – children with limited English language skills. The authors claim outright that there is no need for additional adjustment for districts serving large shares of limited English proficient children because:

“there is a very strong correlation between English language learner concentration and poverty concentration in Connecticut.” (p. 7, FN 12)

This finding is cited only ambiguously in a footnote to data shared by CTDOE.  In some states, a strong relationship between the two measures might warrant collapsing supplemental aid for LEP and low-income children into one student need factor – with sufficient additional support to meet the combination and concentration of needs. However, a quick check of the data in Connecticut, shown in Figure 1 (below), reveals that several districts have disproportionately high LEP concentrations relative to their low-income concentrations – specifically Norwalk, Danbury, New London, Windham and New Britain. These districts would be substantially disadvantaged by a formula with no additional weighting for LEP children, coupled with an arbitrary, small weighting for low-income status. In fact, the proposal to include only a relatively small weight for free or reduced price lunch, and to ignore the concentrated needs of these districts, is most likely a back-door way to reduce the overall cost of the formula and limit the extent to which the formula truly redistributes funding to where it is needed.

Figure 1

Relationship between Subsidized Lunch Rates and ELL Concentrations 2009


Data source: CTDOE 2009, Student need (free or reduced lunch: http://sdeportal.ct.gov/Cedar/WEB/ct_report/StudentNeedDT.aspx) and LEP data files (http://sdeportal.ct.gov/Cedar/WEB/ct_report/EllDT.aspx)

Note: From 2005 to 2009, the r-squared for this relationship ranges from .25 to .62, and is generally around .5.
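For what it’s worth, the check the brief’s authors apparently never ran takes only a few lines. The sketch below assumes you have downloaded the two CTDOE files above and merged them into one district-level file; the file name and column names are hypothetical.

```python
# The never-run check: how much of ELL concentration does FRL share
# actually "capture" across Connecticut districts? File and column names
# are hypothetical; build the merged file from the CTDOE sources above.
import pandas as pd

df = pd.read_csv("ct_district_need.csv")   # assumed columns: district, pct_frl, pct_ell

r = df["pct_frl"].corr(df["pct_ell"])
print(f"r = {r:.2f}, r-squared = {r**2:.2f}")   # roughly .5 r-squared in 2005-09 data

# Districts sitting far above the regression line -- high ELL relative to
# FRL -- are exactly the ones a single-FRL-weight formula shortchanges.
slope = r * df["pct_ell"].std() / df["pct_frl"].std()
predicted = df["pct_ell"].mean() + slope * (df["pct_frl"] - df["pct_frl"].mean())
residual = df["pct_ell"] - predicted
print(df.loc[residual.nlargest(5).index, "district"])   # expect Danbury, Norwalk, etc.
```

An r-squared of around .5 means that FRL share leaves roughly half the variation in ELL concentration unexplained – hardly a “very strong” basis for dropping the ELL factor.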

The bottom line: the authors clearly never checked. The authors clearly don’t know what they are talking about, even at the most basic level. Yet they were willing – all who signed on to this brief, including Hassel, Hawley-Miles and Paul Hill – to go out on a limb and make these proclamations and policy proposals, which are simply bad, wrong, misguided and irresponsible.

Example #2: Public Impact and ConnCan’s The Tab

Much of the content of the Spend Smart brief seems to be grounded in, and some of it directly cited to, the previous ConnCan finance report titled The Tab, on which Bryan Hassel was listed as lead author.

I have written previously about The Tab, which is of equal quality to Spend Smart. Here’s a copy and paste of my previous post on The Tab.

https://schoolfinance101.wordpress.com/2009/11/23/why-is-it-ok-for-think-tanks-to-just-make-stuff-up/

==========Original Blog Post

This topic comes to mind today because ConnCan has just released a report (http://www.conncan.org/matriarch/documents/TheTab.pdf) on how to fix Connecticut school funding, which provides classic examples of just makin’ stuff up (page 25). The report begins with a few random charts and graphs showing the differences in funding between wealthy and poor Connecticut school districts and their state and local shares of funding. These analyses, while reasonably descriptive, are relatively meaningless because they are not anchored to any well conceived or articulated explanation of “what should be.” Such a conception might be located here or even here (Chapters 13, 14 & 15 are particularly on target)!

The height of making stuff up in the report is the recommended policy solution to a problem that is never clearly articulated. There are problems in CT, but The Tab certainly doesn’t identify them!

The supposed ideal policy solution involves a pupil-based funding formula in which each pupil should receive at least $11,000 (made up), each child in poverty (no definition provided – just a few random ideas in a footnote) should receive an additional $3,000 (also made up), and each child with limited English language proficiency should receive an additional $400 (yep… totally made up). There is minimal attempt in the report (http://www.conncan.org/matriarch/documents/TheTab.pdf) to explain why these figures are reasonable. They’re simply made up.

The authors do provide some back-of-the-napkin explanations for the numbers they made up – based on those numbers being larger than the amounts typically allocated (not necessarily true). They write off the possibility that better numbers might be derived with nothing more than a general footnote reference to a chapter in the Handbook of Research on Education Finance and Policy by Bill Duncombe and John Yinger – a chapter which actually explains methods for deriving such estimates.

The authors of The Tab conclude: “Combined with federal funding that flows on the basis of poverty and (in some cases) the English Language Learner weight of an additional $400, the $3,000 poverty weight would enable districts and schools to devote considerable resources to meeting the needs of disadvantaged students.” I’m glad they are so confident in their “made up” numbers! I, however, am less so!

It would be one thing if there were no conceptual or methodological basis for figuring out which children require more resources, or how much more they might actually need. Then, I guess, you might have to make stuff up. Even then, it might be reasonable to make at least some thoughtful attempt to explain why you made up the numbers you… well… made up. But alas, such thinking seems beyond the grasp of at least some “think tanks.” Guess what? There actually are some pretty good articles out there that attempt to distill the additional costs associated with specific poverty measures… like this one, by Bill Duncombe and John Yinger: How much more does a disadvantaged student cost?

It’s not as if the title of this article somehow conceals its contents, is it? Nor is the journal in which it was published (Economics of Education Review) somehow tangential to the point at hand. This paper, prepared for the National Research Council, provides some additional insights into the additional costs associated with poverty and methods for estimating those costs.

Rather than even attempt to argue that these figures are somehow founded in something, the authors of The Tab seem to push the point that it really doesn’t matter what these numbers are as long as the state allocates pupil-based funding.  That’s the fix! That’s what matters… not how much funding or whether the right kids get the right amounts. In fact, the reverse is true. The potential effectiveness, equity and adequacy of any decentralized weighted funding system is highly contingent upon driving appropriate levels of funding and funding differentials across schools and districts!

Example #3: Public Impact Charter Disparity Analysis

Finally, there’s the report done by Public Impact with Ball State University on charter school funding disparities, which remains fresh in my mind because it keeps coming back up again and again. And it is because of the connection between the shoddy methods of that report and the absurdly shoddy analysis in The Tab and Spend Smart that this post focuses on Bryan Hassel and Public Impact.

When digging deeper on financial differences among charter and non-charter schools in New York City, and looking at what the Public Impact/Ball State study had said about New York charter schools, my coauthor and I were shocked at how poorly the Public Impact/Ball State study had been conducted. Here’s a short section of our critique:

From: Baker, B.D. & Ferris, R. (2011). Adding Up the Spending: Fiscal Disparities and Philanthropy among New York City Charter Schools. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/NYC-charter-disparities.

This section returns to the issue of disparities in funding between non-charter and charter schools. As already noted, the Ball State/Public Impact study identified New York State as having large financial disparities between traditional public schools and charter schools. In contrast, the NYC independent budget office concluded that charters with department of education facilities had only negligibly fewer resources than non-charter public schools. One of these accounts is incorrect.

The Ball State/Public Impact study claims that NYC traditional public school per-pupil expenditures were $20,021 in 2006-07, and that charter school expenditures were $13,468, for a 32.7% difference.[iv] However, the first figure appears to be inflated; the only figure that closely resembles $20,021 is the total expenditure, including capital outlay expense. This amounts to 19,198,[v] according to the 2006-07 NCES fiscal survey.[vi] This amount includes spending that is clearly not for traditional public schools—it includes not only transportation and textbooks allocated to charter schools, but also the city expenditures on buildings used by some charter schools.[vii] In essence, this approach attributes spending on charters to the publics they are being compared with—clearly a problematic measurement.

After offering these figures and the crude comparisons, the Ball State/Public Impact study argues that the purportedly severe funding differential is not explained by differences in need, because on average 43.5% of the students in public schools in New York State qualify for free or reduced-price lunch, while on average 73.3% of those in charter schools in New York State do. But, as was demonstrated earlier, there are three problems: (a) the focus on state rates, rather than NYC rates; (b) the inclusion of reduced-price lunch rates rather than just free-lunch rates as a measure of poverty (when focused on comparisons within NYC); and (c) the failure to compare only schools serving the same grade-levels. When these details are addressed, a different picture emerges. At the elementary level in NYC, for example, charter school free lunch rates were 57% and non-charter public school rates were 68%.

The NYC IBO report offers figures that are more in line with the data. For 2008-09, traditional public schools are found to have expenditures of $16,678, while charters that are provided with facilities are at nearly the same level ($16,373). Public expenditures on charters not provided facilities are found to be about $2,700 per pupil lower ($13,661). But even this comparison is not necessarily the most precise or accurate that might be made, because it does not attempt to compare schools that are (a) similar in grade level and grade range and (b) similar in student needs. The IBO analysis provides a useful, albeit limited, comparison of charter schools in their aggregate to district schools in their aggregate. Importantly, the IBO charter school funding figures do not include funds raised through private giving to schools or monies provided by their management organizations.

Once the cost differences associated with student populations are factored in, the IBO analysis changes significantly. In fact, the cost associated with student population differences is the same as the per-pupil cost associated with lack of a facility: $2,500. After adding the $2,500 low-need-population adjustment to charters, those not in BOE facilities can be seen to have funding nearly equal to that of non-charters ($16,171 vs $16,678) while those in BOE facilities have significantly more funding than non-charters (see Table 3).[viii]

One might try to argue that these problems we identify with the NY estimates, which render them entirely meaningless, are specific to New York, and that the rest of the states are reasonably estimated. The reality is that when it comes to estimating these types of funding differentials, each state and each local district, depending on the charter funding formula, has its own peculiarities. If the crude method used by Hassel and colleagues completely missed the boat on New York, it is highly likely that comparable problems exist across many other settings. Without further, more detailed and appropriate analysis, it would be unwise to base any conclusions on the existing Ball State/Public Impact study.
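As a quick arithmetic check (mine, not part of the excerpt above), the per-pupil range described in footnote [v] below falls straight out of the reported totals; the study’s $20,021 can only come from this all-in total-expenditure figure, capital outlay and charter-related spending included:

```python
# Reproducing the per-pupil range in footnote [v] from the reported totals.
total_expenditures = 20_144_661_000   # NYC total expenditures, 2006-07, per footnote [v]

# Enrollment as reported in the fiscal survey vs. the enrollment implied by
# the survey's own current-expenditure-per-pupil calculation:
for enrollment in (999_150, 1_049_273):
    print(f"{enrollment:>9,} pupils -> ${total_expenditures / enrollment:,.0f} per pupil")
#   999,150 pupils -> $20,162 per pupil
# 1,049,273 pupils -> $19,199 per pupil
```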


[i] In the recent report Is School Funding Fair, 2007-08 update (http://www.schoolfundingfairness.org/SFF_2008_Update.pdf), Baker, Farrie and Sciarra show that the differential between very high and very low poverty districts in Connecticut is about 15% (Table 1). However, it is important to understand that in Connecticut, these patterns are not systematic. Rather, as I show in Figure A3 of the appendix herein, there exist substantial irregularities in current spending per pupil with respect to poverty. Among high need districts in particular, funding levels vary widely. Arguably, in this regard the system is indeed broken. But the ConnCan reports fail to provide any legitimate evidence to this effect.

[ii] http://www.conncan.org/sites/default/files/research/CTCharterLaw-RTTT2010-Web-2.pdf.  Interestingly, the authors of the current brief, including Bryan Hassel, choose not to anchor this conclusion to other recent work co-authored by Hassel, which describes funding disparities between host districts – New Haven and Bridgeport – and charters in those cities as “severe.” However, Baker and Ferris (2011) explain substantial methodological flaws in the characterization of charter funding gaps by Hassel and colleagues, pertaining to their analysis of New York State and New York City charter schools. There is little reason to believe that Hassel and colleagues’ analyses of Connecticut are any more valid than those for New York. For the state and district summaries of charter disparities, see: Batdorff, M., Maloney, L., May, J., Doyle, D., & Hassel, B. (2010). Charter School Funding: Inequity Persists. Muncie, IN: Ball State University, p. 10-11, Table 5. For a thorough critique of Hassel and colleagues’ missteps in this report when characterizing charter disparities in New York, see: Baker, B.D. & Ferris, R. (2011). Adding Up the Spending: Fiscal Disparities and Philanthropy among New York City Charter Schools. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/NYC-charter-disparities.

[iii] Bifulco, R. (2010). Review of “Connecticut’s Charter School Law & Race to the Top!” Boulder and Tempe: Education and the Public Interest Center & Education Policy Re-search Unit. Retrieved [date] from  http://nepc.colorado.edu/files/TTR-ConnCan-Bifulco.pdf

[iv] See: Batdorff, M., Maloney, L., May, J., Doyle, D., & Hassel, B. (2010). Charter School Funding: Inequity Persists. Muncie, IN: Ball State University, bottom of Table 5

[v] Depending on how one chooses to calculate this figure, the range is from 19,199 to about 20,162. The reported total expenditures for the district are $20,144,661,000 and enrollment figures range from 999,150 (as reported in the fiscal survey) to 1,049,273 (implied enrollment from current expenditure per pupil calculation in fiscal survey).

[vi] From the Census Bureau’s Fiscal Survey of Local Governments, Elementary and Secondary Education, F-33.  http://www.census.gov/govs/www/school.html

[vii] The New York State Education Department reports several versions of expenditure figures. Total expenditures per pupil for NYC in 2007-08 were $18,977—much lower than the total reported by Batdorff and colleagues. But the IBO correctly points out that some expenses would be appropriately excluded from this number. For instance, the NYC Department of Education provides facilities for about half the city’s charter schools as well as many other forms of support for some charter schools, including authorizer services, food service, transportation services, textbooks, and management services:

Pass-through Support for Charter Schools. Charter schools are eligible to receive goods such as textbooks and software, as well as services such as special education evaluations, health services, and student transportation, if needed and requested from the district. In NYC there is a long-established process for non-public schools to access these services, and charter schools have access to similar support from DOE. For these items, charter schools receive the goods or services rather than dollars to pay for them. Most of these non-cash allocations are managed centrally through DOE.

IBO report, 2010: Retrieved December 13, 2010, from
http://schools.nyc.gov/community/planning/charters/ResourcesforSchools/default.htm.

It is simply wrong to compare the city aggregate spending per pupil to the school-site allotment for charters, as was done by Batdorff and colleagues (who also use the most inflated available figure for the city aggregate spending). In 2007-08 (a year earlier than the IBO comparison figure, but likely a reasonable substitute), NYSED estimates for the instructional/operating expenditures per pupil in NYC were $15,065 (this multiplies the instructional expenditure share, including expenditures on employee benefits [IE2%, Col. AP], by total expenditures; retrieved December 13, 2010, from http://www.oms.nysed.gov/faru/Profiles/datacolumns1.htm). This figure may be far more relevant than that chosen by Batdorff and colleagues, but is still potentially problematic.

[viii] Again, we are unable to adjust precisely for differences in special education populations, due to lack of sufficiently detailed data.

School Funding Myths & Stepping Outside the “New Normal”

I’ve been writing quite a bit lately about rather complex state school finance formula issues. That is much the point of this blog.  But it may be hard for some to see how my recent posts on school finance relate back to the broader reform agenda, and to understand the implications of these posts for state policies. Let me try to summarize these posts – posts on spending bubbles, the “New Normal” and school finance PORK.  My overarching goal in these posts is to explain that much of the reformy rhetoric about budget cuts and the “New Normal” is based on myths about how school funding works and what we should be doing in these catastrophic economic times.

Here are the myths and some of the realities.

Reformy myth #1: That every state has done its part and more, to pour money into high need, especially poor urban districts. It hasn’t worked, mainly because teachers are lazy and overpaid and not judged on effectiveness, measured by value-added scores. So, now is the time to slash the budgets of those high need districts, where all of the state aid is flowing, and fire the worst teachers. And, it will only help, not hurt.

Reality: Only a handful of states have actually targeted substantial additional resources to high need districts. See www.schoolfundingfairness.org. And the effort of states to finance their elementary and secondary education systems varies widely. Some states have in fact systematically reduced their effort to finance local public schools for decades. That is, the tax burden to finance public schools in some states is much lower now than it was decades ago. Very few states apply much higher effort than in the past.  See: https://schoolfinance101.wordpress.com/2010/12/23/is-it-the-new-normal-or-the-new-stupid/

Reformy myth #2: The only aid to be cut, the aid that should be cut, and the aid that must be cut in the name of the public good, is aid to high need, large urban districts in particular. The argument appears to be that handing down state aid cuts as a flat percent of state aid is the definition of “shared sacrifice.” And the garbage analysis of district Return on Investment by the Center for American Progress, of course, validates that high need urban districts tend to be least efficient anyway. Therefore, levying the largest cuts on those districts is entirely appropriate.

Reality: As I have discussed in my series of recent posts, if there are going to be cuts – if states really believe that cuts to state aid are absolutely necessary, many state aid formulas include aid that is more appropriate to cut. That is, aid to districts who really don’t need that aid. Aid to districts that can already spend well above all others with less local effort. Aid to districts that will readily replace their losses in state aid with additional local revenues (or even private contributions). That’s the pork, and that’s where cuts, if necessary, should occur.

Reformy myth #3: The general public is fed up and doesn’t want to waste any more of its hard-earned tax dollars on public schools. People are fed up with greedy teachers with gold-plated benefits and fed up with high-paid administrators. They don’t care about small class sizes and… well… are just fed up with all of this taxing and spending on public schools that stink. As a result, the only answer is to cut that spending and simultaneously make schools better.

Reality: The reality is that local voters in a multitude of surveys rate their own local public schools quite highly, and that local voters, when given the opportunity, even during the recent economic downturn, show very high rates of support for school budgets – including budgets with significant property tax increases (the most hated tax). As I noted in a previous post, when New Jersey handed down state aid cuts to 2010-2011 school budgets, and when – for the first time in a long time – the majority of local district budgets statewide failed to achieve approval from local voters, it was still the case that the vast majority (72%) of local budgets passed in affluent communities – in most cases raising sufficient local property tax revenue to cover the state aid cuts. In another case, local residents in an affluent suburban community privately raised $420,000 to save full-day kindergarten programs. Meghan Murphy’s analysis of Hudson Valley school districts shows that New York State districts have also attempted to counterbalance state aid cuts with property tax increases, but that districts have widely varied capacity to pull this off.  Parents in a Kansas district are suing in federal court, requesting injunctive relief to allow them to raise their own taxes for their schools (they use faulty logic and legal arguments, but their desire for better schools should be acknowledged!).

Reformy myth #4: None of this school funding stuff matters anyway. It doesn’t matter what the overall level of funding is and it doesn’t matter how that funding is distributed. As evidence of this truthiness, reformers point to 30+ years of huge spending growth coupled with massive class size reduction and they argue… flat NAEP scores, low international performance and flat SAT scores. Therefore, if we simply cut funding back to 1980 levels (adjusted only for the CPI) and fire bad teachers, we can achieve the same level of outcomes for one heck of a lot less money.

Reality: First of all, these comparisons of spending now to spending then are bogus. I address the various factors that influence the changing costs of achieving desired educational outcomes in this post: https://schoolfinance101.wordpress.com/2011/01/12/understanding-education-costs-versus-inflation/. Second, rigorous peer-reviewed studies do show that state school finance reforms matter. Shifting the level of funding can improve the quality of the teacher workforce and ultimately the level of student outcomes, and shifting the distribution of resources can shift the distribution of outcomes. http://www.tcrecord.org/content.asp?contentid=16106 Similarly, constraining education spending growth over the long term can significantly harm the quality of public schools. See: https://schoolfinance101.wordpress.com/2010/04/22/a-few-quick-notes-on-tax-and-expenditure-limits-tels/
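To make the first point concrete, here is a toy comparison – every number invented – of what happens when the same nominal spending growth is deflated by the CPI versus by the faster-growing wages of the college-educated workers schools must actually hire:

```python
# Toy illustration (all numbers invented) of why CPI-deflated spending
# comparisons overstate "real" growth in a labor-intensive sector like schooling.
nominal_1980, nominal_2010 = 2500, 12000   # hypothetical per-pupil spending

cpi_factor = 2.65    # assumed cumulative CPI growth, 1980 -> 2010
wage_factor = 3.60   # assumed cumulative growth in comparable professional wages

print(f"CPI-adjusted growth:  {nominal_2010 / (nominal_1980 * cpi_factor):.2f}x")   # ~1.81x
print(f"Wage-adjusted growth: {nominal_2010 / (nominal_1980 * wage_factor):.2f}x")  # ~1.33x
```

The same nominal increase looks like roughly 80% “real” growth against the CPI but only about 33% against the cost of the staff schools actually compete for – before even accounting for changes in standards, student needs, or required services.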

An opportunity for states?

I would argue that now… right now… represents a real opportunity for those states that actually want to step up, really invest in the quality of their education systems, and use that quality to drive their economic futures.

I stumbled across this article, http://www.foxnews.com/us/2011/02/13/states-offer-tax-breaks-guarantee-jobs/, on the Fox News website the other day, and it presents some useful insights for state policymakers regarding tax policy decisions and economic growth. I wrote about the same point very early in my blogging (namely, that the Small Business Survival Index in particular misses some big points about location selection). Here’s a short section of the Fox News piece:

But there’s a catch to the anti-tax, pro-business rhetoric: Businesses consider a range of factors when deciding where to locate, including the quality of schools, roads and programs that rely on a certain level of public spending and regulation. And evidence suggests there is little correlation between a state’s tax rate and its overall economic health.

“Concerns about taxes are overstated,” said Matt Murray, a professor of economics at the University of Tennessee who studies state finance. “Labor costs, K-12 education and infrastructure availability are all part of a good business climate. And you can’t have those without some degree of taxation.”

States’ tax rates also do not predict their resilience during an economic downturn.

Arguably, no time is better than now. Other states are jumping on board with the “New Normal” reformy logic that slashing education budgets, increasing class sizes and narrowing curriculum around tested content areas is the only way to go. Yet, educated parents invariably want small class sizes (often topping the list of preferences for private or public schools) and want their children to have intellectually stimulating and broad, enriched curriculum. The current environment presents a great opportunity for some states to step outside the “New Normal” and truly race to the top with real investment in their public schooling systems.