
New Jersey Opportunity Scholarship (NJOSA) Study Notes & Review

It’s end-of-semester blogging time here – a good time to review various posts on specific topics related to New Jersey education policy. My apologies to those of you looking for issues of national or broader interest; I’ll get back to those after this post.

In this post, I provide a brief summary of my previous posts on the New Jersey Opportunity Scholarship Act. I have a handful of posts specifically related to this proposed legislation, but many others on the private school marketplace and on private school costs and quality.

In short, NJOSA is a “neo-voucher” policy: it provides tax breaks to corporations that contribute to a scholarship pool, which in turn provides vouchers for children to attend private or other schools. Currently (NJOSA is being reworked as I write this), those vouchers would be made available to a combination of children attending “failing” schools and other income-qualified children across New Jersey. In my series of posts on NJOSA, I point out that:

Finding #1) One of the biggest beneficiaries of NJOSA – if not the biggest – is not a) the children trapped in poor urban (Newark, Camden, Jersey City) schools, or b) cash-strapped urban Catholic schools (which lack sufficient other private contribution support to stay afloat), but rather the highly racially and religiously segregated Lakewood Orthodox Jewish community and its schools. Lakewood’s children constitute the largest number – by far – of “income qualified” children currently enrolled in private school in the state.

NJOSA & THE LAKEWOOD EFFECT

This finding was reported a few days ago in the Asbury Park Press

Finding #2) The premise that children will be saved from failing public schools by these paltry payoffs to low-end private schools is a stretch at best. Good private schools are expensive – often more expensive than even the highest-spending nearby public schools. The Milwaukee studies provide useful insights as well, showing little or no effect even after much more than a trial period.

Would Scholarships Help Sustain NJ Private Schools?

NJOSA Must Read Items

Finding #3) Providing these vouchers might (indeed, would likely) increase private school enrollment, making certain private schools more accessible to low-income families. And some students may benefit from this (while others may not). But such a program will likely do little to cure the fiscal woes of cash-strapped private schools. In fact, some have argued – specifically in reference to Catholic schools – that parishioner philanthropy to the schools may decline as those schools take on more non-Catholic students through vouchers, causing the schools’ mission to drift.

This finding was covered by AP and reported in a handful of NJ outlets

Would Scholarships Help Sustain NJ Private Schools?
For more information on private school markets, costs and quality, see:

Major National/Regional Study on the Costs of Private Schooling by Type and Location, and Relationship to Quality Measures
http://www.epicpolicy.org/files/PB-Baker-PvtFinance.pdf

See also:

Washington Post Coverage of National Study
http://www.washingtonpost.com/wp-dyn/content/article/2009/08/30/AR2009083002335.html

Education Week Op-Ed on National Study:
http://www.edweek.org/login.html?source=http://www.edweek.org/ew/articles/2009/08/19/01baker.h29.html&destination=http://www.edweek.org/ew/articles/2009/08/19/01baker.h29.html&levelId=2100

Cap 2.5 Study Notes & Review

In this post, I review my various previous posts related to the proposal for a constitutional 2.5% property tax limit in New Jersey. Below are some summary points from previous posts, with links to those posts.

Flawed Argument #1) The need for Cap 2.5 is premised on the argument that New Jersey is by far the highest taxed state in the nation, therefore warranting not only a cap on growth rates of property taxes but also a cap on future state spending. I tackle the assumption that New Jersey taxes are out of control, highest in the nation, and that teacher and school administrator salaries are the cause here:  https://schoolfinance101.wordpress.com/2010/03/17/just-the-facts-nj-taxes-teacher-salaries-and-spending-fluff/

I point out that:

  • New Jersey is not, in fact, the highest taxed state in the nation. Our property taxes are high, but our income and sales taxes are modest by comparison. We’re also not number one in property taxes when all states are considered and when property taxes are measured as a percent of income.
  • The Tax Foundation report that is often used to support these claims is flawed at multiple levels, and its estimates cannot be replicated from the very data they supposedly came from.

Flawed Argument #2) The argument has been made in many ways and on many occasions that property tax limits bring spending into line, make governments more efficient, and have no downside in terms of the quality of local public services. This argument is often based on comparisons to Massachusetts, three decades after its implementation of a similar tax limit, and is often tied to the Manhattan Institute report that attempted to argue that Proposition 2.5, passed in 1980, had no adverse effect on Massachusetts public schools – rather, that it helped make Massachusetts schools more productive than New Jersey schools, at much lower per-pupil expense.

This topic has required several posts over time. First, the good empirical research, in good peer-reviewed economics journals (not the Manhattan Institute schlock) finds consistently that tax and expenditure limits harm public sector service quality – specifically public school quality.  I post relevant information here: https://schoolfinance101.wordpress.com/2010/05/26/manhattan-institute-study-provides-bogus-interpretation-of-massachusetts-prop-2-%C2%BD/ and here: https://schoolfinance101.wordpress.com/2010/04/22/a-few-quick-notes-on-tax-and-expenditure-limits-tels/

Here’s a sampling of the related research:

  • David Figlio, in a study of Oregon’s Measure 5 (National Tax Journal, Vol. 51, No. 1, March 1998, pp. 55-70), finds that Oregon student-teacher ratios have increased significantly as a result of the state’s tax limitation.
  • David Figlio and Kim Rueben, in the Journal of Public Economics (April 2001, pp. 49-71), find: “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”
  • In a non-peer reviewed, but high quality working paper, Thomas Downes and David Figlio “find compelling evidence that the imposition of tax or expenditure limits on local governments in a state results in a significant reduction in mean student performance on standardized tests of mathematics skills.” (http://ase.tufts.edu/econ/papers/9805.pdf)
  • Context also matters. The effects of tax and expenditure limits may differ if implemented during bad rather than good economic times. Andy Reschovsky, in a 2004 article in State and Local Government Review (volume 36, pp. 86-102) suggests that the existence of fiscal constraints created by tax limitations could serve to exacerbate the impact of downturns on education spending, both by limiting the ability of localities to respond to state aid cuts and by shifting local revenue away from a stable source, the property tax, to less stable sources.
  • Of particular interest in New Jersey are the effects of Massachusetts Proposition 2½, implemented in 1980. A handful of studies have explored various aspects of that particular property tax limit, which included an option for local communities to override the cap. Katherine Bradbury and colleagues, in a 1998 article in the New England Economic Review (July/August issue, pp. 3-20), point out several interesting direct and indirect effects of Proposition 2½ in Massachusetts. First, they find that the share of the potential student population served by the public schools is lower in districts in which more initial cuts were necessary when the limits were first imposed. This result suggests that the limits could increase dropout rates or could result in students switching from the public to the private sector. Second, they find that Proposition 2½ made constrained communities relatively less attractive to families with children, both in the early 1980s and the early 1990s. Bradbury and colleagues note that the distortion effects of the property tax limits on the mobility of families into and out of different municipalities and school districts were “troubling.”

Regarding the bogus Manhattan Institute assertion that Prop 2.5 did no harm, and may even have helped promote the rise of Massachusetts public schools, see the posts linked above.

Flawed Argument #3) Finally, there was the argument that implementing property tax limits would also increase the likelihood that municipalities and school districts would consolidate, to save money and live within their caps. But, as I point out here https://schoolfinance101.wordpress.com/2010/06/17/comment-on-property-tax-limits-and-consolidation/, the caps would lead to greater awareness of the differences in tax capacity among communities and of differences in the ability of communities to override caps.  The end result:

  • The cap would make these differences far more apparent and, as a result, would decrease the likelihood that a municipality that has room under its cap and/or the ability to override if necessary would ever consider merging with a town that would reduce its cap flexibility and/or dilute its pool of “yes” votes on an override.

Negotiating Points for Teachers on Value-Added Evaluations

A short time back I posted an explanation of how using value-added student testing data could lead to a series of legal problems for school districts and states.  That post can be found here:

https://schoolfinance101.wordpress.com/2010/06/02/pondering-legal-implications-of-value-added-teacher-evaluation/

We had some interesting follow-up discussion over on www.edjurist.com.

My concerns regarding legal issues arose from statistical problems and some practical problems associated with using value-added assessment to reliably and validly measure teacher effectiveness. The main issue is to protect against wrongly firing teachers on the basis of statistical noise, or on the basis of factors that influenced the value-added scores that were not related to teacher effectiveness.

Among other things, I pointed out problems associated with the non-random assignment of students, and how non-random assignment of students across teachers’ classrooms can significantly influence – that is, bias – value-added estimates of teacher effectiveness. Non-random assignment could, under certain state policies or district contracts, lead to the “de-tenuring” and/or dismissal of a teacher simply on the basis of the students assigned to that teacher. Links to research and a more detailed explanation of the non-random assignment problem are provided in the post above.

Of course, this also means that school principals or superintendents – anyone with sufficient authority over teacher and student assignment – could intentionally stack classes against the interests of specific teachers. A principal could assign students to a teacher with the intent of harming that teacher’s value-added estimates.

To protect against this possibility, I suggest that teachers unions or individual teachers argue for contract language requiring that students be randomly assigned and that class sizes be precisely the same – along with the time of day when courses are taught, lighting, room temperature, nutrition and any other factor that could compromise a teacher’s value-added score and could be manipulated against a teacher.

The language in the class size/random assignment clause will have to be pretty precise to guarantee that each teacher is treated fairly – in a purely statistical sense. Teachers should negotiate for a system that guarantees “comparable class size across teachers – not to deviate by more than X” and that year-to-year student assignment to classes be managed through a “stratified randomized lottery system with independent auditors to oversee that system” – stratified by disability classification, poverty status, language proficiency, neighborhood context, number of books in each child’s home, etc. That is, each class must be equally balanced with a randomly (lottery) selected set of children from each relevant classification. This gets out of hand really fast.
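Taken literally, the “stratified randomized lottery system” could be sketched in a few lines of code. This is purely my own hypothetical illustration of the mechanics – the function name and student-record format are invented, and real contract language would need far more detail:

```python
import random
from collections import defaultdict

def stratified_lottery_assignment(students, strata_key, n_classes, seed=None):
    """Assign students to classes via a stratified random lottery.

    Within each stratum (e.g. disability status x poverty status),
    students are shuffled and dealt round-robin across classes, so every
    class gets a near-identical mix and near-identical size. Hypothetical
    sketch only; an 'independent auditor' would supply the seed.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in students:
        strata[strata_key(s)].append(s)

    classes = [[] for _ in range(n_classes)]
    offset = 0
    for _, members in sorted(strata.items()):
        rng.shuffle(members)
        for i, student in enumerate(members):
            # rotate the starting class between strata so totals stay balanced
            classes[(offset + i) % n_classes].append(student)
        offset += len(members)
    return classes
```

Even this toy version balances only the strata you enumerate; every omitted factor (time of day, room temperature, and so on) remains a potential grievance, which is exactly why the clause gets out of hand.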

KEEP IN MIND THAT THIS SPECIAL CONTRACT STILL APPLIES TO ONLY SOMEWHAT FEWER THAN 20% OF TEACHERS – THOSE WHO COULD EVEN REASONABLY BE LINKED TO SPECIFIC STUDENTS’ READING AND MATH ACHIEVEMENT.

I welcome suggestions for other clauses that should be included.

Just pondering the possibilities.
A recent summary of state statutes regarding teacher evaluation can be found here: http://www.ecs.org/clearinghouse/86/21/8621.pdf

See also: http://www.caldercenter.org/upload/CALDER-Research-and-Policy-Brief-9.pdf

This is a thoughtful read from a general supporter of using VA assessments to create better incentives to improve teacher quality. Read the “Policy Uses” section on pages 3-4.

Comment on Property Tax Limits and Consolidation

One of the new arguments in favor of implementing the 2.5% constitutional limit on property taxes for New Jersey municipalities and school districts is that it would not only force these municipalities and school districts to operate within their means and much more efficiently (an unfounded argument I address here), but that the caps would also encourage consolidation because of the fiscal constraints. This logic is wrongheaded for a variety of reasons, a few of which I will touch on here.

For starters, I have written on the topic of consolidation and potential cost savings on several previous posts and have spoken on this issue around the state. There are certainly savings to be found by consolidating very small school districts – especially those with fewer than 300 students. My slides on this topic, and its relation to racial isolation of towns in NJ can be found here: https://schoolfinance101.com/wp-content/uploads/2009/08/race-cost-in-nj1.ppt

What we know about property tax limits with an override option, from states like Massachusetts, is that those limits tend to highlight differences in the property taxing capacity of towns and in the ability of local voters to override caps if they wish to maintain high-quality schooling or other public services. Some towns hit the cap sooner than others and have little ability to improve services within it, while other towns have much more latitude to raise revenues before hitting their cap. Some towns have little difficulty overriding the cap, while others find it nearly impossible. In New Jersey, even without these caps, differences in tax base and voter behavior (preferences for public service quality) are relatively obvious to local voters in adjacent municipalities that fall into different categories. As it is, these differences create substantial – often insurmountable – barriers to consolidation when left to votes in each municipality.

The cap would make these differences far more apparent and, as a result, would decrease the likelihood that a municipality that has room under its cap and/or the ability to override if necessary would ever consider merging with a town that would reduce its cap flexibility and/or dilute its pool of “yes” votes on an override.

Even if towns did consider merging while caps are in place, it would only be in the interest of affluent towns to merge with other similarly affluent towns (or towns in similar position with respect to the caps and public service preferences), reinforcing the already striking patterns of inter-district racial and socio-economic segregation.

Good schools, low taxes and no activist judges! (?)

….Why Massachusetts is so much cooler than New Jersey… REALLY? (nothing against Mass. Go Celtics!)

Perplexing claims and contradictory information seem to be flooding the New Jersey media of late. Here’s my summary and critique of two of those claims:

  • The claim that the problem with New Jersey schools is that the New Jersey Supreme Court for decades meddled where they didn’t belong, squeezing taxpayers to flood poor urban districts with billions of dollars, none of which ever produced any gains in school quality. It’s still a wretched system! Wretched, I say!
  • The claim that if New Jersey were to implement a property tax cap like Massachusetts, New Jersey schools which currently achieve nearly the same as Massachusetts could do a lot better, and for much less money. That is, the property tax cap implemented by Massachusetts in 1980 is responsible for the slightly higher “quality” and marginally lower spending of Massachusetts schools. (actually, the ever-morphing claim is that Mass blows us away on test scores and spends 26% less per pupil. I debunk those claims in the link above).

Of course, the first contradiction here lies in the acknowledgment that New Jersey schools sit right near the top of the pack along with Massachusetts (and both rank very high in international comparisons as well).

The second contradiction however is far more ironic. In fact, extensive analysis and review of empirical research presented in this forthcoming study shows that New Jersey and Massachusetts are in fact among the national success stories of implementing effective school finance reforms in response to judicial intervention. Yes – increased funding in response to judicial intervention!

Massachusetts 1990s successes in improving educational outcomes and achieving greater equity in educational outcomes across public school districts were a function of sweeping finance and accountability reforms, implemented in response to a court order. In fact, one can argue that a major reason that the Massachusetts courts needed to intervene to improve equity and adequacy of education funding was the property tax limits.

Here are a few excerpts from the forthcoming study. Regarding other recent studies and reports from Think Tanks, Dr. Welner and I conclude:

We conclude that there is arbitrariness in how research in this area appears to have shaped the perceptions and discourse of policymakers and the public. Methodological complexities and design problems plague finance impact studies. Advocacy research that has received considerable attention in the press and elsewhere has taken shortcuts toward desired conclusions, and this is troubling.

Among the studies Dr. Welner and I review in the forthcoming study are a handful on the specific effects of New Jersey and Massachusetts reforms:

For Massachusetts, two independent sets of authors have found positive reform effects. Most recently, Downes, Zabel and Ansel (2009) found:

  • The achievement gap notwithstanding, this research provides new evidence that the state’s investment has had a clear and significant impact. Specifically, some of the research findings show how education reform has been successful in raising the achievement of students in the previously low-spending districts. Quite simply, this comprehensive analysis documents that without Ed Reform the achievement gap would be larger than it is today. (p. 5)

Previously, Guryan (2003) concluded:

  • Using state aid formulas as instruments, I find that increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students. The magnitudes imply a $1,000 increase in per-pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures. (p. 1)

Turning to New Jersey, two recent studies find positive effects of that state’s finance reforms. Alexandra Resch (2008), in research published as a dissertation for the economics department at the University of Michigan, found evidence suggesting that New Jersey Abbott districts “directed the added resources largely to instructional personnel” (p. 1) such as additional teachers and support staff. She also concluded that this increase in funding and spending improved the achievement of students in the affected school districts. Looking at the statewide 11th grade assessment (“the only test that spans the policy change”), she found “that the policy improves test scores for minority students in the affected districts by one-fifth to one-quarter of a standard deviation” (p. 1).

The second recent study was originally presented at a 2007 conference at Columbia University, and a revised, peer-reviewed version was recently published by the Campaign for Educational Equity at Teachers College, Columbia University (Goertz and Weiss, 2009). This paper offers descriptive evidence that reveals some positive test results of recent New Jersey school finance reforms:

  • State Assessments: In 1999 the gap between the Abbott districts and all other districts in the state was over 30 points. By 2007 the gap was down to 19 points, a reduction of 11 points or 0.39 standard deviation units. The gap between the Abbott districts and the high-wealth districts fell from 35 to 22 points. Meanwhile performance in the low-, middle-, and high-wealth districts essentially remained parallel during this eight-year period (Figure 3, p. 23).
  • NAEP: The NAEP results confirm the changes we saw using state assessment data. NAEP scores in fourth-grade reading and mathematics in central cities rose 21 and 22 points, respectively between the mid-1990s and 2007, a rate that was faster than the urban fringe in both subjects and the state as a whole in reading (p. 26).
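As a sanity check on the quoted arithmetic, a gap expressed in both score points and standard-deviation units lets you back out the implied statewide standard deviation. The calculation below is my own inference from the quoted figures, not a number reported by Goertz and Weiss:

```python
def points_to_sd_units(gap_points, sd):
    """Convert a score-point gap into standard-deviation units."""
    return gap_points / sd

# An 11-point gap reduction reported as 0.39 standard deviation units
# implies a statewide SD of roughly 11 / 0.39, i.e. about 28 points.
implied_sd = 11 / 0.39
```

On that implied scale, the 30-point gap in 1999 was a bit over one full standard deviation, which puts the size of the reported narrowing in context.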

On balance, Dr. Welner and I find that high-quality empirical studies, taken collectively, support the conclusion that sustained, well-designed school finance reforms – including those that follow state judicial orders – generally lead to improvements in the level of student outcomes, in the equity of their distribution, or both. Further, New Jersey and Massachusetts are both examples of these successes.

No, it wasn’t the Massachusetts tax limits that created successful schools. And the only reason those tax limits did not entirely undermine the 1990s successes is that a) the state court was willing and able to intervene, and b) the state was able to provide the additional support needed to implement the reforms. This, however, left Massachusetts schools much more vulnerable to economic downturns and state aid cuts in later years (the 2001-02 downturn and the current one).

Just trying to get the story straight!

Studies cited above:

Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA: MassINC.

Guryan, J. (2003). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

Goertz, M., and Weiss, M. (2009). Assessing Success in School Finance Litigation: The Case of New Jersey. New York City: The Campaign for Educational Equity, Teachers College, Columbia University.

Resch, A. M. (2008). Three Essays on Resources in Education (dissertation). Ann Arbor: University of Michigan, Department of Economics. Retrieved October 28, 2009, from http://deepblue.lib.umich.edu/bitstream/2027.42/61592/1/aresch_1.pdf


Pondering Legal Implications of Value-Added Teacher Evaluation

I’m going out on a limb here. I’m a finance guy, not a lawyer. But I do have a reasonable background in school law thanks to colleagues in the field like Mickey Imber at the University of Kansas and my frequent coauthor Preston Green at Penn State. That said, any screw-ups in my legal analysis below are my own and not attributable to either Preston or Mickey. In any case, I’ve been wondering about the validity of the claim some pundits seem to be making that these new teacher evaluation policies are going to make it easier and less expensive to dismiss teachers.

=====

A handful of states have now adopted legislation mandating that teacher evaluation be linked to student test data. Specifically, legislation adopted in states like Colorado, Louisiana and Kentucky, and legislation vetoed in Florida, follows a template requiring that teacher evaluation – for pay increases, for retaining tenure, and ultimately for dismissal – be based 50% or 51% on student “value-added” or “growth” test scores alone. That is, student test score data could make or break not only a salary increase decision but also a teacher’s ability to retain tenure. Pundits backing these policies often highlight provisions for multi-year data tracking, so that a teacher would not lose tenure status until he/she shows poor student growth for 2 or 3 years running. These provisions are supposed to eliminate the possibility that random error or a “bad crop of students” alone could determine a teacher’s future.
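As a purely arithmetic illustration of what a 50/51 percent weighting means in practice – the 0-100 scale and the 70-point retention cutoff below are hypothetical, not drawn from any of these statutes:

```python
def composite_evaluation(value_added_score, other_measures_score, va_weight=0.51):
    """Weighted evaluation composite on a hypothetical 0-100 scale,
    with the value-added score carrying va_weight of the total."""
    return va_weight * value_added_score + (1 - va_weight) * other_measures_score

# A teacher rated 90/100 on observations and all other measures, but
# 40/100 on value-added, lands at 0.51*40 + 0.49*90 = 64.5 -- below a
# hypothetical 70-point retention cutoff, despite the strong observations.
score = composite_evaluation(40, 90)
```

The point of the weighting is visible in the arithmetic: once the value-added component carries a majority of the weight, no score on the remaining measures can fully offset a bad test-score year.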

Pundits are taking the position that these new evaluation criteria will make it easier to dismiss teachers and will reduce the costs of dismissing a teacher that result from litigation. Oh, how foolish!

The way I see it, this new crop of state statutes and regulations – which mandate the arbitrary use of questionable data, applied in questionable ways – will most likely lead to a flood of litigation like none that has ever been witnessed.

Why would that be? How can a teacher possibly sue the school district for being fired because he/she was a bad teacher? Simply writing into state statute or department regulations that one’s “property interest” to tenure and continued employment must be primarily tied to student test scores does not by any stretch of the legal imagination guarantee that dismissal based on student test scores will stand up to legal challenges – good and legitimate legal challenges.

There are (at least) two very likely legal challenges that will occur once we start to experience our first rounds of teacher dismissal based on student assessment data.

Due Process Challenges

Removing a teacher’s tenure status is a denial of the teacher’s property interest, and doing so requires “due process.” That’s not an insurmountable barrier, even under typical teacher contracts that don’t require dismissal based on student test scores. But simply declaring that “a teacher will be fired if he/she shows 2 straight years of bad student test scores (growth or value-added)” and then firing a teacher on that basis does not mean that the teacher was necessarily provided due process. Under a policy requiring that 51% of the employment decision be based on student value-added test scores, a teacher could be wrongly terminated due to:

a) Temporal instability of the value-added measures

http://www.urban.org/UploadedPDF/1001266_stabilityofvalue.pdf

Ooooh… temporal instability… what’s that supposed to mean? It means that teacher value-added ratings, which are averages of individual student gains, tend not to be very stable over time. The same teacher is highly likely to get a substantially different value-added rating from one year to the next. The link above points to a policy brief explaining that the year-to-year correlation in a teacher’s value-added rating is only about .2 or .3. Further, most of the change in a teacher’s value-added rating from one year to the next is unexplainable – not attributable to differences in observed student, peer or school characteristics. From 70% (8th grade math) to 87.5% (elementary math) of the variation is noise! While statistical corrections and multi-year measures might help, it’s hard to guarantee, or even be reasonably sure, that a teacher wouldn’t be dismissed simply as a function of unexplainably low performance 2 or 3 years in a row – that is, simply due to noise, and not even the more troublesome issue of how students are clustered across schools, districts and classrooms.
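To see how much damage a year-to-year correlation of .2 to .3 can do, here is a small simulation of my own – a sketch under assumed normality with the correlation set to .25, not a calculation from the brief itself. It estimates how often a teacher whose true effectiveness is above the median would nevertheless land in the bottom quintile of observed ratings two years in a row, purely from noise:

```python
import random

def simulate_two_year_flags(n_teachers=20000, r=0.25, cutoff_pct=0.2, seed=7):
    """Simulate value-added ratings in which a stable teacher component
    explains only r of the variance, giving a year-to-year correlation
    of r; the rest is noise. Returns the share of truly above-median
    teachers who still fall in the bottom cutoff_pct both years."""
    rng = random.Random(seed)
    noise_sd = ((1 - r) / r) ** 0.5      # makes corr(year1, year2) = r
    true_effects = [rng.gauss(0, 1) for _ in range(n_teachers)]
    year1 = [t + rng.gauss(0, noise_sd) for t in true_effects]
    year2 = [t + rng.gauss(0, noise_sd) for t in true_effects]
    cut1 = sorted(year1)[int(cutoff_pct * n_teachers)]
    cut2 = sorted(year2)[int(cutoff_pct * n_teachers)]
    flagged = sum(1 for t, a, b in zip(true_effects, year1, year2)
                  if t > 0 and a < cut1 and b < cut2)
    return flagged / sum(1 for t in true_effects if t > 0)
```

In runs of this sketch, a small but non-trivial share of genuinely above-average teachers get flagged two years running by noise alone – which, scaled to a large teaching force, is a lot of wrongful dismissals.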

b) Non-random assignment of students

The only fair way to compare teachers’ ability to produce student value-added is to randomly assign all students statewide to all teachers… and then, of course, to have all students live in exactly comparable settings with exactly comparable support structures outside of school, etc., etc., etc. That’s right: we’d have to send all of our teachers and all of our students to a single boarding school somewhere in the state and make absolutely sure that we randomly assigned students – the same number of students – to each and every teacher in the system.

Obviously, that’s not going to happen. Students are not randomly sorted and the fact that they are not has serious consequences for comparing teachers’ ability to produce student value-added. See: http://gsppi.berkeley.edu/faculty/jrothstein/published/rothstein_vam2.pdf
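The non-random assignment problem can be made concrete with a toy simulation – my own illustration, not Rothstein’s design. Two teachers have identical true effects; one is simply handed the more advantaged half of the students, and a naive mean-gain “value-added” measure ranks them differently anyway:

```python
import random

def simulate_sorting_bias(n_students=1000, seed=3):
    """Two teachers with IDENTICAL true effects. Students differ on an
    out-of-school advantage index that independently affects test-score
    gains. Teacher A gets the advantaged half, Teacher B the rest,
    mimicking non-random assignment. Returns each teacher's naive
    value-added (mean class gain)."""
    rng = random.Random(seed)
    true_effect = 0.0                                   # equally effective
    advantage = sorted(rng.gauss(0, 1) for _ in range(n_students))
    half = n_students // 2
    gains_b = [true_effect + 0.5 * a + rng.gauss(0, 0.2)
               for a in advantage[:half]]               # disadvantaged half
    gains_a = [true_effect + 0.5 * a + rng.gauss(0, 0.2)
               for a in advantage[half:]]               # advantaged half
    return sum(gains_a) / half, sum(gains_b) / half
```

The gap between the two “value-added” scores here is entirely an artifact of who was assigned to whom, which is precisely the bias that dismissing Teacher B would act on.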

c) Student manipulation of test results

As she travels the nation on her book tour, Diane Ravitch raises another possibility for how a teacher might find him/herself out of a job through no actual fault of bad teaching. As she puts it, this approach to teacher evaluation puts the teacher’s job directly in the students’ hands. And the students can, if they wish, choose to consciously abuse that responsibility. That is, students could actually choose to bomb the state assessments to get a teacher fired, whether a good teacher or a bad one. This would most certainly raise due process concerns.

d) A whole bunch of other uncontrollable stuff

A recent National Academies report noted:

“A student’s scores may be affected by many factors other than a teacher — his or her motivation, for example, or the amount of parental support — and value-added techniques have not yet found a good way to account for these other elements.”

http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=1278

This report generally urged caution about overemphasizing student value-added test scores in teacher evaluation – especially in high-stakes decisions. Surely, if I were an expert witness testifying on behalf of a teacher who had been wrongly dismissed, I’d be pointing out that the National Academies said that using student assessment data in this way is not a good idea.

Title VII of the Civil Rights Act Challenges

The non-random assignment of students leads to the second likely legal claim that will flood the courts as test-based teacher dismissals begin: claims of racially disparate teacher dismissal under Title VII of the Civil Rights Act of 1964. Given that students are not randomly assigned, that poor and minority – specifically black – students are densely clustered in certain schools and districts, and that black teachers are much more likely to be working in schools with classrooms of low-income black students, it is highly likely that teacher dismissals will occur in a racially disparate pattern. Black teachers of low-income black students will be several times more likely to be dismissed on the basis of poor value-added scores. This is especially true where a state adopts a fixed, rigid requirement that a teacher be de-tenured and/or dismissed if he/she falls below some fixed value-added threshold on state assessments.

So, here’s how this one plays out. For every 1 white teacher dismissed on a value-added basis, 10 or more black teachers are dismissed – relative to the overall proportions of black and white teachers. This gives the black teachers the argument that the policy has a racially disparate effect. No, it doesn’t end there. A policy doesn’t violate Title VII merely because it has a racially disparate effect. That just starts the ball rolling – it gets the argument into court.
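The arithmetic behind a disparate-effect showing is simple to sketch. The numbers below are hypothetical, chosen to match the 10-to-1 illustration above; the function loosely mirrors the “four-fifths rule” comparison of group selection rates that is commonly used as a first screen in disparate impact analysis (it is not a statement of how any particular court would rule):

```python
def dismissal_rate_ratio(dismissed, employed):
    """Compute per-group dismissal rates and the ratio of the highest
    rate to the lowest - a crude first screen for disparate effect.
    `dismissed` and `employed` map group name -> head counts."""
    rates = {g: dismissed[g] / employed[g] for g in employed}
    low, high = min(rates.values()), max(rates.values())
    return rates, high / low

# Hypothetical numbers: 50 of 10,000 white teachers dismissed (0.5%)
# versus 50 of 1,000 black teachers (5%) - a 10-to-1 rate disparity.
rates, ratio = dismissal_rate_ratio(
    {"white": 50, "black": 50},
    {"white": 10000, "black": 1000},
)
```

Note that the comparison has to be of rates, not raw counts: equal numbers of dismissals across groups of very different sizes is exactly what produces the disparity.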

The state gets to defend itself – by claiming that producing value-added test scores is a legitimate part of a teacher’s job and then explaining how the use of those scores is, in fact, neutral with respect to race. It just happens to have the disparate effect. Right? But, as the state would argue, that’s a good thing because it ensures that we can put better teachers in front of these poor minority kids, and get rid of the bad ones.

But the problem is that the significant body of research on non-random assignment of students and its effect on value-added scores indicates that the disparity is not necessarily driven by differences in the actual effectiveness of black versus white teachers. Rather, black teachers are concentrated in poor black schools, and student clustering – not teacher effectiveness – is producing the disparate rates of teacher dismissal. So they weren’t fired because they were measurably ineffective; they were fired because they had classrooms of poor minority students year after year? At the very least, it is statistically problematic to distill one effect from the other! As a result, it’s statistically problematic to argue that the teacher should be dismissed! There is at least an equal likelihood that the teacher is wrongly dismissed as that the teacher is rightly dismissed. I suspect a court might be concerned by this.

Reduction in Force

Note that many of these same concerns apply to all of the recent rhetoric over teacher layoffs and the need to base those layoffs on effectiveness rather than seniority. It all sounds good, until you actually try to go into a school district of any size and identify the 100 “least effective” teachers given the current state of data for teacher evaluation. Simply writing into a reduction in force (RIF) policy a requirement of dismissal based on “effectiveness” does not instantly validate the “effectiveness” measures. And even the best “effectiveness” measures, as discussed above, remain deeply problematic, giving tenured teachers laid off on grounds of ineffectiveness multiple options for legal action.

Additional Concerns

These two legal arguments ignore the fact that school districts and states will have to establish two separate types of contracts for teachers to begin with, since even in the best of statistical cases, only about 1/5 of teachers (those directly responsible for teaching math or reading in grades three through eight) might possibly be evaluated via student test scores (see: https://schoolfinance101.wordpress.com/2009/12/04/pondering-the-usefulness-of-value-added-assessment-of-teachers/)
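The rough arithmetic behind that 1/5 figure can be sketched as follows; the staffing breakdown below is invented purely for illustration and is not drawn from any real district:

```python
# Hypothetical staffing breakdown illustrating why only a small share of
# teachers could plausibly be evaluated with student test-score growth:
# annual state assessments typically cover math and reading in grades 3-8.
# All counts are invented for the example.

staff = {
    "grades 3-8 math/reading":   200,  # the only group with usable growth scores
    "grades K-2 classroom":      150,  # no annual state assessments
    "grades 3-8 other subjects": 120,  # subjects not tested annually
    "high school (9-12)":        350,  # no annual growth measures
    "specialists and support":   180,  # art, music, PE, counselors, etc.
}

total = sum(staff.values())
evaluable = staff["grades 3-8 math/reading"]
share = evaluable / total

print(f"teachers: {total}, evaluable via test scores: {evaluable}")
print(f"share: {share:.0%}")  # roughly the 1/5 noted in the text
```

Any district adopting test-based evaluation would therefore need a second evaluation scheme (and arguably a second contract form) covering the other four-fifths of its teachers.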

I’ve written previously about the technical concerns over value-added assessment of teachers and my concern that pundits are seemingly completely ignorant of the statistical issues. I’m also baffled that few others in the current policy discussion seem even remotely aware of just how few teachers might – in the best possible case – be evaluated via student test scores, or of the need for separate contracts. But I am perhaps most perplexed that no one seems to be acknowledging the massive legal mess likely to ensue when (or if) these poorly conceived policies are put into action.

I’ll save for another day the discussion of just who will be waiting in line to fill those teaching vacancies created by rigid use of test scores for disproportionately dismissing teachers in poor urban schools. Will they, on average, be better or perhaps worse than those displaced before them? Just who will wait in this line to be unfairly judged?

For a related article on the use of certification exams for credentialing teachers, see:

Green, P.C., & Sireci, S.G. (2005). Legal and Psychometric Criteria for Evaluating Teacher Certification Tests. Educational Measurement: Issues and Practice, 19(1), 22-31.

Research, Schmresearch – CAP’s misguided analysis… AGAIN!

The Center for American Progress has just released a new report titled Comparable, Schmomparable, which argues that within-district disparities are the major equity problem of the day. As I have noted previously, I agree that within-district inequities in schooling resources, including teacher quality, are a concern – a major concern.

However, to ignore and brush aside disparities between districts is absurd.

The Schmreport’s author, Raegen Miller, argues:

State funding formulas tend to exert an equalizing effect on per pupil revenues between districts, on average, and not by accident. These formulas were sculpted by two generations of litigation and legislation seeking equitable or adequate funding for property-poor school districts.

Sculpted they were? By litigation? With the consistent (they “tend to”) and persistent effect of resolving the vast majority of between-district funding disparities? Interestingly, the Schmreport’s author, Raegen Miller, attributes these claims to a book by Eric Hanushek and Al Lindseth whose singular objective was to argue that school funding litigation and school finance reform are invariably ineffective. A critique of the book’s arguments and a review of school finance litigation and its effects can be found here.

Co-author Kevin Welner and I provide a smack-down of the Center for American Progress (and Education Trust) argument that between-district inequities are a thing of the past – solved by years of litigation – here:

  • Baker, B. D., & Welner, K. G. (2010). Premature celebrations: The persistence of interdistrict funding disparities. Education Policy Analysis Archives, 18(9). Retrieved [date] from http://epaa.asu.edu/ojs/article/view/718

In the new Schmreport, CAP’s Raegen Miller seems to take the rhetoric to a new level. The recent Education Trust “Loophole” report took a similar approach, making similar arguments, but I found the language and the disturbingly thin research base in the new CAP report particularly troublesome. Also odd is the inference that school finance reforms addressing between-district disparities have been ineffective (by way of citing Hanushek and Lindseth as the primary source chronicling state school finance reforms), but that leveraging Title I funding to force school districts to resolve internal disparities is the final frontier of reform. If between-district disparities and state school finance reform don’t matter (as per Hanushek and Lindseth), should we really care about within-district disparities? Certainly we should care about both.

After declaring that state funding formulas “tend to” fix those little between-district funding equity problems, the report goes on to point out that New Jersey has achieved a strong positive relationship between state and local revenues and child poverty across districts, and does acknowledge that other states have not – using Connecticut as an example. Strangely, while Connecticut’s funding distribution is problematic, it is not one of those strongly “regressive” states. [There are plenty of those. There are also those states which essentially spend nothing! and those where fewer than 80% of school aged children even use the public school system. We call those states RttT winners.] Rather, Connecticut suffers a strange randomness in school funding across districts. The report goes on to point out that even if Connecticut were to preemptively (before the pending case goes to trial) solve between-district funding problems, within-district funding problems could thwart any chance of actually solving the state’s equity problems. At the very least, if you’re going to make a claim about a state, take a few minutes to check that the data on the state you pick are at least somewhat consistent with your claim. This brief, flyover section of the report revealed to me a peculiar disregard for precision or accuracy.

I discuss the sorting of children across districts in Connecticut here, pointing out that between-district concerns are by far the dominant issue in that state, merely as a function of the patterns of student segregation across districts (not within them!).

After brushing aside the possibility that between-district disparities remain a major concern, the CAP Schmresearch Schmreport goes on to say:

Scandalous inequity in the distribution of resources within school districts has plagued U.S. education for more than a hundred years.

Indeed… scandalous, but those disparities between school districts in states like Illinois could never reach the height of scandalous? I also found no citation to the hundred-year-old studies that document these disparities, but now I’m just being picky.

The Schmresearch Schmreport also argues in its introduction that:

“empirical literature documenting the extent of within-district inequity is astonishingly thin.”

Yet, as Kevin Welner and I point out, there is actually quite a large volume of research on within-district disparities in schooling resources, including within-district disparities in teaching quality – and much of the “research” CAP relies on to construct its argument is empirically problematic (to be kind). There is also a significant body of good empirical research on the topic, notably absent from the Schmresearch Schmreport (which cites a few – very few – good studies on teacher quality distribution).

Without a doubt, there exist some truly problematic – perhaps even scandalous – disparities in resources across schools within school districts. It is quite possible that Title I funding could be better leveraged to encourage districts to do a better job of improving equity across schools within districts. But it is completely irresponsible and outright ignorant to suggest that between-district disparities have already been largely resolved.

Further, it is completely unnecessary to frame the argument in this way, unless CAP has a political motive to blame districts, not states, and to argue that no more money is actually needed. (Forcing districts to re-arrange deck chairs without solving between-district disparities can be a “revenue neutral” solution – though not much of a real solution.) And yes, states can do a better job of providing data systems that allow more precise tracking of within-district inequities. But, I should note that many already do a much better job than Raegen Miller suggests. A wealth of information can be found in statewide personnel data systems which link individual teachers to schools.

This is not an either/or issue. It’s not about solving within-district disparities because between-district disparities are solved – been there, done that. Wrong. Both are persistent problems, more so in some places than others, and it’s worth the time and effort to leverage all available policy options to fix both within- and between-district disparities.

Manhattan Institute Study Provides Bogus Interpretation of Massachusetts Prop 2 ½

Printer friendly draft: Policy Brief – Tax Limits in Massachusetts

Media reports this week touted a Manhattan Institute Study supported by the Common Sense Institute of New Jersey. The “study” (using the term loosely) can be found here:

http://www.manhattan-institute.org/html/cr_62.htm#05

The “study” provides no reference to the vast body of peer-reviewed literature in high quality journals that has addressed the effects of tax and expenditure limits on public expenditures and consequences for public sector service quality. Instead, the “study” pretends to provide definitive evidence, on its own, to a question the authors apparently assume has not already been addressed by more competent researchers. Sadly for them but luckily for the general public, this question has been addressed many times over, in high quality, peer-reviewed publications.

Below are some findings from empirical research studies on the relationship between tax limits and school quality. Perhaps the most notable examples of the effects of TELs (Tax and Expenditure Limits) are in Colorado (under its TABOR) and California (Prop 13). But TELs have been implemented in a number of forms across states, including Massachusetts’ Proposition 2 ½.

This list of research findings is only a start, but illustrates an important point that choosing to limit taxes and expenditures likely means choosing to reduce service quality – increase class sizes and reduce teacher quality in particular. Again, that’s a choice. But we should be well aware of the consequences of these choices.

Other literature also suggests that while TELs reduce service quality by constraining state and local budgets, those service quality reductions are not necessarily accompanied by increased economic growth. This may be because regional economic growth is as related to regional service quality as it is to regional tax environment.

David Figlio, in a study of Oregon’s Measure 5 (National Tax Journal, 51(1), March 1998, pp. 55-70), finds that “Oregon student-teacher ratios have increased significantly as a result of the state’s tax limitation.”

David Figlio and Kim Rueben, in the Journal of Public Economics (April 2001, pp. 49-71), find: “Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits.”

In a non-peer reviewed, but high quality working paper, Thomas Downes and David Figlio “find compelling evidence that the imposition of tax or expenditure limits on local governments in a state results in a significant reduction in mean student performance on standardized tests of mathematics skills.” (http://ase.tufts.edu/econ/papers/9805.pdf)

Context also matters. The effects of tax and expenditure limits may differ if implemented during bad rather than good economic times. Andy Reschovsky, in a 2004 article in State and Local Government Review (volume 36, pp. 86-102) suggests that the existence of fiscal constraints created by tax limitations could serve to exacerbate the impact of downturns on education spending, both by limiting the ability of localities to respond to state aid cuts and by shifting local revenue away from a stable source, the property tax, to less stable sources.

Of particular interest in New Jersey are the effects of Massachusetts Proposition 2 ½ implemented in 1980. A handful of studies have explored various aspects of that particular property tax limit which included an option for local communities to override the cap.

Katherine Bradbury and colleagues, in a 1998 article in the New England Economic Review (July/August issue, pp. 3-20), point out several interesting direct and indirect effects of Proposition 2 ½ in Massachusetts. First, they find that the share of the potential student population served by the public schools is lower in districts in which more initial cuts were necessary when the limits were first imposed. This result suggests that the limits could increase dropout rates or could result in students switching from the public to the private sector. Second, they find that Proposition 2 ½ made constrained communities relatively less attractive to families with children, both in the early 1980s and the early 1990s. Bradbury and colleagues note that the distortive effects of the property tax limits on the mobility of families into and out of different municipalities and school districts were “troubling.”

Finally, in a policy brief from the Center on Budget and Policy Priorities [http://www.cbpp.org/archiveSite/5-21-08sfp.pdf] (which might be characterized as left leaning), an admittedly non-peer reviewed source, Phil Oliff and Iris Lav provide some useful insights regarding the transferability of Massachusetts reforms in the 1980s to New Jersey now. Among other things, Oliff and Lav point out that tax caps can be particularly harmful if implemented in a weak economy, noting that “Proposition 2½ took effect during a period of extraordinary economic growth — the ‘Massachusetts Miracle.’ State revenues were rising, which allowed the state to boost aid to compensate for constrained property taxes, and construction was expanding, which allowed communities to raise their property tax revenue by more than 2.5 percent per year.” This is entirely consistent with the findings of Andy Reschovsky noted above. Further, they note that “The adoption of Proposition 2 ½ coincided with a decline in Massachusetts’ K-12 enrollment, allowing schools to operate with less revenue.” Neither of these conditions exists in present-day New Jersey.

Comparing Massachusetts and New Jersey Education Systems

So then, how does Massachusetts accomplish this dramatic rise to the top in public education while spending so much less than New Jersey? And why didn’t the tax and expenditure limits impede the Massachusetts miracle?

Let’s clear up a few basic factual errors here. First, the expenditure differences between Massachusetts and New Jersey are nowhere near as large as implied by the Manhattan Study. Here’s what per pupil spending looks like between the two states with and without adjustments for regional cost variation.

And here’s what the performance differences look like in 2007, in relation to the funding differences. Yes, Massachusetts is slightly outperforming New Jersey (more so on 8th grade math) while spending marginally less. But there is hardly a striking contrast in either the spending or the performance measures, and there is a plethora of potential factors that might explain what differences do exist.

So, how did those tax limits make Massachusetts schools so darn good? Well, maybe they didn’t. Let’s put the timeline together a little better here. The tax limits were imposed around 1980. Massachusetts schools saw significant improvements beginning in 1993. Is it possible that some other intervening factor played a role in this 13-year period?

Well, as it turns out, as towns faced fiscal constraints under Prop 2 ½ during the 1980s and as many poorer towns suffered from lagging local revenues and insufficient state support, lawsuits concerning the equity and  adequacy of Massachusetts school funding were filed. Schoolfunding.info provides this summary:

“In the Massachusetts education finance case, McDuffy v. Secretary (1993), Massachusetts students claimed that their own less affluent school districts were unable to provide them with an “adequate” education. Based on an analysis of the Massachusetts Constitution’s “Encouragement of Literature” clause, the Supreme Judicial Court concluded that the Commonwealth has an obligation to educate all of its children and held that children in less affluent communities “are not receiving their constitutional entitlement of education as intended and mandated by the framers of the Constitution.” Moreover, the court adopted the guidelines set forth by the Supreme Court of Kentucky in Rose v. Council for Better Education to define the standard of education that the Commonwealth must provide.

At about the same time that the court issued its McDuffy decision, the legislature passed and the governor signed the Education Reform Act (ERA) of 1993, which established a “foundation budget” for each school district to be phased in over seven years.”

Wait… so, in fact, Massachusetts improved its school system, in terms of both equity and adequacy, as a result of legislation enacted in 1993 in response to a court order to fix a system that had fallen into disarray during the decade following Prop 2 ½! That’s not the story told by the Manhattan study. They imply that Prop 2 ½ itself led to improved efficiency of Massachusetts schools and that decreased spending growth coupled with accountability pressures created the improvement. The Manhattan Institute study makes absolutely no mention of the court ruling and mentions the reforms that followed only as a minor addendum.

So, what do we actually learn from Massachusetts and Proposition 2 1/2 ?

  • First, if you’re going to impose property tax limits at all, it should be done during a strong not weak economy.
  • Second, the state must be prepared to offset losses to property tax revenue with increased state aid, but this makes public school funding even more susceptible to future economic downturn.
  • Third, the public should be prepared for and acknowledge the risk that service quality will decline. Class sizes will likely increase. Teacher quality may decline and student outcomes may follow.
  • Fourth, judicial intervention may be required to straighten out the resultant mess in the end, to ensure that all children continue to have access to equitable and adequate educational opportunities.

That’s a somewhat different story than provided by the Manhattan Institute.

For more information regarding school finance reforms in Massachusetts and New Jersey, read: DoReformsMatter_Formatted

NJOSA & the Lakewood Effect

UPDATE: As I understand it, NJOSA has now been revised to specifically target Lakewood as a pilot site for the vouchers (1 of 8 locations). Consider the analysis below in that light. This revision potentially allows for a greater share of overall NJOSA funding to flow specifically to Lakewood students. Further, this revision raises fun/interesting legal questions.

Yes, it is true that the Zelman case found that the Cleveland voucher program did not violate the Establishment Clause of the First Amendment by providing vouchers to children who in large numbers chose to attend religious schools. The court acknowledged that the voucher policy itself was neutral with respect to religion. However, not long before Zelman, in Kiryas Joel (NY), the court found that the State of New York had violated the Establishment Clause when it singled out Kiryas Joel Village in a statute altering the boundaries of local public school districts to specifically serve the exclusively religious community.

So, for example, if the State of New York were to decide to operate a voucher program, and specifically pilot test that program in Kiryas Joel Village, would such a policy be considered religion-neutral, as under Zelman? Or might that policy violate the Establishment Clause because of the exclusively religious community selected (Kiryas Joel)? Clearly Cleveland (and its private school sector) is far more diverse than Kiryas Joel, providing at least some argument in favor of neutrality. Now, Lakewood is less homogeneous than Kiryas Joel, but it is clearly more like Kiryas Joel than like Cleveland. Thoughts from my legal scholar friends? (Consider the demographic & private school sector analysis below – written in response to a different iteration of NJOSA.)


SLIDES: Lakewood Effect Slides

The New Jersey Opportunity Scholarship Act (NJOSA) is being proposed on the basis that $6,000 vouchers for children in grades K-8 and $9,000 vouchers for children in grades 9-12 would a) provide much-needed financial relief for financially ailing urban Catholic schools and b) provide poor and minority children the opportunity to escape chronically under-performing, poor urban New Jersey schools. Implicit in the second claim is that the primary beneficiaries of the voucher program – aside from urban Catholic schools – would be poor and minority children attending so-called “failing” schools in the state’s major urban centers, such as Camden, Newark, Paterson or Jersey City.

I have previously disposed of the first claim – that these vouchers would help financially sustain private schools in New Jersey – here: https://schoolfinance101.wordpress.com/2010/03/23/would-8000-scholarships-help-sustain-nj-private-schools/. A 2009 report reached the same conclusion, finding that Milwaukee vouchers are not yielding big financial benefits for the city’s Catholic schools, even where the voucher level exceeded $6,300 in 2005-06 (see: http://www.edexcellence.net/doc/catholic_schools_08.pdf).

But let’s take a closer look at who will really benefit if the New Jersey voucher proposal becomes law. Where are most children in New Jersey already in private school? More specifically, where are most New Jersey children already in private school with a family income low enough (below 250% of the poverty level) to qualify for the NJOSA vouchers? It may seem highly unlikely that many low-income children would already be attending private schools. Indeed, NJOSA is not “intended” to underwrite the education of children already in private schools, but rather to assist children “trapped” in low-performing public schools to attend nearby private schools. But NJOSA includes from the outset a provision that would allow low-income families whose children already attend private schools to access the vouchers, setting aside at least 25% of the total available voucher resources in any given year for such children (as I understand it).

First, where are the largest numbers of private school children in New Jersey?

As it turns out, the largest total number of privately schooled children is indeed in the Newark area, and the second largest in Bergen-Passaic (combined). The third largest total is in the Lakewood (Ocean County) area. However, when one looks only at those children who would qualify for the NJOSA vouchers, the largest total number is in Lakewood – over 13,000 in 2008. Lakewood has more than Newark, more than Jersey City, and more than anywhere else in New Jersey, despite having a much smaller total population. Here are the data:

The next data we look at are from Public Use Microdata Areas, or PUMAs – a construct of the U.S. Census – which are smaller than metropolitan areas and allow for a relatively precise focus on areas such as Lakewood. (PUMAs have populations of at least 100,000.)

Because the Newark metropolitan area has far more than 100,000 residents, Newark would be carved into multiple PUMAs, whereas Lakewood would fall within one. But, the total populations of the areas would be more similar than comparing metro areas like those above.

Ranking PUMAs by private school enrollment, we find that the largest total private school enrollment – BY FAR – occurs in Lakewood’s PUMA, with about 17,000 children between 5 and 18 in private schools. About 8% of privately schooled children statewide are in this PUMA. More striking, however, are the numbers of privately schooled children who would qualify for the NJOSA vouchers. In Lakewood’s PUMA, that number exceeds 10,000 children. The next highest PUMA has fewer than 1,500 who would qualify. Bottom line: Lakewood’s PUMA has over 20% of the state’s poor children who currently attend private schools.

The following tables present the data:

But this is only the beginning of the Lakewood story.   As it turns out, the vast majority of children in Lakewood are in private, not public schools, which occurs nowhere else (to this extreme) in the state of New Jersey.

And, as it turns out, Lakewood’s public schools are currently “majority minority” (Black and Hispanic students):

And there are many, many private schools in Lakewood. The vast majority of those schools, and the students who attend them, are Orthodox Jewish schools which are almost invariably 100% homogeneous – listed as “white” – in sharp contrast with the majority-minority makeup of the township’s public school system.

As it turns out, the total number of children attending the Jewish schools in Lakewood is approximately the same as the total number of low-income (<250% poverty level) children attending private schools in Lakewood, or about 11,000. Needless to say, there is likely significant overlap between the low-income children in Lakewood’s private schools and the children in Lakewood’s Orthodox schools.

Note that 10,395 of the reported 10,470 (99.2%) poor, private schooled children in Lakewood PUMA were listed as “white” in the Census American Community Survey data from 2008. Only Lakewood’s Jewish schools match that demographic (in that geographic area).  And they are nearly (if not entirely) all identified as low-income.

So who cares? Why does this matter? What are the implications? Well, as it turns out, this intriguing distribution of low-income privately schooled children would potentially qualify Lakewood’s Orthodox Jewish schools for a sizeable revenue windfall from the NJOSA voucher program. It may not necessarily be enough to cover the actual “costs” of operating these schools, but the publicly funded vouchers might go a long way when combined with other resources.

Setting aside phase-in limits on the amount of total voucher funding under NJOSA, if all future children attending Lakewood Orthodox schools received vouchers each year, and if those schools maintained roughly the same enrollment as in 2008, the public revenue provided through vouchers could generate over $60 million per year for these schools.
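As a rough check on that figure, here is a back-of-the-envelope calculation. The voucher amounts and the approximate enrollment come from the post itself; the 80/20 split between K-8 and 9-12 enrollment is my own assumption, made only for illustration:

```python
# Back-of-the-envelope check of the "over $60 million per year" estimate.
# Voucher amounts are from the NJOSA proposal as described in the post;
# the 80/20 grade-level split is an assumption for illustration only.

K8_VOUCHER = 6_000       # proposed K-8 voucher amount
HS_VOUCHER = 9_000       # proposed grades 9-12 voucher amount
ENROLLMENT = 11_000      # approx. Lakewood Orthodox school enrollment (2008)

k8_students = int(ENROLLMENT * 0.80)      # assumed split: 80% in K-8
hs_students = ENROLLMENT - k8_students    # remaining 20% in grades 9-12

annual_revenue = k8_students * K8_VOUCHER + hs_students * HS_VOUCHER
print(f"estimated annual voucher revenue: ${annual_revenue:,}")

# Even if every student received only the smaller K-8 voucher, the total
# (11,000 x $6,000 = $66,000,000) would still exceed $60 million per year.
```

Under any plausible grade-level split, the arithmetic lands comfortably above the $60 million figure cited above.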

Let’s sum up. One of, if not the, biggest beneficiaries of NJOSA is not a) the children trapped in poor urban (Newark, Camden, Jersey City) schools, or b) cash-strapped urban Catholic schools (which lack sufficient other private contribution support to keep afloat), but rather the highly racially and religiously segregated Lakewood Orthodox Jewish community and its schools. They constitute the largest number – by far – of “income qualified” children currently enrolled in private school in the state.

This is not quite the narrative about the NJOSA voucher proposal that I’ve been hearing.

NJ Opportunity Scholarship: Must Read Items

Very little time today. Big deadlines and lots of data to analyze. Since the debate is now heating up over the NJ Opportunity Scholarship Program, I thought I’d put out there a few items which really should be part of the debate on this topic.

1) The April 2010 report on the long run effectiveness of the Milwaukee Voucher Program: http://www.uark.edu/ua/der/SCDP/Milwaukee_Eval/Report_15.pdf This report concludes:

The primary finding in all these comparisons is that, in general, there are few statistically significant differences between levels of MPCP and MPS student achievement growth in either math or reading two years after they were carefully matched to each other. In one of the ways of estimating these results, focusing only on those students who have remained in the public or private sector for all three years, private, voucher students are slightly behind MPS students in mathematics achievement growth.

2) My Summer 2009 report on the “cost” and supply of private schooling: http://epicpolicy.org/files/PB-Baker-PvtFinance.pdf It is important to understand that my point in this report was NOT that private schools are either more or less expensive than public schools in the same labor market. They are simply more varied. They are more varied in what they spend, what they provide and what they can achieve. With private schools, you get what you pay for.

I write about the specifics of the New Jersey context here: https://schoolfinance101.wordpress.com/2010/03/23/would-8000-scholarships-help-sustain-nj-private-schools/, pointing out that claims that average private school costs in NJ are $6,000 (elem) and $9,000 (secondary) are entirely unfounded.

Here, I provide a quick snapshot of cost/quality issues in private schooling in response to other recent media reports: https://schoolfinance101.wordpress.com/2010/02/20/stossel-coulson-misinformation-on-private-vs-public-school-costs/

The premise that children will be saved from failing public schools with these paltry payoffs to low-end private schools is a stretch at best. Good private schools are expensive, and often more expensive than even the highest spending nearby public schools. The Milwaukee studies provide useful insights as well, showing little or no effect after much more than a trial period.