Blog

Connecting some Teacher Quality, Leadership & Ed School Dots

A frequent, but much debated, conclusion from teacher quality research is that teachers' own academic ability, measured by test scores or, even more bluntly, by the "competitiveness" of the colleges teachers attended as undergraduates, is associated with student outcomes. This holds even when we use classifications as crude as the Barron's Guide rating system. In a recent, methodologically exceptional piece, Boyd, Lankford, Loeb and Wyckoff found:

“Furthermore, almost half of the teachers in the most effective quintile (based on student outcomes) graduated from a college ranked competitive or higher by Barron’s, compared to only ten percent of the teachers in the least effective quintile.”(p. 23)

http://www.teacherpolicyresearch.org/portals/1/pdfs/Matching_of_Public_School_Teachers_to_Jobs.pdf

Okay, for some this may hurt, and it smacks of elitism. But it is nonetheless a strong and relatively consistent finding, and one to which we should likely give some attention.

This finding is highly relevant to Arne Duncan's talk yesterday at Teachers College, Columbia University, where he took aim at the role of university-based preparation programs in education. Notably, Duncan referred to related work by these very authors, but did not mention this finding. In an effort to be egalitarian, Duncan promoted the virtues of institutions like Teachers College but also those like Emporia State in Kansas.

As I noted in my previous post, one thing we know is that the majority of teachers come through relatively non-competitive undergraduate colleges (on a six-point rating system running from non-competitive through less competitive, competitive, very competitive, highly competitive, and most competitive). On average, public school teachers come from the less competitive and competitive categories (about 2/3 of all teachers fall in these two categories alone) far more so than from the highly and most competitive categories (about 6.5%). So do most college students generally. That's just the way the higher education system is distributed.

Teachers also come in large numbers – 42% – from colleges classified as Comprehensive I under the 1994 Carnegie classification, and increasingly from less selective liberal arts colleges (Carnegie 1994 Liberal Arts II colleges). These 1994 Carnegie classifications are somewhat associated with Barron's ratings. In short, the system of teacher education, nationwide, is not set up to produce large numbers of teachers who have the attributes that the authors above find to be associated with higher student outcomes. A teacher preparation or administrator preparation program is only as good as its students.

Other authors have argued that the dominance of non-selective colleges in preparing teachers, and the higher costs of pursuing teaching through more selective colleges, create a disincentive for academically strong high school students to pursue teaching. Add relatively low salaries, and the problem is exacerbated. Perhaps it's the PIPELINE and the system as a whole, not so much the individual institutions and prep programs, that need reforming. Perhaps we need incentives to encourage academically talented students to pursue teaching and incentives to encourage the "highly and most competitive colleges" to get in the game of teacher preparation. At the same time, we may need to make some tough policy decisions about academically weak undergraduate and graduate institutions which have increased their role over time.

An interesting twist related to teacher academic preparation is that principals with stronger academic backgrounds seem more likely to recruit and retain teachers with stronger academic backgrounds (http://eaq.sagepub.com/cgi/content/abstract/41/3/449). So, we've got to find some way to get stronger principals into the schools where they are needed most, and make sure there is a supply of stronger teachers produced through a better pipeline, from which those principals can build strong teams.

One problem here is that rather than becoming more concentrated in strong academic institutions over time, educational administration programs have become more distributed across more diverse… and quite honestly academically weaker institutions. For example, between 1993 and 2003, comprehensive colleges went from producing about 3% of education leadership doctorates to about 25% (http://eaq.sagepub.com/cgi/content/abstract/43/3/279).

So why does that matter? How does this finding relate in any way to the fact that principals with stronger academic backgrounds (measured crudely by Barron's ratings of undergrad colleges) are more likely to hire teachers with stronger backgrounds, and that those teachers are shown to make a difference? One might assume there is no relationship between graduate preparation and undergraduate preparation, and that we should therefore be unconcerned that comprehensive colleges are the ones producing the doctorates. Well, again the dots connect logically. As it turns out, we show in the same article above that about 22% to 25% of doctoral recipients from Top 20 ranked (US News, of all things) education schools, and 13% to 15% of doctoral recipients at all research universities, attended highly or most selective undergraduate colleges, compared to only 5% to 10% of doctoral recipients at comprehensive colleges.

Yes, these are relatively harsh and elitist realities. And yes, I am implying that having a strong academic background is likely an important attribute for someone who wishes to lead an educational institution. That seems to make sense.

In his speech yesterday, Arne Duncan invoked the usual comparison to medical training:

http://www.ed.gov/news/speeches/2009/10/10222009.html

The point, of course, was to emphasize the importance of clinical training. But let us not forget that the medical model relies on two critical prerequisites to clinical training: 1) highly selective entrance criteria, and 2) successful completion of a rigorous undergraduate program plus two years of rigorous front-loading of basic sciences and other relevant curriculum. Without academically strong candidates to begin with, the model fails. Without rigorous up-front information uploading, and students who can handle it, the model fails. The medical model is equally reliant on all of its parts, not just the clinical training.

Just connecting some dots here. Cheers.

Ed Schools as Cash Cows in the University

Secretary Duncan is again on the stump today, at Teachers College (which I attended), where he is expected to make the case that education schools are "cash cows" of the university, generating large sums of tuition revenue which are then diverted to other parts of the university.

http://www.ed.gov/news/speeches/2009/10/10222009.html

This proposition is hardly new, and appears to come from the pages of past TC president Arthur Levine's report on ed schools a few years back.

http://elan.wallacefoundation.org/SiteCollectionDocuments/WF/ELAN/2007%20Second%20Half/EducatingSchoolLeaders.pdf

(This is the one on preparing school leaders; there was also one on teacher education.)

At the time, my colleagues and I were intrigued by a number of the assertions being made and engaged in a series of research projects trying to untangle the "realities," though we did not specifically explore the cash cow notion. But our research from that time does have a few facts to offer with respect to the cash cow argument, as well as important general context.

First of all, who is producing the teachers and administrators? One implication of the current rhetoric is that major universities – state flagships and major private universities offering a diverse array of undergraduate and graduate programs – are producing large shares of all teachers. As it turns out, major research universities actually produced about 13% to 15% of the teachers working in public schools in 2003-04 and 2007-08. Some 42% of public school teachers received their undergraduate training at regional comprehensive colleges, many of which were the former "normal schools" or "teachers colleges." In many of these schools, education is the dominant major, perhaps helping to sustain the institution, but with only a minority of other program areas positioned to draw on education tuition dollars – except through the role that liberal arts and science departments play in providing undergraduate credit hours to teachers in their content areas. But this is revenue received for credits delivered – not a redistribution of profit margin, per se.

The role of education schools in major research universities is potentially more interesting, but again, ed schools in research universities produce a relatively small share of all teachers and that share appears to be declining. The same is true of graduate degrees in educational administration. In the early 1990s, regional comprehensive colleges produced about 3% of doctorates in educational administration, and now produce about 25% or more (as of 2003). That is, graduate degrees in educational administration are being increasingly produced by institutions whose primary goal is to produce educators and education related professionals. So, from these perspectives, it’s getting harder to see how ed schools or programs are substantially subsidizing other schools or programs within universities, when increasingly, the production of educators and educational leaders is being concentrated in schools focused on education.

I'm unsure whether there's other evidence to contradict this pattern; I've not studied it for a few years. There is some evidence that small, cash-strapped, formerly undergraduate-only liberal arts colleges have expanded delivery of online certificate programs, including administrative master's degrees, but they are hardly major producers yet. In fact, they've expanded production in areas such as MBAs even more so than teacher and administrator education.

Now, on to the basic premise laid out by Levine and echoed now by Duncan: that ed school tuition dollars subsidize the rest of the university. This could be the case if tuition per credit hour were constant across all students in all units of the university and if the average cost of providing a credit hour were lower for education students than for other students in the university. One might imagine this to be the case if we assume that education faculty are simply less well paid than, for example, engineering, business or economics faculty, and that ed school classes are large. Actually, the bigger driver of cost per credit hour produced is the class size piece.

A few years back Chris Morphew and I did an analysis of data from the National Survey of Postsecondary Faculty, estimating wage models and models of "cost per credit hour" by the field in which those credit hours were delivered. We accounted for the relative salaries of similar-rank faculty, the share of salary attributable to teaching, and the average undergraduate class sizes and teaching loads of faculty. We actually found that ed school credit hour costs were about average – comparable to business, for that matter. While b-school salaries were higher, ed school class sizes were smaller, on average, across the full range of undergrad courses. Next, we linked our credit hour cost estimates to course-taking data on students in different majors to come up with estimates of the relative cost of producing an ed major versus an econ major, etc., based on the full mix of courses students take across units in a university and the relative price of those units. Again, the cost of producing an ed major was relatively average – not low.
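To make that two-step logic concrete, here is a minimal sketch in Python. Every number and field label below is invented purely for illustration; none of it comes from the actual NSOPF data or our published estimates.

```python
# Stylized sketch of the two-step logic above, with invented numbers:
# (1) cost per undergrad credit hour by field,
# (2) cost of producing a major from the mix of credit hours taken.

fields = {
    # field: (avg faculty salary, share of salary to teaching,
    #         undergrad credit hours delivered per faculty FTE)
    "education": (70_000, 0.60, 700),
    "business":  (105_000, 0.60, 1_100),
    "economics": (95_000, 0.55, 1_000),
}

cost_per_credit = {
    field: salary * teach_share / credits
    for field, (salary, teach_share, credits) in fields.items()
}

# Partial credit-hour mixes for two hypothetical majors
# (credits taken in other fields omitted for brevity).
course_mix = {
    "ed major":   {"education": 60, "economics": 12},
    "econ major": {"business": 18, "economics": 54},
}

for major, mix in course_mix.items():
    cost = sum(hours * cost_per_credit[f] for f, hours in mix.items())
    print(f"{major}: ${cost:,.0f} in modeled instructional cost")
```

Note that in this toy setup the higher business salaries are offset by larger credit-hour loads (bigger classes), which is exactly why per-credit costs can come out comparable.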

Now, there are factors we could not and did not consider with our limited data – including the share of undergrad credits delivered by teaching assistants, and whether that rate is significantly higher or lower for ed schools. We also were unable to generate estimates of other "overhead" costs, such as equipment that might be necessary in engineering or the sciences, but this would hardly seem to compromise comparisons between ed schools and other areas, such as the social sciences, that would seemingly have comparable non-faculty expenses.

That said, I’d be curious as to what other evidence is now out there to support, or refute this assertion that Duncan is now making. Here’s my preliminary reading list for anyone interested:
Morphew, C., Baker, B.D. (2007) On the Utility of National Data for Estimating Generalizable Price and Cost Indices in Higher Education. Journal of Education Finance 33 (1) 20-49

Baker, B.D, Orr, M.T., Young, M.D. (2007) Academic Drift, Institutional Production and Professional Distribution of Graduate Degrees in Educational Administration. Educational Administration Quarterly 43 (3)  279-318

Baker, B.D., Wolf-Wendel, L.E., Twombly, S.B. (2007) Exploring the Faculty Pipeline in Educational Administration: Evidence from the Survey of Earned Doctorates 1990 to 2000. Educational Administration Quarterly 43 (2) 189-220

Wolf-Wendel, L, Baker, B.D., Twombly, S., Tollefson, N., & Mahlios, M.  (2006) Who’s Teaching the Teachers? Evidence from the National Survey of Postsecondary Faculty and Survey of Earned Doctorates.  American Journal of Education 112 (2) 273-300

A few quick NJ Charter School Facts & Figures

AFTER READING THIS, PLEASE SEE CORRECTIONS AT: https://schoolfinance101.wordpress.com/2009/11/04/charter-averages-worse-than-originally-estimated/

If one watches the trailer clips from the Cartel movie on two highly successful New Jersey charter schools, one might be misled into believing that charter schools are simply, uniformly, freakin' awesome. They can do no wrong. They are clearly the answer to all of our problems in urban schooling in New Jersey. Indeed, there is some, if not much, solid empirical research literature out there which finds favorable results for charter schools, and much which finds that charters, on average, are pretty much a break-even option.

For an exceptional review of charter school research, I would recommend Robert Bifulco and Katrina Bulkley's chapter on charter schools in the Handbook of Research on Education Finance and Policy. Neither of these scholars is a charter school naysayer, yet they conclude:

"Research to date provides little evidence that the benefits envisioned in the original conceptions of charter schools – organizational and educational innovation, improved student achievement, and enhanced efficiency – have materialized."

I am also not a Charter school naysayer, having written in my own previous work that leaders of charter schools seem more likely to recruit or select teachers with stronger academic credentials than traditional public schools in the same state. But, I’m also a realist when I look at data on charter schools, their students and their outcomes.

For starters, let's look at how New Jersey charter schools begin with a public subsidy disadvantage – which may explain some of the mixed results that follow. Current expenditures from NJDOE annual financial reports through 2005 show charters spending less than many districts, organized by district factor group (A being generally poor urban districts, through I & J, relatively affluent suburbs).

Per Pupil Spending by DFG

In many parts of the country, Charter schools make up for this difference with private fund raising. In fact, most infrastructure costs are covered by such fund raising especially where states fail to provide any facilities support to charter schools. A few years back, I was able to compile the tax returns of the non-profits that support Washington DC charters to show that they received, on average, 14% of their revenue through private contributions. I ran an extract the other day of New Jersey Charter school IRS 990 forms, but few reported their data. Still working on that.

Now, on to the raw outcomes of charter schools in New Jersey based on 2008 assessments. Again, based on the cherry-picking in the Cartel movie, one would think that all charters in NJ are kicking butt like North Star Academy. However, prior charter research and the logic of deregulation lead to the more realistic assumption that some do well, some not so well, and that on average there may be little difference (if the system – either the "market" or the accountability system – does not shut down those that do poorly). Under less regulation, one would simply expect more dispersion. Higher highs perhaps, but also lower lows.

Here's a quick rundown. I begin with the "averages" by grade level and by district factor group. Here's the % proficient or advanced by DFG, with charters labeled "R."

Charters labeled "R"

Charter schools, most though not all of which serve relatively poor student populations, hang right down there, across grade levels with DFG A and B poor schools – especially at both the beginning and end grades. Charters look little different when viewing only those who score advanced and higher.

% Advanced 2008

Okay, so these are the averages, which conceal the really fun and interesting variations and drag down the superstars. Here are the 3rd grade assessment data for two groups of schools – those in District Factor Group A and charters. Schools are sorted by poverty. DFG A (poor traditional publics) are blue circles, and charters are hollow red diamonds.

Red Diamonds are Charters, Others are DFG A (Poor)
Red Diamonds are Charters, Others are DFG A (Poor)
Red Diamonds are Charters
Red Diamonds are Charters

In each case above, the schools are sorted by poverty along the horizontal axis and by proficiency rates on the vertical axis. In each case above, charters are represented by the red diamonds and traditional public schools including only those schools in the poorest district factor groups are represented as blue circles.
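For anyone who wants to build this kind of chart from the school-level files, here's a minimal matplotlib sketch. The data points below are invented placeholders, not actual NJ assessment figures; real values would come from the NJDOE school-level assessment and enrollment data.

```python
import matplotlib.pyplot as plt

# Placeholder school records: (% free/reduced-price lunch, % proficient).
# Invented for illustration; not actual 2008 NJ assessment data.
dfg_a = [(62, 48), (70, 41), (78, 35), (85, 30), (90, 44)]
charters = [(55, 72), (68, 38), (74, 55), (82, 21), (88, 47)]

fig, ax = plt.subplots()
ax.scatter(*zip(*dfg_a), marker="o", color="blue", label="DFG A (poor)")
ax.scatter(*zip(*charters), marker="D", facecolors="none",
           edgecolors="red", label="Charters")
ax.set_xlabel("% free/reduced lunch (poverty)")
ax.set_ylabel("% proficient or advanced")
ax.legend()
plt.show()
```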

The bottom line is that Charter school performance varies widely and varies as widely as traditional public school performance in poor districts. What we do not know yet, because of lack of data is whether the successful charter schools are, in part, successful due to their ability to raise substantial additional resources for their schools. It may be the case that the unsuccessful charters – those that do much less well than even the worst traditional publics are suffering from lack of resources.

Sadly, rather than address these real, substantive issues, organizations such as NJ E3 and individuals like Bob Bowdon have decided to pitch a load of baseless propaganda at public audiences that deserve better. In fact, by pitching this schlock that charters can do no harm (just look at these 2 really awesome ones!), ignorant pundits like Bowdon are arguably compromising the market decisions of parents – leading them to believe that all charters must necessarily be better than all traditional publics. Market decisions must be based on good information regarding product quality. In this case, it would appear that at least some pundits are creating a new "market for lemons" (knowingly marketing bad charters) at the expense of parents and children, for purely political gain (or to sell movie tickets and build reputation). This sales pitch may encourage parents to continue choosing low-performing charters, sustaining those schools and holding down the charter average – making the case for charters harder to argue.

Recession & State Tax Revenues

Here’s a link to a new report on the effect of the economic downturn on state tax revenues. Particularly interesting is the table ranking overall budget impact across states on Page 20 (Table 12).

http://www.rockinst.org/pdf/government_finance/state_revenue_report/2009-10-15-SRR_77.pdf

More Cartel Garbage! Bowdon still vacuous!

See updated post on this topic: https://schoolfinance101.wordpress.com/2010/04/16/cartel-recap/

=========

Bob Bowdon is once again proving his numerical wizardry with ads for his schlockumentary The Cartel (which I refuse to see because I will likely blow my top in the middle of it, correcting every damn wrong and misguided supposed fact spewed in the film’s narration). There’s a commercial out for the Cartel Movie which lists about 2 or 3 supposed “facts” about New Jersey schools and how they compare to schools in other states – on funding and on graduation rates. 2 or 3 numbers in the commercial, and you’d think Bowdon could get at least one right… or interpret at least one in a way that is not completely misguided schlock. We’re not talkin’ any high level of manipulation here… but rather… at the same childish, buffoon-ish level as previous Bowdon brilliance (the claim that higher state spending lowers SAT scores… Yeah… you go Bob! Awesome. Cool. Freakin’ amazing! https://schoolfinance101.wordpress.com/2009/05/30/idiot-of-week-award-the-cartel-check-this-out/ )  Okay, so what am I complaining about this time?

The claim in the commercial goes… New Jersey is the highest spending state in the nation when it comes to schools… and, even though the NJDOE claims that we are tops in graduation rates, if you count only those kids who actually pass the high school graduation exam we’re really 24th. Yep, we spend all that (first, by a long shot, apparently) and all we can get is 24th in graduation rate.

First of all, graduation rates aren't a particularly good statistic for comparing across states, because graduation is highly dependent not only on varied state standards but on varied local rigor. That is partly the (missed) point of the commercial. But the commercial goes on to imply that NJ is necessarily softer on grads than other states. Thus, we must correct those distorted NJ grad rates by the numbers who actually pass the state test… and then, and only then, compare against all those rigorous states that really whoop NJ %$$. Now, if we were going to "correct" New Jersey graduation rates to represent only those kids who can pass the NJ exam, the only legitimate way to compare against other states would be against the same standard – the ability of kids in other states to pass the NJ graduation test. Fun idea… but I don't think other states are giving the NJ tests.

Hey… you know what… there is actually a report out there (by the gov’t agency charged with doing such analyses) that shows how states’ individual tests compare to specific cut points on the NAEP test (the one national standard assessment)… and thus to each other. But you’d have to do research… reading… actual numbers to find such a crazy thing. You wouldn’t want that kind of thing to taint the “facts” in a “documentary.” Here it is for future reference.

http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2007482

This particular report shows the NAEP score that would be associated with scoring proficient or higher on each state's tests. So, for NJ, a student scoring proficient on the state 8th grade reading test would be expected to score 250 on NAEP 8th grade reading (the national test). The standard – the NAEP score associated with proficiency on the state test – was higher in about 13 states, but a heck of a lot lower in many (lower in 19). NJ's position was similar for 8th grade math. So, the state assessments in NJ fall near the upper portion of the pack among states, though certainly not near the top. Sadly, there is no direct comparison of which I am immediately aware for the HS tests.
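The underlying comparison is just a ranking of NAEP-equivalent cut scores. Here's a toy sketch: NJ's 250 is the figure cited above, while the other states and their values are invented placeholders (the real ones are in the NCES report).

```python
# NAEP-scale equivalents of state proficiency cut scores, 8th grade reading.
# NJ's 250 is cited above; all other entries are invented placeholders.
naep_equiv_cut = {"NJ": 250, "State A": 262, "State B": 245, "State C": 231}

nj_cut = naep_equiv_cut["NJ"]
higher = [s for s, cut in naep_equiv_cut.items() if cut > nj_cut]
lower = [s for s, cut in naep_equiv_cut.items() if cut < nj_cut]
print("Higher standard than NJ:", higher)
print("Lower standard than NJ:", lower)
```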

But, in NJ and many other states, these tests really don’t mean a whole lot to individual children, whether the test is rigorous or not. Yes… if we wish for anyone to take these tests seriously and if they are good and rigorous tests, we should expect kids to pass them to graduate. And we should do what we can to compensate for inequities in the preparation of high school students to succeed on the tests. But this has little or no bearing on state-to-state comparisons of graduation rates and is no excuse for Bowdon’s contorted reasoning (putting a generous spin on it).

As I have pointed out in previous posts (https://schoolfinance101.wordpress.com/2009/06/17/vacuous_bowdon/) , when it comes to NAEP assessments, NJ students do very well, in spite of, or perhaps in part because of our funding levels and distribution of that funding. NAEP is the tool we have for making state by state comparisons, of students on average and by subgroups. Also in my previous post (above), I show how New Jersey compares with countries based on an analysis linking NAEP and international assessment scores.

Here’s a tool for comparing state NAEP scores.  Have fun! http://www.nces.ed.gov/nationsreportcard/statecomparisons/

Other groups produce indices such as “college readiness” indices, and NJ does very well on these (http://www.edequality.com/content/map/ ), though I’m not fully confident in the rigor of such reports and measures.

Oh… and on that funding thing: when comparing total state and local revenues per pupil by state, without any adjustment for regional cost variation or other factors, NJ actually falls behind NY and VT, and sits barely ahead of WY, based on my own run of the district-level Census Fiscal Survey for 2006-07 (most recent year: http://www.census.gov/govs/www/school07doc.html). After cost adjustment, NJ drops lower (cost adj. here: http://www.nces.ed.gov/edfin/adjustments.asp). So, even using the "bad" (unadjusted) version of the numbers, NJ is not #1. NJ was #1 in current expenditures per pupil (including expenditure of federal dollars) in 2005 with no adjustments for regional cost variation, but fell behind Vermont and Wyoming, and into a dead heat with New York and Maine, in that year when cost adjustment (the NCES Comparable Wage Index) is applied.
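The cost adjustment itself is nothing exotic – divide each state's per-pupil figure by a regional wage index and re-rank. A minimal sketch, with invented revenue and index values rather than actual Census/NCES figures:

```python
# Cost adjustment sketch: revenue per pupil deflated by a wage index.
# All values are invented placeholders, not actual Census/NCES figures.
data = {
    # state: (unadjusted state+local revenue per pupil, comparable wage index)
    "NJ": (17_000, 1.25),
    "NY": (17_500, 1.22),
    "VT": (17_200, 0.98),
    "WY": (16_800, 0.95),
}

adjusted = {state: rev / cwi for state, (rev, cwi) in data.items()}
for state, value in sorted(adjusted.items(), key=lambda kv: -kv[1]):
    print(f"{state}: ${value:,.0f} cost-adjusted per pupil")
```

Because high-cost states like NJ carry index values well above 1, they drop in the adjusted ranking even when their nominal spending looks high.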

The point is that NJ is not the undisputed, year-after-year #1 and does not stand out as this huge outlier on public school spending as Bowdon’s grossly distorted and completely a-contextual references would have one believe. NJ does indeed stand out as a state that has put substantial additional funds into large urban, high poverty districts as a function of years of litigation. But, most rigorous accounts (actual research, with real statistics) find positive effects of the infusion of resources on student outcomes (see: https://schoolfinance101.wordpress.com/2009/10/12/real-info-re-nj-and-abbott-districts/ as just one example)

Table 1 and Table 2 of this report on school finances provide some perspective on school funding from the most recent NCES summary of the most recent Census fiscal survey (http://nces.ed.gov/pubs2009/2009338.pdf).

Here are a few related resources which discuss dropout and graduation rates, based on reasonable (albeit still problematic) approaches for measuring such things and comparing across states.

NCES Freshman Graduation Rates

http://nces.ed.gov/programs/coe/2009/section3/table-scr-1.asp

Report on Dropout and Graduation Rates

http://nces.ed.gov/pubs2009/2009064.pdf

Yeah. I know I’m talking to the wall here. Don’t let data and reasonable analysis get in the way. That’s just too geeky. Wouldn’t want a good number or some relevant context to taint the message of a “documentary.”

Cheers!

Real Info. Re: NJ and Abbott Districts

Here’s a link to a solid, recent dissertation out of the University of Michigan regarding New Jersey’s Abbott School districts and the effects of Abbott litigation on overall outcomes and outcome gaps.

http://deepblue.lib.umich.edu/bitstream/2027.42/61592/1/aresch_1.pdf

Here are some highlights quoted from the Intro of the dissertation:

The first essay is an empirical analysis of the effects of the Abbott school finance reform on educational expenditures in New Jersey. This reform dramatically increased the funding available to poor, urban schools with the goal of improving achievement in those districts. My analysis suggests that districts directed the added resources largely to instructional personnel. They hired additional teachers and support staff.

The second essay asks the obvious next question: Did this increase in funding and spending improve the achievement of students in the affected school districts? I focus primarily on the statewide 11th grade assessment that is the only test that spans the policy change. I find that the policy improves test scores for minority students in the affected districts by one-fifth to one-quarter of a standard deviation.

Resch’s third essay (following the common 3 essay structure of econ dissertations) is on higher education.

Resch's analysis is not entirely uncritical of the Abbott reforms. Resch expresses concern that the emphasis in later rounds of Abbott on early grades reforms may have diverted resources to early grades education at the expense of the upper grades (where early gains had been shown – see above). Resch also questions whether the approach to financing the Abbott districts was sustainable, but notes that SFRA introduces significant structural changes to school funding in NJ (I could go on about this part, but not now). Resch is most critical of the state's own lack of effort to generate and maintain data useful for evaluating the effects of reforms – either prior Abbott reforms or SFRA.

In any case, if you happen to be trying to decide whether to spend an hour or two with a sound analysis of actual data treated with reasonable rigor, or whether to go see the Cartel Movie filled with misguided and intellectually sloppy assumptions, misleading if not outright fabricated numbers, I would recommend the econ dissertation. Yeah… it’s not as sexy – doesn’t have the slick production – and won’t be in a theater near you. But you can just click and download above, and actually get some decent real numbers and analyses. Cheers.

If you do read my previous posts regarding the Cartel, please see all 3, along with the original materials that prompted me to post (http://www.njecea.org/cartel/?page_id=10)

Post 1: https://schoolfinance101.wordpress.com/2009/05/30/idiot-of-week-award-the-cartel-check-this-out/

Post 2: https://schoolfinance101.wordpress.com/2009/06/02/i-just-cant-let-go-of-this-one/

Post 3: https://schoolfinance101.wordpress.com/2009/06/17/vacuous_bowdon/

Note also that if you disagree with my framing of the film’s objectives, please see the “crisis” page here, where the movie is framed largely as I have framed it.

Dear DOE – Wrong Again!

After starting my day with this NPR brief:

http://www.npr.org/templates/story/story.php?storyId=113533704

I am again perplexed by what Department of Ed officials are thinking, who is advising them, and what analyses are actually being done before certain states are identified as "good" and others as "evil." In this story, DOE officials are chastising the states of Pennsylvania, Massachusetts and Connecticut for playing a shell game with ARRA funds – filling budget holes with those funds rather than using them to prop up or increase public education support.

Here’s the link to the report from DOE:

http://media.npr.org/assets/news/2009/10/06/stimulus.pdf

The problem here is that the DOE's metrics for evaluating whether a state is "good" or "evil" are, well, entirely screwed up and meaningless. I can't think of a softer way to phrase that. As such, the DOE continues to criticize states like MA and PA, which are doing reasonably well (now that the PA budget is nearing adoption), and the DOE is missing entirely those states which have done particularly "evil" things with ARRA funds.

For example, DOE’s primary concern regarding Massachusetts is that the state percent of total education funding will not be the same as it was in 2006.

"In order to meet the requirements for the MOE waiver, a State must show that it is spending at least as much State money on education, as a percentage of total revenues, as it did in the previous year."

Given the DOE's phrasing, it appears that they mean the percent of total state revenues allocated to education. This is hardly a meaningful metric, because it has little to do with the availability of resources to children in school districts and little to do with measuring a state's "effort" for public education. A state could simply have slashed taxes and dramatically cut its total budget, slashing public services left and right, including public schools – all the while still spending the same share on public schools. Silly.

A more reasonable perspective would be to look at whether cumulative state, local and ARRA resources are actually assisting districts in maintaining and/or expanding services over the prior year. Looking only at the state aid apportionment tells us very little. Based on district-by-district runs of the 2008-09 and 2009-10 Massachusetts Chapter 70 aid program, it would appear that for 2010 districts will receive modest per-pupil increases in the sum of state and local (with ARRA) funds. The increases are partially funded by expected increases in minimum local contributions (which might easily be considered state resources).

http://finance1.doe.mass.edu/chapter70/chapter_10.xls
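Here's a minimal sketch of the kind of district-by-district comparison I mean. The district names, numbers and column labels are invented placeholders – this is not the layout of the actual Chapter 70 spreadsheet linked above – but the calculation (sum the state, required local and ARRA pieces, divide by pupils, difference the two years) is the point.

```python
import pandas as pd

# Invented placeholder data; not the actual Chapter 70 spreadsheet layout.
fy09 = pd.DataFrame({
    "district": ["A", "B", "C"],
    "ch70_aid": [10_000_000, 6_000_000, 3_500_000],
    "required_local": [4_000_000, 9_000_000, 12_000_000],
    "arra_sfsf": [0, 0, 0],
    "pupils": [2_000, 2_100, 2_050],
})
fy10 = pd.DataFrame({
    "district": ["A", "B", "C"],
    "ch70_aid": [10_100_000, 6_000_000, 3_400_000],
    "required_local": [4_200_000, 9_400_000, 12_500_000],
    "arra_sfsf": [300_000, 250_000, 150_000],
    "pupils": [2_000, 2_100, 2_050],
})

# Per-pupil sum of state aid, required local contribution and ARRA funds.
for df in (fy09, fy10):
    df["total_pp"] = (df["ch70_aid"] + df["required_local"]
                      + df["arra_sfsf"]) / df["pupils"]

merged = fy09.merge(fy10, on="district", suffixes=("_09", "_10"))
merged["change_pp"] = merged["total_pp_10"] - merged["total_pp_09"]
print(merged[["district", "change_pp"]])
```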

Pennsylvania is a unique case, since the state was at a budget impasse until very recently. However, while at impasse, the high end of the debate included a plan to continue substantial increases to support the new school finance formula, which begins to resolve substantial disparities among PA school districts. The low end was to merely hold districts at prior-year Basic Education Funding levels. To the best of my understanding, the final solution is nearer the high end than the low end and does support some significant increases toward the phase-in of the new formula. I may be wrong, since I've yet to see the district-by-district run of state and local BEF resources. But even if PA had landed on the low-end scenario, it would have been the same as New York for 2010 and 2011. New York, also phasing in a new formula, stopped its phase-in entirely and froze foundation funding for 2010 and 2011. So how is PA worse than NY? Should NY be on the hit list for DOE?

Many states did far worse things with their stabilization funds than NY, PA or MA (or perhaps even CT), which used them to… well… stabilize! For example, Kansas actually implemented per-pupil cuts in foundation budgets – actual reductions from the prior year's cumulative per-pupil resources, not just a failure to meet an increase target. Even worse, the per-pupil cuts are systematically larger in higher-poverty than in lower-poverty districts. http://www.ksde.org/LinkClick.aspx?fileticket=J%2bZiki0vnrc%3d&tabid=119&mid=8049

The poorer the district, the larger the per pupil cut.

How is that better than PA and MA? Alabama also cut districts substantially, though not necessarily systematically by poverty.

Nebraska made a really fun move. Nebraska altered the primary aid formula through which ARRA funds were to flow, and then used the modified formula to provide per-pupil increases to the affluent and middle-class suburban districts around Omaha while holding Omaha roughly constant over prior-year funding. So, Nebraska used ARRA funds to restore inequities that had persisted before Omaha fought back in recent years. http://ess.nde.state.ne.us/SchoolFinance/StateAid/Default.htm

Guess what, DOE – you can maintain the same state share of funding if you just cut everyone's budget! Cut everyone's state aid and their local contribution toward foundation aid, and the state share can stay constant. Even more fun, you can actually use additional state resources to drive more funds to districts with less need and create even greater inequities. And you can prop up those inequities with ARRA funds. No harm, no foul under current DOE metrics.

DOE, am I missing something here? I’ll gladly help out for a nominal fee. But this is just getting absurd!

Cordially,

SchoolFinance101

=========

A quick lesson for DOE. What matters for the operation of local public school districts is the sum of the resources available. In many if not most states, Foundation Aid formulas are the formulas that identify the “sum” of state and local resources to be provided for annual operating budgets. The state share of that sum is backed out after determining the funds that would be raised by applying a specific local property tax or required local effort rate. That local minimum requirement toward the sum may as well be considered state funding (to the extent that it actually is required). What matters to districts and the children they serve, is the SUM here! The foundation budget (and other add-ons), adjusted for various needs and costs.
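For readers unfamiliar with how the SUM and the state share relate, here's a minimal sketch of a generic foundation aid calculation. All the numbers are invented for illustration; real formulas layer on pupil need weights and cost adjustments.

```python
# Generic foundation aid sketch, with invented numbers.
# The "SUM" districts actually operate on is the foundation budget;
# state aid is just what's left after required local effort is backed out.

foundation_per_pupil = 10_000        # hypothetical state-set foundation level
pupils = 5_000
property_valuation = 1_500_000_000   # hypothetical local taxable property
required_effort_rate = 0.012         # hypothetical required local tax rate

foundation_budget = foundation_per_pupil * pupils        # the SUM that matters
required_local = required_effort_rate * property_valuation
state_aid = max(foundation_budget - required_local, 0)   # state backs out the rest

print(f"Foundation budget (the SUM): ${foundation_budget:,}")
print(f"Required local effort:       ${required_local:,.0f}")
print(f"State aid:                   ${state_aid:,.0f}")
```

Note that cutting foundation_per_pupil lowers both the SUM and state aid while leaving the state *share* roughly constant – exactly the gaming described above.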

On a related note, DOE should also recognize that some states actually determine that SUM in an inequitable way (see: https://schoolfinance101.wordpress.com/2009/01/27/the-fine-art-of-inequitable-school-finance-policy/).  For states that have foundation formulas that promote inequity, running ARRA funds through their formulas means using ARRA funds to advance inequity (Nebraska pulled a bait-and-switch for 2010).

Should NJ really try to be like DE, MD, MO, GA & WA?

I had relatively modest expectations for the Gannett series in New Jersey on state taxes to begin with. Sadly, this series managed to fall well short of even those expectations by trying too hard to construct the point that New Jersey's taxes are simply way out of line and that, for example, NJ would be much better off if it behaved like all of those smart, business-friendly states out there like Delaware, Maryland, Missouri, Georgia and Washington. These states are indeed strange bedfellows.

http://www.app.com/article/20091002/NEWS/310020010/NJ+Tax+Crush++How+five+states+keep+their+tax+burden+down

First, let's get some numbers squared away. I like the fact that this article listed, in the right margin, the average state and local per capita taxes, the average state per capita expenditure, and the average per capita income. Note that the second number should have been state and local spending. More importantly, the article failed to do the last calculation and related ranking – state and local taxes as a percent of personal income (perhaps I missed it). That is, how does our effort compare, given our income? http://www.taxpolicycenter.org/taxfacts/displayafact.cfm?Docid=531
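That missing calculation is a one-liner. Here's a sketch with invented per-capita figures (not actual Census or Tax Policy Center values), just to show why the ranking can flip once income is in the denominator:

```python
# Tax "effort": state+local taxes as a share of personal income.
# All figures are invented placeholders, not actual state data.
states = {
    # state: (state+local taxes per capita, personal income per capita)
    "NJ": (6_000, 50_000),
    "State A": (4_200, 33_000),
    "State B": (5_100, 45_000),
}

effort = {s: taxes / income for s, (taxes, income) in states.items()}
for state, share in sorted(effort.items(), key=lambda kv: -kv[1]):
    print(f"{state}: {share:.1%} of personal income")
```

In this toy example, the lower-tax "State A" actually exerts more effort than NJ, because its income base is so much smaller.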

Typically, when looking at taxes as a share of income, NJ is not among the top few states. Above average, yes, but not the top. More specifically on the question of public education spending… state and local… NJ does rank second in the percent of Gross State Product spent on K-12 public school districts. By the way, Vermont is way ahead of us. But, as I have previously argued, to a large degree though not entirely, you get what you pay for. And the five states listed in the article as model states seem to get what they pay for.

On this previous post, I explain how “small business friendly” states tend to have particularly weak public school systems, if we assume small business friendliness to be only a function of low taxes and not high quality services. https://schoolfinance101.wordpress.com/2008/12/17/state-rankings-small-businesses-school-quality-and-economic-productivity/

As far as I’m concerned, having good schools is probably a critical element of business friendliness at least if you have any dreams of recruiting and retaining well educated employees who want their children to also be well educated.

So, what about those miraculous low tax states identified in this article? Within their regions, all but Georgia (which has less competition for bragging rights, and is relatively average) have relatively poorly funded public education systems and to a large extent, the outcomes to go with it.

Delaware has the lowest percentage of school-aged children actually enrolled in the public school system: only 77.6% of 6- to 16-year-olds in Delaware attend public schools, according to American Community Survey data from 2005 to 2007. Delaware's education spending, however, is relatively average or slightly better than average among states (after adjustments for competitive wages, size, location and relative poverty), and its outcomes are relatively average too. Yes, NJ spends more on schools than Delaware, but NJ serves a larger share of its children in those schools, and NJ children generally outperform DE children on a variety of assessments.

http://www.nces.ed.gov/nationsreportcard/statecomparisons/Default.aspx?usrSelections=1%2cMAT%2c4%2c0%2cwithin%2c0%2c0

Quite surprisingly, Washington operates one of the least well funded state school finance systems in the nation. And Washington provides negligible additional support for higher poverty and urban schools (not always one and the same). And, Washington also does less well on outcome measures.

(I have some graphs here from an earlier post on LA and MS: https://schoolfinance101.wordpress.com/2009/02/25/public-schooling-in-louisiana-and-mississippi/)

Yes, within region, combining South Atlantic and Gulf Coast states, Georgia is relatively well funded (though again, not supporting poor and urban districts) compared to its peers and does outperform many states in its low performing region. But, Georgia performs much less well than NJ (okay… not a fair comparison… but that’s partly the point). GA also performs less well than Texas or Kentucky for that matter (in graphs in LA & MS post above).

Missouri, like Pennsylvania, has made some efforts of late to straighten out its school funding mess into a more logical formula. But the fact remains that poor inner-urban-fringe and poor rural districts in Missouri have been starved of educational resources for decades and have the outcomes to match. On average, Missouri falls into that consistently below-average category on both inputs and outcomes. The bigger story in Missouri is the disparities – not so much within the major urban centers themselves (though they are part of the picture) but in those other districts noted above.

Maryland is another state with fewer than 80% of 6 to 16 year olds even in the public school system. Maryland’s spending is relatively average as are its outcomes. Which is, perhaps fine for Maryland.

By contrast, as I have pointed out here – https://schoolfinance101.wordpress.com/2009/06/17/vacuous_bowdon/

New Jersey, perhaps in part because of its high investment in public schooling – a major public expense – has strong performance measures, whether compared against other states or treated as a separate country, as I discuss in my previous post. Yes, much is cyclical here. NJ is a richer and more educated state that, as a result, values education and has the money to pay for it, compared to these states.

Do we really want to be like DE, MD, MO, GA or WA? I do like the seafood in WA better. I just moved back to NJ (no, not the other way) from living a few yards away from the MO state line, and had the pleasure of both working for MO on their formula redesign and testifying against them on the remaining disparities. But I'm glad to be back.

Sadly, I think that one of the main things that would be learned from mimicking the taxing behavior of these states is that you can have lower taxes if you simply want to sacrifice the quality of your public services – primarily public education.

That said, I will continue to rant against certain major organizational inefficiencies in New Jersey public schooling and government services – that is, the multiple municipal madness issue, which I have written about and speak about (slides here)  https://schoolfinance101.wordpress.com/2009/08/14/small-districts-racial-isolation-and-new-jersey/

Indeed there is real progress to be made on reducing organizational inefficiencies in local governance. I’m not sure that I hold out much hope for resolution here.

======

PS: I hope at some point in the near future to post some detailed numbers on the characteristics of individuals moving into and out of NJ. Last I checked, while there were net exits of middle- to lower-income families, there was a net inflow of individuals with graduate-level education. This would conflict with the unfounded rhetoric I heard spewed on the radio the other morning that NJ is losing all of its hard-working small business types and replacing them with undocumented immigrants. Then again, the undocumented part may be a bit hard to track.

Dollars for Disabilities? What do we know?

In this article, Jay Greene and Marcus Winters present a grossly oversimplified perspective of what we really know about the relationship between state school finance systems – special education aid formulas – and state special education classification rates.

http://www.ajc.com/opinion/funding-may-push-special-145257.html?printArticle=y

This supposed problem plays out at two levels. First, it is assumed that states which allocate funding based on local school district rates of classifying special education students will see greater overall growth in special education student populations than states that a) provide flat funding per fixed share of students in each district, b) cap the number of classified students for which funding will be provided, or c) use some other, better measure of local district resident rates of children with "real" disabilities – a measure outside the influence of local district classification procedures. Second, some go so far as to assume that not only are state average disability rates different solely because of local responses to differences in state fiscal incentives, but that local rates of disability classification within states also vary largely because of differences in the extent to which local school officials play the special education fiscal incentives game. Greene and Winters seem to be speaking primarily to the first point – state average differences and headcount incentives.

There is a small body of research, some of which is pretty solid, that supports the notion that there is a relationship between fiscal incentives and classification rates. That is not to say, however, that such incentives explain most or all of the differences, as implied by Greene and Winters in their absurd Maine-to-California anecdote. Second, studies showing that classification rates are partially responsive to fiscal incentives do not address whether the incentivized classification rate may actually be closer to the true rate of disabilities than the non-incentivized rate. Without a measure of true prevalence, it is difficult to make the leap that the incentive is necessarily a bad one – one that inappropriately distorts classification rates and services for children with disabilities.

Further, removing the fiscal incentives entirely does not necessarily bring the overall statewide growth in classification rates, or the variation across districts, to a grinding halt – even if capitation or flat funding does create modest statistical differences in growth rates between states. Pennsylvania has provided flat, census-based funding since the early 1990s, yet classification rates have grown dramatically since that time, and rates continue to vary widely across Pennsylvania districts, from about 5% to over 30%.

Further, as with most demographic characteristics, families of children with mental or physical disabilities are simply not uniformly distributed across neighborhoods, cities and towns within states or across states making it very difficult to say that the typical school district or state should have only X% of such children. Complicating the issue is that the uneven distribution of families of children with disabilities is endogenous to the quality of services available across communities within states and across states. So, for example, if a state provides generous funding based on actual needs of students and that funding leads to higher quality services for students, families of children with disabilities are more likely to consider relocating to those states. The same applies to more local moves where services vary across districts. And parents of children with disabilities may make these decisions based on more than the services provided by the school district alone. Large towns and small cities in otherwise rural areas tend to have elevated disability rates in part because of greater availability of social services and health-care services less available in surrounding areas.

So perhaps a state can export its children with disabilities to a neighboring state by adopting school finance policies that ensure low quality programming and limit district incentive to pursue diagnostic testing. And perhaps some of the differences we see across states – especially between neighboring states – are a function of these programming and service quality differences. This question is yet to be thoroughly addressed in the literature.

In any case, it is a huge unwarranted stretch to argue that state limitation of funding for special education necessarily leads to a more correct identification rate of children in need while holding constant (or even improving) the quality of programs and services and while not exporting children with disabilities.

The problem for state policymakers is to find the correct balance between sensitivity to the needs of individual children as identified by those charged with providing their educational services (local school districts, etc.), and measures of population differences across cities, towns and school districts within states that can serve as a guide in the distribution of resources while avoiding the wrong incentives.

I have written about this topic in the attached research article.

Baker.Ramsey.CBased.JEFSubmit.May28_09

Fact Check: Washington School Finance

I read this today:

http://www.ncpa.org/sub/dpd/index.php?Article_ID=18464

And was especially intrigued by the first bullet point: “Schools receive more than $10,000 per pupil per year, about one-third more than private schools spend per student.”

Having just completed my study of private school tax returns, I found this statement a bit out of line, and there was absolutely no support for it – not even in their main report: http://www.washingtonpolicy.org/Centers/education/policybrief/06_finne_schoolfunding.pdf

They do argue (but do not validate) in this report that the typical Washington private school spends about $6,000 per pupil.

So, I went back to my data set of private schools. Note that the main finding of my report was that private school spending varies widely, especially as a function of the affiliation of the schools. The lowest-spending schools in my set of 1,500 tax returns were those which are members of the major Christian associations. My sample included 26 such schools in Washington state, which spent, on average, about $6,656 per pupil in 2007. So even the lowest-spending group of private schools in Washington spends more than $6k per kid. The largest group filing IRS 990 returns in Washington were private independent day schools. These schools spent, on average, $19,283 per kid per year. Hey, that's about twice what the posting said was the allocation for public schools. Sadly, only 2 Catholic schools reported their IRS 990s in Washington; those schools spent about $13k per kid per year, but are not necessarily representative of all Catholic schools in Washington.
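For what it's worth, the aggregation behind those averages is simple. A minimal sketch – the records below are invented placeholders, not actual IRS 990 filings from my data set:

```python
from statistics import mean

# Invented placeholder 990 records: (affiliation, total expenditures, pupils).
returns = [
    ("christian_assoc", 1_500_000, 230),
    ("christian_assoc", 2_100_000, 310),
    ("independent_day", 9_800_000, 505),
    ("independent_day", 7_200_000, 380),
    ("catholic",        3_900_000, 300),
]

# Group per-pupil spending by affiliation, then average within each group.
by_group: dict[str, list[float]] = {}
for affiliation, spending, pupils in returns:
    by_group.setdefault(affiliation, []).append(spending / pupils)

for affiliation, values in sorted(by_group.items()):
    print(f"{affiliation}: ${mean(values):,.0f} per pupil on average")
```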