Teacher Selection: Smart Selection vs. Dumb Selection

I had a Twitter argument the other day about a blog posting that compared the current debate around “de-selection” of bad teachers to eugenics. It is perhaps a bit harsh to compare Hanushek (cited author of papers on de-selecting bad teachers) to Hitler, if that was indeed the intent. However, I did not take that as the intent of the posting by Cedar Riener. Offensive or not, I felt that the blog posting made three key points about errors of reasoning that apply both to eugenicists and to those promoting empirical de-selection of fixed shares of the teacher workforce. Here’s a quick summary of those three points:

  • The first error is a deterministic view of a complex and uncertain process.
  • The second common error becomes apparent once the need arises to concretely measure quality.
  • The third error is a belief that important traits are fixed rather than changeable.

These are critically important, and they help us delineate between smart selection and, well, dumb selection. These three errors of reasoning are the basis for dumb selection – selection that is, as the author explains, destined to fail. But I do not see this particular condemnation of dumb selection as a condemnation of selection more generally. By contrast, the reformy pundit with whom I was arguing continued to claim that Riener’s blog was condemning any and all forms of selection as doomed to fail, a seemingly absurd proposition (and not how I read it at all).

Clearly, selection can and should play a positive role in the formation of the teacher workforce or in the formation of that team of school personnel that can make a school great.

Smart Selection: Some form of selection exists in nearly every human endeavor and in any workforce or labor market activity: selection of individuals into specific careers, jobs and roles, and de-selection of individuals out of them. Selection in and of itself is clearly not a bad thing. In fact, the best organizations necessarily select the best available individuals over time to work within those organizations. And individuals attempt to select the best organizations, careers, jobs and roles to suit their interests, motivation and needs. That is, self-selection. Teacher selection, or any education system employee selection, is no different. And good teacher selection is obviously important for having good schools.

Like any selection process in the labor market, teacher selection involves a two-sided match. On one side are the school leaders and existing employees (to the extent they play a role in recruitment and selection) who determine which applicants are the best fit for their school and the specific job in question. On the other side are the signals sent out by the school (some within and some outside the control of existing staff and leaders) which influence the composition of the applicant pool and, for that matter, whether an individual who is selected decides to stay. These include signals about compensation, job characteristics and work environment. Managing this complex system well is key to having a great school. Sending the right signals. Creating the right environment. Making the right choices among applicants. Knowing when a choice was wrong. And handling difficult decisions with integrity.

There has also been much discussion of late about a recent publication by Brian Jacob of the University of Michigan, who found that when given the opportunity to play a strong role in selecting which probationary teachers should continue in their schools, principals generally selected teachers who later proved to generate good statistical outcomes (test scores). Note that this approach to declaring successful decision making suffers from the circular logic I’ve frequently bemoaned on this blog. But, at the very least, Jacob’s findings suggest that decisions made by individuals – human beings considering multiple factors – are not counterproductive when measured against our current batch of narrow and noisy metrics. Specifically, Jacob found:

Principals are more likely to dismiss teachers who are frequently absent and who have previously received poor evaluations. They dismiss elementary school teachers who are less effective in raising student achievement. Principals are also less likely to dismiss teachers who attended competitive undergraduate colleges. It is interesting to note that dismissed teachers who were subsequently hired by a different school are much more likely than other first-year teachers in their new school to be dismissed again.

That to me seems like good selection. And it seems that principals are doing it reasonably well when given the chance. And this is why I also support using principals as the key leverage point in the process (with the caveat that principal quality itself is very unequally distributed, and must be improved).

Dumb “Selection”: Dumb selection, on the other hand – the kind of selection that is destined to fail if applied en masse in public schooling or any other endeavor – suffers from the three major flaws of reasoning addressed by Cedar Riener in his blog post. Now, you say to yourself, but who is really promoting dumb selection, and what more specifically are its elements when it comes to the teacher workforce? Here are the elements:

  1. Heavy weight (especially a fixed, predefined large share) placed on Value-Added metrics in teacher evaluation, compensation or dismissal decisions – metrics which can be corrupted, may suffer severe statistical bias, and are highly noisy and error prone.
  2. Explicit, prior specification of the exact share of teachers who should be de-selected in any given year, or year after year over time OR prior specification of exact scores or ratings (categories) derived from those scores requiring action to be taken – including de-selection.

Sadly, several states have already adopted into policy the first of these dumb selection concepts – the mandate of a fixed weight to be placed on problematic measures. See this post by Matt DiCarlo at Shanker Blog for more on this topic.

Thus far, I do not know of states or districts that have, for example, required that the bottom-scoring 5% of teachers in any given year be de-selected. But states and districts have established categorical rating systems for teachers, from high to low rating groups, based on arbitrary cut points applied to these noisy measures, and have required that dismissal, intervention and compensation decisions be based on where teachers fall in this fixed, arbitrary classification scheme in a given year, or sequence of three years.
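To see why fixed cut points on noisy measures are so troubling, here’s a toy simulation (a hypothetical sketch in Python, not any state’s actual rating model). Even if every teacher had identical true effectiveness, a noisy score combined with a fixed “bottom 10%” cut point would flag a largely different set of teachers each year:

```python
import random

random.seed(0)

# Hypothetical sketch: give every teacher identical "true" effectiveness,
# add measurement noise, and flag the bottom 10% each year with a fixed
# cut point.
n = 1000
true_effect = [0.0] * n  # identical by construction

def noisy_score(effect):
    # observed score = true effectiveness + measurement noise
    return effect + random.gauss(0, 1)

year1 = sorted(range(n), key=lambda i: noisy_score(true_effect[i]))
year2 = sorted(range(n), key=lambda i: noisy_score(true_effect[i]))

bottom1 = set(year1[: n // 10])  # teachers flagged in year 1
bottom2 = set(year2[: n // 10])  # teachers flagged in year 2

overlap = len(bottom1 & bottom2) / len(bottom1)
print(f"share flagged in both years: {overlap:.2f}")
```

When noise dominates the signal, roughly 10 percent of flagged teachers repeat from one year to the next – about what chance alone would produce.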

To some extent, the notion of de-selecting fixed shares of the teacher workforce based on noisy metrics comes more from economists’ simulations, built on the convenience of available measures, than from active policy conversations. But in the past year, the lines between these simulations and reality have become blurred, as policy conversations have indeed drifted toward actually using fixed values based on noisy achievement measures, in place of seniority, as a blunt tool to de-select teachers during times of budget cuts. If and when these simplified social science thought exercises are applied as public policy involving teachers, they do reek of the disturbingly technocratic, “value-neutral” mindset pervasive in eugenics as well.

One other recent paper that’s gotten attention applies this technocratic (my preference over eugenic) approach to determine whether using performance measures instead of seniority would result in a) different patterns of layoffs and b) different average “effectiveness” scores (again, that circular logic rears its ugly head). Now, of course, if you lay off based on effectiveness scores rather than seniority, the average effectiveness scores of those left should be higher. The deck is stacked in this reformy analysis. But even then, the authors find very small differences, largely because a) seniority-based layoffs seem to affect mainly first- and second-year teachers, and b) effectiveness scores tend to be lower for first- and second-year teachers. Overall, the authors find:

We next examine our value-added measures of teacher effectiveness and find that teachers who received layoff notices were about 5 percent of a standard deviation less effective on average than the average teacher who did not receive a notice. This result is not surprising given that teachers who received layoff notices included many first and second-year teachers, and numerous studies show that, on average, effectiveness improves substantially over a teacher’s first few years of teaching.

Perhaps most importantly, these thought experiments – not ready for policy implementation prime time (nor will they ever be?) – necessarily ignore the full complexity of the system to which they are applied, and, as Riener noted, assume that individuals’ traits are fixed: how you are rated by the statistical model today is assumed to be correct (despite a huge chance it’s not) and assumed to be sufficient for classifying your usefulness as an employee, now and forever (be it a 1- or 3-year snapshot). In that sense, Riener’s comparison, while offensive to some, was right on target.

To summarize: Smart selection good. Dumb selection bad. Most importantly, selection itself is neither good nor bad. It all depends on how it’s done.

More Flunkin’ out from Flunkout Nation (and junk graph of the week!)

Earlier today I stumbled across this brilliant post by RiShawn Biddle over at Dropout Nation.

Biddle boldly claims:

Despite the arguments (and the pretty charts) of such defenders as Rutgers’ Bruce Baker, there is no evidence that spending more on American public education will lead to better results for children.

Now, regarding the “no evidence” claim, I would recommend reading this article from Teachers College Record, this year, which summarizes a multitude of rigorous empirical studies of state school finance reforms finding generally that increased funding levels have been associated with improved outcomes and that more equitable distributions of resources have been associated with more equitable distributions of outcomes.

In fact, even the Spring 2011 issue of the journal Education Finance and Policy includes an article by Joydeep Roy supporting the positive results of state school finance reforms (using Michigan data).

Proposal A was quite successful in reducing interdistrict spending disparities. There was also a significant positive effect on student performance in the lowest-spending districts as measured in state tests. (from abstract)

As Kevin Welner and I point out in our article, this study is not unique in its findings. Here are a few others:

Card & Payne (2002)

Using micro samples of SAT scores from this same period, we then test whether changes in spending inequality affect the gap in achievement between different family background groups. We find evidence that equalization of spending leads to a narrowing of test score outcomes across family background groups. (p. 49)

Deke (2003)

Using panel models that, if biased, are likely biased downward, I have a conservative estimate of the impact of a 20% increase in spending on the probability of going on to postsecondary education. The regression results show that such a spending increase raises that probability by approximately 5% (p. 275).

Papke (2005)

Focusing on pass rates for fourth-grade and seventh grade math tests (the most complete and consistent data available for Michigan), I find that increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance. (Papke, 2005, p. 821)

Downes (2004) on VT

All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post–Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance. (p. 312)

Downes, Zabel & Ansel (2009) on Mass

The achievement gap notwithstanding, this research provides new evidence that the state’s investment has had a clear and significant impact. Specifically, some of the research findings show how education reform has been successful in raising the achievement of students in the previously low-spending districts. Quite simply, this comprehensive analysis documents that without Ed Reform the achievement gap would be larger than it is today. (p. 5)

Guryan (2003) on Mass

Using state aid formulas as instruments, I find that increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students. The magnitudes imply a $1,000 increase in per pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures. (p. 1)

Goertz & Weiss (2009) on NJ

State Assessments: In 1999 the gap between the Abbott districts and all other districts in the state was over 30 points. By 2007 the gap was down to 19 points, a reduction of 11 points or 0.39 standard deviation units. The gap between the Abbott districts and the high-wealth districts fell from 35 to 22 points. Meanwhile performance in the low-, middle-, and high-wealth districts essentially remained parallel during this eight-year period (Figure 3, p. 23).

I could go on. But that’s a fair share of evidence right there.

And what does Biddle provide as counter-evidence to the body of evidence I summarize above (I’ve sent the article link to Biddle on more than one occasion, but he apparently doesn’t read this kind of academic stuff)?

Biddle counters with a link to this graph – a true gem (I’ve added some annotation, not in his original)!

Yes, Biddle’s entire counter to the body of research he has not and will not read is to use this graph of “promoting power” by student race group for Jersey City, NJ in 2004 and 2009. Note that the infusion of additional funds in NJ occurred mainly from 1998 to 2003, leveling off thereafter. But that’s a tangential point (not really). So, Biddle’s absolute proof that more money doesn’t matter is to simply assert, without verification, that Jersey City got a whole lot more money, and then to use this graph to argue that nothing improved!

First of all, that analysis wouldn’t pass muster as a master’s-level assignment (I teach a class on this stuff at that level), much less support major research conclusions. From a graphing standpoint, I often criticize my students’ work for what I refer to as gratuitous use of 3D – especially where the use of 3D bars actually obscures the comparisons by making it hard to see where they align on the axis.

But, the really funny if not warped part of this graph is that there appear to be significant gains for black males between 2004 and 2009, but those gains are obscured by hiding the 2009 black male score behind the 2004 black female score.

Note also that the graph contains no information regarding the actual shares of the student population that fall into each group. Not very useful. Pretty damn amateur. It certainly fails to make any particular point, and certainly doesn’t refute the various citations above – all of which employ more rigorous analytic methods, apply to more than a single district, and most of which appear in rigorous peer-reviewed journals.

References:

Card, D., and Payne, A. A. (2002). School Finance Reform, the Distribution of School Spending, and the Distribution of Student Test Scores. Journal of Public Economics, 83(1), 49-82.

Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas, Economics of Education Review, 22(3), 275-284.

Downes, T. A. (2004). School Finance Reform and School Quality: Lessons from Vermont. In Yinger, J. (ed), Helping Children Left Behind: State Aid and the Pursuit of Educational Equity. Cambridge, MA: MIT Press.

Downes, T. A., Zabel, J., and Ansel, D. (2009). Incomplete Grade: Massachusetts Education Reform at 15. Boston, MA. MassINC.

Goertz, M., and Weiss, M. (2009). Assessing Success in School Finance Litigation: The Case of New Jersey. New York City: The Campaign for Educational Equity, Teachers College, Columbia University.

Guryan, J. (2003). Does Money Matter? Estimates from Education Finance Reform in Massachusetts. Working Paper No. 8269. Cambridge, MA: National Bureau of Economic Research.

Papke, L. (2005). The effects of spending on test pass rates: evidence from Michigan. Journal of Public Economics, 89(5-6). 821-839.

Roy, J. (2011). Impact of School Finance Reform on Resource Equalization and Academic Performance: Evidence from Michigan. Education Finance and Policy (Spring 2011). Earlier version: Princeton University, Education Research Section Working Paper No. 8. Retrieved October 23, 2009 from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=630121

Private Choices, Public Policy & Other People’s Children

I don’t spend much, if any, time talking about my personal decisions and preferences on this blog. It’s mostly about data and policy. There’s been much talk lately about whether a Governor’s or President’s choice to send their children to elite private schools, or where Bill Gates, Mark Zuckerberg or prominent “ed reformers” attended school, is at all relevant to the current policy conversation around “reforming” public schools. When those choices have been questioned publicly, they’ve often been met with the backlash that those are personal choices of no relevance to the current policy debate – just dirty personal attacks on personal, rational choices.

I have no problem with these personal choices. But these personal choices may, in fact, be relevant to the current policy debate. I do keep in mind my own personal choices and preferences as I evaluate what I believe to be good policy for the children of others. And I try to keep in mind what I know from my background in research and policy when I make my personal choices. Like these prominent politicos and pundits, I too choose private independent schools – relatively expensive ones – for my children, and I have my reasons for doing so. As I’ve noted on this blog on a number of occasions, I taught at an exceptional private independent school in New York City, and I have relatives and friends who continue to be involved in (and with) high quality private independent schools as teachers, administrators and parents. I did not, however, attend private school. I attended public school in Vermont, followed by private college (Lafayette College).

Why do I personally prefer private independent schools, which often come with a high price tag?  Here are a few reasons:

  1. The responsiveness that comes from a close-knit small community, with not only small class sizes but also lower total student load for teachers (at the middle and secondary level in particular).
  2. The depth and breadth of curricular offerings, ranging from Latin in the middle school to a diverse array of social science, advanced science and math courses at the high school level, and a plethora of opportunities in the arts and athletics.
  3. The lack of emphasis on standardized testing – bubble tests and overemphasis on tested curricular areas and state standards.

Yes, I do consider it important that these schools are not test-whipped – specifically, that they are not obsessed with basic reading and math bubble tests alone or, even more disturbing, with tests of science and social studies content where the balance (or absence) of content is a function of the partisan preferences of ill-informed, politically motivated elected officials (e.g., Kansas science standards, or Texas social studies/history standards – thankfully, I’m not in KS anymore).

These days, I consider it especially important that my children not be in a school where teachers have to hang their hopes of achieving a living wage (or getting a bonus to afford cosmetic surgery as in “Bad Teacher”[hope to see that one soon!]) on whether or not my child gains X+Y points on those reading or math tests. In fact, these may now be my main reasons for opting out.

So yes, you might try to call me a hypocrite for preferring private schools for my own children while apparently being such a staunch defender and supporter of the public system (including voting yes on local district budgets, even when encouraged to vote no by public officials). But that would be a dreadful oversimplification and misrepresentation of my position.

I have worked in both public and private schools – one good and one bad of each – over a 10+ year period prior to my life in higher education.  I’ve studied and compared public and private schools in various locations and of various types for over 15 years and published numerous articles, papers and reports. What I’ve learned most from these studies is that private and/or less regulated markets are simply more varied than public and/or more regulated markets. Neither better nor worse on average – simply more varied.

Top-notch private schools spend much more, and many financially strapped, relatively average to very low academic quality private schools spend much less – much more and much less than one another, and much more and much less than nearby public schools. It is a massive bait and switch to suggest: look how great Sidwell Friends (DC), Dalton or Fieldston (NYC) are compared to public schools, and look how little the average Catholic parish elementary school spends compared to the urban public district. Of course, it’s never as obviously phrased as a bait and switch – suggesting that you can get a Sidwell or Dalton education at an urban Catholic elementary school price. You can’t! Yes, the average Catholic parish elementary school likely spends less per pupil than the public district. But that school is no Sidwell, Dalton or Fieldston, which spend close to, or in excess of, double what the public schools in their area spend.

Private schools do not, as many assume, spend only about half what public schools do. This is urban legend, drawn from dated analyses that were misrepresented to begin with (over 10 years ago).  My extensive report on private school supply and spending covers these issues quite extensively.

To reiterate a major finding from my study of private school costs: private independent schools of the type I am talking about here (members of NAIS or NIPSA) spend, ON AVERAGE, 1.96 times the average per-pupil amount of public schools in the same labor market (and have half the pupil-to-teacher ratio)!

I am quite convinced that many of the policy makers who choose elite private schools for their own and advocate for scaling back the public system, really don’t understand the difference. They really don’t know that their private schools outspend nearby traditional public schools – by a lot – despite serving more advantaged student populations. Heck, I’ve talked to administrators in private independent schools who feel that their own budgets are tight (legitimately so), and assume that the public schools around them spend much more per child. But they are simply naïve in this regard (while wise in many other ways). No intent to harm. They’ve simply bought into the misguided rhetoric that private schools spend less and get more and they’ve never double-checked the facts. But even a few minutes of pondering their own budgets and looking up local public school spending brings them around. (Part of this perception is likely driven by differences in access to funding for capital projects, where heads of private schools recognize the heavy lifting of major fundraising campaigns, and envy the taxing authority of public school districts for these purposes).

In my view, the hypocrisy lies in what those who choose elite private schools for their own argue are the best solutions for public education for the children of others.  If the preferences are the same, there is no hypocrisy. The problem is when those preferences are vastly different – completely at odds – as they tend to be in the present “ed reform” and “new normal” debate.

It is hypocritical for pundits who favor, for their own children, expensive schooling with a diverse curriculum, small class sizes and little standardized testing (freeing teachers to be professionals), to argue for less money, class size increases and increased standardized testing (and teacher evaluation based on those tests) when it comes to other people’s children.

Yes, I too personally favor expensive private schooling for the reasons I’ve indicated above. And yes, my private school significantly outspends both the elite suburban public school district where I live and New Jersey’s reasonably well funded urban districts (compared to other states, see: http://www.schoolfundingfairness.org).   The way I see it, I would not just be a hypocrite, but a complete a-hole if I used my pulpit (what little pulpit I have) as a school finance expert to argue that we should be spending less on others, advocating different policies for others than I desire for myself.  But it’s precisely because I spend my day buried in data on school finance and education policy that I see this glaring hypocrisy.

The difference is that I believe that other children – those whose parents are not able to make this expensive choice – should have access to well-funded schools that also provide small class sizes, diverse curriculum, and for that matter, place less emphasis on standardized tests, and treat teachers as responsible, knowledgeable professionals (not script reading stand-ins and test proctors).

To clarify, this is not a criticism of individuals with personal preferences for high quality education for their own children who are otherwise unconcerned with (or oblivious to) the broader public policy questions pertaining to the children of others. Rather, this is a direct criticism of those public officials and vocal “ed reformers” who prefer high quality, well funded education for their own and then loudly and publicly advocate for a very different quality (and type) of education for the children of others.

If we could actually close the gap between public school resources and resource levels of elite private schools, there might be less demand for those elite private schools (though some would indeed respond with an arms race to outpace public schools).  Presently, however, elite private schools stand to benefit significantly from the “ed reform” and “new normal” movement which will likely make more public schools – including those in more affluent ‘burbs – even less desirable for parents currently on the fence.

So, here’s my challenge to all those policymakers who also prefer elite private independent schools for their children.  I urge you to make a list of all of the reasons why you chose a private independent school. Notably, many if not most parents list class size as a major factor (and most schools advertise class size as a major benefit).  Make a list of the specific attributes of your private school including:

  1. Average class size
  2. Teacher education levels
  3. Numbers and types of elective and advanced course offerings
  4. Numbers and types of extracurricular activities
  5. Whether they pay more experienced teachers more than less experienced ones (or pay more for teachers holding advanced degrees)
  6. Whether they emphasize student test scores when evaluating or compensating teachers

and whatever else you might think of. (here are a few sample NJ private schools)

Get a copy of the school’s IRS 990 tax filing from the school (or from http://foundationcenter.org/ or http://www.guidestar.org) to find out roughly how much your school spends each year, and divide that by the total number of enrolled pupils.
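The arithmetic is simple enough to sketch (the figures below are made-up placeholders, not any actual school’s numbers):

```python
# Hypothetical figures for illustration only -- substitute the numbers
# from your own school's 990 and enrollment count.
total_expenses = 18_500_000  # total functional expenses reported on the 990
enrollment = 620             # total enrolled pupils

per_pupil = total_expenses / enrollment
print(f"approximate per-pupil spending: ${per_pupil:,.0f}")
```

That rough per-pupil figure is the number to hold up against published per-pupil spending for surrounding public districts.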

Then, gather similar information on surrounding public schools. Make your own comparisons. And after you’ve done so, let me know if you’re still comfortable making bold public proclamations that we need to rein in the absurd spending of public schools, increase class sizes and slash all of those frivolous extracurricular programs for other people’s children, but certainly not our own!

Video Extra:

And a Song:

Zip it! Charters and Economic Status by Zip Code in NY and NJ

There’s no mystery or proprietary secret among academics or statisticians and data geeks as to how to construct simple comparisons of school demographics using available data.  It’s really not that hard. It doesn’t require bold assumptions, nor does it require complex statistical models. Sometimes, all that’s needed to shed light on a situation is a simple descriptive summary of the relevant data.  Below is a “how to” (albeit sketchy) with links to data for doing your own exploring of charter and traditional public school demographics, by grade level and location.

Despite the value of a simple, direct and relevant comparison using accessible data providing for easy replication, many continue to obscure charter-non-charter comparisons with convoluted presentations of less pertinent information.  Matt DiCarlo recently published a very useful post (at Shanker Blog) explaining the various convoluted descriptions from Caroline Hoxby’s research on charter schools that make it difficult to discern whether the charter schools in her comparisons really had comparable student populations to nearby, same grade level traditional public schools.

As I’ve discussed in the past, charter advocate researchers tend to avoid these basic comparisons, instead showing that students selected through the lottery were comparable to those not selected but who still entered the lottery (excluding all those who didn’t enter the lottery). While this information is relevant to the research question at hand (comparing effectiveness among lottery winners and losers), it skips over entirely another potentially relevant tidbit – whether, on average, the charter students are comparable to students in surrounding schools.

Alternatively, charter advocate researchers will compare charter characteristics to district wide averages, or whatever comparison sheds the most favorable light.  For example, Matt DiCarlo explains of Caroline Hoxby’s NYC charter research that:

“The authors compare the racial composition of charter students to that of students throughout the whole city – not to that of students in the neighborhoods where the charters are located, which is the appropriate comparison (one that is made in neither the summary nor the body of the report). For example, NYC charter schools are largely concentrated in Harlem, central Brooklyn and the South Bronx, where regular public schools are predominantly non-white and non-Asian (just like the charters).”

The better approach is, of course, to compare against the, well, most comparable schools – those serving similar grade levels in the same general proximity – or, better yet, to identify each individual school (so that one can determine comparable grade levels) among districts in similar locations.

Here’s my general guide to making your own comparisons using a readily available data source.

Go to: www.nces.ed.gov/ccd

Use the Build a Table function: http://nces.ed.gov/ccd/bat/

  1. Select as many years of data as you want/need (first screen toggle)
  2. Select the “school” as your unit of analysis for your data (first screen, drop down)
  3. Select “contact information” from the drop down menu on next screen
    1. Select location zip code
    2. Select location city
  4. Select “classification information” from the drop down menu
    1. Select the “charter” indicator
    2. Select the “magnet” indicator (in case you want to include/exclude these)
  5. Select “total enrollment” from the drop down menu
    1. Select total enrollment
  6. Select “students in special programs” from the drop down menu
    1. Select students qualifying for free lunch
    2. Select students qualifying for reduced price lunch
  7. Select “Grade Span Information” from the drop down menu
    1. Select “school level” identifier
    2. Select “High Grade” and “Low Grade” indicators if you want more flexibility in comparing “like” schools
  8. Pick the state or states you want (you can’t use this tool to pull all schools nationally because the data set will be too large. Complete data are downloadable at: http://nces.ed.gov/ccd/pubschuniv.asp )

Calculate Percent Free Lunch and Percent Free & Reduced Lunch (divide groups by total enrollment)!
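For those working in code rather than a spreadsheet, here’s a minimal sketch of that calculation using pandas. The column names are illustrative stand-ins – actual CCD field names vary by survey year, so rename to match your download:

```python
import pandas as pd

# Toy extract standing in for a CCD "Build a Table" download.
# Rename the columns to match the fields in your actual file.
df = pd.DataFrame({
    "school":        ["Charter A", "Neighborhood B", "Magnet C"],
    "enrollment":    [400, 650, 300],
    "free_lunch":    [120, 520, 90],
    "reduced_lunch": [40, 65, 30],
})

# Percent free lunch, and percent free & reduced lunch
df["pct_free"] = df["free_lunch"] / df["enrollment"]
df["pct_free_reduced"] = (df["free_lunch"] + df["reduced_lunch"]) / df["enrollment"]

print(df[["school", "pct_free", "pct_free_reduced"]])
```

From there, grouping or sorting by zip code puts each charter right next to the traditional public schools in its own neighborhood – exactly the comparison the figures below make.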

Play…

Here are some examples…

First, here are a handful of New Jersey Charter Schools compared to other schools (comparable and not) in their same zip code.

In this first figure, from a Newark, NJ zip code, we can see quite plainly that the share of children qualifying for free lunch at Robert Treat Academy is much lower than at all other surrounding schools, including the zip code’s high school (Barringer) – and high schools typically have lower rates of students qualifying for (or filing the relevant forms for) free lunch.

Here are a few more.

Other “high flying” charters in Newark including North Star Academy, Gray Charter School and Greater Newark Academy, in a zip code with fewer traditional public schools, tend to have poverty concentrations more similar to specialized/magnet schools than to neighborhood schools in Newark. Other charter schools like Maria Varisco Rogers and Adelaide Sanford have populations more comparable to traditional neighborhood schools.  But, we don’t tend to hear as much about these schools – or their great academic successes.

Things aren’t too different over in Jersey City.  In the area (zip code) around Learning Community Charter School, other charters and neighborhood schools have much higher rates of children qualifying for free lunch than LCCS. Only the special Explore 2000 school has a lower rate.

Ethical Community Charter also stands out like a sore thumb when compared to all other schools in the same zip code, including those serving upper grades which typically have lower rates.

But what about those NYC KIPP schools? How about some KIPP BY ZIP?

So much has been made of the successes of KIPP middle schools, coupled with much contentious debate over whether KIPP schools really serve representative populations and/or whether they are advantaged by selective attrition. I included some links to relevant studies on those points here. But even those studies, which make many relevant and interesting comparisons, don’t give the simple demographic comparison to other middle schools in the same neighborhood. So here it is: