Charter School Group Links Poverty to School Grades
A just-released report from the research arm of the Arizona Charter Schools Association adds to the body of evidence that school performance – and the method the state uses to hold schools accountable – are heavily affected by poverty levels.
The Center for Student Achievement concludes rather succinctly that “the school ratings used for the 2011-12 school year are, to a large extent, tied to the degree of poverty in a school.”
The center was created in 2011 by the charter schools association with the mission of improving student achievement in all schools – regardless of whether they are of the district or charter variety. The organization of which it is a part represents 80 percent of the charter schools in the state.
The center’s latest report, entitled “School Ratings: Improving the Data in Data-Driven Decision Making,” takes aim at the state grading system. It notes the state system is intended to “convey a judgment as to the school’s quality or effectiveness at educating students” with “A” schools doing a great job and “D” schools doing a lousy one. But the report asks, “Can we reliably determine that ‘C’ schools are lower quality schools, or is it more appropriate to assume that they serve a greater proportion of students in poverty?”
The center found the latter to be the case. Its report goes on to add new insight to previous findings.
Just a month ago this website published a report finding the same correlation between school performance and the socio-economic circumstances of students.
The same point had been made in a 2003 study done by Research Advisory Services, Inc., in support of a lawsuit filed by seven school districts. The plaintiffs in the case known as Crane Elementary School District v. State of Arizona alleged the school funding formula violated the state constitution by not providing adequate education to the state’s “at-risk” students.
The state Court of Appeals didn’t dispute the correlation made between poverty and performance, but it ruled the state wasn’t obligated to do anything to address it.
Different Measures, Same Result
The three studies chose different but related measures to represent school performance. The analysis done for the lawsuit was based on the percentage of students who passed the AIMS tests. Thinking Arizona used the mean scores on those tests. The new report studies the grades given to schools based on the test results.
Regardless of the performance measure, the results are the same. Scatter-plot diagrams presented in the three reports paint the same pattern. As poverty levels go up, school performance goes down.
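The downward-sloping pattern in those scatter plots can be summarized with a single number. Here is a minimal sketch, using entirely hypothetical school data (the reports' underlying figures are not reproduced here), of computing a Pearson correlation between poverty rates and AIMS passing rates:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical schools: (percent of students in poverty, percent passing AIMS)
poverty = [10, 25, 40, 55, 70, 85]
passing = [88, 80, 71, 63, 52, 45]

# A value near -1 is the numerical signature of the scatter-plot pattern:
# as poverty rises, performance falls.
print(round(pearson_r(poverty, passing), 3))
```

A correlation this strong in sample data is, of course, an artifact of the made-up numbers; the real-world relationship is strong but noisier.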
The consistency of the findings is important. It should scotch the belief, deeply held by some, that school performance depends solely on the quality of the schools themselves. Those who stubbornly cling to that position do so on the strength of their convictions rather than on any substantive proof.
The center’s study of all schools in the state — district and charter — was done by Anabel Aportela and Ildiko Laczko-Kerr, both of whom have Ph.D.s and extensive experience in measuring academic achievement. Each previously held posts in the state Department of Education, which devised the state grading system.
The report advances previous analysis into new territory by examining the mechanism the state believes gives all schools a chance at a good grade. Grades are based on a total score of 1) student proficiency (the percent passing AIMS) and 2) student growth from one measurement period to the next. The theory goes that poor schools can make up for shortcomings in proficiency by excelling in growth.
Proficiency Overshadows Growth
Indeed, the study found, student growth occurs almost to the same degree in disadvantaged schools as it does in advantaged schools. The problem is that it doesn’t score out as highly as proficiency. Each is ostensibly worth 100 points in the state formula. But the center determined that the mean score for proficiency is 72; the mean score for growth is only 52.
The imbalance favors richer schools and penalizes poorer schools, the center found. The grading system, according to the report, “fails to adequately control for the effect of poverty on indicators of achievement in order to measure the school’s contribution to learning.”
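The arithmetic behind that imbalance is easy to see with two hypothetical schools (the point totals below are invented for illustration, not drawn from the report). Because growth points run about 20 points scarcer than proficiency points on average, even a school that beats the statewide growth mean by more than a rich school beats the proficiency mean can still end up with the lower total:

```python
# Per the report: each component is nominally worth 100 points, but the
# statewide mean is 72 for proficiency and only 52 for growth.
PROFICIENCY_MEAN = 72
GROWTH_MEAN = 52

def total_score(proficiency, growth):
    """Letter grades rest on the sum of the two 100-point components."""
    return proficiency + growth

# Hypothetical advantaged school: proficiency 13 points above the mean,
# growth exactly at the mean.
school_a = total_score(proficiency=85, growth=52)   # 137

# Hypothetical disadvantaged school: growth 18 points above the mean --
# a bigger relative feat -- but proficiency well below it.
school_b = total_score(proficiency=55, growth=70)   # 125

print(school_a, school_b)
```

The disadvantaged school outperforms its component mean by more, yet still trails by a dozen points — which is the center's complaint in miniature.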
The ramifications of using a measurement system that is stacked against disadvantaged schools, the report said, “are particularly important in a policy environment that seeks to attribute the results to educators and attach significant rewards or consequences.”
The report states, “The assumption [is] that the school accountability ratings are actually measuring what we think they are measuring: school quality.” The trouble with that, the report cautions, is: “The relationship between poverty and measures of achievement (e.g., percent of students passing a state’s standardized test) has long been a limitation of measurement of student achievement in education. This is not to say that schools do not make a difference; they do, but it is often difficult to measure their effect.”
This assessment is right on point. The report disappoints, however, in the remedies it offers.
The authors suggest five ways to tweak the achievement formula, such as basing grades on multiple years of data. While one or more of the suggestions might right the ship, they risk merely rearranging the deck chairs.
The only real way to subtract out the effects of student poverty on school performance is to explicitly enter it into the equation. Only then will we get a better idea of which schools are performing above, or below, their respective circumstances.
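One standard way to do that — offered here as a sketch with hypothetical data, not as the report's or the state's method — is to regress performance on poverty and then judge each school by its residual: how far above or below its poverty-predicted level it actually performs.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical schools: percent in poverty vs. percent passing AIMS
poverty = [10, 25, 40, 55, 70, 85]
passing = [88, 80, 71, 63, 52, 45]

a, b = fit_line(poverty, passing)
residuals = [y - (a + b * x) for x, y in zip(poverty, passing)]

# Positive residual: performing above its circumstances; negative: below.
for pov, res in zip(poverty, residuals):
    print(f"poverty {pov:2d}%: {res:+.1f} points vs. expectation")
```

Under a scheme like this, two schools with identical raw scores but very different poverty levels would no longer receive identical judgments — which is precisely the adjustment the current grading formula lacks.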