Just this week in Ohio – a state that continues to contract with SAS Institute Inc. for test-based accountability output from its Education Value-Added Assessment System (EVAAS) – SAS's EVAAS Director, John White, "defended" the statewide use of his model before Ohio's Joint Education Oversight Committee (JEOC), claiming in the process that "poorer schools do no better or worse on student growth than richer schools" under the EVAAS model.
For the record, this is false. First, about five years ago, while Ohio was using this same EVAAS model, The Plain Dealer, in conjunction with StateImpact Ohio, found that Ohio's "value-added results show that districts, schools and teachers with large numbers of poor students tend to have lower value-added results than those that serve more-affluent ones." They also found that:
- Value-added scores were 2½ times higher on average for districts where the median family income is above $35,000 than for districts with income below that amount.
- For low-poverty school districts, two-thirds had positive value-added scores — scores indicating students made more than a year’s worth of progress.
- For high-poverty school districts, two-thirds had negative value-added scores — scores indicating that students made less than a year’s progress.
- Almost 40 percent of low-poverty schools scored “Above” the state’s value-added target, compared with 20 percent of high-poverty schools.
- At the same time, 25 percent of high-poverty schools scored “Below” state value-added targets while low-poverty schools were half as likely to score “Below.” See the study here.
Second, about three years ago, similar results were evidenced in Pennsylvania – another state that uses the same EVAAS statewide, although in Pennsylvania the model is known as the Pennsylvania Education Value-Added Assessment System (PVAAS). Research for Action (click here for more about the organization and its mission) evidenced, more specifically, that similar bias also exists, particularly at the school level. See more here.
Third, and related, in Arizona – my state, which also uses growth to measure school-level value-added, albeit not with the EVAAS – the same bias is evident when measuring school-level growth for similar purposes. Just two days ago, for example, The Arizona Republic reported that the "schools with 'D' and 'F' letter grades" recently released by the state board of education "were more likely to have high percentages of students eligible for free and reduced-price lunch, an indicator of poverty" (see more here). The correlation is in fact as "strong" as r = -0.60 (correlation coefficients between r = ±0.50 and ±1.00 are conventionally considered "strong"). In more pragmatic terms, the better the school letter grade received, the lower the level of poverty at the school: a negative correlation, meaning in this case that as the letter grade goes up, the level of poverty goes down.
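To make this interpretation concrete, here is a minimal sketch of how such a Pearson correlation is computed and why a negative r means better grades pair with less poverty. The numbers below are entirely made up for illustration and are not the actual Arizona data; letter grades are coded numerically (A = 4 through F = 0).

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical schools: higher letter grades tend to pair with lower
# free/reduced-price lunch percentages (our stand-in for poverty).
grades = [4, 4, 3, 3, 2, 2, 1, 1, 0, 0]                # A=4 ... F=0
poverty = [15, 30, 25, 45, 50, 60, 65, 80, 75, 90]      # % free/reduced lunch

r = pearson_r(grades, poverty)
print(round(r, 2))  # a strong negative correlation (between -0.5 and -1.0)
```

A value in the -0.50 to -1.00 range, like the r = -0.60 reported for Arizona, indicates that knowing a school's poverty level tells you a great deal about the letter grade it is likely to receive.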
While the state of Arizona combines a proficiency measure (always strongly correlated with poverty) with growth, which explains at least some of the strength of this correlation (and combining proficiency with growth is a practice endorsed and encouraged by John White himself), this strong correlation is certainly at issue.
More specifically at issue, though, should be how to get any such correlation down to zero or near-zero, if possible. That is the only correlation that would, in fact, warrant the claim made to the JEOC this week in Ohio that "poorer schools do no better or worse on student growth than richer schools."
Thanks for keeping on top of this sham in Ohio and in other states. As a person who is not accustomed to thinking of human growth and development in numerical terms, unless the matter is physical growth (height, weight, and the like), I have a slow tummy roll every time I encounter claims about growth, as if growth can be reduced to an increase in scores on standardized tests, VAMs, and the like.
Larry Cuban is also doing some blog posts on the neglected aspects of Campbell's research: that out-of-school factors are far more significant than in-school ones, including something as basic as the time devoted to learning in schools. He also posted a wonderful bell curve labeled, at the peak of the curve, "I have achieved mediocrity."
We use this same EVAAS system here in North Carolina, and it's just as bogus. One of our district officials has been peppering us with supposed "growth" scores from EVAAS data that compare the level scores (as opposed to the raw scores or even the adjusted cut scores) on the 8th-grade EOG science test with those on the 9th-grade EOC Biology test. She is oblivious to the fact that not only are we comparing scores that are twice-derived (and thus filtered through a whole set of assumptions), but the two tests aren't even comparable: more than half of the 8th-grade test's material comes from the physical science portion of the 8th-grade curriculum, and even its life sciences sections bear very little relation to the 9th-grade biology test. Yet we're somehow supposed to use this for "data-driven" decision making. We're just lucky that they haven't tried to use it for evaluations…yet.
OK. Thanks for this information.