Following up on my most recent post about “School-Level Bias in the PVAAS Model in Pennsylvania,” the same problem appears to exist in Ohio – a state that also uses “the best” and “most sophisticated” VAM (i.e., a version of the Education Value-Added Assessment System [EVAAS]; for more information click here) – as per an older (2013) article just sent to me following my prior post “Teachers’ ‘Value-Added’ Ratings and [their] Relationship to Student Income Levels [being] Questioned.”
The key finding? Ohio’s “2011-12 value-added results show that districts, schools and teachers with large numbers of poor students tend to have lower value-added results than those that serve more-affluent ones.” Such output continues to evidence how VAMs may not be “the great equalizer” after all. VAMs might not be the “true” measures they are assumed (and marketed/sold) to be.
Here are the state’s stats, as highlighted in this piece and taken directly from the Plain Dealer/StateImpact Ohio analysis:
- Value-added scores were 2½ times higher on average for districts where the median family income is above $35,000 than for districts with income below that amount.
- For low-poverty school districts, two-thirds had positive value-added scores — scores indicating students made more than a year’s worth of progress.
- For high-poverty school districts, two-thirds had negative value-added scores — scores indicating that students made less than a year’s progress.
- Almost 40 percent of low-poverty schools scored “Above” the state’s value-added target, compared with 20 percent of high-poverty schools.
- At the same time, 25 percent of high-poverty schools scored “Below” state value-added targets while low-poverty schools were half as likely to score “Below.”
One issue likely causing this level of bias is that student background factors are not explicitly accounted for in the EVAAS model. Rather, the EVAAS uses students’ prior test scores to control for these factors: “By including all of a student’s testing history, each student serves as his or her own control,” as per SAS, the analytics company that now sells the model and runs its calculations. As per a document available on the SAS website: “To the extent that SES/DEM [socioeconomic/demographic] influences persist over time, these influences are already represented in the student’s data. This negates the need for SES/DEM adjustment.”
Within the same document, the SAS authors write that adjusting value-added for the race or poverty level of a student adds the dangerous assumption that those students will not perform well, that it would create separate standards for measuring students who have the same history of scores on previous tests, and that adjusting for these issues can also hide whether students in poor areas are receiving worse instruction. “We recommend that these adjustments not be made,” the brief reads. “Not only are they largely unnecessary, but they may be harmful.”
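To make the distinction being debated concrete, below is a minimal, hypothetical sketch in Python. It is not SAS’s proprietary EVAAS specification (which is far more elaborate), and all of the data and variable names are simulated for illustration only; it simply contrasts a school “value-added” estimate that controls only for students’ prior test scores with one that also adjusts for a socioeconomic (SES) indicator.

```python
# Hypothetical illustration only -- NOT SAS's EVAAS model. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated students: an SES indicator, a prior-year score, and a school indicator.
ses = rng.binomial(1, 0.5, n)                # 1 = low-income (simulated)
prior = 55 - 5 * ses + rng.normal(0, 10, n)  # prior score correlated with SES
school = rng.binomial(1, 0.3 + 0.4 * ses)    # low-income students more likely to attend this school
# Simulated data-generating process: SES still affects current-year growth beyond the prior score.
current = 0.8 * prior + 2.0 * school - 3.0 * ses + rng.normal(0, 8, n)

def ols(y, cols):
    """Ordinary least squares; returns coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Model 1 (the stance described above): prior score is the only control.
b1 = ols(current, [prior, school])
# Model 2 (the adjustment SAS argues against): also control for the SES indicator.
b2 = ols(current, [prior, school, ses])

print("school effect, prior-score control only:   ", round(b1[2], 2))
print("school effect, prior-score + SES adjustment:", round(b2[2], 2))
```

In this toy setup, where income is correlated both with the school a student attends and with current-year growth, the prior-score-only model attributes part of the income-related difference to the school itself, which is the kind of pattern at issue in the Plain Dealer/StateImpact analysis above.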
See also another article, also from Ohio, about how a value-added “Glitch Cause[d the] State to Pull Back Teacher ‘Value Added’ Student Growth Scores.” From the intro: “An error by contractor SAS Institute Inc. forced the state to withdraw some key teacher performance measurements that it had posted online for teachers to review. The state’s decision to take down the value added scores for teachers across the state on Wednesday has some educators questioning the future reliability of the scores and other state data…” (click here to read more).