’Tis the award season, and every year at this time the National Education Policy Center (NEPC) recognizes the “lowlights” in educational research over the previous year with its annual Bunkum Awards. To view the entertaining video presentation of the awards, hosted by my mentor David Berliner (Arizona State University), please click here.
Lowlights, specifically defined, include research studies in which researchers present, and often oversell thanks to many media outlets, “weak data, shoddy analyses, and overblown recommendations.” As the Razzies are to the Oscars in film, so the Bunkums are to educational research studies in the Academy of Education. And like the Razzies, “As long as the bunk [like junk] keeps flowing, the awards will keep coming.”
As David Berliner notes in his introduction to the video, “the taxpayers who finance public education deserve smart [educational] policies based on sound [research-based] evidence.” This is precisely why these awards are both necessary and morally imperative.
One of this year’s deserving honorees is of particular pertinence here. This is — drum roll — the ‘We’re Pretty Sure We Could Have Done More with $45 Million’ Award, given to the Bill & Melinda Gates Foundation for two culminating reports released this year from their Measures of Effective Teaching (MET) Project. To see David’s presentation on this award specifically, scroll to minute 3:15 (to 4:30) in the aforementioned video.
Those at NEPC write about these studies: “We think it important to recognize whenever so little is produced at such great cost. The MET researchers gathered a huge data base reporting on thousands of teachers in six cities. Part of the study’s purpose was to address teacher evaluation methods using randomly assigned students. Unfortunately, the students did not remain randomly assigned and some teachers and students did not even participate. This had deleterious effects on the study–limitations that somehow got overlooked in the infinite retelling and exaggeration of the findings.
When the MET researchers studied the separate and combined effects of teacher observations, value-added test scores, and student surveys, they found correlations so weak that no common attribute or characteristic of teacher-quality could be found. Even with 45 million dollars and a crackerjack team of researchers, they could not define an “effective teacher.” In fact, none of the three types of performance measures captured much of the variation in teachers’ impacts on conceptually demanding tests. But that didn’t stop the Gates folks, in a reprise from their 2011 Bunkum-winning ways, from announcing that they’d found a way to measure effective teaching, nor did it deter the federal government from strong-arming states into adoption of policies tying teacher evaluation to measures of students’ growth.”
To read the full critique of both of these studies, written by Jesse Rothstein (University of California, Berkeley) and William Mathis (University of Colorado Boulder), please click here.