Following up on “On Rating The Effectiveness of Colleges of Education Using VAMs” – which described how the US Department of Education wants teacher training programs to track how colleges of education’s teacher graduates’ students perform on standardized tests (i.e., teacher-level value-added that reflects all the way back to a college of education’s purported quality) – the proposal for these new sanctions is now open for public comment.
Click here on Regulations.gov, “Your Voice in Federal Decision-Making,” to read more and to post any comments you might have (click the blue “Comment Now!” button in the upper right-hand corner). I encourage you all to post your concerns, as this really is a potential case of things going from bad to worse in the universe of VAMs. The deadline is Monday, February 2, 2015.
I pasted what I submitted below, taken from an article I published about this in Teachers College Record in 2013:
1. The model posed is inappropriately one-dimensional. More than 50% of college graduates attend more than one higher education institution before receiving a bachelor’s degree (Ewell, Schild, & Paulson, 2003), and approximately 60% of teacher education occurs in general liberal arts and sciences, and other academic departments outside of teacher education. There are many more variables that contribute to teachers’ knowledge by the time they graduate than just the teacher education program (Anrig, 1986; Darling-Hammond & Sykes, 2003).
2. The implied assumptions of the aforementioned linear formula are overly simplistic given the nonrandomness of the teacher candidate population…If teacher candidates who enroll in a traditional teacher education program are arguably different from teacher candidates who enroll in an alternative program, and both groups are compared once they become teachers, one group might have a distinct and unfair advantage over the other…What cannot be overlooked, controlled for, or dismissed from these comparative investigations are teachers’ enduring qualities that go beyond their preparation (Boyd et al., 2006; Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2007; Harris & Sass, 2007; Shulman, 1988; Wenglinsky, 2002).
3. Teachers are nonrandomly distributed into schools after graduation as well. The type of teacher education program from which a student graduates is highly correlated with the type and location of the school in which the teacher enters the profession (Good et al., 2006; Harris & Sass, 2007; Rivkin, 2007; Wineburg, 2006), especially given the geographic proximity of the program…Without randomly distributing teachers across schools, comparison groups will never be adequately equivalent, as implied in this model, to warrant valid assertions about teacher education quality (Boyd et al., 2006; Good et al., 2006). It should be noted, however, that whether the use of students’ pretest scores and other covariates can account or control for such inter- and intra-classroom variations is still being debated and remains highly uncertain (Ballou, Sanders, & Wright, 2004; Capitol Hill Briefing, 2011; Koedel & Betts, 2010; Kupermintz, 2003; McCaffrey, Lockwood, Koretz, Louis, & Hamilton, 2004; J. Rothstein, 2009; Tekwe et al., 2004).
4. Students are also not randomly placed into classrooms…Students’ innate abilities and motivation levels bias even the most basic examinations in which researchers attempt to link teachers with student learning (Newton et al., 2010; Harris & Sass, 2007; Rivkin, 2007)…the degree to which such systematic errors, often considered measurement biases, [still] impact value-added output remains highly unsettled (Ballou et al., 2004; Capitol Hill Briefing, 2011; Koedel & Betts, 2010; Kupermintz, 2003; McCaffrey et al., 2004; J. Rothstein, 2009; Tekwe et al., 2004).
5. A student’s performance is also empirically compounded by what teachers learn “on the job” post-graduation via professional development (see, for example, Greenleaf et al., 2011). If researchers are to measure the impact of a teacher education program using student achievement, and graduates have received professional development, mentoring, and enrichment opportunities post-graduation, one must question whether it is feasible to disentangle the impact that professional development, versus teacher education, has on teacher quality and students’ learning over time. Graduates’ opportunities to learn on the job, and the extent to which they take advantage of such opportunities, introduce yet another source of construct-irrelevant variance (CIV) into what seemed to be the conceptually simple relational formula presented earlier (Good et al., 2006; Harris & Sass, 2007; Rivkin, 2007; Yinger, Daniel, & Lawton, 2007). CIV is generally prevalent when a test measures too many variables, including extraneous and uncontrolled variables that ultimately impact test outcomes and test-based inferences (Haladyna & Downing, 2004) [and the statistics, no matter how sophisticated they might be, cannot control for or factor all of this out].