Following up on “On Rating The Effectiveness of Colleges of Education Using VAMs” – a post about how the US Department of Education wants teacher training programs to track how the students taught by colleges of education’s teacher graduates perform on standardized tests (i.e., teacher-level value-added that reflects all the way back to a college of education’s purported quality) – the proposal for these new sanctions is now open for public comment.
Click here on Regulations.gov, “Your Voice in Federal Decision-Making,” to read more and to post any comments you might have (click the blue “Comment Now!” button in the upper right-hand corner). I encourage you all to post your concerns, as this really is a potential case of things going from bad to worse in the universe of VAMs. The deadline is Monday, February 2, 2015.
I pasted what I submitted below, as taken from an article I published about this in Teachers College Record in 2013:
1. The model posed is inappropriately one-dimensional. More than 50% of college graduates attend more than one higher education institution before receiving a bachelor’s degree (Ewell, Schild, & Paulson, 2003), and approximately 60% of teacher education occurs in general liberal arts and sciences, and other academic departments outside of teacher education. There are many more variables that contribute to teachers’ knowledge by the time they graduate than just the teacher education program (Anrig, 1986; Darling-Hammond & Sykes, 2003).
2. The implied assumptions of the aforementioned linear formula are overly simplistic given the nonrandomness of the teacher candidate population…If teacher candidates who enroll in a traditional teacher education program are arguably different from teacher candidates who enroll in an alternative program, and both groups are compared once they become teachers, one group might have a distinct and unfair advantage over the other…What cannot be overlooked, controlled for, or dismissed from these comparative investigations are teachers’ enduring qualities that go beyond their preparation (Boyd et al., 2006; Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2007; Harris & Sass, 2007; Shulman, 1988; Wenglinsky, 2002).
3. Teachers are nonrandomly distributed into schools after graduation as well. The type of teacher education program from which a student graduates is highly correlated with the type and location of the school in which the teacher enters the profession (Good et al., 2006; Harris & Sass, 2007; Rivkin, 2007; Wineburg, 2006), especially given the geographic proximity of the program…Without randomly distributing teachers across schools, comparison groups will never be adequately equivalent, as implied in this model, to warrant valid assertions about teacher education quality (Boyd et al., 2006; Good et al., 2006). It should be noted, however, that whether the use of students’ pretest scores and other covariates can account or control for such inter- and intra-classroom variations is still being debated and remains highly uncertain (Ballou, Sanders, & Wright, 2004; Capitol Hill Briefing, 2011; Koedel & Betts, 2010; Kupermintz, 2003; McCaffrey, Lockwood, Koretz, Louis, & Hamilton, 2004; J. Rothstein, 2009; Tekwe et al., 2004).
4. Students are also not randomly placed into classrooms…Students’ innate abilities and motivation levels bias even the most basic examinations in which researchers attempt to link teachers with student learning (Newton et al., 2010; Harris & Sass, 2007; Rivkin, 2007)…the degree to which such systematic errors, often considered measurement biases, impact value-added output is still highly unsettled (Ballou et al., 2004; Capitol Hill Briefing, 2011; Koedel & Betts, 2010; Kupermintz, 2003; McCaffrey et al., 2004; J. Rothstein, 2009; Tekwe et al., 2004).
5. A student’s performance is also empirically compounded by what teachers learn “on the job” post-graduation via professional development (see, for example, Greenleaf et al., 2011). If researchers are to measure the impact of a teacher education program using student achievement, and graduates have received professional development, mentoring, and enrichment opportunities post-graduation, one must question whether it is feasible to disentangle the impact that professional development, versus teacher education, has on teacher quality and students’ learning over time. Graduates’ opportunities to learn on the job, and the extent to which they take advantage of such opportunities, introduce yet another source of construct irrelevant variance (CIV) into what seemed to be the conceptually simple relational formula presented earlier (Good et al., 2006; Harris & Sass, 2007; Rivkin, 2007; Yinger, Daniel, & Lawton, 2007). CIV is generally prevalent when a test measures too many variables, including extraneous and uncontrolled variables that ultimately impact test outcomes and test-based inferences (Haladyna & Downing, 2004) [and the statistics, no matter how sophisticated they might be, cannot control for or factor all of this out].
We in K-12 know from experience that VAMs have no place in evaluation.
However, far too many in higher ed and teacher prep programs have been quiet about VAMs in K-12.
It’s time they pay the full fare; we can form a powerful force working together, but we all have to carry the water.
As a teacher with a master’s degree in elementary education from the University of Richmond and as a retired public school teacher of 40 years, I implore you to listen to professional educators such as myself. Please read this petition and follow the advice of education professionals, not business professionals. Thank you for your consideration. Alice Towery
I appreciate your work on the VAM problem. Equal attention needs to be given to the use of SLOs for evaluating teacher education in so-called untested and non-tested subjects. It has been estimated that about 65-69% of teachers have job assignments for which there are no state-wide tests. SLOs (and variants) are the proxy of choice for VAM. This writing exercise is required in at least 27 states, with pretest-posttest and/or baseline-to-posttest reports on student growth. Four reports from USDE (2014) show that there is no empirical research to support the use of the SLO process (and associated district-devised tests and cut-off scores) for teacher evaluation.
The template for SLOs originated in Denver in 1999. It has been widely copied and promoted via publications from USDE’s “Reform Support Network,” which operates free of any need for evidence and few constraints other than marketing a deeply flawed product. SLO templates in wide use have no peer reviewed evidence to support their use for teacher evaluation…not one reliability study, not one study addressing their validity for teacher evaluation.
SLO templates in Ohio and other states are designed to fit the teacher-student data link project (funded by Gates and USDE since 2005). This means that USDE’s proposed evaluations of specific teacher education programs (e.g., art education at Ohio State University) will be aided by the use of extensive “teacher of record” data routinely gathered by schools and districts, including personnel files that typically require the teacher’s college transcripts, degree earned, certifications, scores on tests for any teacher license, and so on.
There are technical questions galore, but a big chunk of the data of interest to the promoters of this latest extension of the Gates/USDE rating game is already in place.
I have written about the use of SLOs as a proxy for VAM in an unpublished paper titled The Marketing of Student Learning Objectives (SLOs): 1999-2014. A pdf with references can be obtained by request at chapmanLH@aol.com
I will work on it, although for the time being I will need to enlist at least one other person to help. Interested helpers, do email.
Great response; thank you for your wonderful leadership in fighting these authoritarian testing mandates. They are clearly not based on research or good science.
I posted my extended response to the Federal Register on my own blog and on Daily Kos: http://www.dailykos.com/story/2015/01/20/1358972/-Comment-on-Awful-NPRM-for-Teacher-Training
I would also like to see coverage of SLOs. My state will be adopting them. But after taking the training and attempting to write them for special education students, it’s hard for me to believe there could be any validity to this process.