After our recent post regarding the DC Public Schools’ IMPACT Evaluation System, and Diane Ravitch’s follow-up, a DC teacher wrote to Diane expressing his concerns about his DC IMPACT evaluation scores. He attached the two sets of scores he received after his supervising administrator and a master educator each observed the same 30-minute lesson he taught to the same class.
First, take a look at his scores summarized below. Please note that other supportive “evidence” (e.g., notes re: what was observed to support and warrant the scores below) was available, but for purposes of brevity and confidentiality this “evidence” is not included here.
As you can easily see, these two evaluators were very much NOT on the same evaluation page, again, despite observing the same lesson, in the same class, at the same time.
| Evaluative Criteria | Definition | Administrator Score (Mean = 1.44) | Master Educator Score (Mean = 3.11) |
|---|---|---|---|
| TEACH 1 | Lead Well-Organized, Objective-Driven Lessons | 1 (Ineffective) | 4 (Highly Effective) |
| TEACH 2 | Explain Content Clearly | 1 (Ineffective) | 3 (Effective) |
| TEACH 3 | Engage Students at All Learning Levels in Rigorous Work | 1 (Ineffective) | 3 (Effective) |
| TEACH 4 | Provide Students Multiple Ways to Engage with Content | 1 (Ineffective) | 3 (Effective) |
| TEACH 5 | Check for Student Understanding | 2 (Minimally Effective) | 4 (Highly Effective) |
| TEACH 6 | Respond to Student Understandings | 1 (Ineffective) | 3 (Effective) |
| TEACH 7 | Develop Higher-Level Understanding through Effective Questioning | 1 (Ineffective) | 2 (Minimally Effective) |
| TEACH 8 | Maximize Instructional Time | 2 (Minimally Effective) | 3 (Effective) |
| TEACH 9 | Build a Supportive, Learning-Focused Classroom Community | 3 (Effective) | 3 (Effective) |
Overall, Mr. T (an obvious pseudonym) received a 1.44 from his supervising administrator and a 3.11 from the master educator, with scores ranging from 1 = Ineffective to 4 = Highly Effective.
This is particularly important as illustrated in the prior post (Footnote 8 of the full piece, to be exact), because “Teacher effectiveness ratings were based on, in order of importance by the proportion of weight assigned to each indicator [including first and foremost]: (1) scores derived via [this] district-created and purportedly “rigorous” (Dee & Wyckoff, 2013, p. 5) yet invalid (i.e., not having been validated) observational instrument with which teachers are observed five times per year by different folks, but about which no psychometric data were made available (e.g., Kappa statistics to test for inter-rater consistencies among scores).” For all DC teachers, this is THE observational system used, and for 83% of them these data are weighted at 75% of their total “worth” (Dee & Wyckoff, 2013, p. 10). This is precisely the system that is receiving (and gaining) praise, especially as it has thus far led to teacher bonuses (professedly up to $25,000 per year) as well as terminations of more than 500 teachers (≈ 8%) throughout DC’s Public Schools. Yet as evident here, again, this system has some fatal flaws and serious issues, despite its praised “rigor” (Dee & Wyckoff, 2013, p. 5).
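As an illustrative aside (not part of the original post’s analysis): the kind of inter-rater statistic the district reportedly never released, e.g., Cohen’s kappa, can be computed directly from the nine paired scores in the table above. A minimal sketch in Python, using only the standard library:

```python
from collections import Counter

# Paired scores from the table above (TEACH 1-9), same lesson, two observers.
admin = [1, 1, 1, 1, 2, 1, 1, 2, 3]   # supervising administrator
master = [4, 3, 3, 3, 4, 3, 2, 3, 3]  # master educator

def cohen_kappa(r1, r2, categories=(1, 2, 3, 4)):
    """Unweighted Cohen's kappa: exact agreement corrected for chance."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum((c1[c] / n) * (c2[c] / n) for c in categories)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

print(round(sum(admin) / 9, 2), round(sum(master) / 9, 2))  # → 1.44 3.11
print(round(cohen_kappa(admin, master), 3))                 # → 0.014
```

The two observers agree exactly on only one of nine criteria, and the resulting kappa of about 0.014 is essentially chance-level agreement. With only nine items from one lesson this is illustrative, not a formal reliability study, but it shows how easily the check could have been run and reported.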
See also ten representative comments taken from both the administrator’s evaluation form and the master educator’s evaluation form. Revealed here, as well, are MAJOR issues and discrepancies that should not occur in any “objective” and “reliable” evaluation system, especially in one to which such major consequences are attached and that has been, accordingly, so “rigorously” praised (Dee & Wyckoff, 2013, p. 5).
Administrator’s Comments:
1. The objective was not posted nor verbally articulated during the observation… Students were asked what the objective was and they looked to the board, but saw no objective [posted].
2. There was limited evidence that students mastered the content based on the work they produced.
3. Explanations of content weren’t clear and coherent based on student responses and the level of attention that Mr. T had to pay to most students.
4. Students were observed using limited academic language throughout the observation.
5. The lesson was not accessible to students and therefore posed too much challenge based on their level of ability.
6. [T]here wasn’t an appropriate balance between teacher‐directed and student‐centered learning.
7. There was limited higher-level understanding developed based on verbal conferencing or work products that were created.
8. Through [checks for understanding] Mr. T was able to get the pulse of the class… however there was limited evidence that Mr. T understood the depth of student understanding.
9. There were many students that had misunderstandings based on student responses from putting their heads down to moving to others to talk instead of work.
10. Inappropriate behaviors occurred regularly within the classroom.
Master Educator’s Comments:
1. Mr. T was highly effective at leading well-organized, objective-driven lessons.
2. Mr. T’s explanations of content were clear and coherent, and they built student understanding of content.
3. All parts of Mr. T’s lesson significantly moved students towards mastery of the objective as evidenced by students.
4. Mr. T included learning styles that were appropriate to students’ needs, and all students responded positively and were actively involved.
5. Mr. T’s explanations of content were clear and coherent, and they built student understanding of content.
6. Mr. T was effective at engaging students at all levels in accessible and challenging work.
7. Students had adequate opportunities to meaningfully practice, apply, and demonstrate what they are learning.
8. Mr. T always used appropriate strategies to ensure that students moved toward higher-level understanding.
9. Mr. T was effective at maximizing instructional time…Inappropriate or off-task student behavior never interrupted or delayed the lesson.
10. Mr. T was effective at building a supportive, learning-focused classroom community. Students were invested in their work and valued academic success.
In sum, as Mr. T wrote in his email to Diane, while he is “fortunate enough to have a teaching position that is not affected by VAM nonsense…that doesn’t mean [he’s] completely immune from a flawed system of evaluations.” This “supposedly ‘objective’ measure seems to be anything but.” Is the administrator correct in positioning Mr. T as ineffective? Or might it be, perhaps, that the master educator was “just being too soft”? Either way, “it’s confusing and it’s giving [Mr. T.] some thought as to whether [he] should just spend the school day at [his] desk working on [his] resumé.”
Our thanks to Mr. T for sharing his DC data, and for sharing his story!