Mr. T’s Scores on the DC Public Schools’ IMPACT Evaluation System

After our recent post regarding the DC Public Schools’ IMPACT Evaluation System, and Diane Ravitch’s follow-up, a DC teacher wrote to Diane expressing his concerns about his DC IMPACT evaluation scores. He attached the scores he recently received after his supervising administrator and a master educator observed the same 30-minute lesson, taught to the same class.

First, take a look at his scores, summarized below. Please note that other supporting “evidence” (e.g., the evaluators’ notes about what was observed to support and warrant the scores below) was available, but for purposes of brevity and confidentiality this “evidence” is not included here.

As you can easily see, these two evaluators were very much NOT on the same evaluation page, again, despite observing the same lesson, in the same class, at the same time.

Criterion   Definition                                                          Administrator (Mean = 1.44)   Master Educator (Mean = 3.11)
TEACH 1     Lead Well-Organized, Objective-Driven Lessons                       1 = Ineffective               4 = Highly Effective
TEACH 2     Explain Content Clearly                                             1 = Ineffective               3 = Effective
TEACH 3     Engage Students at All Learning Levels in Rigorous Work             1 = Ineffective               3 = Effective
TEACH 4     Provide Students Multiple Ways to Engage with Content               1 = Ineffective               3 = Effective
TEACH 5     Check for Student Understanding                                     2 = Minimally Effective       4 = Highly Effective
TEACH 6     Respond to Student Understandings                                   1 = Ineffective               3 = Effective
TEACH 7     Develop Higher-Level Understanding through Effective Questioning    1 = Ineffective               2 = Minimally Effective
TEACH 8     Maximize Instructional Time                                         2 = Minimally Effective       3 = Effective
TEACH 9     Build a Supportive, Learning-Focused Classroom Community            3 = Effective                 3 = Effective

Overall, Mr. T (an obvious pseudonym) received a 1.44 from his supervising administrator and a 3.11 from the master educator, on a scale ranging from 1 = Ineffective to 4 = Highly Effective.

This is particularly important as illustrated in the prior post (Footnote 8 of the full piece, to be exact), because “Teacher effectiveness ratings were based on, in order of importance by the proportion of weight assigned to each indicator [including first and foremost]: (1) scores derived via [this] district-created and purportedly “rigorous” (Dee & Wyckoff, 2013, p. 5) yet invalid (i.e., not having been validated) observational instrument with which teachers are observed five times per year by different folks, but about which no psychometric data were made available (e.g., Kappa statistics to test for inter-rater consistencies among scores).” For all DC teachers, this is THE observational system used, and for 83% of them these data are weighted at 75% of their total “worth” (Dee & Wyckoff, 2013, p. 10). This is precisely the system that is receiving (and gaining) praise, especially as it has thus far led to teacher bonuses (professedly up to $25,000 per year) as well as terminations of more than 500 teachers (≈ 8%) throughout DC’s Public Schools. Yet, as is evident here again, this system has some fatal flaws and serious issues, despite its praised “rigor” (Dee & Wyckoff, 2013, p. 5).
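In fact, the table above contains everything needed to run the very inter-rater check the district never reported. What follows is a minimal sketch in Python, assuming scikit-learn is available for its cohen_kappa_score function; the nine paired scores come straight from the table above, while the “other components” value used to illustrate the 75% weighting is a hypothetical placeholder, not an actual IMPACT figure. A kappa computed on nine items from a single lesson is, of course, illustrative only, and no substitute for a proper inter-rater reliability study.

    # Inter-rater agreement on Mr. T's nine TEACH scores: a minimal sketch.
    # Requires scikit-learn (pip install scikit-learn).
    from sklearn.metrics import cohen_kappa_score

    # Scores on TEACH 1-9, taken from the table above
    # (1 = Ineffective ... 4 = Highly Effective).
    administrator   = [1, 1, 1, 1, 2, 1, 1, 2, 3]
    master_educator = [4, 3, 3, 3, 4, 3, 2, 3, 3]

    print(round(sum(administrator) / len(administrator), 2))      # 1.44, as reported
    print(round(sum(master_educator) / len(master_educator), 2))  # 3.11, as reported

    # Cohen's kappa: 1.0 = perfect agreement, 0.0 = chance-level agreement.
    # The two raters agree on only one of the nine criteria (TEACH 9),
    # so kappa lands near zero (about 0.01 here).
    print(round(cohen_kappa_score(administrator, master_educator), 2))

    # Under the reported weighting (observations = 75% of most DC teachers'
    # totals), the choice of rater alone swings the composite score.
    # 'other' is a hypothetical stand-in for the remaining 25%.
    other = 3.0
    print(round(0.75 * 1.44 + 0.25 * other, 2))  # 1.83 under the administrator
    print(round(0.75 * 3.11 + 0.25 * other, 2))  # 3.08 under the master educator

Even granting the tiny sample, an agreement statistic this close to chance is exactly the kind of psychometric evidence a system with these stakes should have to report before bonuses and terminations are attached to it.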

See also ten representative comments taken from both the administrator’s evaluation form and the master educator’s evaluation form. Revealed here, as well, are MAJOR issues and discrepancies that should not occur in any “objective” and “reliable” evaluation system, especially in one to which such major consequences are attached and that has been, accordingly, so “rigorously” praised (Dee & Wyckoff, 2013, p. 5).

Administrator’s Comments:
1. The objective was not posted nor verbally articulated during the observation… Students were asked what the objective was; they looked to the board but saw no objective.
2. There was limited evidence that students mastered the content based on the work they produced.
3. Explanations of content weren’t clear and coherent based on student responses and the level of attention that Mr. T had to pay to most students.
4. Students were observed using limited academic language throughout the observation.
5. The lesson was not accessible to students and therefore posed too much of a challenge given their level of ability.
6. [T]here wasn’t an appropriate balance between teacher‐directed and student‐centered learning.
7. There was limited higher-level understanding developed based on verbal conferencing or work products that were created.
8. Through [checks for understanding] Mr. T was able to get the pulse of the class… however there was limited evidence that Mr. T understood the depth of student understanding.
9. There were many students who had misunderstandings, based on student responses ranging from putting their heads down to moving to others to talk instead of working.
10. Inappropriate behaviors occurred regularly within the classroom.

Master Educator’s Comments:
1. Mr. T was highly effective at leading well-organized, objective-driven lessons.
2. Mr. T’s explanations of content were clear and coherent, and they built student understanding of content.
3. All parts of Mr. T’s lesson significantly moved students towards mastery of the objective, as evidenced by students…
4. Mr. T included learning styles that were appropriate to students’ needs, and all students responded positively and were actively involved.
5. Mr. T’s explanations of content were clear and coherent, and they built student understanding of content.
6. Mr. T was effective at engaging students at all levels in accessible and challenging work.
7. Students had adequate opportunities to meaningfully practice, apply, and demonstrate what they are learning.
8. Mr. T always used appropriate strategies to ensure that students moved toward higher-level understanding.
9. Mr. T was effective at maximizing instructional time… Inappropriate or off-task student behavior never interrupted or delayed the lesson.
10. Mr. T was effective at building a supportive, learning-focused classroom community. Students were invested in their work and valued academic success.

In sum, as Mr. T wrote in his email to Diane, while he is “fortunate enough to have a teaching position that is not affected by VAM nonsense…that doesn’t mean [he’s] completely immune from a flawed system of evaluations.” This “supposedly ‘objective’ measure seems to be anything but.” Is the administrator correct in positioning Mr. T as ineffective? Or might it be, perhaps, that the master educator was “just being too soft”? Either way, “it’s confusing and it’s giving [Mr. T] some thought as to whether [he] should just spend the school day at [his] desk working on [his] resumé.”

Our thanks to Mr. T for sharing his DC data, and for sharing his story!

8 thoughts on “Mr. T’s Scores on the DC Public Schools’ IMPACT Evaluation System”

  1. This is happening every day across our district and is demoralizing to teachers. I received a similar evaluation from my supervisor. It was interesting that I was being evaluated on things I had limited control over. For example, “students had cell phones out,” yet the school allows students to walk through the hallways with cell phones out. “Student had head on desk.” Well, maybe this student did not get enough sleep last night. In urban settings, there are so many reasons why kids do what they do; these things work against teachers in the classroom yet are no reflection of the teachers’ abilities. We need to address what these kids need before they enter the classroom. Also, I have many students who have been moved along academically without having mastered the basics, and then I am judged because they cannot process the higher-level questions I might ask of them. It’s absolute madness and maddening!

  2. A few thoughts. First, this may well be the result of the observers observing two different classes. That would explain the differences in student behavior. Because Mr. T does not get a growth score, that is most likely the case (elementary teachers get growth scores). Second, they were two different lessons. Even master teachers have an off day. Third, one may have been announced and the other unannounced. (That can swing both ways. I have seen teachers blow announced observations because of nerves.)
    Impressions: the second observer makes statements but provides no evidence. The first observer at least makes an attempt (heads down). I am in my second year using this kind of observational tool. It is unhelpful and tedious. I now go through the motions, and I have returned to what I have always done: have serious conversations about what I saw and what the teacher experienced and saw in the lesson. This is then reduced to a few concrete recommendations if needed. Sometimes the conversation is about all the things the teacher did so well.
    I have observed teachers for 18 years. Here is what it is all about: good time management, clarity, and student engagement that is continuous and challenging.
    The first two define a good teacher. All three define a master teacher.

    • I should have added that these were the same lessons observed, that both observations were announced, and that there was “a lot” of evidence provided that I could not include in this post, mainly for purposes of brevity and confidentiality. Hope that helps, and thanks for your thoughtful comments!

      • I’m not too sure about the claim that both the Master Educator (ME) and Administrator were present for the same lesson. Was it the same lesson, but different periods? I, too, teach in a DC public school, and I know the ME observation is never “announced.” The Administrator observation shouldn’t be “announced” either, but every teacher has an inkling when the admins are “making their rounds.” That said, the disparity between the two reflects two different perspectives. Although the gap is far too wide, I would be more skeptical if the scores were identical.

      • Thanks for the clarification. I have no idea what to make of this then. Clearly something is very wrong in this school’s observation process.

  3. Our district has gone to these rubric-type assessments on a scale of 1-7. We have been told that it is impossible to get a 7. Mind-bogglingly inane, eh? I decided that I am going to ask them for any and all materials they have concerning the evaluation, the training of the administrators, etc. Somehow I know they will resist giving me any more information than they have already given us.

    At our last monthly faculty meeting there were two groups, one of which was on classroom management (put on by a psychologist whose parents and wife were/are teachers, though he is not) and the other on the NEE, the evaluation system developed at the University of Missouri-Columbia. I asked to be a part of the NEE group and was told that I had to go to the other, and that the NEE would be explained to me at my evaluation with the principal.

    Time for a little preemptive offensive strike to get the information now.

  4. I am still not sure: were the two evaluators in the same class at the same time? Or did they see the same lesson, but taught to two different classes?
