One of the “Forty-Four” Misclassified DC Teachers Speaks Up and Out


Two weeks ago I wrote a post about what’s going on in DC’s public schools with their value-added-based teacher evaluation system, and more specifically about the 44 DC public school teachers who received “incorrect” VAM scores for the last academic year (2012-2013). While scoring errors certainly extend beyond these 44 teachers, given that VAM formulas are always “subject to error” across the board, per the official report these 44 in particular were misclassified because of a “simple” algorithmic error in the formula that Mathematica Inc. (the district’s third-party contractor) used to calculate DC teachers’ scores. One of the 44 teachers was fired as a result.

Another of the “Forty-Four” teachers is now speaking up and speaking out about this situation, using as the basis for his reflections the email he received from the district, with, most pertinent (in my opinion), all of its arbitrariness included. Check out the email he received, but also the district’s explanations of both the errors and the system, and in particular its weighting schema. As you read, recall a previous VAMboozled! post in which actual administrator and master educator scores (the scores at the source of the errors for this teacher) evidenced themselves as wholly invalid as well.

Read this DC teacher’s other thoughts as well, as they too are pertinent and very personal. My favorite: “What is DCPS’ plan for re-instituting the one teacher who was ‘fired mistakenly?’ I may not speak legalese, but I’m sure there are legal ramifications for this ‘error.’ Side note, suggesting the teacher was ‘fired mistakenly’ is akin to saying someone was ‘robbed accidentally.'”


3 thoughts on “One of the ‘Forty-Four’ Misclassified DC Teachers Speaks Up and Out”

  1. Thanks for all of your work in exposing the VAM sham, not just the error rates but the real consequences of this fraud in the lives of teachers. I hope that a legal expert will put a class action lawsuit in motion against the marketers of this flawed product and its cousins, the SLOs and SGOs, that are being inflicted on a majority of teachers. Here in Ohio, these flawed measures count for 50% of a teacher’s evaluation.

  2. Unfortunately, the focus on algorithms and statistical noise in VAM also distracts attention from the sham of using “student growth objectives” (SGOs) or “student learning objectives” (SLOs) for the high-stakes evaluation of the teachers (up to 70% of them) for whom there are no state-wide test scores. SLOs and SGOs are functionally the same. They are marketed by the federally funded Reform Support Network as the teacher evaluation strategy for that other 70%.
    The Reform Support Network is an amorphous group in charge of promoting the Race to the Top agenda. See http://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/slo-toolkit.pdf
    In place of VAMs, every teacher writes one or more SLOs for selected classes (often using a computer template). The SLO/SGO is a shoddy version of Peter Drucker’s 1954 Management by Objectives and of 1970s behavioral objectives on steroids.
    Teachers must analyze baseline test data on students (prior-year tests, pretests) and set “targets” for pre-to-posttest gains in scores on an approved district-wide test. The targeted gains must be set for each student and for subgroups. Each SLO is graded (by a trained rater) on about 26 criteria in eight categories, including Rationale, Population, Interval of Time, Assessments, Expected Growth, Learning Content, and Teaching Strategies.
    The evaluator rates the SLO/SGO much like a college writing assignment, using a four- or five-point scale from “high quality” to “unacceptable” or “incomplete.” Rewrites are limited and must be approved by the principal or evaluator. Targets cannot be changed midstream. Later in the year, all teachers who have similar district-approved SLOs are rated and ranked on the gains in test scores they have produced.
    Bottom line: “Growth” is a euphemism for a gain in test scores from one point in time to another. Meeting “growth targets for learning” is like meeting a sales target or a production quota by a date certain. Students are performing “on grade level” if their test scores are at or above the median on a percentile scale (1-99). A student is said to have achieved “a year’s worth of growth” if his or her gain score on a proficiency test is equal to, or greater than, the gain score made by a 50th-percentile student (a small sketch of this arithmetic appears after this comment). Teachers in some districts are rated “highly effective” only if all or most of their students have gain scores showing “more than a year’s worth of growth.”
    Here is the sham in the federal policies that promote this scheme. A 2013 review of research bearing on SLOs/SGOs documented unresolved issues in the validity, reliability, efficiency, and fairness of these measures for high-stakes evaluations of teachers in a wide range of subjects and job assignments (Gill, B., Bruch, J., & Booker, K. (2013). Using alternative student growth measures for evaluating teacher performance: What the literature says (REL 2013-002). U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/edlabs). So in Ohio and other states, 50% of a teacher’s evaluation rests on a metric that is simultaneously recommended for high-stakes use and known to be unsuitable for it. The end game of the SLO/SGO process is a district-wide (or state-wide) stack ranking of every teacher who has a job-alike assignment. In Ohio, an undisclosed formula in a computer interface produces the evaluation.
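To make concrete the gain-score arithmetic this commenter describes, here is a minimal sketch in Python. Everything in it is an illustrative assumption on my part, not any district’s actual formula: the scores are hypothetical, and “a year’s worth of growth” is modeled as a gain at least equal to the median gain (the gain made by a 50th-percentile student).

    # A minimal sketch of the gain-score arithmetic described above.
    # Illustrative assumptions only: hypothetical scaled scores, and a
    # "year's worth of growth" benchmark set at the median gain.
    from statistics import median

    def gain_scores(pretest, posttest):
        # Gain score per student: posttest score minus pretest score.
        return [post - pre for pre, post in zip(pretest, posttest)]

    def met_years_growth(gains):
        # Flag students whose gain meets or exceeds the median gain,
        # i.e., the gain made by a 50th-percentile student.
        benchmark = median(gains)
        return [g >= benchmark for g in gains]

    # Hypothetical pre- and post-test scaled scores for five students.
    pre = [410, 455, 430, 470, 440]
    post = [430, 470, 460, 480, 455]

    gains = gain_scores(pre, post)    # [20, 15, 30, 10, 15]
    flags = met_years_growth(gains)   # median gain is 15
    share = sum(flags) / len(flags)   # fraction meeting the benchmark
    print(gains, flags, f"{share:.0%}")

In practice the benchmark would come from a statewide gain distribution rather than from the same handful of students, but the arithmetic is the same, and note the feature it builds in: when “a year’s worth of growth” is pegged to the median gain, roughly half of all gains sit below that bar by construction, no matter how much every student actually learned.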
