States’ Math and Reading Performance After the Implementation of School A-F Letter Grade Policies

It’s been a while! Thanks to the passage of the Every Student Succeeds Act (ESSA; see prior posts about ESSA here, here, and here), the chaos surrounding states’ teacher evaluation systems has declined dramatically. Hence, my posts have declined in frequency as well. As I have written before, this is good news!

However, a new form of test-based accountability seems to be on the rise. Some states are now being pressed to move forward with school letter grade policies, also known as A-F policies, which define and then label school quality in order to hold schools and school districts accountable for their students’ test scores. These reform-based policies are being pushed by what was formerly known as the Foundation for Excellence in Education, which was launched while Jeb Bush was Florida’s governor and has since been rebranded as ExcelinEd. With Jeb Bush still in ExcelinEd’s presidential seat, the organization describes itself as a “501(c)(3) nonprofit organization focused on state education reform” that operates on approximately $12 million per year in donations from the Bill & Melinda Gates Foundation, Bloomberg Philanthropies, the Walton Family Foundation, and the Pearson, McGraw-Hill, Northwest Evaluation Association, ACT, College Board, and Educational Testing Service (ETS) testing corporations, among others.

I happened to be on a technical advisory committee for the state of Arizona, advising the state board of education on its A-F policies, when I came to really understand all that was at play, including the politics. Because of this role, I decided to examine, with two PhD students — Tray Geiger and Kevin Winn — what was just put out via an American Educational Research Association (AERA) press release. Our study, titled “States’ Performance on NAEP Mathematics and Reading Exams After the Implementation of School Letter Grades,” is currently under review for publication, but below are some of the important highlights, as also highlighted by AERA. These highlights are especially critical for states currently using, or considering, A-F policies to hold schools and school districts accountable for their students’ achievement, especially given that these policies clearly (as per the evidence) do not work as intended.

More specifically, 13 states currently use a school letter grade accountability system, with Florida being the first to implement such a policy in 1998. The other 12 states, with their years of implementation, are Alabama (2013), Arkansas (2012), Arizona (2010), Indiana (2011), Mississippi (2012), New Mexico (2012), North Carolina (2013), Ohio (2014), Oklahoma (2011), Texas (2015), Utah (2013), and West Virginia (2015). These 13 states have fared no better and no worse than other states in terms of increasing student achievement on the National Assessment of Educational Progress (NAEP) – the nation’s report card, widely considered the nation’s “best” test – post policy implementation. Put differently, we found mixed results as to whether there was a clear, causal relationship between implementation of an A-F accountability system and increased student achievement: there was no consistent positive or negative relationship between policy implementation and NAEP scores in grade 4 and grade 8 mathematics and reading.

More explicitly:

  • For NAEP grade 4 mathematics exams, five of the 13 states (38.5 percent) had net score increases after their A-F systems were implemented; seven states (53.8 percent) had net score decreases after A-F implementation; and one state (7.7 percent) demonstrated no change.
  • Compared to the national average on grade 4 mathematics scores, eight of the 13 states (61.5 percent) demonstrated growth over time greater than that of the national average; three (23.1 percent) demonstrated less growth; and two states (15.4 percent) had comparable growth.
  • For grade 8 mathematics exams, five of the 13 states (38.5 percent) had net score increases after their A-F systems were implemented, yet eight states (61.5 percent) had net score decreases after A-F implementation.
  • Grade 8 mathematics growth compared to the national average varied more than that of grade 4 mathematics. Six of the 13 states (46.2 percent) demonstrated greater growth over time compared to that of the national average; six other states (46.2 percent) demonstrated less growth; and one state (7.7 percent) had comparable growth.
  • For grade 4 reading exams, eight of the 13 states (61.5 percent) had net score increases after A-F implementation; three states (23.1 percent) demonstrated net score decreases; and two states (15.4 percent) showed no change.
  • Grade 4 reading evidenced a pattern similar to that of grade 4 mathematics in that eight of the 13 states (61.5 percent) had greater growth over time compared to the national average, while five of the 13 states (38.5 percent) had less growth.
  • For grade 8 reading, eight states (61.5 percent) had net score increases after their A-F systems were implemented; two states (15.4 percent) had net score decreases; and three states (23.1 percent) showed no change.
  • In grade 8 reading, states evidenced a pattern similar to that of grade 8 mathematics in that a plurality of states demonstrated less growth compared to the nation’s average growth. Five of the 13 states (38.5 percent) had greater growth over time compared to the national average, while six states (46.2 percent) had less growth, and two states (15.4 percent) exhibited comparable growth.
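For quick reference, the tallies in the bullets above can be recomputed with a short sketch (the counts come directly from the study’s reported results; the code is just illustrative arithmetic, not the study’s analysis):

```python
# Tallies from the bullets above: (increases, decreases, no change)
# among the 13 A-F states, by NAEP exam, post A-F implementation.
tallies = {
    "grade 4 math":    (5, 7, 1),
    "grade 8 math":    (5, 8, 0),
    "grade 4 reading": (8, 3, 2),
    "grade 8 reading": (8, 2, 3),
}

for exam, (up, down, flat) in tallies.items():
    total = up + down + flat
    assert total == 13  # all 13 A-F states accounted for
    print(f"{exam}: {up/total:.1%} up, {down/total:.1%} down, {flat/total:.1%} flat")
```

Running this reproduces the percentages in the bullets (e.g., 5/13 is the 38.5 percent reported for grade 4 mathematics).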

In sum, relative to the national average, the NAEP data slightly favored A-F states on grade 4 mathematics and grade 4 reading; A-F states were evenly split between more and less growth than the national average on grade 8 mathematics; and a plurality of A-F states demonstrated less growth than the national average on grade 8 reading. See more study details and results here.

In reality, how these states performed post-implementation is not much different from random, or a flip of the coin. As such, these results should speak directly to other states already investing, or considering investing, human and financial resources in such state-level, test-based accountability policies.
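To put the “flip of the coin” point in rough quantitative terms (this is my own back-of-the-envelope sketch, not an analysis from the study): if each state’s post-implementation outcome were truly a coin flip, results like 5 increases out of 13 states would be entirely unremarkable.

```python
from math import comb

def binom_prob_at_most(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 13  # the 13 A-F states
# Grade 8 mathematics: only 5 of 13 states saw net score increases.
# How unusual would 5 or fewer "heads" be under a fair coin?
p_low = binom_prob_at_most(5, n)
print(f"P(5 or fewer increases out of 13 by chance) = {p_low:.2f}")
```

The resulting probability is around 0.29, i.e., far from anything a statistician would call surprising, which is consistent with the claim that these outcomes are hard to distinguish from chance.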

 

“You Are More Than Your EVAAS Score!”

Justin Parmenter is a seventh-grade language arts teacher in Charlotte, North Carolina. Via his blog — Notes from the Chalkboard — he writes “musings on public education.” You can subscribe to his blog at the bottom of any of his blog pages, one of which I copied and pasted below for all of you following this blog (now at 43K followers!!).

His recent post is titled “Take heart, NC teachers. You are more than your EVAAS score!” and serves as a solid reminder of what teachers’ value-added scores (namely, in this case, teachers’ Education Value-Added Assessment System (EVAAS) scores) cannot tell you, us, or pretty much anybody about anyone’s worth as a teacher. Do give it a read, and do give him a shout out by sharing this with others.

*****

Last night an email from the SAS corporation hit the inboxes of teachers all across North Carolina.  I found it suspicious and forwarded it to spam.

EVAAS is a tool that SAS claims shows how effective educators are by measuring precisely what value each teacher adds to their students’ learning.  Each year teachers board an emotional roller coaster as they prepare to find out whether they are great teachers, average teachers, or terrible teachers–provided they can figure out their logins.

NC taxpayers spend millions of dollars for this tool, and SAS founder and CEO James Goodnight is the richest person in North Carolina, worth nearly $10 billion.  However, over the past few years, more and more research has shown that value added ratings like EVAAS are highly unstable and are unable to account for the many factors that influence our students and their progress. Lawsuits have sprung up from Texas to Tennessee, charging, among other things, that use of this data to evaluate teachers and make staffing decisions violates teachers’ due process rights, since SAS refuses to reveal the algorithms it uses to calculate scores.

By coincidence, the same day I got the email from SAS, I also got this email from the mother of one of my 7th grade students:

Photos attached provided evidence that the student was indeed reading at the dinner table.

The student in question had never thought of himself as a reader.  That has changed this year–not because of any masterful teaching on my part, but just because he had the right book in front of him at the right time.

Here’s my point:  We need to remember that EVAAS can’t measure the most important ways teachers are adding value to our students’ lives.  Every day we are turning students into lifelong independent readers. We are counseling them through everything from skinned knees to school shootings.  We are mediating their conflicts. We are coaching them in sports. We are finding creative ways to inspire and motivate them. We are teaching them kindness and empathy.  We are doing so much more than helping them pass a standardized test at the end of the year.

So if you figure out your EVAAS login today, NC teachers, take heart.  You are so much more than your EVAAS score!

Learning from What Doesn’t Work in Teacher Evaluation

One of my doctoral students — Kevin Close — and I just had a study published in the practitioner journal Phi Delta Kappan that I wanted to share out with all of you, especially before the study is no longer open-access or free (see full study as currently available here). As the title indicates, the study is about how states, school districts, and schools can “Learn from What Doesn’t Work in Teacher Evaluation,” given an analysis that the two of us conducted of all documents pertaining to the four teacher evaluation and value-added model (VAM)-centered lawsuits in which I have been directly involved, and that I have also covered in this blog. These lawsuits include Lederman v. King in New York (see here), American Federation of Teachers et al. v. Public Education Department in New Mexico (see here), Houston Federation of Teachers v. Houston Independent School District in Texas (see here), and Trout v. Knox County Board of Education in Tennessee (see here).

Via this analysis we set out to comb through the legal documents to identify the strongest objections, as also recognized by the courts in these lawsuits, to VAMs as teacher measurement and accountability strategies. “The lessons to be learned from these cases are both important and timely” given that “[u]nder the Every Student Succeeds Act (ESSA), local education leaders once again have authority to decide for themselves how to assess teachers’ work.”

The most pertinent and also common issues as per these cases were as follows:

(1) Inconsistencies in teachers’ VAM-based estimates from one year to the next that are sometimes “wildly different.” Across these lawsuits, issues with reliability were very evident, in that teachers classified as “effective” one year were either theorized or demonstrated to have around a 25%-59% chance of being classified as “ineffective” the next year, or vice versa, with other permutations also possible. As per our profession’s Standards for Educational and Psychological Testing, reliability requires, rather, that VAM estimates of teacher effectiveness be more or less consistent over time, from one year to the next, regardless of the types of students and perhaps subject areas that teachers teach.
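To see how low year-to-year reliability translates into reclassification rates of roughly that magnitude, here is a toy simulation (entirely my own illustration; the correlation values, the bottom-quintile cutoff, and the simple signal-plus-noise model are hypothetical, not parameters drawn from the lawsuits):

```python
import random

random.seed(42)

def simulate_flip_rate(r, n_teachers=10_000):
    """Fraction of bottom-quintile ('ineffective') teachers in year 1
    who escape the bottom quintile in year 2, when the two years'
    scores are noisy measures correlated at roughly r."""
    scores = []
    for _ in range(n_teachers):
        true_effect = random.gauss(0, 1)
        # Each year's score = shared signal + independent noise,
        # with noise variance chosen so corr(year1, year2) ≈ r.
        noise_sd = ((1 - r) / r) ** 0.5
        y1 = true_effect + random.gauss(0, noise_sd)
        y2 = true_effect + random.gauss(0, noise_sd)
        scores.append((y1, y2))
    cutoff1 = sorted(s[0] for s in scores)[n_teachers // 5]
    cutoff2 = sorted(s[1] for s in scores)[n_teachers // 5]
    labeled = [s for s in scores if s[0] < cutoff1]
    flips = sum(1 for _, y2 in labeled if y2 >= cutoff2)
    return flips / len(labeled)

for r in (0.3, 0.5, 0.7):
    print(f"year-to-year correlation {r}: "
          f"{simulate_flip_rate(r):.0%} of 'ineffective' teachers reclassified")
```

Even with a year-to-year correlation as high as 0.7, roughly half of the teachers labeled “ineffective” in one year land outside that category the next, which is the kind of instability the plaintiffs raised.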

(2) Bias in teachers’ VAM-based estimates was also of note, whereby documents suggested or evidenced that biased estimates of teachers’ actual effects do indeed exist (although this area was also of most contention and dispute). Specific to VAMs, since teachers are not randomly assigned the students they teach, whether their students are more or less motivated, smart, knowledgeable, or capable can bias students’ test-based data, and teachers’ test-based data when aggregated. Court documents, although again not without counterarguments, suggested that VAM-based estimates are sometimes biased, especially when relatively homogeneous sets of students (i.e., English Language Learners (ELLs), gifted and special education students, free-or-reduced-lunch-eligible students) are non-randomly concentrated into schools, purposefully placed into classrooms, or both. Research suggests that this sometimes happens regardless of the sophistication of the statistical controls used to block said bias.

(3) The gaming mechanisms in play within teacher evaluation systems in which VAMs play a key role, or carry significant evaluative weight, were also of legal concern and dispute. That administrators sometimes inflate the observational ratings of teachers whom they want to protect, thereby offsetting the weight the VAMs sometimes carry, was of note. That administrators also sometimes lower teachers’ ratings to better align them with their “more objective” VAM counterparts was likewise at issue. “So argued the plaintiffs in the Houston and Tennessee lawsuits, for example. In those systems, school leaders appear to have given precedence to VAM scores, adjusting their classroom observations to match them. In both cases, administrators admitted to doing so, explaining that they sensed pressure to ensure that their ‘subjective’ classroom ratings were in sync with the VAM’s ‘objective’ scores.” Both sets of behavior distort the validity (or “truthfulness”) of any teacher evaluation system and violate the same, aforementioned Standards for Educational and Psychological Testing, which call for VAM scores and observation ratings to be kept separate. One indicator should never be adjusted to offset or to fit the other.

(4) Transparency, or the lack thereof, was also a common issue across cases. Transparency, which can be defined as the extent to which something is accessible and readily capable of being understood, pertains to whether VAM-based estimates are accessible and make sense to those at the receiving ends. “Not only should [teachers] have access to [their VAM-based] information for instructional purposes, but if they believe their evaluations to be unfair, they should be able to see all of the relevant data and calculations so that they can defend themselves.” In no case was this more legally pertinent than in Houston Federation of Teachers v. Houston Independent School District in Texas. Here, the presiding judge ruled that teachers did have “legitimate claims to see how their scores were calculated. Concealing this information, the judge ruled, violated teachers’ due process protections under the 14th Amendment (which holds that no state — or in this case organization — shall deprive any person of life, liberty, or property, without due process). Given this precedent, it seems likely that teachers in other states and districts will demand transparency as well.”

In the main article (here) we also discuss what states are now doing to (hopefully) improve upon their teacher evaluation systems in terms of using multiple measures to help to evaluate teachers more holistically. We emphasize the (in)formative versus the summative and high-stakes functions of such systems, and allowing teachers to take ownership over such systems in their development and implementation. I will leave you all to read the full article (here) for these details.

In sum, though, when rethinking states’ teacher evaluation systems, especially given the new liberties afforded to states via the Every Student Succeeds Act (ESSA), educators, education leaders, policymakers, and the like would do well to look to the past for guidance on what not to do — and what to do better. These legal cases can certainly inform such efforts.

Reference: Close, K., & Amrein-Beardsley, A. (2018). Learning from what doesn’t work in teacher evaluation. Phi Delta Kappan, 100(1), 15-19. Retrieved from http://www.kappanonline.org/learning-from-what-doesnt-work-in-teacher-evaluation/

Effects of the Los Angeles Times Prior Publications of Teachers’ Value-Added Scores

In one of my older posts (here), I wrote about the Los Angeles Times and its controversial move to solicit Los Angeles Unified School District (LAUSD) students’ test scores via an open-records request, calculate LAUSD teachers’ value-added scores themselves, and then publish thousands of LAUSD teachers’ value-added scores, along with their “effectiveness” classifications (e.g., least effective, less effective, average, more effective, and most effective), on their Los Angeles Teacher Ratings website. They did this repeatedly beginning in 2010, all the while despite the major research-based issues surrounding teachers’ value-added estimates (that hopefully followers of this blog know at least somewhat well). This is also of professional frustration for me since the authors of the initial articles and the creators of the searchable website (Jason Felch and Jason Song) contacted me back in 2011 regarding whether what they were doing was appropriate, valid, and fair. Despite my strong warnings against it, Felch and Song thanked me for my time and moved forward.

Just yesterday, the National Education Policy Center (NEPC) at the University of Colorado – Boulder, published a Newsletter in which authors answer the following question, as taken from the Newsletter’s title: “Whatever Happened with the Los Angeles Times’ Decision to Publish Teachers’ Value-Added Scores?” Here is what they found, by summarizing one article and two studies on the topic, although you can also certainly read the full report here.

  • Publishing the scores meant already high-achieving students were assigned to the classrooms of higher-rated teachers the next year, [found a study in the peer-reviewed Economics of Education Review]. That could be because affluent or well-connected parents were able to pull strings to get their kids assigned to those top teachers, or because those teachers pushed to teach the highest-scoring students. In other words, the academically rich got even richer — an unintended consequence of what could be considered a journalistic experiment in school reform.
  • The decision to publish the scores led to: (1) A temporary increase in teacher turnover; (2) Improvements in value-added scores; and (3) No impact on local housing prices.
  • The Los Angeles Times’ analysis erroneously concluded that there was no relationship between value-added scores and levels of teacher education and experience.
  • It failed to account for the fact that teachers are non-randomly assigned to classes in ways that benefit some and disadvantage others.
  • It generated results that changed when Briggs and Domingue tweaked the underlying statistical model [i.e., yielding different value-added estimates and classifications for the same teachers].
  • It produced “a significant number of false positives (teachers rated as effective who are really average), and false negatives (teachers rated as ineffective who are really average).”

After the Los Angeles Times used a different approach in 2011, Catherine Durso found:

  • Class composition varied so much that comparisons of the value-added scores of two teachers were only valid if both teachers were assigned students with similar characteristics.
  • Annual fluctuations in results were so large that they led to widely varying conclusions from one year to the next for the same teacher.
  • There was strong evidence that results were often due to the teaching environment, not just the teacher.
  • Some teachers’ scores were based on very little data.

In sum, while “[t]he debate over publicizing value-added scores, so fierce in 2010, has since died down to a dull roar,” more states (e.g., New York and Virginia), organizations (e.g., Matt Barnum’s Chalkbeat), and news outlets (e.g., the Los Angeles Times, which has apparently discontinued this practice, although their website is still live) need to take a stand against, or prohibit, the publication of individual teachers’ value-added results from hereon out. As I noted to Jason Felch and Jason Song a long time ago, this IS simply bad practice.

A Win in New Jersey: Tests to Now Account for 5% of Teachers’ Evaluations

Phil Murphy, the Governor of New Jersey, is keeping his campaign promise to parents, students, and educators, according to a news article just posted by the New Jersey Education Association (NJEA; see here). As per the New Jersey Commissioner of Education – Dr. Lamont Repollet, who was a classroom teacher himself – throughout New Jersey, Partnership for Assessment of Readiness for College and Careers (PARCC) test scores will now account for just 5% of a teacher’s evaluation, down from the 30% mandated for approximately five years prior by both Murphy’s and Repollet’s predecessors.
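To make the size of that change concrete, consider a minimal sketch of a weighted composite score (the component names and the example scores here are hypothetical, purely for illustration; New Jersey’s actual evaluation formula has more components than this):

```python
def composite(test_score, other_score, test_weight):
    """Weighted evaluation score on a 0-100 scale, combining a
    test-based component with everything else (observations, etc.)."""
    return test_weight * test_score + (1 - test_weight) * other_score

# Hypothetical teacher: a weak PARCC-based growth score but strong
# observation/practice scores.
parcc, practice = 40.0, 90.0

old = composite(parcc, practice, 0.30)  # prior 30% mandate
new = composite(parcc, practice, 0.05)  # new 5% rule
print(f"composite at 30% test weight: {old:.1f}")
print(f"composite at  5% test weight: {new:.1f}")
```

Under these illustrative numbers, the same teacher’s composite rises from 75.0 to 87.5, showing how much less a single volatile test-based score can now drag an evaluation down.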

At last, the New Jersey Department of Education and the Murphy administration have “shown their respect for the research.” Because state law continues to require that standardized test scores play some role in teacher evaluation, a decrease to 5% is a victory, perhaps with a revocation of this law forthcoming.

“Today’s announcement is another step by Gov. Murphy toward keeping a campaign promise to rid New Jersey’s public schools of the scourge of high-stakes testing. While tens of thousands of families across the state have already refused to subject their children to PARCC, schools are still required to administer it and educators are still subject to its arbitrary effects on their evaluation. By dramatically lowering the stakes for the test, Murphy is making it possible for educators and students alike to focus more time and attention on real teaching and learning.” Indeed, “this is a victory of policy over politics, powered by parents and educators.”

Way to go New Jersey!

ACT Also Finds but Discourages States’ Decreased Use of VAMs Post-ESSA

Last June (2018), I released a blog post covering three key findings from a study that I along with two others conducted on states’ revised teacher evaluation systems post the passage of the federal government’s Every Student Succeeds Act (ESSA; see the full study here). In short, we evidenced that (1) the role of growth or value-added models (VAMs) for teacher evaluation purposes is declining across states, (2) many states are embracing the increased local control afforded them via ESSA and no longer have one-size-fits-all teacher evaluation systems, and (3) the rhetoric surrounding teacher evaluation has changed (e.g., language about holding teachers accountable is increasingly less evident than language about teachers’ professional development and support).

Last week, a similar study was released by the ACT standardized testing company. As per the title of this report (see the full report here), they too found that there is a “Shrinking Use of Growth” across states’ “Teacher Evaluation Legislation since ESSA.” They also found that for some states there was a “complete removal” of the state’s teacher evaluation system (e.g., Maine, Washington), a postponement of the state’s teacher evaluation system until further notice (e.g., Connecticut, Indiana, Tennessee), or a complete prohibition of the use of students’ standardized tests in any teacher’s evaluation moving forward (e.g., Connecticut, Idaho). Otherwise, as we also found, states are increasingly “allowing districts, rather than the state, to determine their evaluation frameworks” themselves.

Unlike in our study, however, ACT (perhaps not surprisingly, as a standardized testing company that also advertises its tests’ value-added capacities; see, for example, here) cautions states against “the complete elimination of student growth as part of teacher evaluation systems,” in that such systems have “clearly” (without citations in support) proven “their value and potential value.” Hence, ACT defines this as “a step backward.” In our aforementioned study and blog post we reviewed some of the actual evidence (with citations in support) and would, accordingly, contest all day long that these systems have a “clear,” “proven” “value.” Hence, we (also perhaps not surprisingly) called our similar findings “steps in the right direction.”

Regardless, and in all fairness, they recommend that states not “respond to the challenges of using growth measures in evaluation systems by eliminating their use” but “first consider less drastic measures such as:

  • postponing the use of student growth for employment decisions while refinements to the system can be made;
  • carrying out special studies to better understand the growth model; and/or
  • reviewing evaluation requirements for teachers who teach in untested grades and subjects so that the measures used more accurately reflect their performance.

“Pursuing such refinements, rather than reversing efforts to make teacher evaluation more meaningful and reflective of performance, is the best first step toward improving states’ evaluation systems.”

Citation: Croft, M., Guffy, G., & Vitale, D. (2018). The shrinking use of growth: Teacher evaluation legislation since ESSA. Iowa City, IA: ACT. Retrieved from https://www.act.org/content/dam/act/unsecured/documents/teacher-evaluation-legislation-since-essa.pdf

New Mexico Loses Major Education Finance Lawsuit (with Rulings Related to Teacher Evaluation System)

Followers of this blog should be familiar with the ongoing teacher evaluation lawsuit in New Mexico. The lawsuit — American Federation of Teachers – New Mexico and the Albuquerque Federation of Teachers (Plaintiffs) v. New Mexico Public Education Department (Defendants) — is being heard by a state judge who ruled in 2015 that all consequences attached to teacher-level value-added model (VAM) scores (e.g., flagging the files of teachers with low VAM scores) were to be suspended throughout the state until the state (and/or others external to the state) could prove to the state court that the system was reliable, valid, fair, uniform, and the like. This case is set to be heard in court again this November (see more about this case from my most recent update here).

While this lawsuit has been occurring, however, it is important to note that two other very important New Mexico cases (that have since been consolidated into one) have been ongoing since around the same time (2014) — Martinez v. State of New Mexico and Yazzie v. State of New Mexico. Plaintiffs in this lawsuit, filed by the New Mexico Center on Law and Poverty and the Mexican American Legal Defense and Education Fund (MALDEF), argued that the state’s schools are inadequately funded; hence, the state is also denying New Mexico students their constitutional rights to an adequate education.

Last Friday, a different state judge presiding over this case ruled, “in a blistering, landmark decision,” that New Mexico is in fact “violating the constitutional rights of at-risk students by failing to provide them with a sufficient education.” As such, the state, its governor, and its public education department (PED) are “to establish a funding system that meets constitutional requirements by April 15 [of] next year” (see full article here).

As this case does indeed pertain to the above-mentioned teacher evaluation lawsuit of interest within this blog, it is also important to note that the judge:

  • “[R]ejected arguments by [Governor] Susana Martinez’s administration that the education system is improving…[and]…that the state was doing the best with what it had” (see here).
  • Emphasized that “New Mexico children [continue to] rank at the very bottom in the country for educational achievement” (see here).
  • Added that “New Mexico doesn’t have enough teachers…[and]…New Mexico teachers are among the lowest paid in the country” (see here).
  • “[S]uggested the state teacher evaluation system ‘may be contributing to the lower quality of teachers in high-need schools…[also given]…punitive teacher evaluation systems that penalize teachers for working in high-need schools contribute to problems in this category of schools’” (see here).
  • And concluded that all of “the programs being lauded by PED are not changing this [bleak] picture” (see here) and, more specifically, “offered a scathing assessment of the ways in which New Mexico has failed its children,” again, taking “particular aim at the state’s punitive teacher evaluation system” (see here).

Apparently, the state plans to appeal the decision (see a related article here).

Fired “Ineffective” Teacher Wins Battle with DC Public Schools

In November of 2013, I published a blog post about a “working paper” released by the National Bureau of Economic Research (NBER) and written by Thomas Dee, Economics and Educational Policy Professor at Stanford, and James Wyckoff, Economics and Educational Policy Professor at the University of Virginia. In the study, titled “Incentives, Selection, and Teacher Performance: Evidence from IMPACT,” Dee and Wyckoff (2013) analyzed the controversial IMPACT educator evaluation system that was put into place in Washington DC Public Schools (DCPS) under the then-Chancellor, Michelle Rhee. In this paper, Dee and Wyckoff (2013) presented what they termed “novel evidence” to suggest that the “uniquely high-powered incentives” linked to “teacher performance” via DC’s IMPACT initiative worked to improve the performance of high-performing teachers, and that dismissal threats worked to increase the voluntary attrition of low-performing teachers, as well as improve the performance of the students of the teachers who replaced them.

I critiqued this study in full (see both short and long versions of this critique here), ultimately asserting that the study had “fatal flaws” that compromised the exaggerated claims Dee and Wyckoff (2013) advanced. This past January (2017) they published another report, titled “Teacher Turnover, Teacher Quality, and Student Achievement in DCPS,” which was also (prematurely) released as a “working paper” by the same NBER. I also critiqued this study (see here).

Anyhow, a public interest story that should be of interest to followers of this blog was published two days ago in The Washington Post. The article, “‘I’ve Been a Hostage for Nine Years’: Fired Teacher Wins Battle with D.C. Schools,” details one fired, now 53-year-old, veteran teacher’s last nine years after being one of nearly 1,000 educators fired during the tenure of Michelle Rhee. He was fired after district “leaders,” using the IMPACT system and a prior teacher evaluation system, deemed him “ineffective.” He “contested his dismissal, arguing that he was wrongly fired and that the city was punishing him for being a union activist and for publicly criticizing the school system.” That he made a significant salary at the time (2009) also likely had something to do with it in terms of cost savings, although this is more peripherally discussed in the piece.

In short, “an arbitrator [just] ruled in favor of the fired teacher, a decision that could entitle him to hundreds of thousands of dollars in back pay and the opportunity to be a District teacher again,” although, perhaps not surprisingly, he might not take them up on that offer. As well, apparently this teacher “isn’t the only one fighting to get his job back. Other educators who were fired years ago and allege unjust dismissals [as per the IMPACT system] are waiting for their cases to be settled.” The school system can appeal this ruling.

The Gates Foundation’s Expensive ($335 Million) Teacher Evaluation Missteps

The header of an Education Week article released last week (click here) was that “[t]he Bill & Melinda Gates Foundation’s multi-million-dollar, multi-year effort aimed at making teachers more effective largely fell short of its goal to increase student achievement-including among low-income and minority students.”

An evaluation of the Gates Foundation’s Intensive Partnerships for Effective Teaching initiative, funded at $290 million as an extension of its Measures of Effective Teaching (MET) project, itself funded at $45 million, was the focus of this article. The MET project was led by Thomas Kane (Professor of Education and Economics at Harvard and expert witness on the defendant’s side of the ongoing lawsuit supporting New Mexico’s MET-project-esque statewide teacher evaluation system; see here and here), and both projects were primarily meant to hold teachers accountable using their students’ test scores via growth or value-added models (VAMs) and financial incentives. Both projects were tangentially meant to improve staffing and professional development opportunities, increase the retention of teachers of “added value,” and ultimately lead to more effective teaching and higher student achievement, especially in low-income schools and schools with higher relative proportions of racial minority students. The six-year evaluation of focus in this Education Week article was conducted by the RAND Corporation and the American Institutes for Research, and the evaluation was also funded by the Gates Foundation (click here for the evaluation report; see below for the full citation of this study).

Their key finding was that Intensive Partnerships for Effective Teaching district/school sites (see them listed here) implemented new measures of teaching effectiveness and modified personnel policies, but they did not achieve their goals for students.

Evaluators also found (see also here):

  • The sites succeeded in implementing measures of effectiveness to evaluate teachers and made use of the measures in a range of human-resource decisions.
  • Every site adopted an observation rubric that established a common understanding of effective teaching. Sites devoted considerable time and effort to train and certify classroom observers and to observe teachers on a regular basis.
  • Every site implemented a composite measure of teacher effectiveness that included scores from direct classroom observations of teaching and a measure of growth in student achievement.
  • Every site used the composite measure to varying degrees to make decisions about human resource matters, including recruitment, hiring, placement, tenure, dismissal, professional development, and compensation.

Overall, the initiative did not achieve its goals for student achievement or graduation rates, especially for low-income and racial minority students. With minor exceptions, student achievement, access to effective teaching, and dropout rates were also not dramatically better than at similar sites that did not participate in the intensive initiative.

Their recommendations were as follows (see also here):

  • Reformers should not underestimate the resistance that could arise if changes to teacher-evaluation systems have major negative consequences.
  • A near-exclusive focus on teacher evaluation systems such as these might be insufficient to improve student outcomes. Many other factors might also need to be addressed, ranging from early childhood education, to students’ social and emotional competencies, to the school learning environment, to family support. Dramatic improvement in outcomes, particularly for low-income and racial minority students, will likely require attention to many of these factors as well.
  • In change efforts such as these, it is important to measure the extent to which each of the new policies and procedures is implemented in order to understand how the specific elements of the reform relate to outcomes.

Reference:

Stecher, B. M., Holtzman, D. J., Garet, M. S., Hamilton, L. S., Engberg, J., Steiner, E. D., Robyn, A., Baird, M. D., Gutierrez, I. A., Peet, E. D., de los Reyes, I. B., Fronberg, K., Weinberger, G., Hunter, G. P., & Chambers, J. (2018). Improving teaching effectiveness: Final report. The Intensive Partnerships for Effective Teaching through 2015–2016. Santa Monica, CA: The RAND Corporation. Retrieved from https://www.rand.org/pubs/research_reports/RR2242.html

States’ Teacher Evaluation Systems Moving in the “Right” Direction

Last week, a technical report, which one of my current and one of my former doctoral students helped me research and write, was published by the University of Colorado Boulder’s National Education Policy Center (NEPC). While you can navigate to and read the press release here, as well as download and read the full report here, I thought I would summarize the report’s most interesting facts in this post for the readers/followers of this blog, who are likely more interested in the findings pertaining to states’ revised teacher evaluation systems following the federal passage of the Every Student Succeeds Act (ESSA).

In short, for purposes of this study we collected and analyzed the 51 (i.e., 50 states plus Washington, DC) revised teacher evaluation plans submitted to the federal government post-ESSA (i.e., spring/summer of 2017). We found, again as specific only to states’ teacher evaluation systems, three key findings:

— First, the role of growth or value-added models (VAMs) for teacher evaluation purposes is declining. That is, the number of states using statewide growth models or VAMs has decreased from 42% to 30% since 2014. This is certainly a step in the “right,” defined as research-informed, direction. See also Figure 1 below (Close, Amrein-Beardsley, & Collins, 2018, p. 13).

— Second, because ESSA loosened federal control of teacher evaluation, many states no longer have a one-size-fits-all teacher evaluation system. This is allowing local districts to make more choices about models, implementation, execution, and the like, in the contexts of the schools and communities in which schools exist.

— Third, the rhetoric surrounding teacher evaluation has changed: language about holding teachers accountable for their value-added effects, or lack thereof, is much less evident in post-ESSA plans. Rather, new plans make note of providing data to teachers as a means of supporting professional development and improvement, essentially shifting the purpose of the evaluation system away from summative and toward formative use.

We also set forth recommendations for states in this report, based on the evidence noted above (and presented in much more detail in the full report). The recommendations that directly pertain to states’ (and districts’) teacher evaluation systems are that states and districts:

  1. Take advantage of decreased federal control by formulating revised assessment policies informed by the viewpoints of as many stakeholders as feasible. Such informed revision can help remedy earlier weaknesses, promote effective implementation, stress correct interpretation, and yield formative information.
  2. Ensure that teacher evaluation systems rely on a balanced system of multiple measures, without disproportionate weight assigned to any one measure as allegedly “superior” to any other. If measures contradict one another, however, output from all measures should be interpreted judiciously.
  3. Emphasize data useful as formative feedback in state systems, so that specific weaknesses in student learning can be identified, targeted, and used to inform teachers’ professional development.
  4. Mandate ongoing research and evaluation of state assessment systems and ensure that adequate resources are provided to support [ongoing] evaluation [efforts].
  5. Set goals for reducing proficiency gaps and outline procedures for developing strategies to effectively reduce gaps once they have been identified.

We hope this information helps, especially the states and districts still looking to other states to see what is trending. While we note in the title of this blog post, as well as in the title of the full report, that all of this represents “some steps in the right direction,” there is still much work to be done. This is especially true in states like New Mexico (see my most recent post about the ongoing lawsuit in this state here) and others that have yet to give up on the false promises and limited research behind the educational policies established almost one decade ago (e.g., Race to the Top; Duncan, 2009).

Citations:

Close, K., Amrein-Beardsley, A., & Collins, C. (2018). State-level assessments and teacher evaluation systems after the passage of the Every Student Succeeds Act: Some steps in the right direction. Boulder, CO: National Education Policy Center (NEPC). Retrieved from http://nepc.colorado.edu/publication/state-assessment

Duncan, A. (2009, July 4). The race to the top begins: Remarks by Secretary Arne Duncan. Retrieved from http://www.ed.gov/news/speeches/2009/07/07242009.html