Breaking News: A Vergara Appeal

As you all likely recall, the Vergara case involved nine public school students (backed by some serious corporate reformer funds) who challenged five California state statutes that supported the state’s “ironclad [teacher] tenure system.” The plaintiffs successfully advanced their argument in June, when the judge ruled that students’ rights to a good education were being violated by teachers’ job protections…protections that were making it too difficult to fire “grossly ineffective” teachers.

Thanks to a post by Diane Ravitch, it appears that California’s Superintendent of Schools – Tom Torlakson – just issued a statement declaring his decision to seek appellate review of the Vergara ruling, assuming he is re-elected in the forthcoming election. Here is his statement:

“The people who dedicate their lives to the teaching profession deserve our admiration and support. Instead, this ruling lays the failings of our education system at their feet.

“We do not fault doctors when the emergency room is full. We do not criticize the firefighter whose supply of water runs dry. Yet while we crowd our classrooms and fail to properly equip them with adequate resources, those who filed and support this case shamelessly seek to blame teachers who step forward every day to make a difference for our children.

“No teacher is perfect. A very few are not worthy of the job. School districts have always had the power to dismiss those who do not measure up, and this year I helped pass a new law that streamlined the dismissal process, while protecting the rights of both teachers and students. It is disappointing that the Court refused to even consider this important reform.

“In a cruel irony, this final ruling comes as many California teachers spend countless unpaid hours preparing to start the new school year in hopes of better serving the very students this case purportedly seeks to help.

“While the statutes in this case are not under my jurisdiction as state Superintendent, it is clear that the Court’s ruling is not supported by the facts or the law. Its vagueness provides no guidance about how the Legislature could successfully alter the challenged statutes to satisfy the Court. Accordingly, I will ask the Attorney General to seek appellate review.”


Arne Duncan’s “Back-to-School Conversation”

Last week, Arne Duncan wrote a blog post titled, “A Back-to-School Conversation with Teachers and School Leaders.” This was subsequently reprinted on the official blog of the U.S. Department of Education, “Homeroom,” and also summarized/critiqued by Alan Singer in a Huffington Post piece titled, “Arne Duncan Declares Victory in War on Schools and Teachers.”

First, Duncan thanks America’s students as they “have posted some unprecedented achievements in the last year — the highest high-school graduation rate in the nation’s history, and sharp cuts in dropout rates and increases in college enrollment, especially for groups [who] in the past have lagged significantly.” As Diane Ravitch would say, “Where is the evidence?” No evidence is cited or linked.

Those who have ever worked with graduation and dropout rate data (rates that are inversely related, but reported here as separately celebratory) also know how difficult it is to report these in a standardized, and more importantly accurate, manner. Graduation and dropout rates, like test scores, are very easy to manipulate, adjust, and game, in part because few agree on how these rates should be calculated and what policies and rules should be followed when calculating them (e.g., what should serve as the denominator). But let us not spoil Duncan’s celebration, yet.
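
To make the denominator problem concrete, here is a minimal sketch, with entirely hypothetical numbers, of how the very same school can report two quite different “graduation rates” depending solely on which denominator is chosen:

```python
# Illustrative only: how the choice of denominator changes a "graduation rate."
# All numbers below are hypothetical.

entering_cohort = 1000   # 9th graders who entered four years earlier
transferred_out = 120    # students coded as transfers out of the district
final_enrollment = 800   # seniors enrolled in the graduating year
graduates = 720

# Cohort rate: graduates over the original cohort (minus verified transfers)
cohort_rate = graduates / (entering_cohort - transferred_out)

# "Leaver"/completion rate: graduates over final-year enrollment
completion_rate = graduates / final_enrollment

print(f"Cohort rate:     {cohort_rate:.1%}")    # ~81.8%
print(f"Completion rate: {completion_rate:.1%}")  # 90.0%
# Same school, same students: nearly a ten-point swing, driven entirely by
# which denominator (and which "transfer" codes) a state decides to use.
```

Neither figure is fabricated; both are arithmetically “true.” Which one gets celebrated is a policy choice, not a statistical one.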

“These achievements come at a time of nearly unprecedented change in American education — which entails enormously hard work by educators [because educators were not working hard enough prior to Duncan…But now] nearly every state has adopted new standards, new assessments, new approaches to incorporating data on student learning, and new efforts to support teachers,” thanks to Duncan. These policies are, of course, what are purportedly causing the political miracles we are to now celebrate and observe.

But then come Duncan’s concerns, along with his politically (and economically) driven solutions. Most importantly, he notes that tests are “sucking the oxygen out of the room;” he agrees they should not be. Rather, we should focus on growth in student achievement (i.e., via the use of VAMs) using new-and-improved tests (i.e., the new tests being developed to align with the Common Core). Growth in student achievement is reportedly valued “by all;” hence, this will add oxygen back to the classroom air, as well as illuminate that which needs to be done to continue our nation’s purported (and Duncan’s self-reported) trends.

So the conversation is to sidestep the concerns expressed by teachers and educators from throughout the country, and embrace a still grossly politically driven solution based on growth using new-and-improved tests. I see nothing here but a circular logic that, unfortunately, our nation’s education leader cannot argue himself out of. As per Duncan, this is why he (or, more accurately, we as taxpayers under his leadership) have “committed a third of a billion dollars to two consortia of states working to create [these] new [and improved] assessments.”

Duncan then guarantees that the feds will “stay out of it” (i.e., what measures are used), even though there are only two sets of tests, in which “we” have invested millions, and which states are still being encouraged to adopt. The “only” condition is that when “evaluating teachers, states and districts include student growth [i.e., growth models and VAMs]” as at least one part of their state-level teacher evaluation plans. In addition, “states will have the opportunity to request a delay in when test results matter for teacher evaluation…but typically I’d [i.e., Duncan, referring to himself] expect this to mean that states that request [a] delay will push back by [only] one year (to 2015-16) the time when student growth measures based on new state assessments [are to] become part of [states’] evaluation systems.”

As written by Singer in the aforementioned Huffington Post piece, Duncan, “instead of ending the onerous requirements that are creating the ‘distraction’ and ‘sucking the oxygen’ out of the classroom, postponed [these tests] for ONE year, granting states the ‘opportunity to request a delay in when test results matter for teacher evaluation.’” Singer rightfully concludes: “Duncan’s blog is reminiscent of the famous George W. Bush May 1, 2003 ‘Mission Accomplished’ speech on the aircraft carrier USS Abraham Lincoln where the President celebrated the end of major combat operations by the United States in Iraq, a declaration that now appears to have been at least twelve years too early. The vast majority of casualties in the Iraq war occurred after the Bush speech and unfortunately the dismantling of education in the United States and the high-stakes testing war on schools and teachers will continue long after the Duncan blog.”

Vergara in New York, Thanks (in Part) to Campbell Brown

In a post I wrote about “Vergara Going on Tour,” I described how the financier of the Vergara v. California case was preparing to bring similar suits to New York, Connecticut, Maryland, Oregon, New Mexico, Idaho, and Kansas. As well, the law firm that won the Vergara case for the plaintiffs was also reported to have officially signed on to help defend the Houston Independent School District (HISD) in the forthcoming lawsuit during which, this time, the court will be investigating the EVAAS value-added system and its low- and high-stakes uses in HISD (this is also the source of a recent post here).

Last month, it was reported that New York was the next state on the tour, so to speak. To read a post from July about all of this, written by Bruce Baker at Rutgers and titled “The VergarGuments are Coming to New York State!,” click here.

It also seems that Campbell Brown, previous host of the Campbell Brown Show on CNN and award-winning news anchor/journalist for multiple media outlets elsewhere, has joined “the cause” and even started her own foundation in support, aptly named the Partnership for Education Justice. Read more about their mission, as well as “The Challenge” and “The Solution” in America’s public schools, as they define these, here.

In New York specifically, via their first, but unfortunately likely not their last, “project,” they are helping families “fight for the great teachers their children deserve by challenging factory-era laws that keep poorly-performing teachers in the classroom.” Read also about “The Problem,” the “Roadblocks,” and the like as they pertain to this specific suit in New York here. It probably won’t surprise you to see what research they are using to justify their advocacy work either – give it a quick guess and then check to verify here. Here is also a related article Brown recently wrote about how she (with all of her wisdom about America’s public school system – sorry) feels about teacher tenure.

Anyhow, last month (July 31, 2014) she was interviewed by Stephen Colbert on The Colbert Report on Comedy Central. Give it a watch to see what this is all about, in her terms and as per her (wrongheaded, misinformed, etc.) perspectives. See also Colbert’s funny but also wise response(s) to her aforementioned perspectives.

Watch it here:




Vermont’s Enlightened State Board of Education

The Vermont State Board of Education recently released a more than reasonable “Statement on Assessment and Accountability” that I certainly wish would be read and adopted by other leaders across other states.

They encourage their educators to “make use of diverse indicators of student learning and strengths,” when measuring student learning and achievement, the growth of both over time, and especially when using such data to inform their practice. The use of multiple and diverse indicators (i.e., including traditional and non-traditional tests, teacher-developed assessments, and student work samples) is in line with the professional measurement and assessment standards. At the same time, however, they must also “document the opportunities schools provide to further the goals of equity and [said] growth.”

As per growth on standardized tests in particular, and particularly in the case of value-added models (VAMs), they write that such tests and test uses cannot “adequately capture the strengths of all children, nor the growth that can be ascribed to individual teachers. And under high-stakes conditions, when schools feel extraordinary pressure to raise scores, even rising scores may not be a signal that students are actually learning more. At best, a standardized test is an incomplete picture of learning: without additional measures, a single test is inadequate to capture a year’s worth of learning and growth.” This too aligns with the standards of the profession.

They continue, noting that “the way in which standardized tests have been used under federal law as almost the single measure of school quality has resulted in the frequent misuse of these instruments across the nation.” Hence, they also put forth a set of guiding principles they, as a state, are to use to inform their assessment and accountability goals (and mandates).

The principle that should be of most interest to readers of this blog?

  • “Value-added scores – Although the federal government is encouraging states to use value added scores for teacher, principal and school evaluations, this policy direction is not appropriate. A strong body of recent research has found that there is no valid method of calculating “value-added” scores which compare pass rates from one year to the next, nor do current value-added models adequately account for factors outside the school that influence student performance scores. Thus, other than for research or experimental purposes, this technique will not be employed in Vermont schools for any consequential purpose.”

Their other related principles are also very important; they are summarized briefly here:

  • All tests must have evidence validating their particular uses. In other words, tests may not be used simply because a given use makes sense in theory or seems convenient. Rather, research evidence must support their uses; otherwise, valid inferences cannot be made or, more importantly, accepted as valid.
  • When such test scores are reported via press and media outlets, more than just test scores, hierarchical rankings of test scores, and the like are to be reported, giving the people of Vermont more holistic understandings about the schools in their state.
  • Educators must actively and consciously prevent “excessive testing” as it “diverts resources and time away from learning while providing little additional value for accountability purposes.”
  • “While the federal government continues to require the use of subjectively determined, cut-off scores; employing such metrics lacks scientific foundation…Claims to the contrary are technically indefensible and their application [is to] be [considered] unethical.”
  • “So that [they] can more validly and meaningfully describe school- and state-level progress…[they also endorse] reporting performance in terms of scale scores and standard deviations rather than percent proficient” indicators.
  • “[A]ny report on a school based on the state’s EQS standards must also include a report on the adequacy of resources provided by or to that school in light of the school’s unique needs. Such evaluations shall address the adequacy of resources [and] the judicious use of resources.”
  • In terms of assessment in general, educators are to always align with and follow “the aforementioned guidelines and principles adopted by the American Educational Research Association, the National Council on Measurement in Education, and the American Psychological Association.”
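
Vermont’s scale-score principle deserves a quick illustration. Here is a small, hypothetical sketch (the cutoff score and student scores are invented purely for illustration) of why reporting means and standard deviations can reveal real growth that a percent-proficient figure completely hides:

```python
import statistics

# Hypothetical scale scores for one classroom across two consecutive years;
# the proficiency cutoff (500) is also an invented example.
CUT = 500
year1 = [440, 455, 470, 480, 495, 505, 520, 540, 560, 580]
year2 = [470, 480, 490, 495, 498, 506, 522, 545, 565, 585]

def pct_proficient(scores):
    # Share of students at or above the cutoff
    return sum(s >= CUT for s in scores) / len(scores)

def summarize(scores):
    return statistics.mean(scores), statistics.stdev(scores), pct_proficient(scores)

m1, s1, p1 = summarize(year1)
m2, s2, p2 = summarize(year2)
print(f"Year 1: mean={m1:.1f}, sd={s1:.1f}, proficient={p1:.0%}")
print(f"Year 2: mean={m2:.1f}, sd={s2:.1f}, proficient={p2:.0%}")
# The mean rises by roughly 11 scale points, yet percent proficient is
# unchanged at 50%, because most of the growth happened among students
# who started (and remained) below the cutoff.
```

A school reporting only percent proficient here would look stagnant; the scale-score summary shows substantial learning among exactly the students accountability systems claim to care about.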

See also their list of resolutions at the end of this document, as, again, I can’t think of a better adjective than enlightened!


No Teacher Is An Island

This week in The Shanker Blog, authors Alan Daly (Professor, University of California San Diego) and Kara Finnigan (Associate Professor, University of Rochester) published a piece titled: No Teacher Is An Island: The Role Of Social Relations In Teacher Evaluation.

They discuss, as largely based on their research and expertise in social network analyses, the roles of social interactions when examining student outcomes (i.e., student outcomes that are to be directly attributed to teacher effects using value-added models).

They also discuss three major assumptions surrounding the use of value-added measures to assess teacher quality. The first assumption is that growth in student achievement is the result of (really only) interaction(s) among teacher knowledge/training/experience, teachers’ abilities to teach, students’ prior performance levels, and student demographics. Once that assumption is agreed to, the second assumption is that all of these variables can be captured (well), or controlled for (well), using a quantitative or numerical measure. It is then assumed, more generally, that “a teacher’s ability to ‘add-value’ [can be appropriately captured as] a very individualistic undertaking determined almost exclusively by the human capital (i.e., training, knowledge, and skills) of the individual teacher and some basic characteristics of the student.”
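
To see how sparse this individualistic model really is, consider a deliberately simplified toy sketch of the basic VAM logic (my illustration, not the specification of any actual vendor’s model): predict each student’s score from prior achievement alone, then credit or charge each teacher with the average residual of his or her students. Everything the model omits lands in that residual.

```python
import random

random.seed(0)

# Hypothetical data: two teachers, 30 students each. By construction,
# NEITHER teacher has any true effect -- current scores depend only on
# prior achievement plus random noise.
students = []
for teacher in ("A", "B"):
    for _ in range(30):
        prior = random.gauss(50, 10)
        current = 5 + 0.9 * prior + random.gauss(0, 5)
        students.append((teacher, prior, current))

# Ordinary least squares across all students: current ~ prior
xs = [p for (_, p, _) in students]
ys = [c for (_, _, c) in students]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Each teacher's "value added" = the mean residual of his or her students
effects = {}
for teacher in ("A", "B"):
    resids = [c - (intercept + slope * p) for (t, p, c) in students if t == teacher]
    effects[teacher] = sum(resids) / len(resids)
    print(f"Teacher {teacher}: estimated 'effect' = {effects[teacher]:+.2f}")

# Because OLS residuals sum to zero, the two equally sized groups' estimates
# are mirror images: one teacher is necessarily labeled "above average" and
# the other "below," here purely by chance.
```

Note what never enters the equation: collegial trust, collaboration, shared resources, mentorship. Those relational factors, which Daly and Finnigan show matter for student achievement, can only surface as residual “noise” that gets attributed to the individual teacher.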

As they explain in this piece, these assumptions overlook recent research, as well as reality. They also provide two real-world examples (with graphics to help illustrate how these interactions really look in reality, which I also advise readers to examine here). The first real-world example captures a teacher who “enters a grade level or department in which trust is low and teachers do not share or collaborate around effective practices, innovative ideas or instructional resources, all of which have been shown to support student achievement.” The second real-world example captures a teacher who “enters a department in which teachers actively collaborate, exchange ideas, develop common assessments and reflect on practice – in short, a faculty that operates as a professional learning community.”

Even though these teachers might be teaching two miles from one another, as they are in the case used to illustrate this point, “the first teacher is ‘disadvantaged’ because he/she was not able to learn from colleagues and, as a result, [appears to be] less equipped to provide effective instruction to students. In contrast, in the second scenario, a similarly skilled teacher, one who has benefited from rich exchanges with peers, [appears to have the capacity] to add more ‘value’ based on increased access to effective instructional practices and support from colleagues, as well as many other relational resources such as emotional support or mentorship.”

While these two teachers face very different professional (and likely personal) realities, the value-added models used to evaluate them will not really vary at all, nor will or can the models capture all that interacts with their effectiveness, every single day of every year they teach.

Such teachers will vary in the model only by the types of schools in which they teach, largely given the varying backgrounds of the students they teach and the “prior performance” numerically captured in the model (as mentioned). This, it is assumed, effectively captures all of these other “things,” or the data nuances (and nuisances) often perceived to matter.

This all continues to occur entirely despite “the social milieu” that always surrounds teachers’ professional practice, which these authors argue “play[s] a crucial role” and, in their view, might be the most significant shortcoming of many/most/all value-added models.

Do read more here.



Friedman Rubbing Elbows

Chetty, Friedman, and Rockoff have been the source of many posts on this blog over the past (almost) year. See some of these prior posts here, here, and here. These posts are about their (implied) Nobel-worthy work supporting the use of value-added models (VAMs) to increasingly hold America’s public school teachers accountable for that which they purportedly are not doing in America’s public schools.

What I have termed “The Study that Keeps on Giving…” was also largely credited for winning the recent Vergara case in California. But it is also apparently (as per the explicit boasting of its lead author during a recent set of emails exchanged among Raj Chetty, Diane Ravitch, and me) having a continuous impact on federal policy.

Courtesy of Facebook, this may just be the case.


A Leap of Faith: The Incoherence of Using Value-Added Estimates as a Proxy for Effective Teaching

Jimmy Scherrer – Assistant Professor at North Carolina State University and former teacher and mathematics instructional coach in the Los Angeles Unified School District (LAUSD) – is a rising star in the education academy, in large part due to his educational research on VAMs as well as mathematics, educational policy, and the like. I’ve cited one of the pieces he wrote in 2011 (for an educational practitioner audience) many times, as the way he carefully deconstructed some of the assumptions surrounding VAMs and VAM uses within this piece speaks volumes to much of the absurdity surrounding them. See the full PDF of this article here. See also the full reference for this piece: “Measuring Teaching Using Value-Added Modeling: The Imperfect Panacea,” here, in my list of the “Top 25 Research Articles” about VAMs.

Well, Scherrer just published a new article titled, “The Limited Utility of Value-Added Modeling in Education Research and Policy,” and I invited him to write a blog post about this piece for you all here. Scherrer graciously agreed, and wrote the following:

As someone who works with students in poverty [see also a recent article Scherrer wrote in the highly esteemed, peer-reviewed Educational Researcher here], I am deeply troubled by the use of status measures—the raw scores of standardized assessments—for accountability purposes. The relationship between SES and standardized assessment scores is well known. Thus, using status measures for accountability purposes incentivizes teachers to work in the most advantaged schools.

So, I am pleased with the increasing number of accountability systems that are moving away from status measures. In their place, systems seem to be favoring value-added estimates. In theory, this is a significant improvement. However, the manner in which the models are currently being used and how the estimates are currently being interpreted is intellectually criminal. The models’ limitations are obvious. But, as a learning scientist, what’s most alarming is the increasing use of the estimates generated by value-added models as a proxy for “effective” teaching. Here’s why:

Different teaching practices reflect different pedagogical epistemologies. These epistemologies are rooted in various learning perspectives. Different perspectives correspond to different assumptions about how to teach and, ultimately, how to assess. When the education policy community discusses “effectiveness,” the articulation of these different conceptions of learning matter, and considerations of consistency and coherence across learning, teaching, and assessing need to come into the discourse.

Typically, research studies on teaching and learning are framed using one of three perspectives: the behaviorist, the cognitivist, and the situative. Each perspective is associated with a different grain size. The behaviorist perspective focuses on basic skills, such as arithmetic. The cognitivist perspective focuses on conceptual understanding, such as making connections between addition and multiplication. The situative perspective focuses on practices, such as the ability to make and test conjectures. Effective teaching includes providing opportunities for students to strengthen each focus. However, traditional standardized assessments mainly contain questions that are crafted from a behaviorist perspective. The conceptual understanding that is highlighted in the cognitivist perspective and the participation in practices that is highlighted in the situative perspective are not captured on traditional standardized assessments. Thus, the only valid inference that can be made from a value-added estimate is about a teacher’s ability to teach the basic skills and knowledge associated with the behaviorist perspective.

When using assessment data to make an inference about classroom teaching, there needs to be coherence and consistency within a learning perspective. Claims of “effectiveness” can only be made between types of learning and types of teaching that are rooted in the same perspective. The current practice of using value-added estimates as a proxy for effective teaching introduces a “leap” across perspectives. That is, scores from traditional standardized assessments rooted in behaviorism are being used to make inferences about classroom teaching practices that are coherent with different perspectives. This “leap” essentially eliminates the ability to make a connection between high value-added estimates and current notions of effective classroom teaching.

Simply using value-added estimates as a proxy for effective teaching is intellectually lazy. If the education policy community is serious about improving the quality of teaching, then any accountability system must articulate what quality teaching is (not what it produces!). Until then, we all leap at our own risk.

Contact Jimmy Scherrer and/or follow him on Twitter: @jimmyscherrer. Thanks Jimmy!

One Houston Teacher and Future Teachers Reportedly Not to Be

Following up on a previous post about “Houston Teachers Suing over their District’s EVAAS Use,” an opinion piece released this summer via The Houston Chronicle, about the realities of another teacher who has worked in the district for 15 years, highlights some of the same and a few more unfortunate details, including details about those who are now reportedly choosing not to teach in the district.

The authors write, “the tool that the Houston Independent School District school board had hoped would keep its best teachers in the classroom is actually sending great teachers running,” and here are some reasons why:

The one teacher highlighted in this piece “holds a mathematics degree from the University of Houston, has taught all levels of high school mathematics for 15 years…and has repeatedly pursued assignments in high-needs schools with large Latino populations. While administrators, parents and peers have consistently rated him as a highly effective teacher, his EVAAS scores have varied wildly. While at [one district high school], he earned one of the highest EVAAS scores and year-end bonuses possible. Two years ago, teaching the same subject at [another high school] he received a below-average EVAAS score.” This teacher decided to leave the high-needs school in which his students’ performance apparently “biased” his results. He explained, “I can’t afford to be heroic. I want to be in the toughest schools, but the EVAAS model interprets my students’ challenges as my personal [and professional] failure.” For this teacher, the “unintended consequences meant leaving the students who needed him most.”

Likewise, there seems to be anecdotal evidence that the EVAAS model may be preventing other reportedly “great” teachers from entering the district as well. Although no “hard” numbers are provided in the article, the authors report that “[i]nstructors for teaching certification programs report that their students [i.e., future teachers] are increasingly looking for jobs outside of HISD.” This also makes sense, as HISD has been looking for more teachers out of state, specifically in North Carolina, where teacher salaries are among the lowest (read more about this here).

Overall, the authors conclude that “[o]ver the coming months, [the district’s board of] trustees [must] decide whether HISD will continue to evaluate teachers with flawed and unreliable models [i.e., the EVAAS in particular]….”

“Like [with] the seven highly regarded HISD teachers who have filed a lawsuit against the district, the community must call upon the school board to send EVAAS packing. [Houston’s] children deserve no less,” nor do any other districts’ children for that matter.

William Sanders Interview on His TVAAS (aka EVAAS) Model

Following up on my most recent post, about whether EVAAS’s William Sanders is really a “leading academician,” a loyal follower sent me this story, transcribed from an interview on Nashville Public Radio (also available on this link), about “The Man All Tennessee [and likely other] Teachers Have An Opinion About (But You’ve Never Heard Of);” that is, William Sanders.

I should note here, though, that I would recommend reading both posts (linked above), and in that order, rather than just the latter. The former adds important context to this article (and the Nashville Public Radio interview) about the self-described “numbers guy” and value-added man. Reading both should also help keep consumers keen, and in check/balance, regarding some of the realities surrounding “his” value-added “his”tory.

“It all started in 1982. He was then a professor at the University of Tennessee. Sanders stumbled upon a newspaper article suggesting there’s no proper way to hold teachers accountable based on test scores. He said nonsense. What started as a personal challenge to prove the conventional knowledge wrong quickly turned into a career….”

Read (or listen) to more of the full story, about his model in general, critics’ criticisms about his model being a “blackbox,” his agricultural inspiration, his interactions with Bill & Melinda Gates and influence on their subsequent funding imperatives, and the like here.


EVAAS’s William Sanders: A “Leading Academician?”

This past Sunday, the Houston Chronicle released an op-ed piece about the VAM being used by the Houston Independent School District (i.e., the widely known/used/abused EVAAS system at the source of a major lawsuit forthcoming in Houston, as detailed in previous VAMboozled posts here and here). The author(s) of the piece took a righteous and appropriate stance on this model, and also included the coveted equation in this piece, as well as in a follow-up piece responding to a comment made by one of the initial post’s readers (if interested, see the deeper of the two explanations of the EVAAS equation here). Both pieces are worth a read, especially as they are brief and to the point, and given that the main purpose of both was to help explain the equation behind the madness of just this model.

But in the latter post, the Houston District’s Superintendent – Terry Grier – was quoted saying the following in favor, as well as in defense, of his/this model (the quote was used as a counterpoint even though it came from a prior post in the Chronicle in May).

Grier wrote: “Value-added measures are the product of nearly three decades of research by leading academicians, and its use dates to the early 1990s…With this data [sic] in hand, we can identify how much individual students are expected to grow based on their history – and how much they actually grow based on their performance during the year.”

A key point of clarification here. The “leading academician” who developed this model – yes, the one “leading academician” who developed this model – is named William L. Sanders. In actuality, he was an adjunct professor of agricultural statistics at a satellite campus of the University of Tennessee in the 1990s (i.e., Knoxville). He was not then, and never came to be, a tenure-track or tenured professor, or what Superintendent Grier heralded him as: a “leading academician.” This is key to point out and understand, as adjunct professors, as a whole, are not “leading academicians,” as they do not hold tenured research positions at research universities. Rather, they are hired, again as a whole, to teach. They are not typically members of the academic faculty, nor are they officially qualified or classified as such.

On this note, some argue that this is why Sanders made his EVAAS model proprietary – he was not accustomed to how real university “academicians” typically conduct open research in research universities, wherein research is conducted for the common good, research findings are open and subjected to critique, research is open for replication to verify and also improve upon findings, and the like. Rather, the route Sanders took, perhaps related to his adjunct role back in the 1990s, was more about personal and financial interest and gain.