Article on the “Heroic” Assumptions Surrounding VAMs Published (and Currently Available) in TCR


My former doctoral student and I wrote a paper about the “heroic” assumptions surrounding VAMs. We titled it “‘Truths’ Devoid of Empirical Proof: Underlying Assumptions Surrounding Value-Added Models in Teacher Evaluation,” and it was just published in the esteemed Teachers College Record (TCR). It is also open and accessible, for free, for one week here. I have also pasted the abstract below for more information.

Abstract

Despite the overwhelming and research-based concerns regarding value-added models (VAMs), VAM advocates, policymakers, and supporters continue to hold strong to VAMs’ purported, yet still largely theoretical strengths and potentials. Those advancing VAMs have, more or less, adopted and promoted an agreed-upon, albeit “heroic,” set of assumptions, without independent, peer-reviewed research in support. These “heroic” assumptions transcend promotional, policy, media, and research-based pieces, but they have never been fully investigated, explicated, or made explicit as a set or whole. These assumptions, though often violated, are often ignored in order to promote VAM adoption and use, and also to sell for-profits’ and sometimes non-profits’ VAM-based systems to states and districts. The purpose of this study was to make obvious the assumptions that have been made within the VAM narrative and that, accordingly, have often been accepted without challenge. Ultimately, sources for this study included 470 distinctly different written pieces, from both traditional and non-traditional sources. The results of this analysis suggest that the preponderance of sources propagating unfounded assertions are fostering a sort of VAM echo chamber that seems impenetrable by even the most rigorous and trustworthy empirical evidence.


More (This Time Obvious) Correlations between Race to the Top and State Policies


About one year ago I released a post titled “States on the VAMwagon Most Likely to Receive Race to the Top Funds,” in which I wrote about correlational analyses revealing that state-level policies that rely at least in part on VAMs are indeed more common in states that (1) allocate less money than the national average for schooling, (2) allocate relatively less in terms of per-pupil expenditures, (3) have more centralized governments, (4) are more highly populated and educate relatively larger populations of poor, racial minority, and language minority students, and (5) have residents who predominantly vote for the Republican Party and, relatedly, for Republican initiatives. These underlying correlations help explain why such policies are more popular, and accordingly adopted, in certain states versus others.

Later, Mathematica released a News Brief (sponsored by the U.S. Department of Education’s Institute of Education Sciences) titled “Alignment of State Teacher Evaluation Policies with Race to the Top Priorities.” In it, Mathematica wrongly claimed to be “the first to present data on the extent to which states, both those that received Race to the Top grants and those that did not, reported requiring teacher evaluation policies aligned with Race to the Top priorities as of spring 2012.”

Beat ya to it, Mathematica, but whatever 😉

Anyhow, they found also (continuing from the list above) that states that won Race to the Top monies were states that (6) required more teacher evaluation and accountability policies, (7) used (or proposed to use) multiple measures to evaluate teacher performance, (8) used (or proposed to use) multiple rating categories to classify teacher effectiveness, (9) conducted (or proposed to conduct) teacher evaluations on an annual basis, and (10) used (or proposed to use) evaluation results to inform decisions regarding teacher compensation and career advancement. Go figure!
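For readers curious about the mechanics of such a correlational analysis, below is a minimal, hypothetical sketch in Python. The numbers are invented, and correlating a 0/1 “won Race to the Top” indicator with a single state characteristic (a point-biserial correlation) is only one of several ways these relationships could be estimated; it is not the analysis from the original post.

```python
# Hypothetical sketch only: invented numbers, not the actual state-level data
# behind the analyses described above.
import numpy as np

# 1 = state won Race to the Top funds, 0 = did not (hypothetical)
won_rttt = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])

# Hypothetical per-pupil expenditures, in thousands of dollars
per_pupil = np.array([9.2, 8.7, 12.1, 11.4, 8.9, 13.0, 10.8, 9.5, 12.6, 11.9])

# Pearson r computed against a 0/1 variable is the point-biserial correlation.
r = np.corrcoef(won_rttt, per_pupil)[0, 1]
print(f"correlation between winning RTTT and per-pupil spending: {r:.2f}")
# A negative r would be consistent with the claim that lower-spending states
# were more likely to adopt VAM-based policies and to win RTTT funds.
```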


Bias in School-Level Value-Added, Related to High V. Low Attrition


In a 2013 study titled “Re-testing PISA Students One Year Later: On School Value Added Estimation Using OECD-PISA” (Organisation for Economic Co‑operation and Development-Programme for International Student Assessment), researchers Bratti and Checchi explored a unique PISA international test score data set in Italy.

“[I]n two regions of North Italy (Valle d’Aosta and the autonomous province of Trento) the PISA 2009 test was re-administered to the same students one year later.” Hence, the authors had the unique opportunity to analyze what happens to school-level value-added when the same students were retested in two adjacent years, using a very strong standardized achievement test (i.e., the PISA).

Researchers found that “cross-sectional measures of school value added based on PISA…tend to be very volatile over time whenever there is a high year-to-year attrition in the student population.” Some of this volatility can be mitigated when longitudinal measures of school value added take into account students’ prior test scores; even so, higher consistency (less volatility) tends to be more evident in schools with little attrition/transition, and conversely, lower consistency (higher volatility) tends to be more evident in schools with much attrition/transition.

Researchers observed correlations “as high as 0.92 in Trento and…close to zero in Valle d’Aosta” when the VAM was not used to control for past test scores. When a more sophisticated VAM was used (accounting for students’ prior performance and school fixed effects), however, researchers found that the “coefficient [was] much higher for Valle d’Aosta than for Trento.” So, the correlations flip-flopped based on model specifications, with the more advanced specifications yielding “the better” or “more accurate” value-added output.

Researchers attribute this to panel attrition in that “in Trento only 8% of the students who were originally tested in 2009 dropped out or changed school in 2010, [but] the percentage [rose] to about 21% in Valle d’Aosta” at the same time.

Likewise, “[i]n educational settings characterized by high student attrition, this will lead to very volatile measures of VA.” Inversely, “in settings characterized by low student attrition (drop-out or school changes), longitudinal and cross-sectional measures of school VA turn out to be very correlated.”
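To make the attrition point concrete, here is a minimal simulation sketch. It is not Bratti and Checchi’s model; the variance parameters, cohort sizes, and the simple school-mean “value-added” measure are all assumptions chosen purely for illustration, with the 8% and 21% attrition rates echoing the Trento and Valle d’Aosta figures above.

```python
# Illustrative simulation only: why high student attrition makes school-level
# value-added (VA) estimates more volatile from one year to the next.
import numpy as np

rng = np.random.default_rng(0)

def cross_year_va_correlation(attrition_rate, n_schools=200, n_students=25):
    """Correlate a naive school-mean VA measure across two adjacent cohorts."""
    school_effect = rng.normal(0.0, 0.2, n_schools)                # stable school quality
    students_y1 = rng.normal(0.0, 3.0, (n_schools, n_students))    # student-level component
    stayed = rng.random((n_schools, n_students)) > attrition_rate  # who is retested in year 2
    replacements = rng.normal(0.0, 3.0, (n_schools, n_students))
    students_y2 = np.where(stayed, students_y1, replacements)      # churned roster in year 2

    va_y1 = school_effect + students_y1.mean(axis=1)
    va_y2 = school_effect + students_y2.mean(axis=1)
    return np.corrcoef(va_y1, va_y2)[0, 1]

for rate in (0.08, 0.21, 0.50):
    print(f"attrition {rate:.0%}: cross-year VA correlation ≈ {cross_year_va_correlation(rate):.2f}")
```

Under these assumptions, the cross-year correlation falls as attrition rises, which is the qualitative pattern the researchers describe.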


Splits, Rotations, and Other Consequences of Teaching in a High-Stakes Environment in an Urban School


An Arizona teacher who teaches in a very urban, high-needs school writes about the realities of teaching there, under the pressures that come with high-stakes accountability, and within a teacher workforce and an administration that are both operating in chaos. This is a must read, as she also talks about two unintended consequences of educational reform in her school about which I had never heard before: splits and rotations. Both seem to occur at all costs simply to stay afloat during “rough” times, but both also likely have deleterious effects on students in such schools, as well as on the teachers being held accountable for the students “they” teach.

She writes:

Last academic year (2012-2013) a new system for evaluating teachers was introduced into my school district. And it was rough. Teachers were dropping like flies. Some were stressed to the point of requiring medical leave. Others were labeled ineffective based on a couple classroom observations and were asked to leave. By mid-year, the school was down five teachers. And there were a handful of others who felt it was just a matter of time before they were labeled ineffective and asked to leave, too.

The situation became even worse when the long-term substitutes who had been brought in to cover those teacher-less classrooms began to leave also. Those students with no contracted teacher and no substitute began getting “split”. “Splitting” is what the administration of a school does in a desperate effort to put kids somewhere. And where the students go doesn’t seem to matter. A class roster is printed, and the first five students on the roster go to teacher A. The second five students go to teacher B, and so on. Grade-level isn’t even much of a consideration. Fourth graders get split to fifth grade classrooms. Sixth graders get split to 5th and 7th grade classrooms. And yes, even 7th and 8th graders get split to 5th grade classrooms. Was it difficult to have another five students in my class? Yes. Was it made more difficult that they weren’t even of the same grade level I was teaching? Yes. This went on for weeks…

And then the situation became even worse. As it became more apparent that the revolving door of long-term substitutes was out of control, the administration began “The Rotation.” “The Rotation” was a plan that used the contracted teachers (who remained!) as substitutes in those teacher-less classrooms. And so once or twice a week, I (and others) would get an email from the administration alerting me that it was my turn to substitute during prep time. Was it difficult to sacrifice 20-40% of weekly prep time (that is used to do essential work like plan lessons, gather materials, grade, call parents, etc…)? Yes. Was it difficult to teach in a classroom that had a different teacher, literally, every hour without coordinated lessons? Yes.

Despite this absurd scenario, in October 2013, I received a letter from my school district indicating how I fared in this inaugural year of the teacher evaluation system. It wasn’t good. Fifty percent of my performance label was based on school test scores (not on the test scores of my homeroom students). How well can students perform on tests when they don’t have a consistent teacher?

So when I think about accountability, I wonder now what it is I was actually held accountable for? An ailing, urban school? An ineffective leadership team who couldn’t keep a workforce together? Or was I just held accountable for not walking away from a no-win situation?

Coincidentally, this 2013-2014 academic year has, in many ways, mirrored the 2012-2013 year. The upside is that this year, only 10% of my evaluation is based on school-wide test scores (the other 40% will be my homeroom students’ test scores). This year, I have a fighting chance to receive a good label. One more year of an unfavorable performance label and the district will have to, by law, do something about me. Ironically, if it comes to that point, the district can replace me with a long-term substitute, who is not subject to the same evaluation system that I am. Moreover, that long-term substitute doesn’t have to hold a teaching certificate. Further, that long-term substitute will cost the district a lot less money in benefits (i.e., healthcare, retirement system contributions).

I should probably start looking for a job—maybe as a long-term substitute.


Data Secrecy in DC Continued…


Following a recent post on “Data Secrecy Violating Data Democracy in DC Public Schools (DCPS),” the lawyer(s) from Washington DC sent me an email, including the actual complaint they filed in DC Superior Court to get access to the DC teacher evaluations. With their permission, I include this complaint here, for those of you who might be interested.

The chronology and description of their information request are detailed in the complaint, and the chronology of the attempt to codify the FOIA exemption (under Mayor Bowser) follows (also, as per the above-mentioned lawyer(s)):

On February 20, 2015, the American Federation of Teachers (AFT) and Washington Teachers Union (WTU) appealed to Mayor Bowser to require DCPS and DC’s Office of the State Superintendent of Education (OSSE) to turn over the state’s teacher IMPACT evaluation scores (with names redacted) for school years 2009-10 through 2013-14. On March 3, 2015, emergency legislation was introduced (i.e., legislation in support of “a radical new secrecy provision to hide the information that’s being used to make [such] big decisions”). On March 18, 2015, Mayor Bowser denied AFT/WTU’s appeal for teacher IMPACT scores (again, with names redacted). On March 30, 2015, Mayor Bowser signed the emergency legislation exempting educator evaluations and effectiveness ratings from being disclosed. On April 14, 2015, AFT/WTU filed suit to overturn the decision of DCPS and Mayor Bowser. On June 2, 2015, the permanent legislation exempting educator evaluations from FOIA was placed in the DC budget bill “at the request of the Mayor.”

As it also turns out, the prior mayor (Mayor Gray) introduced “emergency” legislation in 2014 to keep teacher evaluations exempt from FOIA as well, and this legislation was actually about to expire when Mayor Bowser recently introduced the emergency, and now permanent, legislation. Mayor Gray’s justification was different from current Mayor Bowser’s, however. According to the legislative history, under former Mayor Gray’s watch, emergency legislation was needed to keep teacher evaluations secret because charter schools throughout DC were refusing to turn over their teacher evaluations to the OSSE, out of fear that the OSSE would release them (e.g., as happened in Los Angeles Unified, via the Los Angeles Times).

Nonetheless, Mayor Gray felt that neither he nor OSSE could compel the charters to turn over their teacher evaluations. Now, Mayor Bowser wants permanent legislation that would exempt teacher evaluations from FOIA, but Mayor Bowser and DCPS are both arguing that the legislation would only apply to charters.

Why? It is not clear. The proposed legislation does not limit the exemption, but rather states: “Individual educator evaluations and effectiveness ratings, observation, and value-added data collected or maintained by OSSE are not public records and shall not be subject to disclosure…” It is important to note also, though, that charter operators can use whatever evaluation system or performance measures they want. So they are also exempt, in general.

Kaya Henderson, the DCPS Chancellor (and formerly Michelle Rhee’s Deputy Chancellor), was on NPR last week on The Politics Hour hosted by Kojo Nnamdi, during which she also insisted that the new legislation was only to apply to charters.

The WTU President, Liz Davis, will be on Kojo’s show this Thursday to address the DCPS IMPACT evaluations and collective bargaining (CBA) negotiations.


Evidence of Grade and Subject-Level Bias in Value-Added Measures: Article Published in TCR


One of my most recent posts was about William Sanders — developer of the Tennessee Value-Added Assessment System (TVAAS), which is now more popularly known as the Education Value-Added Assessment System (EVAAS®) — and his forthcoming 2015 James Bryant Conant Award, one of the nation’s most prestigious education honors, which will be awarded to him next month by the Education Commission of the States (ECS).

Sanders is to be honored for his “national leader[ship] in value-added assessments, [as] his [TVAAS/EVAAS] work has [informed] key policy discussion[s] in states across the nation.”

Ironically, this was announced the same week that one of my former doctoral students — Jessica Holloway-Libell, who is soon to be an Assistant Professor at Kansas State University — had a paper published in the esteemed Teachers College Record about this very model. Her paper, titled “Evidence of Grade and Subject-Level Bias in Value-Added Measures,” can be accessed (at least for the time being) here.

You might also recall this topic, though, as we posted her two initial drafts of this article over one year ago, here and here. Both posts followed the analyses she conducted after a VAMboozled follower emailed us expressing his suspicions about grade- and subject-area bias in his district in Tennessee, where he was (and still is) a school administrator. The question he posed was whether his suspicions were correct, and whether this was happening elsewhere in his state, under Sanders’ TVAAS/EVAAS model.

Jessica found it was.

More specifically, Jessica found that:

  1. Teachers of students in 4th and 8th grades were much more likely to receive positive value-added scores than teachers in other grades (e.g., 5th, 6th, and 7th); hence, the model implies that 4th- and 8th-grade teachers are generally better teachers in Tennessee, per the TVAAS/EVAAS.
  2. Mathematics teachers (theoretically throughout Tennessee) are, overall, more effective than Tennessee’s English/language arts teachers, regardless of school district; hence, the model implies that mathematics teachers are generally better than English/language arts teachers in Tennessee, per the TVAAS/EVAAS (see the illustrative sketch after this list).
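Here is a minimal, hypothetical sketch of the kind of comparison behind findings like these: tabulating the share of teachers with positive value-added scores by grade and subject. The scores, grades, and column names are invented for illustration; this is not Holloway-Libell’s actual analysis or data.

```python
# Hypothetical sketch only: invented value-added (VA) scores, not TVAAS/EVAAS data.
import pandas as pd

scores = pd.DataFrame({
    "grade":   [4, 4, 5, 5, 6, 6, 7, 7, 8, 8],
    "subject": ["math", "ela"] * 5,
    "va":      [1.2, 0.4, -0.8, -1.1, -0.3, -0.9, -0.5, -1.4, 0.9, 0.2],
})

# Share of teachers with positive VA scores, by grade and by subject.
share_positive = (
    scores.assign(positive=scores["va"] > 0)
          .groupby(["grade", "subject"])["positive"]
          .mean()
)
print(share_positive)
# Systematically higher shares in particular grades (e.g., 4th and 8th) or in one
# subject are the kind of pattern read, above, as grade- and subject-level bias.
```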

Being a former mathematics teacher myself, I’d like to accept the second claim as true, subject-area bias of my own notwithstanding. But the fact of the matter is that the counterclaim (that these differences reflect the model, not the teachers) is almost certainly the correct one.

It’s not that either (or any) set of these teachers is in fact better; it’s that Sanders’ TVAAS/EVAAS model — the model for which Sanders is receiving this esteemed award — is yielding biased output. It is doing this for whatever reason (e.g., measurement error, test construction), but this just adds to the list of other problems (see, for example, here, here, and here) and, quite frankly, to the reasons why this model, not to mention its master creator, is undeserving of really any award, except perhaps for a Bunkum.


Data Secrecy Violating Data Democracy in DC Public Schools


The District of Columbia Public Schools (DCPS) is soon to vote on yet another dramatic new educational policy that, as described in an email/letter to all members of the American Federation of Teachers (AFT) by AFT President Randi Weingarten, “would make it impossible for educators, parents and the general public to judge whether some of DCPS’ core instructional strategies and policies are really helping District children succeed.”

As per Weingarten: “Over a year ago, the Washington [DC] Teachers’ Union filed a Freedom of Information Act (FOIA) request to see the data from the school district’s IMPACT [teacher] evaluation system—a system that’s used for big choices, like the firing of 563 teachers in just the past four years, curriculum decisions, school closures and more [see prior posts about this as related to the IMPACT program here]. The FOIA request was filed because DCPS refused to provide the data….[data that are]…essential to understanding and addressing the DCPS policies and practices that impact” teachers and education in general.

Not only are such data crucial for building understanding, as noted, but they are also crucial to a functioning democracy, allowing members of the public concerned with a public institution to test the mandates and policies they collectively support, perhaps in theory or concept, but certainly via public taxes.

Regardless, soon after the DC union filed the FOIA request, DCPS (retaliated, perhaps, and) began looking to override FOIA laws through “a radical new secrecy provision to hide the information that’s being used to make big decisions,” like those associated with the aforementioned IMPACT teacher evaluation system.

Sound familiar? See prior posts about other extreme governmental moves in the name of secrecy, or rather educational policies at all costs, namely in New Mexico here and here.

You can send a letter to those in D.C. to vote NO on their “Educator Evaluation Data Protection” provisions by clicking here.

As per another post on this topic, in GFBrandenburg’s Blog — that is, “Just a blog by a guy who’s a retired math teacher” — Brandenburg did publish some of the data now deemed “secret.” Namely, he “was leaked,” by an undisclosed source, “the 2009-10 IMPACT sub-scores from the Value-Added Monstrosity (VAM) nonsense and the Teaching and Learning Framework (TLF), with the names removed. [He] plotted the two [sets of] scores and showed that the correlation was very, very low, in fact about 0.33 (an r-squared of roughly 0.13), or nearly random, as you [can] see here:”

[Figure: scatterplot of DCPS 2009-10 value-added (VAM) sub-scores vs. Teaching and Learning Framework (TLF) scores]

In the world of correlation, this is atrocious, IF high stakes (e.g., teacher termination, tenure, merit pay) are to be attached to such output. No wonder DCPS does not want people checking to see whether what it is selling holds up.
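For readers unaccustomed to the distinction between a correlation (r) and an r-squared, below is a minimal sketch with invented scores; it is not the leaked DCPS data.

```python
# Hypothetical sketch only: invented paired scores, not DCPS IMPACT data.
import numpy as np

vam_scores = np.array([2.1, 3.4, 1.8, 2.9, 3.7, 2.2, 3.0, 1.5])  # hypothetical
tlf_scores = np.array([3.0, 2.6, 3.3, 2.8, 3.5, 2.4, 2.9, 3.1])  # hypothetical

r = np.corrcoef(vam_scores, tlf_scores)[0, 1]
print(f"r = {r:.2f}, r-squared = {r**2:.2f}")
# A correlation of about 0.33, for instance, corresponds to an r-squared of
# roughly 0.11: the two measures share only about a tenth of their variance,
# which is why such a scatterplot looks nearly random.
```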

In Brandenburg’s words: “Value-Added scores for any given teacher jumped around like crazy from year to year. For all practical purposes, there is no reliability or consistency to VAM whatsoever. Not even for elementary teachers who teach both English and math to the same group of children and are ‘awarded’ a VAM score in both subjects. Nor for teachers who taught, say, both 7th and 8th grade students in, say, math, and were ‘awarded’ VAM scores for both grade levels: it’s as if someone was to throw darts at a large chart, blindfolded, and wherever the dart lands, that’s your score.”


NY’s Board of Regents Voted Today (11:6) in Favor of the State’s New Teacher Evaluation System


Two weeks ago, seven members of the 17-member New York State Board of Regents issued a vigorous dissent (included below) charging that the state’s “new and improved” teacher evaluation system, being forced into policy primarily by Board of Regents Chancellor Merryl Tisch, with the support and prodding of New York Governor Andrew Cuomo, is not (at all) research-based, research-supported, or research-wise.

Today, the Regents voted 11:6 in favor of the state’s new teacher evaluation plan, making state tests worth 50% of a teacher’s total effectiveness rating. A bad day for teachers in New York…

As per a recent post about this on Diane Ravitch’s blog, “Unlike the Governor and the Legislature, these seven members of the Regents have demonstrated respect for research and concern for the consequences of this hastily-passed law on teachers, children, principals, schools, and communities. They are courageous, they are wise, and they are visionaries. They have shown the leadership that our society so desperately needs. All New Yorkers are in their debt.” See also a post written about this by Carol Burris — New York State’s 2013 High School Principal of the Year, among other things — on her newly released “Round the Inkwell” blog here.

If passed, this will take the state’s prior requirement that 20% of an educator’s evaluation be based on “locally selected measures of achievement” to a system in which teachers’ value-added, as based on growth on the state’s (Common Core) standardized test scores, will be set at 50%. See prior posts on just this state on just this blog here, here, and here.

Interesting to point out is the primary research being used to support this new teacher evaluation system going through: the research of Harvard’s Raj Chetty — the Bloomberg Professor of Economics [emphasis added, given former NY Mayor Michael Bloomberg’s “crusade” to, via VAMs, “turn the teaching profession into corporate-world shape”]. Chetty is also the source of much controversy in the area of VAMs and of many prior posts on this blog, here, here, and here. The other research being used to support this system going forward is that of (also) Harvard’s Thomas Kane — the Walter H. Gale Professor of Education [no emphasis added, as a similar funding connection is not evident, or as blatant] — who is also a professor of economics. Kane also directed the $45 million Measures of Effective Teaching (MET) studies for the Bill & Melinda Gates Foundation, which have since been used (contrary to many/most of the studies’ findings) to keep pushing VAMs forward, especially in policy arenas such as these. Kane, too, is the (highly controversial) source of many prior posts on this blog, here, here, and here.

These two, both in loyal support of the other (see also here, here, and here), have quite a “thing” going, now don’t they…

Anyhow, the dissenting Regents issued the following, very important statement. This is worth a thorough read in and of itself:

Position Paper Amendments to Current APPR Proposed Regulations

BY SIGNATORIES BELOW JUNE 2, 2015

We, the undersigned, have been empowered by the Constitution of the State of New York and appointed by the New York State Legislature to serve as the policy makers and guardians of educational goals for the residents of New York State. As Regents, we are obligated to determine the best contemporary approaches to meeting the educational needs of the state’s three million P-12 students as well as all students enrolled in our postsecondary schools and the entire community of participants who use and value our cultural institutions.

We hold ourselves accountable to the public for the trust they have in our ability to represent and educate them about the outcomes of our actions which requires that we engage in ongoing evaluations of our efforts. The results of our efforts must be transparent and invite public comment.

We recognize that we must strengthen the accountability systems intended to ensure our students benefit from the most effective teaching practices identified in research.

After extensive deliberation that included a review of research and information gained from listening tours, we have determined that the current proposed amendments to the APPR system are based on an incomplete and inadequate understanding of how to address the task of continuously improving our educational system.

Therefore, we have determined that the following amendments are essential, and thus required, in the proposed emergency regulations to remedy the current malfunctioning APPR system.

What we seek is a well thought out, comprehensive evaluation plan which sets the framework for establishing a sound professional learning community for educators. To that end we offer these carefully considered amendments to the emergency regulations.

I. Delay implementation of district APPR plans based on April 1, 2015 legislative action until September 1, 2016.

A system that has integrity, fidelity and reliability cannot be developed absent time to review research on best practices. We must have in place a process for evaluating the evaluation system. There is insufficient evidence to support using test measures that were never meant to be used to evaluate teacher performance.

We need a large scale study, that collects rigorous evidence for fairness and reliability and the results need to be published annually. The current system should not be simply repeated with a greater emphasis on a single test score. We do not understand and do not support the elimination of the instructional evidence that defines the teaching, learning, achievement process as an element of the observation process.

Revise the submission date. Allow all districts to submit by November 15, 2015 a letter of intent regarding how they will utilize the time to review/revise their current APPR Plan.

II. A. Base the teacher evaluation process on student standardized test scores, consistent with research; the scores will account for a maximum of no more than 20% on the matrix.

B. Base 80% of teacher evaluation on student performance, leaving the following options for local school districts to select from: keeping the current local measures, generating new assessments with performance-driven student activities (performance assessments, portfolios, scientific experiments, research projects), and utilizing options like NYC Measures of Student Learning and corresponding student growth measures.

C. Base the teacher observation category on NYSUT and UFT’s scoring ranges using their rounding up process rather than the percentage process.

III. Base no more than 10% of the teacher observation score on the work of external/peer evaluators, an option to be decided at the local district level where the decisions as to what training is needed, will also be made.

IV. Develop weighting algorithms that accommodate the developmental stages for English Language Learners (ELL) and special needs (SWD) students. Testing of ELL students who have less than 3 years of English language instruction should be prohibited.

V. Establish a work group that includes respected experts and practitioners who are to be charged with constructing an accountability system that reflects research and identifies the most effective practices. In addition, the committee will be charged with identifying rubrics and a guide for assessing our progress annually against expected outcomes.

Our recommendations should allow flexibility which allows school systems to submit locally developed accountability plans that offer evidence of rigor, validity and a theory of action that defines the system.

VI. Establish a work group to analyze the elements of the Common Core Learning Standards and Assessments to determine levels of validity, reliability, rigor and appropriateness of the developmental aspiration levels embedded in the assessment items.

No one argues against the notion of a rigorous, fair accountability system. We disagree on the implied theory of action that frames its tenet such as firing educators instead of promoting a professional learning community that attracts and retains talented educators committed to ensuring our educational goals include preparing students to be contributing members committed to sustaining and improving the standards that represent a democratic society.

We find it important to note that researchers, who often represent opposing views about the characteristics that define effective teaching, do agree on the dangers of using the VAM student growth model to measure teacher effectiveness. They agree that effectiveness can depend on a number of variables that are not constant from school year to school year. Chetty, a professor at Harvard University, often quoted as the expert in the interpretation of VAM, along with co-researchers Friedman & Rockoff, offers the following two cautions: “First, using VAM for high-stakes evaluation could lead to unproductive responses such as teaching to the test or cheating; to date, there is insufficient evidence to assess the importance of this concern. Second, other measures of teacher performance, such as principal evaluations, student ratings, or classroom observations, may ultimately prove to be better predictors of teachers’ long-term impacts on students than VAMs. While we have learned much about VAM through statistical research, further work is needed to understand how VAM estimates should (or should not) be combined with other metrics to identify and retain effective teachers.”[i] Linda Darling-Hammond agrees; in a March 2012 Phi Delta Kappan article she cautions that “none of the assumptions for the use of VAM to measure teacher effectiveness are well supported by evidence.”[ii]

We recommend that while the system is under review we minimize the disruption to local school districts for the 2015/16 school year and allow for a continuation of approved plans in light of the phasing in of the amended regulations.

Last year, Vicki Phillips, Executive Director for the Gates Foundation, cautioned districts to move slowly in the rollout of an accountability system based on Common Core Systems and advised a two year moratorium before using the system for high stakes outcomes. Her cautions were endorsed by Bill Gates.

We, the undersigned, wish to reach a collaborative solution to the many issues before us, specifically at this moment, the revisions to APPR. However, as we struggle with the limitations of the new law, we also wish to state that we are unwilling to forsake the ethics we value, thus this list of amendments.

Regents: Kathleen Cashin, Judith Chin, Catherine Collins, Josephine Finn*, Judith Johnson, Beverly L. Ouderkirk, & Betty A. Rosa. (*Regent Josephine Finn said: “I support the intent of the position paper.”)


EVAAS’s Bill Sanders: “I’m Full of &$%#”


Following up on my most recent post — about VAM developer Bill Sanders, who is soon to receive a distinguished award for his TVAAS/EVAAS efforts — oh how I wish I had the technical talent to take the sign in this picture here…

[Image: sign]

…and place it on the chest of Bill Sanders in this picture here (thanks, Joe Nashville 😉).

[Photo: William L. Sanders]

For those of you confused by this post, or for those of you who have not been following VAMboozled! for an extended period of time, click here, here, here, here, and here, for prior blog posts on this topic, for starters 😉

See also here for a research article I authored in 2008 about this particular model. See also here for a more recent article just published largely about this model, in a recent special issue on the topic of VAMs in the esteemed Educational Researcher.

See another two articles, here and here, about this model’s actual use in the Houston Independent School District (and about its intended consequences, or lack thereof, as well as its very relevant unintended consequences).


A VAM Sham(e): Bill Sanders to Receive Distinguished Award for VAM/EVAAS Efforts


VAMs were first adopted in education in the late 1980s, when an agricultural statistician/adjunct professor [emphasis added, as an adjunct professor is substantively different from a tenured/tenure-track professor] at the University of Tennessee, Knoxville — William Sanders — thought that educators struggling with student achievement in the state should “simply” use more advanced statistics, similar to those used when modeling genetic and reproductive trends among cattle, to measure growth, hold teachers accountable for that growth, and thereby solve the educational measurement woes facing the state at the time. It was to be as simple as that….

Hence, Sanders developed the Tennessee Value-Added Assessment System (TVAAS), which is now known as the Education Value-Added Assessment System (EVAAS®), in response.

Nowadays, the SAS® EVAAS® is widely considered, with over 20 years of development, the largest, one of the best, one of the most widely adopted and used, and likely the most controversial VAM in the country. It is controversial in that it is a proprietary model (i.e., it is costly and used/marketed under exclusive legal rights of the inventors/operators), and it is often akin to a “black box” model (i.e., it is protected by a good deal of secrecy/mystery).

Not surprisingly, Tennessee was one of the first states to receive Race to the Top funds, to the tune of $502 million, to further advance the SAS® EVAAS® model (still referred to within Tennessee as the TVAAS). See prior posts about Sanders’ efforts, in Tennessee and beyond, here, here, here, here, and here.

Nonetheless, on the SAS® EVAAS® website, developers continue to make grandiose marketing claims without much caution or really any research evidence in support (e.g., that using the SAS® EVAAS® will provide “a clear path to achieve the US goal to lead the world in college completion by the year 2020”). Riding on such claims, EVAAS backers continue to sell the SAS® EVAAS® model to states (e.g., Tennessee, North Carolina, Ohio, Pennsylvania) and school districts (e.g., the Houston Independent School District), for significant amounts (as in millions) of taxpayer revenue.

As per the news today, released by Chalkbeat Tennessee, “TVAAS creator William Sanders [is] to receive [a] national education award,” as in, the 2015 James Bryant Conant Award, one of the nation’s most prestigious education honors, awarded by the Education Commission of the States (ECS). “The James Bryant Conant Award recognizes an individual for outstanding contributions to American education. The award is one of the most prestigious honors in the education community…The honor is bestowed upon individuals who have demonstrated a commitment to improving education across the country in significant ways,” etc.

More specifically, Sanders is to be honored for his “national leader[ship] in value-added assessments, [as] his work has [informed] key policy discussion[s] in states across the nation.” Indeed his work has informed key policy discussions across the nation, not for the better, however, as literally no peer-reviewed research suggests that his value-added efforts have improved student learning and achievement across the nation’s public schools whatsoever, but I digress…

As per the article: “Hailed by many who seek greater accountability in education, [Sanders’s] TVAAS continues to be a topic of robust discussion in the education community in Tennessee and across the nation. It has been the source of numerous federal lawsuits filed by teachers who charge that the evaluation system—which has been tied to teacher pay and tenure—is unfair and doesn’t take into account student socio-economic variables such as growing up in poverty. Sanders maintains that teacher effectiveness dwarfs all other factors as a predictor of student academic growth.”

“With regard to student academic progress,” Sanders said, “the effectiveness of adults within buildings is more important than the mailing addresses of their students.” This is false, but again, I digress…

On this note, see three recent research articles about the EVAAS and its use in practice here, here, and here. These articles contradict most, if not a strong majority, of the claims advanced by model advocates and promoters…advanced, again, largely without research evidence in support.

Sanders, now 73, is retired and still lives in Tennessee. He is to receive the award during the ECS’s national forum on educational policy in Denver on June 29-July 1.

Calling all picketers/protesters?
