Tennessee’s TVAAS (now EVAAS) Developer W. L. Sanders on his VAM

The model I know best, as I have been researching it for almost a decade now, is the TVAAS (now more popularly known as the EVAAS), which, as mentioned numerous times on this blog, has its strong roots in Tennessee. It was in Tennessee that William L. Sanders, then (in the 1980s/90s) an Adjunct Professor of Agriculture at the University of Tennessee – Knoxville, developed the TVAAS.

Contrary to what was written in an article released today in The Tennessean, however, he did not invent “value-added.” It is a mainly econometric approach that has appeared in the economics literature since the 1970s. Regardless, the article is worth a read to understand this model’s history, as this is the model in education from which much of the current education system’s value-added “craze” (or, as they call it, “vogue” trend) came. The article was also written, largely in the model’s defense, in response to the many lawsuits coming to fruition in Tennessee (see posts forthcoming this week).

Interesting points worth noting:

No surprise, I guess, that none of the other “issues” about which the EVAAS has been continuously questioned and critiqued was addressed in this article (e.g., fairness and the teachers who are not TVAAS-eligible; validity, or the lack of relationships between the TVAAS and other indicators of quality in Tennessee; and subject- and grade-level bias, as written about here and here).


Research Brief: Access to “Effective Teaching” as per VAMs

Researchers of a brief released by the Institute of Education Sciences (IES), the primary research arm of the United States Department of Education (USDOE), recently set out to “shed light on the extent to which disadvantaged students have access to effective teaching, based on value-added measures [VAMs],” as per three recent IES studies that have since been published in peer-reviewed journals and that together include 17 states in their analyses.

Researchers found, overall, that: (1) disadvantaged students receive less effective teaching and have less access to effective teachers on average, a gap worth about four weeks of achievement in reading and about two weeks of achievement in mathematics as per VAM-based estimates; and (2) students’ access to effective teaching varies across districts.

On point (1), this is something we have known for years, contrary to what the authors of this brief write (i.e., that “there has been limited research on the extent to which disadvantaged students receive less effective teaching than other students”). They simply dismiss a plethora of studies because researchers did not use VAMs to evaluate “effective teaching.” Linda Darling-Hammond’s research, in particular, has been critically important in this area for decades. It is a fact that, on average, students in high-needs schools that disproportionately serve disadvantaged students have less access to teachers with certain teacher-quality indicators (e.g., National Board Certification and advanced degrees/expertise in content areas, although these things are argued not to matter in this brief). In addition, there are also higher teacher turnover rates in such schools, and oftentimes such schools become “dumping grounds” for teachers who cannot be terminated due to many of the tenure laws currently at focus and under fire across the nation. This is certainly a problem, as is disadvantaged students’ access to effective teachers. So, agreed!

On point (2), agreed again. Students’ access to effective teaching varies across districts. There is indeed a lot of variation in teacher quality across districts, thanks largely to local (and historical) educational policies (e.g., district and school zoning, charter and magnet schools, open enrollment, vouchers, and other choice policies promoting public school privatization), all of which continue to perpetuate these problems. No real surprise here, either, as we have also known this for decades, thanks to research that has not been based solely on the use of VAMs, such as the research of Jonathan Kozol, bell hooks, and Jean Anyon, to name a few.

What is most relevant here, though, and in particular for readers of this blog, is that the authors used misinformed approaches when writing this brief and advancing their findings. That is, they examined the extent to which disadvantaged students receive “less effective teaching” by defining “less effective teaching” using only VAM estimates as the indicators of effectiveness, estimated relative to other teachers across the very schools and districts in which they found such grave disparities. All the while, not once did they mention how these disparities very likely biased the relative estimates on which they based their main findings.

Most importantly, they blindly accepted a largely unchecked and largely false assumption: that the teachers caused the relatively low growth in scores, rather than the low growth being caused by the bias inherent in the VAMs used to estimate relative levels of “effective teaching” across teachers. This is the bias that, across VAMs, is becoming more apparent and of increasing concern, it seems weekly (see, for example, a recent post about a research study demonstrating this bias here).

This is also the same issue I detailed in a recent post titled “Chicken or the Egg?” in which I deconstructed the “Which came first, the chicken or the egg?” question in the context of VAMs. This is becoming increasingly important as those using VAM-based data are using them to make causal claims, when only correlational (or, in simpler terms, relational) claims can and should be made. The fundamental question in this brief should rather have been about the real relationship of cause and consequence when examining “effective teaching” in these studies across these states: Is it true teacher effectiveness, or teacher effectiveness along with the bias inherent in and across VAMs, given the relativistic comparisons on which VAM estimates are based…or both?!?

Interestingly enough, not once was “bias” even mentioned in either the brief or its accompanying technical appendix. It seems that, to these researchers, there ain’t no such thing; hence, their claims are valid and should be interpreted as such.

That being said, we cannot continue to use VAM estimates (emphasis added) to support claims about bad teachers causing low achievement among disadvantaged students when VAM researchers increasingly evidence that these models cannot control for the disadvantages that disadvantaged students bring with them to the schoolhouse door. Until these models are bias-free (which is unlikely), claims can never be made that the teachers caused the growth (or lack thereof), or in this case caused more or less growth than other similar teachers with different sets of students, non-randomly attending different districts and schools and non-randomly assigned into different classrooms with different teachers.

VAMs are biased by the very nature of the students and their disadvantages, both of which clearly contribute to the VAM estimates themselves.

It is also certainly worth mentioning that the research cited throughout this brief is not representative of the greater body of peer-reviewed research available in this area (e.g., research derived via Michelle Rhee’s “Students First”?!?). Likewise, having great familiarity with the authors of not only the three studies cited in this brief, but also the others cited “in support,” let’s just say their aforementioned sheer lack of attention to bias, and to what bias meant for the validity of their findings, was (unfortunately) predictable.

As far as I’m concerned, the (small) differences they report in achievement may well be real, but to claim that teachers caused those differences because of their effectiveness, or lack thereof, is certainly false.

Citation: Institute of Education Sciences. (2014, January). Do disadvantaged students get less effective teaching? Key findings from recent Institute of Education Sciences studies. National Center for Education Evaluation and Regional Assistance. Retrieved from http://ies.ed.gov/ncee/pubs/20144010/

A VAM Shame, again, from Florida

Another teacher from Florida wrote a blog post for Diane Ravitch, and I just came across it and am re-posting it here. Be sure to give it a good read, as you will see what is happening in her state right now and why it is a VAM shame!

She writes:

I conducted a very unscientific study and concluded that I might possibly have the worst VAM score at my school. Today I conducted a slightly more scientific analysis and now I can confidently proclaim myself to be the worst teacher at my school, the 14th worst teacher in Dade County, and the 146th worst (out of 120,000) in the state of Florida! There were 4,800 pages of teachers ranked highest to lowest on the Florida Times Union website and my VAM was on page 4,795. Gosh damn! That’s a bad VAM! I always feared I might end up at the low end of the spectrum due to the fact that I teach gifted students that score high already and have no room to grow, but 146th out of 120,000?!?! That’s not “needs improvement.” That’s “you really stink and should immediately have your teaching license revoked before you do any more harm to innocent children” bad. That’s “your odds are so bad you better hope you don’t get eaten by a shark or struck by lightning” bad. This is the reason I don’t play the lotto or gamble in Vegas. And to think some other Florida teacher had the nerve to write a blog post declaring herself to be one of the worst teachers in the state and her VAM was only -3%! Negative 3 percent is the best you got, honey? I’ll meet your negative 3 percent and raise you another negative 146 percentage points! (Actually I enjoyed her blog post [see also our coverage of this teacher’s story here] and I hope more teachers come out of their VAM closets soon).

Speaking of coming out of the VAM closet, I managed to hunt down the emails of about ten other bottom dwellers as posted by the Florida Times Union. I was attempting to conduct a minor survey of what types of teachers end up getting slammed by VAM. Did they have anything in common? What types of students did they teach? As of this moment, none of them have returned my emails. I really wanted to get in touch with “The Worst Teacher in the State of Florida” according to VAM. After a little cyber-stalking, it turns out she’s my teaching twin. She also teaches ninth grade world history to gifted students in a pre-IB program. The runner-up for “Worst Teacher in the State of Florida” teaches at an arts magnet school. Are we really to believe that teachers selected to teach in an IB program or magnet school are the very worst the state of Florida has to offer? Let me tell you a little something about teaching gifted students. They are the first kids to nark out a bad teacher because they don’t think anyone is good enough to teach them. First they’ll let you know to your face that they’re smarter than you and you stink at teaching. Then they’ll tell their parents and the gifted guidance counselor who will nark you out to the Principal. If you suck as a gifted teacher, you won’t last long.

I don’t want to ignore the poor teachers who get slammed by VAM on the opposite end of the spectrum either. Although there appeared to be many teachers of high achievers who scored poorly under VAM, there also seemed to be an abundance of special education teachers. These poor educators are often teaching children with horrible disabilities who will never show any learning gains on a standardized test. Do we really want to create a system that penalizes and fires the teachers whose positions we struggle the hardest to fill? Is it any wonder that teachers who teach the very top performers and teachers who teach the lowest performers would come out looking the worst in an algorithm measuring learning gains? I suck at math and this was immediately obvious to me.

Another interesting fact garnered from my amateur and cursory analysis of Florida VAM data is that high school teachers overwhelmingly populated the bottom of the VAM rankings. Of the 148 teachers who scored lower than me, 136 were high school teachers, ten were middle school teachers, and only two were elementary school teachers. All of this directly contradicts the testimony of Ms. Kathy Hebda, Deputy Chancellor for Educator Quality, in front of Florida lawmakers last year regarding the Florida VAM.

“Hebda presented charts to the House K-12 Education Subcommittee that show almost zero correlation between teachers’ evaluation scores and the percentages of their students who are poor, nonwhite, gifted, disabled or English language learners. Teachers similarly didn’t get any advantage or disadvantage based on what grade levels they teach.

“Those things didn’t seem to factor in,” Hebda said. “You can’t tell for a teacher’s classroom by the way the value-added scores turned out whether she had zero percent students on free and reduced price lunch or 100 percent.”

Hebda’s 2013 testimony in two public hearings was intended to assure policymakers that everything was just swell with VAM, affirming that the merit pay provision of the 2011 Student Success Act (SB736) was going to be ready for prime time in the scheduled 2015 roll-out. No wonder the FLDOE didn’t want the actual VAM data released, as the data completely contradict Hebda’s assurances that “the model did its job.”

I certainly have been a little disappointed with the media coverage of the FLDOE losing its lawsuit and being forced to release Florida teacher VAM data this week. The Florida Times Union considers this data to be a treasure trove of information but they haven’t dug very deep into the data they fought so hard to procure. The Miami Herald barely acknowledged that anything noteworthy happened in education news this week. You would think some other journalist would have thought to cover a story about “The Worst Teacher in Florida.” I write this blog to cover teacher stories that major media outlets don’t seem interested in telling (that, and I am trying to stave off early dementia while on maternity leave). One journalist bothered to dig up the true story behind the top ten teachers in Florida. But no one has bothered telling the stories of the bottom ten. Those are the teachers who are most likely to be fired and have their teaching licenses revoked by the state. Let those stories be told. Let the public see what kinds of teachers they are at risk of losing to this absurd excuse for an “objective measure of teacher effectiveness” before it’s too late.

A Florida Media Arts Teacher on Her “VAM” Score

A media arts teacher from the state of Florida wrote a piece for The Washington Post’s The Answer Sheet, by Valerie Strauss, about her recently and publicly released VAM score, even though she is a media arts teacher who does not teach the tested subject areas, and often does not even teach many of the students whose test scores are being used to hold her accountable.

Bizarre, right? Not really, as this too is a reality facing many teachers who teach outside of the tested subject areas, or more specifically the subject areas that “don’t count,” and who teach the tested students sometimes a lot, yet sometimes never. They are being assigned “school-level” VAM scores, and these estimates, regardless of teachers’ actual contributions, are being used to make consequential decisions (e.g., in this case, about her merit pay).

She writes about “What it feels like to be evaluated on test scores of students I don’t have,” noting, more specifically, about what others “need to know about [her] VAM score.” For one, she writes, “As a media specialist, [her] VAM is determined by the reading scores of all the students in [her] school, whether or not [she] teach[es] them. [Her] support of the other academic areas is not reflected in this number.” Secondly, she writes, “Like most teachers, [she has] no idea what [her] score means. [She] know[s] that [her] VAM is related to school-wide reading scores but [she] do[es]n’t understand how it’s calculated or exactly what data is [sic] used. This number does not give [her] feedback about what [she] did for [her] students to support their academic achievement last year or how to improve [her] instruction going forward.” She also writes about issues with her school being evaluated differently from the state system given they are involved in a Gates Foundation grant, and she writes about her concerns about the lack of consistency in teacher-level scores over time, as based on her knowledge of the research. See the full article linked again here to read more.

Otherwise, she concludes with a very real question, one also being faced by many others. She writes, “[W]hy do I even care about my VAM score? Because it counts. My VAM score is a factor in determining if I am eligible for a merit pay bonus, whether I have a job in the next few years, and how many times I’ll be evaluated this year.” Of course she cares, as she and many others are being forced to care about their professional livelihoods under a system that is out of her control, out of thousands of teachers’ control, and in so many ways just simply out of control.

See also what she has to offer in terms of what she frames as a much better evaluation system, one that would really take into account her effectiveness, along with the numbers that are certainly much more indicative of her work as a teacher. These are the numbers that, if we continue to fixate on the quantification of effectiveness, actually should count.


Stanford Professor Linda Darling-Hammond at Vergara v. California

As you recall from my most recent post, this past Tuesday (March 18, 2014 – “Vergara Trial Day 28“), David C. Berliner, Regents’ Professor Emeritus at Arizona State University (ASU), testified for six hours on behalf of the defense at Vergara v. California. He spoke, primarily, about the out-of-school and in-school peer factors that impact student performance in schools, and how these factors bias all estimates based on test scores (e.g., VAMs).

Two days later, also on the side of the defense, Stanford Professor Linda Darling-Hammond also took the stand (March 20, 2014 – “Vergara Trial Day 30“). For those of you who are not familiar with Linda Darling-Hammond, or her extensive career as one of the best, brightest, and most influential scholars in the academy of education, she is the nation’s leading expert on issues related to teacher quality, teacher recruitment and retention, teacher preparation, and, related, teacher evaluation (e.g., using value-added measures).

Thanks to a friend of Diane Ravitch, an insider at the trial, below are some of the highlights of Darling-Hammond’s testimony as they pertain directly to our collective interests here on VAMboozled!

“On firing the bottom 5% of teachers…My opinion is that there are at least three reasons why firing the bottom 5 percent of teachers, as defined by the bottom 5 percent on an effectiveness continuum created by using the value-added test scores of their students on state tests, will not improve the overall effectiveness of teachers…One reason is that… value-added metrics are inaccurate for many teachers. In addition, they’re highly unstable. So the teachers who are in the bottom 5 percent in one year are unlikely to be the same teachers as who would be in the bottom 5 percent the next year, assuming they were left in place…the third reason is that when you create a system that is not oriented to attract high-quality teachers and support them in their work, that location becomes a very unattractive workplace…[we have]…empirical proof of that…situation currently in Houston, Texas [referencing my research in Houston], which has been firing many teachers at the bottom end of the value-added continuum without creating stronger overall achievement, and finding that they have fewer and fewer people who are willing to come apply for jobs in the district because with the instability of those scores, the inaccuracy and bias that they represent for groups of teachers…it’s become an unattractive place to work.”

“The statement is often made with respect to Finland that if you fire the bottom 5 percent [of teachers], we will be on a par with achievement in Finland. And Finland does none of those things. Finland invests in the quality of beginning teachers, trains them well, brings them into the classroom and supports them, and doesn’t need to fire a lot of teachers.”

“You can’t fire your way to Finland” (although this quote, also spoken by Darling-Hammond, did not come from this particular testimony).

While Students Matter (those financing this lawsuit, big time) twisted her testimony, again, like they did with the testimony of David Berliner (see the twists here), Darling-Hammond also testified about some other interesting and relevant topics. Here are some of the highlights from her testimony:

“On what a good evaluation process looks like….With respect to tenure decisions, first of all, you need to have – in the system, you need to have clear standards that you’re going to evaluate the teacher against, that express the kind of teaching practices that are expected; and a way of collecting evidence about what the teacher does in the classroom. That includes observations and may also include certain artifacts of the teacher’s work, like lesson plans, curriculum units, student work, et cetera…You need well-trained evaluators who know how to apply that instrument in a consistent and effective way…You want to have a system in which the evaluation is organized over a period of time so that the teacher is getting clarity about what they’re expected to do, feedback about what they’re doing, and so on.”

“On the problem with extending the tenure beyond two years…It’s important that while we want teachers to at some point have due process rights in their career, that that judgment be made relatively soon; and that a floundering teacher who is grossly ineffective is not allowed to continue for many years because a year is a long time in the life of a student…having the two-year mark—which means you’re making a decision usually within 19 months of the starting point of that teacher – has the interest of…encouraging districts to make that decision in a reasonable time frame so that students aren’t exposed to struggling teachers for longer than they might need to be….But at the end of the [d]ay, the most important thing is not the amount of time; the most important thing is the quality and the intensity of the evaluation and support process that goes on for beginning teachers.”

“On the benefits and importance of having a system that includes support for struggling teachers…it’s important both as a part of a due process expectation; that if somebody is told they’re not meeting a standard, they should have some help to meet that standard…in such programs, we often find that half of the teachers do improve. Others may not improve, and then the decision is more well-grounded. And when it is made, there is almost never a grievance or a lawsuit that follows because there’s [been] such a strong process of help…in the cases where the assistance may not prove adequate to help an incompetent teacher become competent, the benefit is that that teacher is going to be removed from the classroom sooner.”

ASU Regents’ Professor Emeritus David Berliner at Vergara v. California

As you (hopefully) recall from a prior post, nine “students” from the Los Angeles School District are currently suing the state of California, “arguing that their right to a good education is [being] violated by job protections that make it too difficult to fire bad [teachers].” This case is called Vergara v. California, and it is meant to challenge “the laws that handcuff schools from giving every student an equal opportunity to learn from effective teachers.” Behind these nine students stand a Silicon Valley technology magnate (David Welch), who is financing the case along with an all-star cast of lawyers, and Students Matter, the organization founded by said Welch.

This past Tuesday (March 18, 2014 – “Vergara Trial Day 28“), David C. Berliner, Regents’ Professor Emeritus here at Arizona State University (ASU), who also just happens to be my forever mentor and academic luminary, took the stand. He spoke, primarily, about the out-of-school factors that impact student performance in schools and how these factors bias all estimates based on test scores (often regardless of the controls used – see a recent post about this evidence of bias here).

As per a recent post by Diane Ravitch (thanks to an insider at the trial), Berliner said:

“The public and politicians and parents overrate the in-school effects on their children and underrate the power of out-of-school effects on their children.” He noted that in-school factors account for just 20 percent of the variation we see in student achievement scores.
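To put Berliner’s 20 percent figure into a stylized form (the notation here is mine, for illustration only, not Berliner’s), think of a simple decomposition of the variance in students’ achievement scores:

$$\operatorname{Var}(\text{score}) \;=\; \underbrace{\sigma^2_{\text{out-of-school}}}_{\approx\,80\%} \;+\; \underbrace{\sigma^2_{\text{in-school}}}_{\approx\,20\%}, \qquad \sigma^2_{\text{teacher}} \;<\; \sigma^2_{\text{in-school}}.$$

Because teachers are only one of several in-school factors (alongside peers, curricula, leadership, and the like), the share of score variation attributable to individual teachers is necessarily something well below that 20 percent.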

He also discussed value-added models and the problems with solely relying on these models for teacher evaluation. He said, “My experience is that teachers affect students incredibly. Probably everyone in this room has been affected by a teacher personally. But the effect of the teacher on the score, which is what’s used in VAM’s, or the school scores, which is used for evaluation by the Feds — those effects are rarely under the teacher’s control…Those effects are more often caused by or related to peer-group composition…”

Now, Students Matter has offered an interesting (and not surprising) take on Berliner’s testimony (given their own slant/biases in support of this case), which can also be found at Vergara Trial Day 28. But please read it with caution, as the author(s) of this summary, let’s say, twisted some of the truths in Berliner’s testimony.

Berliner’s reaction? “Boy did they twist it. Dirty politics.” Hmm…

Research Study: VAM-Based Bias

A study by researchers from Indiana and Michigan State Universities, released in the fall of 2012 but recently circulated through my email again (thanks to Diane Ravitch), deserves a special post here as it relates not only to VAMs in general but also to the extent to which all VAM models yield biased results.

In this study (albeit still not peer reviewed, so please interpret accordingly), researchers “investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of…student grouping and teacher assignment scenarios.” Researchers find that no VAM “accurately captures true teacher effects in all scenarios, and the potential for misclassifying teachers as high- or low-performing can be substantial [emphasis added].”

While these researchers suggest different statistical controls to yield less biased results (i.e., a dynamic ordinary least squares [DOLS] estimator), the bottom line is that VAMs cannot “effectively isolate the ‘true’ contribution of teachers and schools to achievement growth” over time. Whether this will ever be possible is highly suspect, given mainly the extraneous variables that are outside the control of teachers and schools, but that continue to confound and complicate VAM-based estimates, rendering them (still) unreliable and invalid, particularly for the high-stakes decision-making purposes with which VAMs are increasingly being tasked.

The only way we might reach truer/more valid and less biased results is to randomly assign students and teachers to classrooms, which, as evidenced in an article one of my doctoral students and I recently had published in the highly esteemed American Educational Research Journal, is highly impractical, professionally unacceptable, and realistically impossible. Hence, “[i]f higher achieving students are grouped within certain schools and lower achieving students in others, then the teachers in the high-achieving schools, regardless of their true teaching ability, will [continue to] have higher probabilities of high-achieving classrooms. Similarly, if higher ability teachers are grouped within certain schools and lower ability teachers in others, then students in the schools with better teachers will [continue to] realize higher gains.” This nonrandom sorting exacerbates the bias issues immensely.
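To make this nonrandom sorting mechanism concrete, below is a minimal simulation sketch, written in Python. It is my own illustration with hypothetical parameter values, not the researchers’ actual code or models. It compares a naive gain-score “VAM” (each teacher’s effect estimated as mean classroom growth) under random assignment versus under sorting, where classrooms differ systematically in prior achievement and prior achievement itself predicts growth:

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 200, 25

# True teacher effects, in student-level test-score SD units (assumed scale).
true_effect = rng.normal(0.0, 0.15, n_teachers)

def naive_vam(sorting_sd):
    """Estimate each teacher's effect as mean classroom score growth,
    with classroom prior-achievement means that vary by `sorting_sd`."""
    class_mean = rng.normal(0.0, sorting_sd, n_teachers)  # between-class sorting
    estimates = np.empty(n_teachers)
    for j in range(n_teachers):
        prior = class_mean[j] + rng.normal(0.0, 1.0, class_size)
        # The confound: prior achievement itself predicts growth, so the naive
        # estimator absorbs classroom composition into the "teacher effect."
        growth = true_effect[j] + 0.3 * prior + rng.normal(0.0, 0.3, class_size)
        estimates[j] = growth.mean()
    return estimates

for label, sorting_sd in [("random assignment", 0.0), ("non-random sorting", 0.8)]:
    est = naive_vam(sorting_sd)
    flagged = est <= np.quantile(est, 0.05)            # "bottom 5%" by VAM
    truly_low = true_effect <= np.quantile(true_effect, 0.05)
    misclassified = np.mean(~truly_low[flagged])       # flagged but not truly low
    print(f"{label}: corr(true, est) = "
          f"{np.corrcoef(true_effect, est)[0, 1]:.2f}, "
          f"misclassified among flagged = {misclassified:.0%}")
```

Under the sorted scenario, the teachers flagged at the bottom are largely those assigned lower-achieving classrooms, not those with the lowest true effects, which is precisely the kind of nontrivial misclassification the researchers report.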

The researchers write, as well, that “it is clear that every estimator has an Achilles heel (or more than one area of potential weakness).” While VAMs seem to have plenty of potential and very real weaknesses, VAM-based bias is one weakness that certainly stands out, here and elsewhere, especially in that so many pro-VAM statisticians believe, and continue to perpetuate beliefs about, how their complex statistics (e.g., shrinkage estimators) can (miraculously) control for everything and all things causing chaos. As evidenced in this study, in the notable work of Jesse Rothstein, Associate Professor at UC Berkeley (see two of his articles here), and in other studies cited in the aforementioned study (linked again here), this is not and likely never will be the case. It just isn’t!

Finally, these researchers conclude that, “even in the best scenarios and under the simplistic and idealized conditions…the potential for misclassifying above average teachers as below average or for misidentifying the ‘worst’ or ‘best’ teachers remains nontrivial.” Accordingly, misclassification rates can range “from at least seven to more than 60 percent” depending on the statistical controls and estimators used and the moderately to highly non-random student sorting practices and scenarios across schools.

Full study citation: Guarino, C. M., Reckase, M. D., & Wooldridge, J. M. (2012, December 12). Can value-added measures of teacher performance be trusted? East Lansing, MI: The Education Policy Center at Michigan State University. Retrieved from http://education.msu.edu/epc/library/documents/WP18Guarino-Reckase-Wooldridge-2012-Can-Value-Added-Measures-of-Teacher-Performance-Be-T_000.pdf

VAMs “Contribute to Student Learning?”

On Friday, The Florida Times-Union released an opinion-editorial (Op-Ed) titled “VAM Data Helps [sic] Contribute to Student Learning.” With curiosity, I thought this might be the first Op-Ed to suggest, hopefully with at least some evidence, that VAM-based data can be used in some type of formative way(s). Perhaps the author might know how, or better yet, have research evidence to support the title of his/her Op-Ed piece?

So I thought, in many ways unfortunately in vain, as I can (still) only hope that at least something positive is coming from all of this VAM-based nonsense (e.g., increased student learning).

Unfortunately, however, after reading the first paragraph, I could have predicted that this piece did not come from an educator. Rather, the Op-Ed was written by Gary Chartrand, the chairman of Florida’s State Board of Education.

His claims? All hoaxes, as Diane Ravitch would put it, all of which he advances in this Op-Ed without any research evidence whatsoever in support, but simply because he “believes” in the claims he advances. Recall when people believed the world was flat? Just because people believed this did not mean it was true, did it (see note below)?

Some of his most outlandish (and unfortunately false) claims and beliefs include the following:

  • Those who are critical of VAMs, particularly in Florida, are “seek[ing] to reverse the hard work of teachers and school districts in using this [sic] data to help inform teacher practice and performance.”
  • VAMs “provide a more in-depth and realistic look at classroom practices that the best teachers [can] use every day [emphasis added] to improve instruction and student learning.”
  • “It is only when these important data sources [e.g., VAM estimates and observational data] are considered together that we begin to see the full picture of an individual’s performance and can determine how much our teachers and principals are contributing to our students’ learning.”

Unfortunately, again, all of these claims are false, and false for so many reasons already detailed on this blog. For purposes of brevity, suffice it to say that no research evidence exists, to date, to support any of the above claims, or any of the other claims written into this piece.

I have read over 700 research articles, technical reports, news stories, and other Op-Eds just like this one, and not one of them written from a pro-VAM perspective has ever been authored by a teacher or administrator working in America’s public schools and living out the realities of these systems in practice.

If one of you is out there, please do write, because I would love to know that at least somebody with hands-on experience with these data can actually evidence, if not just suggest, that using VAM-based data does improve student learning. I would honestly like to be wrong in this case.

Note: Thanks to Dr. James Banks for a recent conversation about this (i.e., beliefs versus research-based truths) during his Inside the Academy interview.

What is “Value-Added” in Agriculture?

An interesting post came through my email defining “value-added” in its purest form. This comes from the field of agriculture, where value-added is often used to model genetic and reproductive trends among livestock, and from which it was taken and applied to the “field” of education in the 1980s.

Here’s the definition: “Value-Added is the process of taking a raw commodity and changing its form to produce a high quality end product. Value-Added is defined as the addition of time, place, and/or form utility to a commodity in order to meet the tastes/preferences of consumers. In other words, value-added is figuring out what consumers want, when they want it, and where they want it – then mak[ing] it and provid[ing] it to them.”

In education, the simplest of translations follows: “Value-Added is the process of taking learning (i.e., a raw material) and changing its form (i.e., via teaching and instruction) to produce a high quality end product (i.e., high test scores). Value-Added is defined as the addition of ‘value’ in terms of changing learning’s most observable characteristics (i.e., test scores) in order to meet the (highly politicized) tastes/preferences of consumers. In other words, value-added is figuring out what consumers want, when they want it, and where they want it – then mak[ing] it and provid[ing] it to them.”

If only it were as simple as that. Most unfortunate is that most policymakers, being non-educators/educationists yet self-identified education experts, cannot get past this overly simplified definition and translation, and its shallow degree of depth.
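For readers who want the quantitative version of that education “translation,” a generic covariate-adjustment specification, common in spirit (though not in detail) across education VAMs, and written in my own notation rather than any particular vendor’s, looks something like this:

$$A_{it} \;=\; \lambda A_{i,t-1} \;+\; X_{it}\beta \;+\; \theta_{j(i,t)} \;+\; \varepsilon_{it},$$

where $A_{it}$ is student $i$’s test score in year $t$, $A_{i,t-1}$ is the prior-year score, $X_{it}$ holds whatever student characteristics the modelers choose to control for, $\theta_{j(i,t)}$ is the estimated “value added” by the student’s teacher $j$, and $\varepsilon_{it}$ is everything else. The catch, per the posts above, is that $\theta$ is simply the residual credited to the teacher after the controls do their work, so anything the controls miss (peers, out-of-school factors, nonrandom sorting) lands in the “teacher effect” as well.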