An AZ Teacher’s Perspective on Her “Value-Added”

This came to me from a teacher in my home state – Arizona. Read not only what has become an all-too-familiar story, but also her perspective on whether she is the only one “adding value” (and I use that term very loosely here) to her students’ learning and achievement.

She writes:

Initially, the focus of this note was going to be my six-year-long experience with a seemingly ever-changing educational system. I was going to list, with some detail, all the changes that I have seen in my brief time as a K-6 educator, the end-user of educational policy and budget cuts. Changes like (in no particular order):

  • Math standards (2008?)
  • Common Core implementation and associated instructional shifts (2010?)
  • State accountability system (2012?)
  • State requirements related to ELD classrooms (2009?)
  • Teacher evaluation system (to include a new formula of classroom observation instrument and value-added measures) (2012-2014)
  • State laws governing teacher evaluation/performance, labeling and contracts (2010?)

have happened in a span of not much more than three years. And all these changes have happened against a backdrop of budget cuts severe enough, in my school district, to render librarians, counselors, and data coordinators extinct. In this note, I was going to ask, rhetorically: “What other field or industry has seen this much change this quickly, and why?” or “How can any field or industry absorb this much change effectively?”

But then I had a flash of focus just yesterday during a meeting with my school administrators, and I knew immediately the simple message I wanted to relay about the interaction of high-stakes policies and the real world of a school.

At my school, we have entered what is known as “crunch time”—the three-month-long period leading up to state testing. The purpose of the meeting was to roll out a plan, commonly used by my school district, to significantly increase test scores in math via a strategy of leveled grouping. The plan dictates that my homeroom students will be assigned to groups based on benchmark testing data and will then be sent out of my homeroom to other teachers for math instruction for the next three months. In effect, I will be teaching someone else’s students, and another teacher will be teaching my students.

But, wearisomely, sometime after this school year, a formula will be applied to my homeroom students’ state test scores in order to determine close to 50% of my performance. And then another formula (to include classroom observations) will be applied to convert this performance into a label (ineffective, developing, effective, highly effective) that is then reported to the state. And so my question now is (not rhetorically!), “Whose performance is really being measured by this formula—mine or the teachers who taught my students math for three months of the school year?” At best, professional reputations are at stake–at worst, employment is.
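For readers less familiar with how these composite evaluation systems typically work, here is a minimal, hypothetical sketch of the kind of calculation she describes: a value-added score and a classroom observation score are weighted together, and the result is converted into a label. The weights, cut scores, and labels below are illustrative assumptions only, not Arizona’s actual formula.

```python
# Hypothetical sketch of a composite teacher-evaluation rating of the kind the
# teacher describes: a value-added score and a classroom-observation score are
# combined (50/50 here, per the "close to 50%" she mentions) and the result is
# mapped to one of four labels. All weights, cut scores, and labels are
# illustrative assumptions; this is NOT Arizona's actual formula.

def composite_rating(vam_percentile: float, observation_score: float) -> str:
    """vam_percentile: 0-100 rank produced by the value-added model;
    observation_score: 0-100 score from classroom observations."""
    overall = 0.5 * vam_percentile + 0.5 * observation_score

    if overall >= 85:          # illustrative cut scores only
        return "highly effective"
    elif overall >= 65:
        return "effective"
    elif overall >= 40:
        return "developing"
    else:
        return "ineffective"

# The teacher's concern: the vam_percentile attached to her name is computed from
# her homeroom students' math scores, even though another teacher taught those
# students math for three months of the year.
print(composite_rating(vam_percentile=38, observation_score=90))  # -> developing
```

Her point, of course, is about the first input: the value-added score attached to her name will be computed from her homeroom students’ math scores, even though other teachers delivered that math instruction for three months.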

New Research Studies Just Released

Last year, the University of Arizona in Tucson, AZ hosted a conference that included multiple presentations from scholars throughout the country, all of whom are doing research on value-added models (VAMs). The scholars who presented their research include: Thomas Good, Ronald Marx, and Alyson Lavigne from the University of Arizona, Spyros Konstantopoulos from Michigan State University, Heather Hill and a team of her colleagues from Harvard University, David Berliner from Arizona State University, Rick Ginsberg and Neal Kingston from the University of Kansas, and myself with my former graduate student Clarin Collins from Arizona State University.

Teachers College Record has since released each of the authors’ research pieces that were presented. Click here to access/read the new research studies, and click here to read the foreword, written by Diane Ravitch from New York University, introducing each of the studies. To view the presentations each of the above-mentioned scholars made at the conference, click here.

We will summarize each of these studies in the weeks to come, mainly for those of you without the time to read each one in full. In the meantime, please take a look, as each of these should inform our current thinking on these topics, not to mention policymakers’ thinking and beliefs.

The Study That Keeps On Giving…

About two months ago, I posted (1) a critique of a highly publicized Mathematica Policy Research study released to the media about the vastly overstated “value” of value-added measures, and (2) another critique of a study released to the media by the National Bureau of Economic Research (NBER). The latter, like the former, was not peer-reviewed, or even internally reviewed, yet it was released despite its major issues (e.g., overstated findings about VAMs based on a sample in which only 17% of teachers actually had value-added data).

Again, neither study went through a peer review process, both were fraught with methodological and conceptual issues that did not warrant their findings, and both, regardless, were released to the media for wide dissemination.

Yet again, VAM enthusiasts are attempting to VAMboozle policymakers and the general public with another faulty study, again released by the National Bureau of Economic Research (NBER). But, in an unprecedented move, this time NBER has released the same, highly flawed study three times, even though the study, first released in 2011, still has not made it through peer review to official publication and has, accordingly, proved itself nothing more than a technical report with major methodological issues.

In the first study (2011), Raj Chetty (Economics Professor at Harvard), John Friedman (Assistant Professor of Public Policy at Harvard), and Jonah Rockoff (Associate Professor of Finance and Economics at Columbia) conducted value-added analyses on a massive data set and (over-simplistically) presented (highly questionable) evidence favoring teachers’ long-lasting, enduring, and in some cases miraculous effects. While some of the findings would have been very welcome to the profession, had they indeed been true (e.g., high value-added teachers substantively affect students’ incomes in their adult years), the study’s authors greatly overstated their findings, and they did not consider alternative hypotheses in terms of what other factors besides teachers might have caused the outcomes they observed (e.g., those things that happen outside of schools).

Accordingly, and more than appropriately, this study has only been critiqued since, in subsequent attempts to undo what should not have been done in the first place (thanks to both the media and the study’s authors, given the exaggerated spin they put on their results). See, for example, one peer-reviewed critique here, two others conducted by well-known education scholars (i.e., Bruce Baker [Education Professor at Rutgers] and Dale Ballou [Associate Professor of Education at Vanderbilt]) here and here, and another released by the Institute of Education Sciences’ What Works Clearinghouse here.

Maybe in response to their critics, maybe to drive the false findings into more malformed policies, maybe because Chetty (the study’s lead author) just received the John Bates Clark Medal awarded by the American Economic Association, or maybe just to have the last word, NBER just released the same exact paper in two more installments. See the second and third releases, positioned as Part I and Part II; they are essentially the same paper being promulgated yet again. While the authors acknowledge as much on the first page of each of the two, it is pretty unethical to go a second round given all of the criticism, the positive and negative press this “working paper” received after its original release(s), and the fact that the study has still not made it through to print in a peer-reviewed journal.

*Thanks to Sarah Polasky for helping with this post.

More from an English Teacher in North Carolina

The same English teacher, Chris Gilbert, whom I referenced in a recent post, just wrote yet another great piece in The Washington Post.

He writes about an automated phone call he received informing him (and the rest of his colleagues) that the top 25% of teachers in his district were to be offered four-year contracts and an additional $500 per year in exchange for relinquishing their tenure rights. This was recently added to a slew of legislative actions in his state of North Carolina including, but not limited to, another year without pay increases (making this the fifth year without increases), no more tenure, no more salary increases for earning master’s/doctoral degrees, and no more class-size caps.

The problems with just this 25% policy, however, and as he writes, include the following: the “policy reflects the view that teachers are inadequately motivated to do their jobs;” it implies, without any evidence, that only an arbitrarily set “25% of a district’s teachers deserve a raise;” it facilitates a “culture of competition [that] kills the collaboration that is integral to effective education;” “[t]he idea that a single teacher’s influence can be isolated [using VAMs] is absurd;” and, just in general, this policy “reflects a myopic approach to reform.”

David Berliner’s “Thought Experiment”

My main mentor, David Berliner (Regents Professor at Arizona State University), wrote a “Thought Experiment” that Diane Ravitch posted on her blog yesterday. I have pasted the full contents here for those of you who may have missed it. Do take a read, play along, and see if you can predict which state will yield higher test performance in the end.

—–

Let’s do a thought experiment. I will slowly parcel out data about two different states. Eventually, when you are nearly 100% certain of your choice, I want you to choose between them by identifying the state in which an average child is likely to be achieving better in school. But you have to be nearly 100% certain that you can make that choice.

To check the accuracy of your choice I will use the National Assessment of Educational Progress (NAEP) as the measure of school achievement. It is considered by experts to be the best indicator we have to determine how children in our nation are doing in reading and mathematics, and both states take this test.

Let’s start. In State A the percent of three- and four-year-old children attending a state-associated prekindergarten is 8.8%, while in State B the percent is 1.7%. With these data, think about where students might be doing better in 4th and 8th grade, the grades at which NAEP evaluates student progress in all our states. I imagine that most people will hold onto this information about preschool for a while and not yet want to choose one state over the other. A cautious person might rightly say it is too soon to make such a prediction based on a difference of this size, on a variable that has modest, though real, effects on later school success.

So let me add more information to consider. In State A the percent of children living in poverty is 14%, while in State B the percent is 24%. Got a prediction yet? See a trend? How about this related statistic: In State A the percent of households with food insecurity is 11.4%, while in State B the percent is 14.9%. I can also inform you that in State A the percent of people without health insurance is 3.8%, while in State B the percent is 17.7%. Are you getting the picture? Are you ready to pick one state over another in terms of the likelihood that one state has its average student scoring higher on the NAEP achievement tests than the other?

If you still say that this is not enough data to make yourself almost 100% sure of your pick, let me add more to help you. In State A the per capita personal income is $54,687, while in State B the per capita personal income is $35,979. Since per capita personal income in the country is now at about $42,693, we see that State A is considerably above the national average and State B is considerably below the national average. Still not ready to choose a state where kids might be doing better in school?

Alright, if you are still cautious in expressing your opinions, here is some more to think about. In State A the per capita spending on education is $2,764, while in State B the per capita spending on education is $2,095, about 25% less. Enough? Ready to choose now?

Maybe you should also examine some statistics related to the expenditure data, namely, that the pupil/teacher ratio (not the class sizes) in State A is 14.5 to one, while in State B it is 19.8 to one.

As you might now suspect, class size differences also occur in the two states. At the elementary and the secondary level, respectively, the class sizes for State A average 18.7 and 20.6. For State B those class sizes at elementary and secondary are 23.5 and 25.6, respectively. State B, therefore, averages at least 20% higher in the number of students per classroom. Ready now to pick the higher achieving state with near 100% certainty? If not, maybe a little more data will make you as sure as I am of my prediction.

In State A the percent of those who are 25 years of age or older with bachelor’s degrees is 38.7%, while in State B that percent is 26.4%. Furthermore, the two states have just about the same size population. But State A has 370 public libraries and State B has 89.

Let me try to tip the data scales for what I imagine are only a few people who are reluctant to make a prediction. The percent of teachers with master’s degrees is 62% in State A and 41.6% in State B. And the average public school teacher salary in the time period 2010-2012 was $72,000 in State A and $46,358 in State B. Moreover, during the time period from the academic year 1999-2000 to the academic year 2011-2012, the percent change in average teacher salaries in the public schools was +15% in State A. Over that same time period, in State B, public school teacher salaries dropped by 1.8%.

I will assume by now we almost all have reached the opinion that children in State A are far more likely to perform better on the NAEP tests than will children in State B. Everything we know about the ways we structure the societies we live in, and how those structures affect school achievement, suggests that State A will have higher-achieving students. In addition, I will further assume that if you don’t think that State A is more likely to have higher-performing students than State B, you are a really difficult and very peculiar person. You should seek help!

So, for the majority of us, it should come as no surprise that in the 2013 data set on the 4th grade NAEP mathematics test, State A was the highest-performing state in the nation (tied with two others). And it had 16 percent of its children scoring at the Advanced level—the highest level of mathematics achievement. State B’s score was behind 32 other states, and it had only 7% of its students scoring at the Advanced level. The two states were even further apart on the 8th grade mathematics test, with State A the highest-scoring state in the nation, by far, and with State B lagging behind 35 other states.

Similarly, it now should come as no surprise that State A was number 1 in the nation on the 4th grade reading test, although tied with two others. State A also had 14% of its students scoring at the Advanced level, the highest rate in the nation. Students in State B scored behind 44 other states, and only 5% of its students scored at the Advanced level. The 8th grade reading data told the same story: State A walloped State B!

States A and B really exist. State B is my home state of Arizona, which obviously cares not to have its children achieve as well as do those in State A. Its poor achievement is by design. Proof of that is not hard to find. We just learned that 6,000 phone calls reporting child abuse to the state went uninvestigated. Ignored and buried! Such callous disregard for the safety of our children can only occur in an environment that fosters, and then condones, a lack of concern for the children of Arizona, perhaps because they are often poor and often minorities. Arizona, given the data we have, apparently does not choose to take care of its children. The agency with the express directive of ensuring the welfare of children may need 350 more investigators of child abuse. But the governor and the majority of our legislature are currently against increased funding for that agency.

State A, where kids do a lot better, is Massachusetts. It is generally a progressive state in politics. To me, Massachusetts, with all its warts, resembles Northern European countries like Sweden, Finland, and Denmark more than it does states like Alabama, Mississippi, or Arizona. According to UNESCO data and epidemiological studies, it is the progressive societies like those in Northern Europe and Massachusetts that care much better for their children. On average, in comparisons with other wealthy nations, the U.S. turns out not to take good care of its children. With few exceptions, our politicians appear less likely to kiss our babies and more likely to hang out with individuals and corporations that won’t pay the taxes needed to care for our children, thereby ensuring that our schools will not function well.

But enough political commentary: Here is the most important part of this thought experiment for those who care about education. Every one of you who predicted that Massachusetts would outperform Arizona did so without knowing anything about the unions’ roles in the two states, the curriculum used by the schools, the quality of the instruction, the quality of the leadership of the schools, and so forth. You made your prediction about achievement without recourse to any of the variables the anti-public school forces love to shout about – incompetent teachers, a dumbed-down curriculum, coddling of students, not enough discipline, not enough homework, and so forth. From a few variables about life in two different states, you were able to predict differences in student achievement test scores quite accurately.

I believe it is time for the President, the Secretary of Education, and many in the press to get off the backs of educators and focus their anger on those who will not support societies in which families and children can flourish. Massachusetts still has many problems to face and overcome—but they are nowhere near as severe as those in my home state and a dozen other states that will not support programs for neighborhoods, families, and children to thrive.

This little thought experiment also suggests that a caution for Massachusetts is in order. It seems to me that despite all their bragging about their fine performance on international tests and NAEP tests, it’s not likely that Massachusetts’ teachers, or their curriculum, or their assessments are the basis of their outstanding achievements in reading and mathematics. It is much more likely that Massachusetts is a high-performing state because it has chosen to take better care of its citizens than do those of us living in other states. The roots of high achievement on standardized tests are less likely to be found in the classrooms of Massachusetts and more likely to be discovered in its neighborhoods and families, a reflection of the prevailing economic health of the communities served by the schools of that state.

One Teacher’s Dystopian Reality

Chris Gilbert, an English teacher from North Carolina, a state that uses the well-known and widely used (and also proprietary) Education Value-Added Assessment System (EVAAS), emailed the other day, sharing two articles he wrote for The Washington Post, on behalf of his fellow teachers, about his experiences being evaluated using the EVAAS system.

This one (click here) is definitely worth a full read, especially because it comes directly from an educator living out VAMs in practice, in the field, and in what he terms his dystopian reality.

He writes: “In this dystopian story, teachers are evaluated by standardized test scores and branded with color-coded levels of effectiveness, students are abstracted into inhuman measures of data, and educational value is assessed by how well forecasted “growth” levels are met. Surely, this must be a fiction.”


The Gates Foundation and its “Strong Arm” Tactics

Following up on VAMboozled!’s most recent post, about the Bill & Melinda Gates Foundation’s $45 million worth of bogus Measures of Effective Teaching (MET) studies that were recently honored with a 2013 Bunkum (i.e., meaningless, irrelevant, junk) Award by the National Education Policy Center (NEPC), it seems that the Bill & Melinda Gates Foundation is, once again, “strong-arming states [and in this case a large city district] into adoption of policies tying teacher evaluation to measures of students’ growth.”

According to Nonprofit Quarterly, the Gates Foundation is now threatening to pull an unprecedented $40 million grant from Pittsburgh’s Public Schools “because the foundation is upset with the lack of an agreement between the school district and the teachers’ union over a core element of the grant” — the use of test scores to measure teachers’ value-added and to “reward exceptional teachers and retrain those who don’t make the grade.”

More specifically, the district and its teachers have not come to an agreement about how teachers should be evaluated, and rightfully so: teachers understand better than most (even some VAM researchers) that these models are grossly imperfect. They are largely biased by the types of students non-randomly assigned to teachers’ classrooms and schools; highly unstable (i.e., grossly fluctuating from one year to the next when, if reliable, they should remain more or less consistent over time); invalid (i.e., they lack face validity in that they often contradict other valid measures of teacher effectiveness); and the like.
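To make the instability point concrete: one simple check researchers run is to correlate teachers’ value-added estimates in one year with the estimates for the same teachers the following year. Here is a minimal sketch using simulated, hypothetical data (not any district’s actual data); the teacher-effect and noise values are assumptions chosen only to illustrate how a noisy estimate produces low year-to-year correlations and heavy churn in who gets labeled “ineffective,” even when the teachers themselves have not changed.

```python
# Minimal illustration (simulated data only): when estimation error is large
# relative to true differences between teachers, value-added estimates for the
# same teachers correlate weakly from one year to the next, and many teachers
# labeled "low" in one year escape that label the next.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 1000

true_effect = rng.normal(0.0, 1.0, n_teachers)   # hypothetical stable teacher effects
noise_sd = 2.0                                    # assumed estimation error (illustrative)

vam_year1 = true_effect + rng.normal(0.0, noise_sd, n_teachers)
vam_year2 = true_effect + rng.normal(0.0, noise_sd, n_teachers)

r = np.corrcoef(vam_year1, vam_year2)[0, 1]
print(f"Year-to-year correlation of estimates: {r:.2f}")

# How many "bottom quartile" teachers in year 1 are no longer in the bottom
# quartile in year 2? Large churn here is the practical face of instability.
bottom_y1 = vam_year1 <= np.quantile(vam_year1, 0.25)
escaped = np.mean(vam_year2[bottom_y1] > np.quantile(vam_year2, 0.25))
print(f"Year-1 bottom-quartile teachers not in the bottom quartile in year 2: {escaped:.0%}")
```

With these particular assumptions the correlation comes out around 0.2, roughly the low end of the year-to-year stability figures commonly reported in the VAM literature; the exact number matters less than the pattern it illustrates.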

It seems, also, that Randi Weingarten, having recently taken a position against VAMs (as posted in VAMboozled! here and here), has “added value,” at least in terms of the extent to which teachers in Pittsburgh are (rightfully) exercising more authority and power over the ways in which they are to be evaluated. Unfortunately, however, money talks, and $40 million of it is a lot to give up for a publicly funded district like this one in Pittsburgh.

Elliot Eisner Obituary

Following up on our recent farewell to Elliot Eisner, here is his obituary just released by the Stanford School of Education.

The most pertinent tributes taken from this piece, also for readers of this blog, follow:

“Eisner eschewed the more popular argument for the arts — that some research showed music, dance, and painting actually boosted test scores in math and science. Eisner, rather, talked about art for art’s sake.”

“He figured out that there was something missing from mainstream educational theory and method,” said his friend and Stanford colleague Professor Raymond McDermott. “He wanted to address matters of the heart, whereas most of the discipline was pushing a more mechanical view of the child and the act of teaching or researching.”

“Eisner’s unrelenting advocacy of the arts continued during periods in which arts programs were cut in schools, and a chorus of administrators and policymakers, faced with budget constraints, focused on test scores, worried that spending time painting or drawing was not academic enough… One of the casualties of our preoccupation with test scores is the presence — or should I say the absence — of arts in our schools,” he wrote in the Los Angeles Times in 2005. “When they do appear they are usually treated as ornamental rather than substantive aspects of our children’s school experience. The arts are considered nice but not necessary.” Eisner advocated a strict, more sophisticated and rigorous arts curriculum that would put arts instruction on par with lessons in reading, science and math.

“His work with the Getty Center advanced what is called Discipline-Based Art Education.  The curriculum structure advocated in DBAE stresses four aspects of the arts: making it, appreciating it, understanding it and making judgments about it.”

“His voice for evaluating teaching and student learning through many means, not just standardized testing, continued to be heard during the past three decades of standards-based school reform, testing and accountability,” said Larry Cuban, professor emeritus of education at Stanford. “Eisner’s eloquence in writing and speech gave heart to and bolstered many educators who felt that the humanities, qualitative approaches to evaluation and artistic criticism had been hijacked by those who wanted only numbers as a sign of effectiveness.”

*In lieu of flowers, the family requests donations to the National Art Education Association’s Elliot Eisner Lifetime Achievement Award, established by the Eisners to recognize individuals in art education whose career contributions have benefited the field.  The address for the NAEA is: 1806 Robert Fulton Drive, Suite 300, Reston, Virginia 20191.

The 2013 Bunkum Awards & the Gates Foundation’s $45 Million MET Studies

’Tis the award season, and during this time every year the National Education Policy Center (NEPC) recognizes the “lowlights” in educational research over the previous year with its annual Bunkum Awards. To view the entertaining video presentation of the awards, hosted by my mentor David Berliner (Arizona State University), please click here.

Lowlights, specifically defined, include research studies in which researchers present, and often oversell thanks to many media outlets, “weak data, shoddy analyses, and overblown recommendations.” The Bunkums are to educational research what the Razzies are to the Oscars in film. And like the Razzies, “As long as the bunk [like junk] keeps flowing, the awards will keep coming.”

As per David Berliner, in his introduction to the video, “the taxpayers who finance public education deserve smart [educational] policies based on sound [research-based] evidence.” This is precisely why these awards are both necessary and morally imperative.

One among this year’s deserving honorees is of particular pertinence here. This is (drum roll, please) the ‘We’re Pretty Sure We Could Have Done More with $45 Million’ Award — awarded to the Bill & Melinda Gates Foundation for two culminating reports they released this year from their Measures of Effective Teaching (MET) Project. To see David’s presentation of this award specifically, scroll to minutes 3:15 to 4:30 in the aforementioned video.

Those at NEPC write about these studies: “We think it important to recognize whenever so little is produced at such great cost. The MET researchers gathered a huge data base reporting on thousands of teachers in six cities. Part of the study’s purpose was to address teacher evaluation methods using randomly assigned students. Unfortunately, the students did not remain randomly assigned and some teachers and students did not even participate. This had deleterious effects on the study–limitations that somehow got overlooked in the infinite retelling and exaggeration of the findings.

When the MET researchers studied the separate and combined effects of teacher observations, value-added test scores, and student surveys, they found correlations so weak that no common attribute or characteristic of teacher-quality could be found. Even with 45 million dollars and a crackerjack team of researchers, they could not define an “effective teacher.” In fact, none of the three types of performance measures captured much of the variation in teachers’ impacts on conceptually demanding tests. But that didn’t stop the Gates folks, in a reprise from their 2011 Bunkum-winning ways, from announcing that they’d found a way to measure effective teaching, nor did it deter the federal government from strong-arming states into adoption of policies tying teacher evaluation to measures of students’ growth.”

To read the full critique of both of these studies, written by Jesse Rothstein (University of California – Berkeley) and William Mathis (University of Colorado – Boulder), please click here.

The Passing of Dr. Elliot Eisner – Stanford

It saddens me to announce, for those of you who do not know already, that the wonderful scholar and person, Dr. Elliot Eisner, passed away last weekend due to complications of Parkinson’s disease and pneumonia.

Elliot, professor emeritus at Stanford University, widely known for his contributions to art education, curriculum studies, and qualitative research methods, dedicated his career to advancing the role of the arts in education. He lectured throughout the world, received five honorary doctoral degrees, received numerous awards, and, beyond scholarly journal articles, authored/edited sixteen books (e.g., Educating Artistic Vision (1972), The Educational Imagination (1979), Cognition and Curriculum (1982), The Enlightened Eye (1991), The Kind of Schools We Need (1998), Arts Based Research (2011, with Tom Barone)).

I had the pleasure of getting to know Elliot personally, starting about two years ago, when I interviewed him about not only his scholarly accomplishments but also his extraordinary life and history as such an extraordinary person. What I most admired in him from a scholarly standpoint was his continuous dedication to keeping the arts alive in America’s public schools. We agreed that tests, of more interest nowadays than ever before, are not the things that should really “count” the most in America’s public schools. What I most admired about him as a person? His sense of humility, his passion for not only his life but the lives of others, his aesthetic sense of beauty, his keen capacity to find beauty elsewhere and to make wise and fine distinctions about it, his wonderful wife and family, and the like.

May he rest in peace, and rest assured that his legacy will go on well past the precious time he devoted to those of us in the education profession.

To see the interview I conducted with him click here. In this link you can also find a photo gallery including pictures of Elliot and his family, a series of wonderful tributes his friends, family members and colleagues wrote on his behalf, a list of his scholarly accomplishments, and the like. For the shorter, YouTube version of the above, click here.

A memorial symposium will be held at the 2014 American Educational Research Association (AERA) annual meeting in Philadelphia in April.