The ACT Testing Corporation (Unsurprisingly) Against America’s Opt-Out Movement


The Research and Evaluation division of the ACT testing corporation — ACT, Inc., the nonprofit also famously known for developing the college-entrance ACT test — recently released a policy issue brief titled “Opt-Outs: What Is Lost When Students Do Not Test.” What an interesting read, especially given ACT’s position and perspective as a testing company that is also likely being impacted by America’s opt-out-of-testing movement. Should it not be a rule that people writing on policy issues disclose all potential conflicts of interest? They did not here…

Regardless, last year throughout the state of New York, approximately 20% of students opted out of statewide testing. In the state of Washington more than 25% of students opted out. Large and significant numbers of students also opted out in Colorado, Florida, Oregon, Maine, Michigan, New Jersey, and New Mexico. Students are opting out, primarily because of community, parent, and student concerns about the types of tests being administered, the length and number of the tests administered, the time that testing and testing preparation takes away from classroom instruction, and the like.

Because many states also rely on ACT tests for statewide purposes, not just for college entrance exams, this is clearly of concern to ACT, Inc. But rather than the corporation rightfully positioning itself on this matter as a company with clear vested interests, ACT Issue Brief author Michelle Croft frames the piece as a genuine plea to help others understand why they should reject the opt-out movement, not opt their own children out, and generally help curb and reduce the nation’s opt-out movement, given the movement’s purportedly negative effects.

Here are some of the reasons ACT’s Croft gives in support of not opting out, along with my research-informed commentaries per reason:

  • Scores on annual statewide achievement tests can provide parents, students, educators, and policymakers with valuable information—but only if students participate. What Croft does not note here is that such large-scale standardized test scores, without taking into account growth over time (an argument that actually works in favor of VAMs), are so highly correlated with student test-takers’ demographics that they rarely tell us much we would not have known from student demographics alone. This is a very real, and very unfortunate, reality: with a small set of student demographics we can predict with great (albeit imperfect) certainty students’ test scores without students ever taking the tests. In other words, even if 100% of students opted out, we could still use some of our most rudimentary statistical techniques to estimate what students’ scores would have been; hence, this claim is false.
  • Statewide test scores are one of the most readily available forms of data used by educators to help inform instruction. This is also patently false. Teachers, on average and as per the research, do not use the “[i]ndividual student data [derived via these tests] to identify general strengths and weaknesses, [or to] identify students who may need additional support” for many reasons, including the fact that test scores often come back to teachers after their tested students have moved on to the next grade level. Moreover, tests administered at the district, school, or classroom levels, as compared to those administered at the state level, yield data that is much more instructionally useful. What Croft does not note is that many research studies, and researchers, have evidenced that the types of tests at the source of the opt-out movement are also the least instructionally useful (see a prior post on this topic here). Accordingly, Croft’s claim here also contradicts recent research written by some of the luminaries in the field of educational measurement, who collectively support the design of more instructionally useful and sensitive tests in general, to combat perpetual claims like these surrounding large-scale standardized tests (see here).
  • Statewide test scores allow parents and educators to see how students measure up to statewide academic standards intended for all students in the state…[by providing] information about a student’s, school’s, or district’s standing compared to others in the state (or across states, if the assessment is used by more than one). See my first argument about student-level demographics, as the same holds true here. Whether these tests are better indicators of what students learned or of students’ demographics is certainly up for debate, and unfortunately most of the research evidence supports the latter (unless, perhaps, VAMs or growth models are used to measure growth over time).
  • Another benefit…is that the data gives parents an indicator of school quality that can help in selecting a school for their children. See my prior argument, again, especially in that test scores are also highly correlated with property/house values; hence, one can pick a school with great certainty just by picking a home one can afford, or a neighborhood in which one would like to live, regardless of test scores, as the test scores of the surrounding schools will ultimately reveal themselves to match said property/house values.
  • While grades are important, they [are not as objective as large-scale test scores because they] can also be influenced by a variety of factors unrelated to student achievement, such as grade inflation, noncognitive factors separate from achievement (such as attendance and timely completion of assignments), unintentional bias, or unawareness of performance expectations in subsequent grades (e.g., what it means to be prepared for college). Large-scale standardized tests, of course, are not subject to such biases and unrelated influences, we are to assume and accept as an objective truth.
  • Opt-outs threaten the overall accuracy—and therefore the usefulness—of the data provided. Indeed, this is true, and it is also one of the arguably positive side effects of the opt-out movement: without large enough samples of students participating in such tests, the extent to which test companies and others can draw generalizable conclusions about, in this case, larger student populations is statistically limited. Given that we have been relying on large-scale standardized tests to reform America’s education system for over 30 years now, yet we continue to face an “educational crisis” across America’s public schools, perhaps test-based reform policies are not the solution that testing companies like ACT, Inc. continue to argue they are. While perpetuating this argument in favor of reform is financially wise and lucrative, all at the taxpayer’s expense, little to no research exists to support the claim that such large-scale test-based information helps to reform or improve much of anything.
  • Student assessment data allows for rigorous examination of programs and policies to ensure that resources are allocated towards what works. The one thing large-scale standardized tests do help us do, especially as researchers and program evaluators, is examine and assess the impacts of large-scale programs and other reform efforts. Whether students should have to take tests for just this purpose, however, may not be worth the nation’s and states’ financial and human investments. With this, most scholars also agree, though more so now that VAMs are used for such large-scale research and evaluation purposes. VAMs are, indeed, a step in the right direction when we are talking about large-scale research.
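The demographics point above can be illustrated with a small simulation. This is a hypothetical sketch using entirely fabricated data (none of it ACT’s, and the two demographic proxies and their coefficients are my own assumptions); it simply shows that, if scores are driven largely by demographics, an ordinary least-squares fit on demographic variables alone recovers most of the variance in scores:

```python
# Hypothetical illustration with simulated (not real) data: if school test
# scores are largely a function of demographics, a regression on demographic
# variables alone predicts scores well -- without any testing at all.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of simulated schools

# Simulated demographic proxies (proportions), chosen for illustration only:
frl = rng.uniform(0, 1, n)    # share of free/reduced-price-lunch students
ell = rng.uniform(0, 0.4, n)  # share of English-language learners

# Assumed data-generating process: demographics plus modest noise.
scores = 80 - 25 * frl - 15 * ell + rng.normal(0, 3, n)

# Ordinary least squares using demographics only (intercept + two predictors).
X = np.column_stack([np.ones(n), frl, ell])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
pred = X @ beta

# R-squared: share of score variance explained by demographics alone.
r2 = 1 - np.sum((scores - pred) ** 2) / np.sum((scores - scores.mean()) ** 2)
print(round(r2, 2))  # high under these assumptions (roughly 0.8 or above)
```

The point is not that any particular coefficients are correct, but that under a demographics-dominated data-generating process, “rudimentary statistical techniques” do recover most of what the tests would have reported.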

Author Croft, on behalf of ACT, then makes a series of recommendations to states regarding such large-scale testing, again to help curb the opt-out movement. Here are ACT’s four recommendations, again alongside my research-informed commentaries per recommendation:

  • Districts should reduce unnecessary testing. Interesting, here, is that states are not listed as an additional entity that should reduce unnecessary testing. See my prior comments, especially the one regarding the most instructionally useful tests being those at the classroom, school, and/or district levels.
  • Educators and policymakers should improve communication with parents about the value gained from having all students take the assessments. Unfortunately, I would not start with the list provided in this piece. Perhaps this blog post will, however, help present a fairer interpretation of their recommendations and the research-based truths surrounding them.
  • Policymakers should discourage opting out…States that allow opt-outs should avoid creating laws, policies, or communications that suggest an endorsement of the practice. Such a recommendation is suspect, in my opinion, given the vested interests of the company making it.
  • Policymakers should support appropriate uses of test scores. I think we can all agree with this one, although large-scale test scores should not be used and promoted for accountability purposes, as is also suggested herein, given that the research does not support that doing this actually works either. For a great, recent post on this, click here.

In the end, all of these recommendations, as well as the reasons the opt-out movement should purportedly be thwarted, come via an Issue Brief authored and sponsored by a large-scale testing company. This fact, in and of itself, calls into question everything positioned here as a set of disinterested recommendations and reasons. This is unfortunate for ACT, Inc., given its role as the author and sponsor of this piece.

