A few months ago, well over 10,000 educational researchers and academics from around the world attended the Annual Conference of the American Educational Research Association (AERA) in Philadelphia, Pennsylvania, where many of them also presented their newest educational research and findings.
One presentation I did not get to attend, but for which I fortunately received the PowerPoint slides, was titled “Measuring College Value-Added: A Delicate Instrument,” presented by Stanford University’s Richard Shavelson and the University of Colorado Boulder’s Benjamin Domingue.
As summarized from their presentation, the motivation for measuring value-added in higher education is similar to what is happening in America’s K-12 public schools (i.e., to theoretically measure program quality and the “value” various higher education programs “add” to student learning), but it also differs in important ways. In K-12 schools there are, for what they are worth, standardized tests that can be used to at least attempt to measure value-added. Most testing in higher education, by contrast, is course-based, which is in some ways fortunate, in that such tests are closely linked to the content covered by the professor, and in other ways not, in that they are typically unreliable and narrow. Hence, while the tests used in higher education are, for the most part, idiosyncratic and non-standardized, they are often more instructionally sensitive and relevant than the large-scale standardized tests found across America’s public schools (i.e., the tests used to measure value-added). In short, both course-based and external tests are relevant in higher education, depending on their uses, but they do yield different types of information.
On this note, see also Slide 7 of the PowerPoint slides, which lays out the key, and problematically implausible, assumptions to which people must agree if they are to conduct value-added research for such purposes in higher education. This slide is interesting in and of itself.
In this study, however, Shavelson and Domingue gained access to a unique data set for modeling value-added in higher education. Colombia’s government mandates the estimation of value-added, and to this end it has created a unique assessment system in which high-school seniors take a high-school leaving/college matriculation examination and all college seniors take the same test upon leaving college. Their sample included over 64,000 students across 168 higher education institutions and 19 different reference groups by program area, including engineering, law, and education. In addition, a sample of college seniors in Colombia participated in the Organisation for Economic Co-operation and Development’s (OECD) Assessment of Higher Education Learning Outcomes (AHELO) generic skills assessment.
Their findings? Even with Colombia’s unique assessment system, value-added modeling remains a delicate instrument. Major conceptual, methodological, and statistical issues are still involved when measuring value-added in higher education. The value-added estimates showed about 5-15 percent variation among colleges (depending on the model), which is not unlike what has been reported for colleges elsewhere. Consequently, the magnitude of variation among institutions left room for making descriptive, albeit non-causal, distinctions. Moreover, it provided an opportunity to compare similarly situated institutions in an attempt to understand “what works” as the basis for hypotheses about change.
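Neither the presenters’ data nor their models are public, so the following is only a hypothetical, minimal sketch of one common residual-based approach to value-added of the general kind their design permits: regress each student’s college-exit score on his or her high-school-exit score, average the residuals within each institution, and ask what share of the residual variance lies between institutions. All names and the simulated numbers here are my own assumptions for illustration, not the authors’ method or results.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: 50 colleges, 100 students each. Each college has a
# simulated "true" effect; each student has an entry (high-school leaving)
# score and an exit (college leaving) score on the same scale.
n_colleges = 50
students_per_college = 100
college_effects = [random.gauss(0, 3) for _ in range(n_colleges)]

entry, exit_scores, college_ids = [], [], []
for c in range(n_colleges):
    for _ in range(students_per_college):
        e = random.gauss(100, 15)                       # entry score
        x = 0.7 * e + college_effects[c] + random.gauss(0, 10)  # exit score
        entry.append(e)
        exit_scores.append(x)
        college_ids.append(c)

# Ordinary least squares for exit ~ entry (single predictor).
mean_e = statistics.fmean(entry)
mean_x = statistics.fmean(exit_scores)
slope = (
    sum((e - mean_e) * (x - mean_x) for e, x in zip(entry, exit_scores))
    / sum((e - mean_e) ** 2 for e in entry)
)
intercept = mean_x - slope * mean_e

# Value-added estimate per college = mean residual of its students.
residuals = [x - (intercept + slope * e) for e, x in zip(entry, exit_scores)]
by_college = {c: [] for c in range(n_colleges)}
for c, r in zip(college_ids, residuals):
    by_college[c].append(r)
vam = {c: statistics.fmean(rs) for c, rs in by_college.items()}

# Share of residual variance lying between colleges: this is the kind of
# quantity behind statements like "about 5-15 percent variation among
# colleges," though the exact figure here depends on the simulated inputs.
between = statistics.pvariance(list(vam.values()))
total = statistics.pvariance(residuals)
share = between / total
print(f"between-college share of residual variance: {share:.2%}")
```

The point of the sketch is the logic, not the numbers: a descriptive ranking of colleges by mean residual is possible, but nothing in the computation licenses a causal claim that a college produced its residual.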