All Recommended Articles about VAMs (n=121)

To link to the “Top 15” suggested research articles, click here. To link to the “Top 25” suggested research articles, click here. To link to all suggested research articles published in American Educational Research Association (AERA) journals, click here.

  1. American Educational Research Association (AERA) Council. (2015). AERA statement on use of value-added models (VAM) for the evaluation of educators and educator preparation programs. Educational Researcher, X(Y), 1-5. doi:10.3102/0013189X15618385
  2. American Statistical Association. (2014). ASA statement on using value-added models for educational assessment. Alexandria, VA: Author.
  3. Amrein-Beardsley, A. (2008). Methodological concerns about the Education Value-Added Assessment System (EVAAS). Educational Researcher, 37(2), 65-75. doi:10.3102/0013189X08316420
  4. Amrein-Beardsley, A. (2009). Buyers be-aware: What you don’t know can hurt you. Educational Leadership, 67(3), 38-42.
  5. Amrein-Beardsley, A. (2012). Value-added measures in education: The best of the alternatives is simply not good enough [Commentary]. Teachers College Record.
  6. Amrein-Beardsley, A., & Barnett, J. H. (2012). Working with error and uncertainty to increase measurement validity. Educational Assessment, Evaluation and Accountability, 1-11. doi:10.1007/s11092-012-9146-6
  7. Amrein-Beardsley, A., & Collins, C. (2012). The SAS Education Value-Added Assessment System (SAS® EVAAS®) in the Houston Independent School District (HISD): Intended and unintended consequences. Education Policy Analysis Archives, 20(12), 1-36.
  8. Amrein-Beardsley, A., Collins, C., Holloway-Libell, J., & Paufler, N. A. (2016). Everything is bigger (and badder) in Texas: Houston’s teacher value-added system. [Commentary]. Teachers College Record.
  9. Au, W. (2011). Neither fair nor accurate: Research-based reasons why high-stakes tests should not be used to evaluate teachers. Rethinking Schools.
  10. Baker, E. L., Barton, P. E., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L., Ravitch, D., Rothstein, R., Shavelson, R. J., & Shepard, L. A. (2010). Problems with the use of student test scores to evaluate teachers. Washington, DC: Economic Policy Institute.
  11. Baker, B. D., Oluwole, J. O., & Green, P. C. (2013). The legal consequences of mandating high stakes decisions based on low quality information: Teacher evaluation in the Race-to-the-Top era. Education Policy Analysis Archives, 21(5), 1-71.
  12. Ballou, D. (2009). Test scaling and value-added measurement. Education Finance and Policy, 4(4), 351-383. doi:10.1162/edfp.2009.4.4.351
  13. Ballou, D. (2012). Review of “The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood.” [Review of the report The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood, by R. Chetty, J. Friedman, & J. Rockoff]. Boulder, CO: National Education Policy Center.
  14. Ballou, D., & Springer, M. G. (2015). Using student test scores to measure teacher performance: Some problems in the design and implementation of evaluation systems. Educational Researcher, 44(2), 77-86. doi:10.3102/0013189X15574904
  15. Bausell, R. B. (2013, January 15). Probing the science of value-added evaluation. Education Week [Commentary].
  16. Berliner, D. C. (2014). Exogenous variables and value-added assessments: A fatal flaw. Teachers College Record, 116(1).
  17. Blazar, D., Litke, E., & Barmore, J. (2016). What does it mean to be ranked a “high” or “low” value-added teacher? Observing differences in instructional quality across districts. American Educational Research Journal, 53(2), 324-359. doi:10.3102/0002831216630407
  18. Bracey, G. W. (2004a). Serious questions about the Tennessee Value-Added Assessment System. Phi Delta Kappan, 85(9), 716-717.
  19. Bracey, G. W. (2004b). Value-added assessment findings: Poor kids get poor teachers. Phi Delta Kappan, 86(4), 331-333.
  20. Bracey, G. W. (2007). Value subtracted: A “debate” with William Sanders. The Huffington Post.
  21. Braun, H. (2015). The value in value-added depends on the ecology. Educational Researcher, 44(2), 127-131. doi:10.3102/0013189X15576341
  22. Braun, H., Chudowsky, N., & Koenig, J. (2010). Getting value out of value-added. Washington, DC: National Academies Press.
  23. Briggs, D. & Domingue, B. (2011). Due diligence and the evaluation of teachers: A review of the value-added analysis underlying the effectiveness rankings of Los Angeles Unified School District Teachers by the Los Angeles Times. Boulder, CO: National Education Policy Center.
  24. Briggs, D. C., & Weeks, J. P. (2009). The sensitivity of value-added modeling to the creation of a vertical scale score. Education Finance and Policy, 4(4), 384-414. doi:10.1162/edfp.2009.4.4.384
  25. Burris, C. C. & Welner, K. G. (2011). Letter to Secretary of Education Arne Duncan concerning evaluation of teachers and principals. Boulder, CO: National Education Policy Center.
  26. Chin, M., & Goldhaber, D. (2015). Exploring explanations for the “weak” relationship between value added and observation-based measures of teacher performance. Cambridge, MA: Center for Education Policy Research (CEPR), Harvard University.
  27. Cole, R., Haimson, J., Perez-Johnson, I., & May, H. (2011). Variability in pretest-posttest correlation coefficients by student achievement level. Washington, DC: Institute of Education Sciences, U.S. Department of Education.
  28. Collins, C. (2014). Houston, we have a problem: Teachers find no value in the SAS Education Value-Added Assessment System (EVAAS®). Education Policy Analysis Archives, 22(98), 1-42.
  29. Collins, C., & Amrein-Beardsley, A. (2014). Putting growth and value-added models on the map: A national overview. Teachers College Record, 116(1).
  30. Corcoran, S. P. (2010). Can teachers be evaluated by their students’ test scores? Should they be? The use of value-added measures of teacher effectiveness in policy and practice. Providence, RI: Annenberg Institute for School Reform.
  31. Corcoran, S. P., Jennings, J. L., & Beveridge, A. A. (2011). Teacher effectiveness on high- and low-stakes tests. New York, NY: New York University.
  32. Darling-Hammond, L. (2010). Too unreliable. The New York Times.
  33. Darling-Hammond, L. (2013). Getting teacher evaluation right: What really matters for effectiveness and improvement. New York, NY: Teachers College Press.
  34. Darling-Hammond, L. (2015). Can value-added add value to teacher evaluation? Educational Researcher, 44(2), 132-137. doi:10.3102/0013189X15575346
  35. Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation. Phi Delta Kappan, 93(6), 8-15.
  36. Darling-Hammond, L. & Haertel, E. (2012). A better way to grade teachers. Los Angeles Times [op-ed].
  37. Deming, D. J. (2014). Using school choice lotteries to test measures of school effectiveness. American Economic Review, 104(5), 406–411.
  38. Di Carlo, M. (2013). A few points about the instability of value-added estimates. The Shanker Blog.
  39. Dragoset, L., James-Burdumy, S., Hallgren, K., Perez-Johnson, I., Herrmann, M., Tuttle, C., Angus, M. H., Herman, R., Murray, M., Tanenbaum, C., & Graczewski, C. (2015). Usage of policies and practices promoted by Race to the Top. Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
  40. Eckert, J. M., & Dabrowski, J. (2010). Should value-added measures be used for performance pay? Phi Delta Kappan, 91(8), 88-92.
  41. Ehlert, M., Koedel, C., Parsons, E., & Podgursky, M. (2012). Selecting growth measures for school and teacher evaluations. Washington, DC: National Center for Analysis of Longitudinal Data in Education Research (CALDER).
  42. Ehlert, M., Koedel, C., Parsons, E., & Podgursky, M. (2013). The sensitivity of value-added estimates to specification adjustments: Evidence from school- and teacher-level models in Missouri. Statistics and Public Policy, 1(1), 19-27. doi:10.1080/2330443X.2013.856152
  43. Ewing, D. (2011). Leading mathematician debunks ‘value-added.’ The Washington Post [blog post].
  44. Fryer, R. G. (2013). Teacher incentives and student achievement: Evidence from New York City Public Schools. Journal of Labor Economics, 31(2), 373-407.
  45. Fuller, E. (2014). An examination of Pennsylvania school performance profile scores. University Park, PA: The Pennsylvania State University, Center for Evaluation and Education Policy Analysis (CEEPA).
  46. Gabriel, R. & Lester, J. N. (2013). Sentinels guarding the grail: Value-added measurement and the quest for education reform. Education Policy Analysis Archives, 21(9), 1-30.
  47. Glass, G. V. (1990). Using student test scores to evaluate teachers. In J. Millman & L. Darling-Hammond (Eds.), The new handbook of teacher evaluation: Assessing elementary and secondary school teachers (pp. 229-240). Newbury Park, CA: SAGE Publications.
  48. Glazerman, S. M., & Potamites, L. (2011). False performance gains: A critique of successive cohort indicators. Princeton, NJ: Mathematica Policy Research.
  49. Glazerman, S., & Seifullah, A. (2010). An evaluation of the Teacher Advancement Program (TAP) in Chicago: Year two impact report. Washington, DC: Mathematica Policy Research.
  50. Goldhaber, D. (2015). Exploring the potential of value-added performance measures to affect the quality of the teacher workforce. Educational Researcher, 44(2), 87-95. doi:10.3102/0013189X15574905
  51. Goldhaber, D. D., Goldschmidt, P., & Tseng, F. (2013). Teacher value-added at the high-school level: Different models, different answers? Educational Evaluation and Policy Analysis, 35(2), 220-236. doi:10.3102/0162373712466938
  52. Goldring, E., Grissom, J. A., Rubin, M., Neumerski, C. M., Cannata, M., Drake, T., & Schuermann, P. (2015). Make room value-added: Principals’ human capital decisions and the emergence of teacher observation data. Educational Researcher, 44(2), 96-104. doi:10.3102/0013189X15575031
  53. Graue, M. E., Delaney, K. K., & Karch, A. S. (2013). Ecologies of education quality. Education Policy Analysis Archives, 21(8), 1-36.
  54. Grossman, P., Cohen, J., Ronfeldt, M., & Brown, L. (2014). The test matters: The relationship between classroom observation scores and teacher value added on multiple types of assessment. Educational Researcher, 43(6), 293-303. doi:10.3102/0013189X14544542
  55. Guarino, C. M., Maxfield, M., Reckase, M. D., Thompson, P., & Wooldridge, J. M. (2012). An evaluation of Empirical Bayes’ estimation of value-added teacher performance measures. East Lansing, MI: Education Policy Center at Michigan State University.
  56. Guarino, C. M., Reckase, M. D., & Wooldridge, J. M. (2012). Can value-added measures of teacher education performance be trusted? East Lansing, MI: The Education Policy Center at Michigan State University.
  57. Haertel, E. (1986). The valid use of student performance measures for teacher evaluation. Educational Evaluation and Policy Analysis, 8(1), 45-60.
  58. Haertel, E. H. (2013). Reliability and validity of inferences about teachers based on student test scores. Princeton, NJ: Education Testing Service.
  59. Haladyna, T. M., Nolen, S. B., & Haas, N. S. (1991). Raising standardized achievement test scores and the origins of test score pollution. Educational Researcher, 20(5), 2-7. doi:10.2307/1176395
  60. Herrmann, M., Walsh, E., Isenberg, E., & Resch, A. (2013). Shrinkage of value-added estimates and characteristics of students with hard-to-predict achievement levels. Princeton, NJ: Mathematica Policy Research.
  61. Hill, H. C., Kapitula, L., & Umland, K. (2011). A validity argument approach to evaluating teacher value-added scores. American Educational Research Journal, 48(3), 794-831. doi:10.3102/0002831210387916
  62. Ho, A. D., Lewis, D. M., & Farris, J. L. (2009). The dependence of growth-model results on proficiency cut scores. Educational Measurement: Issues and Practice, 28, 15-26. doi:10.1111/j.1745-3992.2009.00159.x
  63. Holloway-Libell, J. (2015). Evidence of grade and subject-level bias in value-added measures. Teachers College Record.
  64. Holloway-Libell, J., & Amrein-Beardsley, A. (2015). “Truths” devoid of empirical proof: Underlying assumptions surrounding value-added models (VAMs) in teacher evaluation [Commentary]. Teachers College Record.
  65. Holloway-Libell, J., Amrein-Beardsley, A., & Collins, C. (2012). All hat and no cattle: The value-added approach to educational reform. Educational Leadership, 70(3), 65-68.
  66. Ishii, J., & Rivkin, S. G. (2009). Impediments to the estimation of teacher value added. Education Finance and Policy, 4(4), 520-536. doi:10.1162/edfp.2009.4.4.520
  67. Jackson, C. K. (2012). Teacher quality at the high-school level: The importance of accounting for tracks. Cambridge, MA: The National Bureau of Economic Research.
  68. Jennings, J. L., & Corcoran, S. P. (2009). “Beware of geeks bearing formulas:” Reflections on growth models for school accountability. Phi Delta Kappan, 90(9), 635-639.
  69. Jiang, J. Y., Sporte, S. E., & Luppescu, S. (2015). Teacher perspectives on evaluation reform: Chicago’s REACH students. Educational Researcher, 44(2), 105-116. doi:10.3102/0013189X15575517
  70. Johnson, W. (2012). Confessions of a ‘bad’ teacher. The New York Times.
  71. Johnson, M., Lipscomb, S., & Gill, B. (2015). Sensitivity of teacher value-added estimates to student and peer control variables. Journal of Research on Educational Effectiveness, 8(1), 60-83. doi:10.1080/19345747.2014.967898
  72. Jones, N. D., Buzick, H. M., & Turkan, S. (2013). Including students with disabilities and English learners in measures of educator effectiveness. Educational Researcher, 42(4), 234-241. doi:10.3102/0013189X12468211
  73. Kappler Hewitt, K. (2015). Educator evaluation policy that incorporates EVAAS value-added measures: Undermined intentions and exacerbated inequities. Education Policy Analysis Archives, 23(76). doi:10.14507/epaa.v23.1968
  74. Kennedy, M. M. (2010). Attribution error and the quest for teacher quality. Educational Researcher, 39(8), 591-598. doi:10.3102/0013189X10390804
  75. Kersting, N. B., Chen, M., & Stigler, J. W. (2013). Value-added teacher estimates as part of teacher evaluations: Exploring the effects of data and model specifications on the stability of teacher value-added scores. Education Policy Analysis Archives, 21(7), 1-39. Retrieved from http://epaa.asu.edu/ojs/article/view/1167
  76. Koedel, C., & Betts, J. (2010). Value-added to what? How a ceiling in the testing instrument influences value-added estimation. Education Finance and Policy, 5(1), 54-81.
  77. Koedel, C., Mihaly, K., & Rockoff, J. E. (2015). Value-added modeling: A review. Economics of Education Review, 47, 180–195. doi:10.1016/j.econedurev.2015.01.006
  78. Kupermintz, H. (2003). Teacher effects and teacher effectiveness: A validity investigation of the Tennessee Value-Added Assessment System. Educational Evaluation and Policy Analysis, 25(3), 287-298. doi:10.3102/01623737025003287
  79. Lavigne, A. L., & Good, T. L. (2014). Teacher and student evaluation: Moving beyond the failure of school reform. New York, NY: Routledge.
  80. Linn, R. L. (2008). Methodological issues in achieving school accountability. Journal of Curriculum Studies, 40, 699-711. doi:10.1080/00220270802105729
  81. Linn, R. L., & Haug, C. (2002). Stability of school-building accountability scores and gains. Educational Evaluation and Policy Analysis, 24(1), 29-36. doi:10.3102/01623737024001029
  82. McCaffrey, D. F., Lockwood, J. R., Koretz, D. M., & Hamilton, L. S. (2003). Evaluating value-added models for teacher accountability. Santa Monica, CA: RAND Corporation.
  83. McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T. A. & Hamilton, L. (2004a). Let’s see more empirical studies on value-added modeling of teacher effects: A reply to Raudenbush, Rubin, Stuart and Zanutto, and Reckase. Journal of Educational and Behavioral Statistics, 29(1), 139-143. doi:10.3102/10769986029001139
  84. McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T. A., & Hamilton, L. (2004b). Models for value-added modeling of teacher effects. Journal of Educational and Behavioral Statistics, 29(1), 67-101.
  85. McCaffrey, D. F., Sass, T. R., Lockwood, J. R., & Mihaly, K. (2009). The intertemporal variability of teacher effect estimates. Education Finance and Policy, 4(4), 572–606. doi:10.1162/edfp.2009.4.4.572
  86. Martineau, J. A. (2010). The validity of value-added models: An allegory. Phi Delta Kappan, 91(7), 64-67.
  87. Mathis, W. (2012). Research-based options for education policy making: Teacher evaluation. Boulder, CO: National Education Policy Center.
  88. Moore Johnson, S. (2015). Will VAMs reinforce the walls of the egg-crate school? Educational Researcher, 44(2), 117-126. doi:10.3102/0013189X15573351
  89. Morgan, G. B., & Hodge, K. J. (2014). The stability of teacher performance and effectiveness: Implications for policies concerning teacher evaluation. Education Policy Analysis Archives, 22(95). doi:10.14507/epaa.v22n95.2014
  90. National Council on Teacher Quality (NCTQ). (2015). State of the states 2015: Evaluating teaching, leading and learning. Washington, DC: Author.
  91. Newton, X., Darling-Hammond, L., Haertel, E., & Thomas, E. (2010). Value-added modeling of teacher effectiveness: An exploration of stability across models and contexts. Education Policy Analysis Archives, 18(23), 1-27.
  92. Nye, B., Konstantopoulos, S., & Hedges, L. V. (2004). How large are teacher effects? Educational Evaluation and Policy Analysis, 26(3), 237-257. doi:10.3102/01623737026003237
  93. Papay, J. P. (2010). Different tests, different answers: The stability of teacher value-added estimates across outcome measures. American Educational Research Journal, 48(1), 163-193. doi:10.3102/0002831210362589
  94. Paufler, N. A., & Amrein-Beardsley, A. (2014). The random assignment of students into elementary classrooms: Implications for value-added analyses and interpretations. American Educational Research Journal, 51(2), 328-362. doi:10.3102/0002831213508299
  95. Pittenger, B. F. (1917). Problems of teacher measurement. Journal of Educational Psychology, 8(2), 103-110.
  96. Polikoff, M. S. (2014). Does the test matter? Evaluating teachers when tests differ in their sensitivity to instruction. In T. J. Kane, K. A. Kerr, & R. C. Pianta (Eds.), Designing teacher evaluation systems: New guidance from the Measures of Effective Teaching project (pp. 278-302). San Francisco, CA: Jossey-Bass.
  97. Polikoff, M. S., & Porter, A. C. (2014). Instructional alignment as a measure of teaching quality. Educational Evaluation and Policy Analysis. doi:10.3102/0162373714531851
  98. Popham, W. J. (2013). Evaluating America’s teachers: Mission possible? Thousand Oaks, CA: Corwin Press.
  99. Porter, E. (2015). Grading teachers by the test. The New York Times.
  100. Pullin, D. (2013). Legal issues in the use of student test scores and value-added models (VAM) to determine educational quality. Education Policy Analysis Archives, 21(6), 1-27. Retrieved from http://epaa.asu.edu/ojs/article/view/1160
  101. Raudenbush, S. W. (2004). What are value-added models estimating and what does this imply for statistical practice? Journal of Educational and Behavioral Statistics, 29(1), 121-129. doi:10.3102/10769986029001121
  102. Raudenbush, S. W. (2015). Value added: A case study in the mismatch between education research and policy. Educational Researcher, 44(2), 138-141. doi:10.3102/0013189X15575345
  103. Ravitch, D. (2013). Reign of error: The hoax of the privatization movement and the danger to America’s public schools. New York, NY: Knopf, Random House.
  104. Reardon, S. F., & Raudenbush, S. W. (2009). Assumptions of value-added models for estimating school effects. Education Finance and Policy, 4(4), 492-519. doi:10.1162/edfp.2009.4.4.492
  105. Reckase, M. D. (2004). The real world is more complicated than we would like. Journal of Educational and Behavioral Statistics, 29(1), 117-120. doi:10.3102/10769986029001117
  106. Rothstein, J. (2009). Student sorting and bias in value-added estimation: Selection on observables and unobservables. Education Finance and Policy, 4(4), 537-571. doi:10.1162/edfp.2009.4.4.537
  107. Rothstein, J. (2010). Teacher quality in educational production: Tracking, decay, and student achievement. Quarterly Journal of Economics, 125(1), 175-214. doi:10.1162/qjec.2010.125.1.175
  108. Rothstein, J. (2014). Revisiting the impacts of teachers. Berkeley, CA: University of California, Berkeley (working paper).
  109. Rothstein, J., & Mathis, W. J. (2013, January). Review of two culminating reports from the MET Project. Boulder, CO: National Education Policy Center.
  110. Rubin, D. B., Stuart, E. A., & Zanutto, E. L. (2004). A potential outcomes view of value-added assessment in education. Journal of Educational and Behavioral Statistics, 29(1), 103-116. doi:10.3102/10769986029001103
  111. Scherrer, J. (2011). Measuring teaching using value-added modeling: The imperfect panacea. NASSP Bulletin, 95(2), 122-140. doi:10.1177/0192636511410052
  112. Schochet, P. Z., & Chiang, H. S. (2010). Error rates in measuring teacher and school performance based on student test score gains. Washington, DC: U.S. Department of Education.
  113. Schochet, P. Z., & Chiang, H. S. (2013). What are error rates for classifying teacher and school performance using value-added models? Journal of Educational and Behavioral Statistics, 38(2), 142-171. doi:10.3102/1076998611432174
  114. Sparks, S. D. (2011). Value-added formulas strain collaboration. Education Week.
  115. Springer, M. G., Ballou, D., Hamilton, L. S., Le, V.-N., Lockwood, J. R., McCaffrey, D. F., Pepper, M., & Stecher, B. M. (2010). Teacher pay for performance: Experimental evidence from the project on incentives in teaching. Nashville, TN: National Center on Performance Incentives.
  116. Stacy, B., Guarino, C., Reckase, M., & Wooldridge, J. (2012). Does the precision and stability of value-added estimates of teacher performance depend on the types of students they serve? East Lansing, MI: Education Policy Center at Michigan State University.
  117. Tekwe, C. D., Carter, R. L., Ma, C., Algina, J., Lucas, M. E., Roth, J., … Resnick, M. B. (2004). An empirical comparison of statistical models for value-added assessment of school performance. Journal of Educational and Behavioral Statistics, 29(1), 11-36. doi:10.3102/10769986029001011
  118. Vigdor, J. L. (2008). Teacher salary bonuses in North Carolina. Nashville, TN: National Center on Performance Incentives.
  119. Yeh, S. S. (2013). A re-analysis of the effects of teacher replacement using value-added modeling. Teachers College Record, 115(12), 1-35.
  120. Zeis, C., Waronska, A. K., & Fuller, R. (2009). Value-added program assessment using nationally standardized tests: Insights into internal validity issues. Journal of Business and Economics, 9(1), 114-127.
  121. Zwerling, H. (2014). State tests: Instructional sensitivity and (small) differences between extreme teachers. VAMboozled!

6 thoughts on “All Recommended Articles about VAMs (n=121)”

  1. Thanks. Nice list that I will send on to AZ lawmakers, who never take facts into consideration when they conflict with their ideology.

  2. 1. It is useful to arrange this list as a chronology so the deep history of skepticism and criticism is obvious. I did a list like that for arts educators in an unpublished paper.
    2. I contributed a short list on the SLOs/SGOs that apply to the roughly 70 percent of teachers who do not have VAM scores. It is posted on Diane’s site, where your list came to my attention. Thanks for this work, and much else. I have cited some of your papers for arts educators.

    • Laura, where on Diane’s blog is your list on SLOs? Have you updated it? We need the research on this topic as well. Thanks, all, for making this research easily accessible.
