Student Learning Objectives, aka Student Growth Objectives, aka Another Attempt to Quantify “High Quality” Teaching

After a previous post about VAMs v. Student Growth Percentiles (SGPs) (see also VAMs v. SGPs Part II), a reader posted a comment asking for more information about the utility of SGPs, as well as about the difference between SGPs and Student Growth Objectives.

“Student Growth Objectives” is a newer term for an older concept that is increasingly being integrated into educational accountability systems nationwide, and that is also under scrutiny (see one of Diane Ravitch’s recent posts about this here). The concept underlying Student Growth Objectives (SGOs) is essentially that of Student Learning Objectives (SLOs). Why “growth” is now preferred over “learning” is unclear; it is perhaps yet another fad. Relatedly, it also likely has something to do with various legislative requirements (e.g., Race to the Top terminology), although evidence in support of this terminological shift is also lacking.

Regardless, and put simply, an SGO/SLO is an annual goal for measuring the growth/learning of the students instructed by teachers (or, for school-level evaluations, principals) who are not eligible to participate in a school’s or district’s value-added or student growth model. This includes the vast majority of teachers in most schools and districts (e.g., 70+%), because only teachers who instruct reading/language arts or mathematics in state achievement-tested grade levels, typically grades 3-8, are eligible to participate in the VAM or SGP evaluation system. Hence, SGOs/SLOs were developed because administrators and others were either unwilling to let these exclusions continue or were forced to establish a mechanism for including the remaining teachers in order to meet some legislative mandate.

New Jersey, for example, defines an SGO as “a long-term academic goal that teachers set for groups of students and must be: Specific and measurable; Aligned to New Jersey’s curriculum standards; Based on available prior student learning data; A measure of what a student has learned between two points in time; Ambitious and achievable” (for more information click here).

Denver Public Schools has been using SGOs for many years; their 2008-2009 Teacher Handbook states that an SGO must be “focused on the expected growth of [a teacher’s] students in areas identified in collaboration with their principal,” as well as that the objectives must be “Job-based; Measurable; Focused on student growth in learning; Based on learning content and teaching strategies; Discussed collaboratively at least three times during the school year; May be adjusted during the school year; Are not directly related to the teacher evaluation process; [and] Recorded online” (for more information click here).

That being said, and in sum, SGOs/SLOs, like VAMs, are not supported by empirical work. As Jersey Jazzman summarized very well in his post about this, the correlational evidence is very weak, the conclusions drawn by outside researchers are a stretch, and the rush to implement these measures is just as unfounded as the rush to implement VAMs for educator evaluation. We don’t know that SGOs/SLOs make a difference in distinguishing “good” from “poor” teachers; in fact, some could argue (as Jersey Jazzman does) that they don’t actually do much of anything at all. They’re just another metric being used in the attempt to quantify “high quality” teaching.

Thanks to Dr. Sarah Polasky for this post.

Arizona’s Teacher Evaluation System, Not Strict Enough?

Last week, Arizona State Superintendent of Public Instruction, John Huppenthal, received the news that Arizona’s No Child Left Behind (NCLB) waiver extension request had been provisionally granted with a “high-risk” label (i.e., in danger of being revoked). Superintendent Huppenthal was given 60 days to make two revisions: (1) adjust the graduation rate to account for 20% of a school’s A-F letter grade instead of the proposed 15% and, as most pertinent here, (2) finalize the guidelines for the teacher and principal evaluations to comply with Elementary and Secondary Education Act (ESEA) Flexibility (i.e., the NCLB waiver guidelines).

Within 60 days, Superintendent Huppenthal and the Arizona Department of Education (ADE) must: (1) finalize the state’s teacher and principal evaluation guidelines; (2) give sufficient weight to student growth so as to differentiate between teachers/principals who have contributed to more/less growth in student learning and achievement; (3) ensure that shared attribution of growth does not mask high- or low-performing teachers as measured by growth; and (4) guarantee that all of this is done in time for schools to implement the system in the 2014-2015 school year.

These demands, particularly #2 and #3 above, reflect some of the serious and unavoidable flaws of the new teacher evaluation systems that are based on student growth (e.g., VAMs and all other growth models).

As per #2, the most blatant problem is the limited number of teachers (typically around 30%, although reported as only 17% in the recent post about DC’s teacher evaluation system) who are eligible for classroom-level student growth data (i.e., value-added). Thus, one of the key expectations—to give sufficient weight to student growth scores so as to differentiate between teachers’/principals’ impacts on student learning and achievement—is impossible for probably around seven out of every ten of Arizona’s (and other states’) teachers. While most states, including Arizona, have chosen to remedy this problem by attributing a school-level (or grade-level) value-added score to classroom-level-ineligible teachers (sometimes counting for as much as 50% of the teacher’s overall evaluation), this solution does not (and likely never will) suffice as per #2 above. It seems the feds do not quite understand that what they are mandating in practice leaves well over half of teachers’ evaluations based in large part on students and/or content that these teachers did not teach.

As per #3, Arizona (like all waiver-earning states) must also demonstrate how it will ensure that shared attribution of growth does not mask high- or low-performing teachers as measured by growth. Yet, again, when these systems are implemented in practice, 70+% of teachers are assigned a school-level student growth score, meaning that all teachers in any given school who fall into this group receive the same score. In what way is it feasible to “ensure” that no high- or low-performing teacher is “masked” by such a method of attributing student growth to teachers? This is yet another example of the illogical circumstances by which schools must abide in order to meet the arbitrary (and often impossible) demands of ESEA Flexibility (and Race to the Top).
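To see concretely why shared attribution masks performance differences, here is a minimal, purely hypothetical sketch (the teacher names and growth values are invented, and no state’s actual formula is implied) in which every classroom-ineligible teacher in a school is assigned the same school-level score:

```python
# Hypothetical illustration of "masking" under shared attribution.
# Teacher names and growth values are invented; no state's formula is implied.

# Suppose these are the growth scores the four teachers in one school would
# earn individually, but none is eligible for a classroom-level score.
individual_growth = {"Teacher A": 72, "Teacher B": 55, "Teacher C": 31, "Teacher D": 42}

# Shared attribution: every ineligible teacher receives the school-level mean.
school_score = sum(individual_growth.values()) / len(individual_growth)
attributed = {name: school_score for name in individual_growth}

print(attributed)
# {'Teacher A': 50.0, 'Teacher B': 50.0, 'Teacher C': 50.0, 'Teacher D': 50.0}
# The highest and lowest performers are indistinguishable once the shared score is assigned.
```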

If Arizona fails to comply with the USDOE’s requests within 60 days, it will lose its ESEA waiver and face the consequences of NCLB. In a statement to Education Week, however, AZ Superintendent Huppenthal stood by his position of providing school districts with as much flexibility as possible within the constraints of the waiver stipulations. He said he will not protest the “high risk” label and will instead attempt to “get around this and still keep local control for those school districts.” The revised application is due at the end of January.

Post contributed by Jessica Holloway-Libell

VAMs v. Student Growth Percentiles (SGPs) – Part II

A few weeks ago, a reader posted the following question: “What is the difference [between] VAM and Student Growth Percentiles (SGP) and do SGPs have any usefulness[?]”

In response, I invited a scholar and colleague who knows a lot about SGPs to weigh in. This is the first of two posts to help others understand the distinctions and similarities. Thanks to our Guest Blogger – Sarah Polasky – for writing the following:

“First, I direct readers to the VAMboozled! glossary and the information provided there contrasting VAMs and Student Growth Models, if they haven’t yet visited that section of the site. Second, I hope to build upon this by highlighting key terms and methodological differences between traditional VAMs and SGPs.

A Value-Added Model (VAM) is a multivariate (multiple-variable) student growth model that attempts to account for, or statistically control for, all potential student, teacher, school, district, and external influences on outcome measures (i.e., growth in student achievement over time). The most well-known example of this model is the SAS Education Value-Added Assessment System (EVAAS)[1]. The primary goal of this model is to estimate teachers’ causal effects on student performance over time. Put differently, the purpose of this model is to measure groups of students’ academic gains over time and then attribute those gains (or losses) back to teachers as key indicators of the teachers’ effectiveness.

In contrast, the Student Growth Percentiles (SGP)[2] model uses students’ level(s) of past performance to determine students’ normative growth (i.e., as compared to their peers). As explained by Castellano & Ho[3], “SGPs describe the relative location of a student’s current score compared to the current scores of students with similar score histories” (p. 89). Students are compared to themselves (i.e., students serve as their own controls) over time; therefore, there is less need to control for other variables (e.g., student demographics). The SGP model was developed as a “better” alternative to existing models, with the goal of providing clearer, more accessible, and more understandable results to both internal and external education stakeholders and consumers. The primary goal of this model is to provide growth indicators for individual students, groups of students, schools, and districts.

The utility of the SGP model lies in reviewing, particularly by subject area, growth histories for individual students and aggregate measures for groups of students (e.g., English language learners) to track progress over time and examine group differences, respectively. The model’s developer admits that, on their own, SGPs should not be used to make causal interpretations, such as attributing high growth in one classroom to the teacher as the sole source of that growth[4]. However, when paired with additional indicators supporting concurrent-related evidence of validity, such inferences may be more appropriate.”
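For readers who want to see the normative-growth idea in miniature, below is a minimal sketch of the SGP logic using simple quantile regression on simulated data. The scores, the single prior-year predictor, and the sgp() helper are all invented for illustration; the operational SGP methodology cited in footnote [2] is considerably more elaborate (e.g., B-spline quantile regression over multiple prior years of scores).

```python
# A minimal sketch of the SGP idea, assuming a single prior-year score and
# simulated data; not the actual SGP package's implementation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
prior = rng.normal(500, 50, n)                       # hypothetical prior-year scale scores
current = 0.8 * prior + 110 + rng.normal(0, 25, n)   # hypothetical current-year scores

X = sm.add_constant(prior)
percentiles = np.arange(1, 100)  # fit one conditional quantile model per percentile
fits = [sm.QuantReg(current, X).fit(q=p / 100) for p in percentiles]

def sgp(prior_score, current_score):
    """Highest percentile whose predicted current score the student met or exceeded."""
    x = np.array([[1.0, prior_score]])
    preds = np.array([f.predict(x)[0] for f in fits])
    met = percentiles[current_score >= preds]
    return int(met.max()) if met.size else 1

# Two students with the same prior score but different current scores:
# the first shows higher growth relative to academic peers.
print(sgp(500, 540), sgp(500, 480))
```

The key point of the sketch is that the percentile is defined relative to students with similar score histories, not relative to a fixed proficiency cut score.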

[1] Sanders, W. L., & Horn, S. P. (1994). The Tennessee value-added assessment system (TVAAS): Mixed-model methodology in educational assessment. Journal of Personnel Evaluation in Education, 8(3), 299-311.

[2] Betebenner, D.W. (2013). Package ‘SGP’. Retrieved from http://cran.r-project.org/web/packages/SGP/SGP.pdf.

[3] Castellano, K.E. & Ho, A.D. (2013). A Practitioner’s Guide to Growth Models. Council of Chief State School Officers.

[4] Betebenner, D. W. (2009). Norm- and criterion-referenced student growth. Educational Measurement: Issues and Practice, 28(4), 42-51. doi:10.1111/j.1745-3992.2009.00161.x

VAMs v. Student Growth Percentiles (SGPs)

Yesterday (11/4/2013) a reader posted a question, the first part of which I am partially addressing here: “What is the difference [between] VAM[s] and Student Growth Percentiles (SGP[s]) and do SGPs have any usefulness[?]” One of my colleagues, soon to be a “Guest Blogger” on TheTeam, but already an SGP expert, is helping me work on a more nuanced response, but for the time being please check out the Glossary section of this blog:

VAMs v. Student Growth Models: The main similarities between VAMs and student growth models are that they all use students’ large-scale standardized test score data from current and prior years to calculate students’ growth in achievement over time. In addition, they all use students’ prior test score data to “control for” the risk factors that impact student learning and achievement both at singular points in time and over time. The main differences between VAMs and student growth models lie in how precisely estimates are made, as related to whether, how, and how many control variables are included in the statistical models to account for these risk factors and other extraneous variables (e.g., other teachers’ simultaneous effects and prior teachers’ residual effects). The best and most popular example of a student growth model is the Student Growth Percentiles (SGP) model. It is not a VAM by traditional standards and definitions, mainly because the SGP model does not use as many sophisticated controls as do its VAM counterparts.
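As a rough illustration of that “sophistication of controls” distinction, here is a minimal sketch on simulated data comparing a bare prior-score-only regression with a VAM-style regression that adds demographic and school controls. All variables, coefficients, and data are invented for illustration and do not represent any operational model (e.g., EVAAS).

```python
# A minimal sketch of the "number of controls" distinction: a growth-style
# regression conditioning only on the prior score versus a VAM-style
# regression that adds demographic and school controls. Simulated data only.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
prior = rng.normal(500, 50, n)               # prior-year score
frl = rng.integers(0, 2, n)                  # hypothetical demographic indicator
school = rng.integers(0, 5, n)               # five hypothetical schools
school_effect = np.array([-5.0, 0.0, 3.0, 6.0, -4.0])[school]
current = 0.8 * prior - 8 * frl + school_effect + rng.normal(0, 20, n)

# Growth-model-style specification: prior score only.
X_growth = np.column_stack([np.ones(n), prior])
beta_growth, *_ = np.linalg.lstsq(X_growth, current, rcond=None)

# VAM-style specification: prior score plus demographic and school dummies.
school_dummies = np.eye(5)[school][:, 1:]    # drop one school as the reference
X_vam = np.column_stack([np.ones(n), prior, frl, school_dummies])
beta_vam, *_ = np.linalg.lstsq(X_vam, current, rcond=None)

print("growth-style coefficients:", beta_growth)
print("VAM-style coefficients:   ", beta_vam)
```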

See more forthcoming…