The Tripod Student Survey Instrument: Its Factor Structure and Value-Added Correlations


The Tripod student perception survey is a “research-based” instrument that states are increasingly adding to their teacher evaluation systems as one of “multiple measures.” While other instruments are also in use, and states and local districts are developing student surveys of their own, this one in particular is gaining popularity, partly because it was used throughout the Bill & Melinda Gates Foundation’s ($43 million) Measures of Effective Teaching (MET) studies. A current estimate (per the study discussed in this post) is that approximately 1,400 schools purchased and administered the Tripod during the 2015–2016 school year. See also a prior post (here) about this instrument, or more specifically about a book chapter on the instrument authored by its developer and the lead researcher in the research surrounding it, Ronald Ferguson.

In a study recently released in the esteemed American Educational Research Journal (AERJ), and titled “What Can Student Perception Surveys Tell Us About Teaching? Empirically Testing the Underlying Structure of the Tripod Student Perception Survey,” researchers found that the Tripod’s factor structure did not “hold up.” That is, Tripod’s 7Cs (i.e., seven constructs including: Care, Confer, Captivate, Clarify, Consolidate, Challenge, Classroom Management; see more information about the 7Cs here) and the 36 items that are positioned within each of the 7Cs did not fit the 7C framework as theorized by instrument developer(s).

Rather, using the MET database (N = 1,049 middle school math class sections; N = 25,423 students), researchers found that an alternative bi-factor structure (i.e., two constructs instead of seven) best fit the Tripod items. These two factors were (1) a general responsivity dimension, onto which (more or less) all items loaded, and (2) a largely unrelated classroom management dimension governing responses to items about teachers’ classroom management. Researchers were unable to distinguish seven separate dimensions across the items.
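To make the bi-factor idea concrete: every item loads on a general factor, while the classroom-management items additionally load on a specific factor. A minimal simulation sketch (with entirely hypothetical loadings, not values estimated from the MET data) shows the resulting correlation pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students = 5000

# Two latent factors, assumed uncorrelated: a general "responsivity"
# factor plus a specific classroom-management factor.
general = rng.normal(size=n_students)
mgmt = rng.normal(size=n_students)

# Hypothetical loadings: items 0-3 load only on the general factor;
# items 4-5 (classroom-management items) load on both factors.
loadings_general = np.array([0.7, 0.7, 0.6, 0.6, 0.4, 0.4])
loadings_mgmt    = np.array([0.0, 0.0, 0.0, 0.0, 0.6, 0.6])

noise = rng.normal(size=(n_students, 6)) * 0.5
items = (np.outer(general, loadings_general)
         + np.outer(mgmt, loadings_mgmt)
         + noise)

corr = np.corrcoef(items, rowvar=False)
# The two classroom-management items correlate more strongly with each
# other than with the purely general items, even though all six items
# share the general factor.
print(round(corr[4, 5], 2), round(corr[0, 4], 2))
```

Under this kind of structure, a model forcing the items into seven distinct factors would find those factors nearly indistinguishable, which is consistent with what the authors report.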

Researchers also found that the two alternative factors noted above (general responsivity and classroom management) were positively associated with teacher value-added scores. More specifically, both factors were positively and statistically significantly associated with teachers’ value-added measures based on state mathematics tests (standardized coefficients of .25 each), although for undisclosed reasons the results apparently say nothing about these two factors’ (cor)relationships with value-added estimates based on state English/language arts (ELA) tests. Consistent with the authors’ findings in mathematics, prior researchers have also found low to moderate agreement between teacher ratings and student perception ratings; hence, this particular finding simply adds another source of convergent evidence.
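For a sense of scale: a standardized coefficient (or correlation) of .25 implies the two measures share only about 6% of their variance, since shared variance is the square of the correlation. A quick back-of-the-envelope check, using the .25 figure reported above:

```python
# Shared variance implied by a correlation (or standardized coefficient):
# r squared gives the proportion of variance two measures have in common.
def shared_variance(r: float) -> float:
    return r ** 2

print(f"{shared_variance(0.25):.0%}")  # .25 -> about 6% shared variance
print(f"{shared_variance(0.30):.0%}")  # .30 -> about 9% shared variance
```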

The authors give multiple reasons and plausible explanations for these findings, which you can read in more depth in the full article, linked above and fully cited below. They also note that “It is unclear whether the original 7Cs that describe the Tripod instrument were intended to capture seven distinct dimensions on which students can reliably discriminate among teachers or whether the 7Cs were merely intended to be more heuristic domains that map out important aspects of teaching” (p. 1859); this is also important to keep in mind when interpreting the study’s findings.

As per study authors, and to their knowledge, “this study [was] the first to systematically investigate the multidimensionality of the Tripod student perception survey” (p. 1863).

Citation: Wallace, T. L., Kelcey, B., & Ruzek, E. (2016). What can student perception surveys tell us about teaching? Empirically testing the underlying structure of the Tripod student perception survey. American Educational Research Journal, 53(6), 1834–1868. doi:10.3102/0002831216671864


3 thoughts on “The Tripod Student Survey Instrument: Its Factor Structure and Value-Added Correlations”

  1. My opinion. The 7C structure is about marketing. I looked at the Tripod items before these were totally commercialized. I did not do a detailed item analysis, but a good teacher would be sage on the stage and close to a charming “helicopter hoverer,” assigning and checking on homework, etc. As a worker in the visual arts I could see no way that the survey would value the variegated styles of teaching and intentional predicaments in studio work that make such teaching lively and memorable. The correlations with VAM? Why bother? That is another case of aggrandizing test scores. I am also up to my eyebrows with studies that keep mining the MET database as if there are good educational reasons for doing so. I think it is a matter of convenience, and truly a convenience sample given the failed attempt to make random assignments part of the design of the study. Sorry to be so critical. Up to my ears with Gates-funded forays into research with economists in charge.

    • Fair enough, and I hear you!
      The test/VAM scores are NOT that around which all else should revolve. Likewise, as David Berliner put it in an email exchange on this topic, most of “…these observational instruments…(actually almost all), correlate under .30 with a standardized test score, VAM or not VAM. [His] point is, always, that no matter how reliably you observe a teacher or test their kids, no more than 10% of the thingamajig they each are measuring is in common.”

      Thus, one or both of these measures of teacher quality is no good!
