Class Attendance and Students’ Evaluations of Teaching: Do No-Shows Bias Course Ratings and Rankings?
In: Evaluation Review, Vol. 36, No. 1: pp. 72-96
Background: Many university departments use students' evaluations of teaching (SET) to compare and rank courses. However, absenteeism from class is often nonrandom, and SET for different courses might therefore not be comparable.
Objective: The present study aims to answer two questions. Are SET positively biased due to absenteeism? Do procedures that adjust for absenteeism change course rankings?
Research Design: The author discusses the problem from a missing data perspective and presents empirical results from regression models to determine which factors are simultaneously associated with students' class attendance and course ratings. To determine the extent of these biases, the author then corrects average ratings for students' absenteeism and inspects the changes in course rankings resulting from this adjustment.
Subjects: The author analyzes SET data at the individual level. One or more course ratings are available for each student.
Measures: Individual course ratings and absenteeism served as the key outcomes.
Results: Absenteeism decreases with rising teaching quality. Furthermore, both factors are systematically related to student and course attributes. Weighting students' ratings by actual absenteeism leads to mostly small changes in ranks, which follow a power law. Only a few average courses are disproportionately influenced by the adjustment. Weighting by predicted absenteeism leads to very small changes in ranks. Again, average courses are more strongly affected than courses of very high or low quality.
Conclusions: No-shows bias course ratings and rankings. SET are better suited to identifying high- and low-quality courses than to determining the exact ranks of average courses.
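The weighting adjustment described in the Results can be illustrated with a minimal sketch. The function below is a hypothetical example (names and data are invented, not taken from the study) of one common missing-data correction, inverse-probability weighting: each observed rating is weighted by the inverse of the student's estimated attendance probability, so that students who rarely attend, and whose ratings are underrepresented, count more toward the course mean.

```python
# Hypothetical illustration of adjusting a course's mean rating for
# absenteeism via inverse-probability weighting. The function name,
# data, and probabilities are invented for this sketch.

def adjusted_mean_rating(ratings, attendance_probs):
    """Weight each observed rating by the inverse of the student's
    estimated probability of attending class (and hence of rating)."""
    weights = [1.0 / p for p in attendance_probs]
    total = sum(w * r for w, r in zip(weights, ratings))
    return total / sum(weights)

# Low-attendance students receive larger weights, counteracting the
# positive bias that arises when dissatisfied students stop showing up.
raw_ratings = [5, 4, 4]          # ratings from students who attended
attendance = [0.9, 0.5, 0.8]     # estimated attendance probabilities
print(adjusted_mean_rating(raw_ratings, attendance))
```

If all attendance probabilities are equal, the adjusted mean reduces to the raw mean; otherwise the ratings of frequent absentees are upweighted, which typically lowers the course mean when less satisfied students attend less often.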