The number of students who consistently complete teacher evaluations for each class after every quarter is negligible at best. We have all, at one point or another, turned a blind eye to this university’s 12 emails reminding us to complete the evaluations, and though we lament betraying our inner idealists, deleting the reminder tends to be the most common answer to this dilemma.

Non-responses to evaluations are just one of the many issues that contribute to the inefficacy of students’ evaluations of teachers. In addition to errors in statistics and the subjectivity of those who interpret evaluations, students are not equipped with the skills to accurately judge how much they’ve learned relative to how they were taught. Consider a student who has just received a failing grade: that student is unlikely to be the best judge of a professor’s ability to teach. This is not an argument over whether we should account for students’ voices, but a questioning of the reliability of teacher evaluations when conducted by students. As it is currently formatted, the Evaluation System for Courses and Instruction (ESCI) does not judge a teacher’s abilities effectively.


The ESCI has been in place for over thirty years at UC Santa Barbara. The format of the evaluations varies from department to department, but the system is primarily structured to “provide the essential feedback on our courses to the faculty so that they have the information they need to continually improve the courses that they teach,” according to UCSB’s ESCI website.

We, as educated college students, should invariably trust the numbers — right? Because, after all, that’s what rational human beings do, and there is no way our biases could ever affect our judgment. Keep telling yourself that, just like I will.

In his book “How to Lie with Statistics,” Darrell Huff confronts our ardent claims of unbiased rationality with this sad truth: “If you can’t prove what you want to prove, demonstrate something else and pretend they are the same thing. In the daze that follows the collision of statistics with the human mind, hardly anyone will notice the difference.” Too often, we praise those who can provide us with statistics before we even know what they really mean, or whether they mean anything at all.

ESCIs do provide accurate statistics, but that does not mean the statistics are interpreted correctly. When only a professor’s average score is reported, we cannot account for the variability in scores. On a one-to-seven scale, a professor whose ratings are split between ones and sevens could have the same average as a professor rated entirely with fours, even though their actual ranges of scores and teaching styles likely look incredibly different. Providing a standard deviation alongside these scores helps address this issue, but it doesn’t necessarily get the attention it should.
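To make that concrete, here is a minimal sketch in Python (the ratings are invented for illustration, not drawn from real ESCI data) of two professors whose identical averages conceal very different distributions:

```python
# Two hypothetical sets of 1-7 ratings with the same mean but very
# different spreads: the average alone cannot tell them apart.
from statistics import mean, stdev

polarizing = [1, 1, 1, 7, 7, 7]   # students either loved or hated the class
consistent = [4, 4, 4, 4, 4, 4]   # every student felt the same way

print(mean(polarizing), round(stdev(polarizing), 2))  # 4 3.29
print(mean(consistent), stdev(consistent))            # 4 0.0
```

Reporting the standard deviation next to each mean is exactly what would separate these two cases.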


Aside from the averages, the one-to-seven scale in use is what is known as an ordinal scale: the numbers are ordered, but the gaps between adjacent numbers are not necessarily equal. In terms of ratings, a seven is greater than a six; however, what separates a six from a seven varies from student to student.

Though the averages and scales are important points, they pale next to how few people actually fill out the evaluations. Think about it this way: even if 70 percent of students fill out an evaluation (a relatively generous estimate), the 30 percent who did not respond could be enough to radically change a teacher’s rating. Moreover, the decision to grant tenure or a promotion can rest on the difference between being rated below average and average on a seven-point scale.
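Some rough, invented numbers (a back-of-the-envelope sketch, not ESCI’s actual methodology) show how much room that missing 30 percent leaves:

```python
# A hypothetical 100-student class on a 1-7 scale: 70 respond, 30 stay silent.
# The silent students' possible ratings bound where the true average could be.
respondents, silent = 70, 30
observed_mean = 5.5  # invented average among the 70 respondents

for missing_mean in (1.0, 7.0):
    true_mean = (respondents * observed_mean + silent * missing_mean) / 100
    print(f"non-respondents at {missing_mean}: true mean = {true_mean:.2f}")
# non-respondents at 1.0: true mean = 4.15
# non-respondents at 7.0: true mean = 5.95
```

A possible swing of nearly two points on a seven-point scale is easily the difference between an “average” and a “below average” professor.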

This is comparable to surveying ten people about their love for Subway and then expecting those ten to represent UCSB’s student body in its entirety (apparently, whoever was in charge of that poll must have talked to ten die-hard fans of Subway, seeing as we have a whopping three on campus). Obviously, my analogy is not a mathematical equivalent, but it highlights something important: ESCIs are far from representative. The analogy also holds in terms of who responds to these surveys and why; students normally respond when they feel strongly one way or the other.

We are grasping at straws here. Even if the statistics were properly interpreted, who is actually using these evaluations? Have you ever wondered why you keep hearing about the same professor who has a reputation for not caring about their classes? Why isn’t anything being done about those professors? *Cough cough* tenure *cough cough* (we’ll save that for another article).

There must be some level of accountability. Yes, students should fill out evaluations — but I also understand why so few do when they believe nothing is going to change. Evaluations are widely used in academic personnel decisions (hiring, firing, tenure and other promotions) to weigh a teacher’s qualifications, yet ESCIs are not an accurate measure of teaching quality. For the sake of current and future students at UCSB, further investment in professor evaluations is vital.

Aryana Kamelian holds the university accountable for creating a better teacher evaluation system.
