No, not the planet. It’s cool.
Mercury is the name for McGill’s online student course evaluation system. For most of my career as a teacher, there has been a day at the end of the term when students have filled out course evaluation forms, commenting on the instruction, their impressions of the course, and other related matters. Sometime after the end of term I’d get some aggregate numerical data (e.g., “rate this prof on a scale of 1-5”) and a slew of anonymous comments. The virtue of this arrangement was that because you did it in class, you got a wide swath of students, even if you didn’t have 100% attendance on a particular day.
These were incredibly useful to me, though perhaps not for the reasons that students think when they fill them out. I looked for patterns. If one person complained about some aspect of the course out of 200 or 250 students, it was immaterial unless it was something I hadn’t thought of. It was when I got the same comment over and over that I could see something needed to be changed. Or that I would simply have to warn students about an aspect of my course they might not like.
For instance, in my intro course, I warn students that they will likely hate the pop quizzes. True to form this term (as it is every term), under "what could be improved about this course," a number of students said "get rid of the pop quizzes; I didn't feel like I could skip the readings before class." Or some other complaint about being coerced to do the readings. Good! I want them to feel coerced. The irony is that the pop quiz grade is 5% of the semester grade and we drop the lowest two, which means you could skip the readings for two weeks, probably get lucky, and nothing would happen to your semester grade.
The one term I didn’t use pop quizzes in my intro course, the number of students who showed up having done the reading fell off dramatically. Ergo, the value for learning outweighs whether or not students like it.
Anyway, that was an excursus. The problem is the evaluations are now entirely online. The university reminds students to fill them out, and instructors are encouraged to encourage students to fill them out, but come on. I am not going to stand up in a lecture and beg my students to fill out evals. It’s demeaning all around. Imagine if they had to beg me to get around to grading their papers.
The results are pretty bad. Where I used to have a 60-80% response rate because my lectures have decent attendance (okay, in part because of those coercive pop quizzes), I now get about a 30% response rate. And you know in the comments section it's bimodal. It's all "Sterne is the best instructor ever" or "this class is way too hard for an intro course." True, my aggregate numerical ratings remain about the same (maybe the malcontents get a little more weight, but it doesn't really matter), but I lose the vast middle who might give me a clue as to whether a problem exists or not. This is not just bad for me, it's bad for the students, as I'm left wondering when I see a complaint whether it's actually indicative of a problem or just a malcontent. The same could be said for the praise, though of course I want to believe all the praise.
The problem gets worse if you consider it administratively. What if student ratings were used in assessing merit raises? (Evals are not used for such purposes in our department and I don’t believe they should be, though I know they are used in other departments). Or if you want to put someone up for a teaching award? Or if a student complains about an instructor? The evidence is thinner all around.
The solution is simple: require students to submit an evaluation of the course by a certain date in order to receive their semester grades, and give them an option to opt out that takes almost as much time as filling out the form (so if they really care, they can opt out, but they won't, because it's easier not to). I guarantee you that you'd get a higher response rate. But paper evals had a real advantage over electronic ones because of how they were administered — especially if they came at the beginning of class (which I always requested). Students had to sit there anyway, so they would likely write something in the comment section. The result is that I had lots of bad handwriting to decrypt, but much, much more information upon which to make decisions about changing my course for the following year.
We still use paper evals, which are fine. But the one other advantage of online evals is that students can fill them out after the full semester is done, not in the midst of finals-week stress. Ideally, students can fill them out after a bit of time to reflect on the semester as a whole, not focus on the micro-moments of the end of the semester. But I agree, some incentive (like grade reporting) is needed to promote participation.
have we reached the point yet where we can require the students to bring their laptops to class and fill out the forms while they sit there, same as before? we must be close… that seems to me to be the best solution to your problem… perhaps we can get MIT to donate 250 XO laptops to you for the purpose 😉
Funny, I was contemplating an experimental 2-week laptop ban. Requiring them all to bring laptops on another day would just make it even more of a messed up control issue.
We could, of course, also do away with student evaluations entirely and find better ways to assess our teaching.
We moved to online evals three years ago, and it's been an unremitting fucking disaster. I don't even bother reading them any more, because the response rate varies between 10% and 30%, depending on circumstances. We argued about this for an entire year within Faculty Senate; in spite of near-universal faculty disapproval, the administration imposed the system anyhow. The questions are fewer in number, and they're so vaguely phrased as to be crap.
We might as well be using RateMyProfessor.