It's a pattern seemingly as old as education itself: Students do the work and teachers grade it.
At the University of Utah and schools throughout the country, though, students turn the tables briefly by evaluating their courses and professors, giving feedback that can influence the trajectory of a career.
But a report presented at the U.'s Academic Senate this week calls into question whether those evaluations are truly an indicator of good teaching. It found most students (58 percent) mark the instructor forms with the same answer from top to bottom, a practice that, according to the report, "casts substantial doubt on the character of student rankings."
Analysis of the results also raises the concern that students might slam classes that "are more difficult, require better preparation or take students out of their comfort zone," it said.
The U.'s Academic Senate decided Monday to keep discussing the student evaluation system, which seems to satisfy no one as is: While professors worry the evaluations reward popularity rather than rigor, students say they feel like the scores, and sometimes their voices, aren't taken seriously.
"Some students block vote because they already feel like they don't matter," said Wychester Whetten, a 21-year-old senior chemical engineering major.
Overall, the ratings are very high: 93 percent of block ratings are 5s or 6s on a 6-point scale, said the report's author, communication professor James Anderson. But that means "very small differences become very important," said Anderson, who reviewed more than 76,000 evaluations submitted in 2009 and 2010.
Those dips tend to occur more often for foreign male graduate student teaching assistants, as well as women in traditionally male-dominated fields, or men in fields often populated by women, Anderson said. Courses in science, diversity or quantitative studies also scored low.
That's a concern for physics professor David Ailion.
"You inevitably invite lower scores because the class is too hard," he said. "If you're a young professor going up for tenure, it can be a damaging thing."
But students say it's unfair to assume they ding a professor only because the coursework is challenging.
"If you grade a professor low, they might think it's because you just didn't do the work," rather than legitimate criticism, said Taylor Thompson, a 22-year-old senior biology and environment major.
The student evaluations matter because a third of U. professors' performance reviews are based on how well they teach, and some departments rely on the evaluations for 90 percent of that section, potentially affecting raises, promotions and tenure.
But over-reliance on the evaluations is a mistake, Anderson said. "As long as we claim we are measuring instructor competency, these measures are invalid. Period," he said.
The U. should keep doing student evaluation surveys, he said, but they should be seen as user-experience or opinion surveys rather than a reliable gauge of skill.
Students said they've noticed the impact of the evaluations varies widely from one department to the next. Whetten, for example, said she had a professor whose class markedly improved after a batch of bad reviews.
"He took the concerns and asked other professors how he could do better," she said, making lessons clearer and grading more consistently.
Student evaluations aren't a new idea, but they came into wider use at the U., and around the country, in the 1970s, according to Anderson. Their profile has gone up in recent years with the rise of websites such as ratemyprofessors.com.
At the U., students can get their grades early in exchange for filling out the evaluations online, a practice Anderson writes may be "too sweet a carrot."
Instead, he recommended that students be allowed to opt out, that the evaluation data be analyzed more regularly, and that students and teachers start talking about the reports.
"We're still stuck on doing the same old things the same old way," he said. "Let's think outside the box."