Columnist Annie Tao arrives at the right destination,
urging students to take seriously the “privilege” of
evaluating teaching, but she makes some questionable turns
in getting there (“Course evaluations flawed,
unreliable,” Oct. 25).
She should be forgiven these errors, however, because this is a
difficult subject, made more difficult by a charged emotional
atmosphere. Some faculty feel they are judged unfairly, and they
have too much at stake in the outcome.
This provides all the more reason for students to take
evaluations seriously.
We should consider some of the qualifying issues that underlie
Tao’s piece.
First, while it is widely accepted that student evaluations of
teaching can be both reliable and valid, there is almost no support
for student efforts to judge the content of a class.
Peer review by other faculty is the most common, and preferred,
method of evaluating content, and Tao’s hope to influence
content in future classes is not likely to be fulfilled.
She has every right to comment on her perception of the
organization of the content, the pace at which it is presented, and
even the manner in which it is offered, but she is not in a
position to determine the content itself.
I’m not surprised that she has found a study that suggests
instruction style is more important than substance in evaluations.
As Peter Seldin of Pace University often remarks, there are over
2,200 published articles on student evaluations, and people can
find support for almost any viewpoint they want.
The question is not what one article suggests (especially one
with a questionable research design at its center), but what the
preponderance of evidence suggests.
Are students immune to deliberate attempts to influence their
responses? Of course not. But it would be unwise to assume that
Wendy Williams and Stephen Ceci, the researchers responsible for
the study, controlled every variable, or that the difference is
attributable to style alone.
It is further unlikely that they could sustain these changes
over time. Almost every systematic attempt to evaluate teaching
urges that faculty provide evidence of their teaching from multiple
courses over a period of years. Ideally, student evaluations are
only one form of data in such a system.
A larger context, and not one isolated class, provides a better
basis for looking at someone’s true teaching performance.
Also, it is reasonable to ask why so many research studies fail
to produce consistent results. The answer, of course, is that they
do not start from a consistent basis: the forms, the wording, the
class sizes, the characteristics of the students, and so on all
vary.
Instead of looking at a single study, we must look for
reasonable efforts to design valid studies, accept that a perfect
experimental model is unlikely, and consider where the mass of the
evidence leads us.
Finally, I should reveal that I share Tao’s concern about
the adequacy of the present evaluation forms at our campus.
An effort to significantly improve the form a few years ago was
met with widespread interest from faculty members, especially
because the opportunity for qualitative responses had
been increased. Most professors read these comments with great
interest.
But a few professors became irate that these changes had been
made to the form and argued that they might materially damage their
teaching evaluations. The effort was dropped for technical
reasons, but the motivation for improvement endures.
Chancellor Albert Carnesale has urged us to seek excellence in
all aspects of the university, and this is an area where we can do
better. I invite Tao to join the effort.
Loeher is the associate vice provost in the Office of
Instructional Development.