At SFU, the process used for student evaluation of courses and instructors hasn’t changed substantially in 30 years. Corinne Pitre-Hayes is on her way to ending that dubious streak.
As leader of SFU’s Teaching and Course Evaluation Project (TCEP), Pitre-Hayes has a straightforward assignment: to recommend a replacement for the instrument (the survey form) the university uses for student evaluations of teaching and courses, and to develop a best-practice guide for using and interpreting the evaluation data.
The assignment was handed to Pitre-Hayes and her team by Jon Driver, Vice-President, Academic, in December 2011 in response to recommendations by the Senate Committee on University Teaching and Learning and the Task Force on Teaching and Learning.
Pitre-Hayes is aware that many members of the academic community view student evaluations – both the data gathered and the way they are used – with skepticism, and she readily enumerates the sources of concern, including doubts about reliability and validity, suspicions about bias, and worries about academic freedom. Her response, in a word, is research.
“There’s a lot of evidence in the research about the concerns that most people talk about,” she says. “These things have been researched for more than 50 years.”
She cites the common concern that evaluation results will be used inappropriately for tenure and promotion decisions. “Such decisions should not be made on the basis of teaching and course evaluations alone. That’s a key finding that surfaces repeatedly in the research. The results should be combined with other evaluative processes.”
More useful feedback
But for Pitre-Hayes, providing a better instrument and best-practice guide is only “square one.” What really excites her is the possibility of enabling faculty members and instructors to make greater formative use of the evaluation data.
“There’s this enormous opportunity that relates to teaching and learning,” she says. “We have a bunch of data here that could be incredibly useful to instructors and that we could be making constructive use of.” It’s a message she has been spreading at community consultations with administrators and faculty members in various Faculties, beginning with Education in May.
“I would like to plant the seeds for that shift [in thinking]. The key will be putting infrastructure in place that enables this to happen.”
Pitre-Hayes imagines a tool that will give instructors more control and flexibility: “I can envision instructors potentially using the instrument and the system for the purpose of getting specific student feedback regularly or on an ad hoc basis at various points of the year so that they can experiment with things in advance, during, and at the end of the course.”
The vision of an evaluation tool that responds to the requirements of individual instructors and departments will shape the recommendations of her project team: “The instrument needs the flexibility to be fine-tuned so that it’s useful for a wide variety of courses with a range of formats.”
It’s all part of her effort to move in the direction of formative uses of student evaluations in a way that she hopes instructors themselves will embrace.