Student Course Evaluations – A UOITFA Position Paper

Student Course Evaluation Working Group
UOITFA Position Statement
Summer 2016

The UOITFA has identified the use, storage and value of student opinion surveys (SOS) as a significant priority. A number of studies have been produced on this topic; while debate abounds on their usefulness as a measure of teaching effectiveness, virtually all scholarship supports the view that they are most effective when used in combination with other measures of teaching effectiveness.[1] The Association thus embraced the opportunity to explore this important issue further through the establishment of a joint working group to “review the current tool and its use and provide recommendations on amendments, if any, and how the instrument should be used and recorded for various purposes.”[2]

These SOS were formalized by Academic Council in the January 2004 policy 8.4 on Course Evaluations,[3] and little has changed since that time. The opening paragraph of the Preamble to that policy is worth noting in terms of the intended purposes of these evaluations (emphasis added):

The main purposes of seeking student evaluation of teaching are to assist faculty members in monitoring and developing their effectiveness as teachers and to assist faculties in monitoring the quality of their curricula. Important additional purposes include identifying professional development needs, assisting in decisions regarding tenure and promotion, assisting in identifying exceptional teachers for teaching awards and documenting exceptional teaching.

Indeed, the only significant change to the SOS has been bringing the surveys in-house, which does allow for more control over them in terms of both content and use of the results.

The University and Association were unable to reach agreement in bargaining on student opinion surveys. The Association’s research into this topic indicates it is preferable for student opinion surveys to be used primarily as a tool of self-reflection and self-improvement; where review committees do have access to this type of information, it is commonly in aggregated statistical form. A number of studies highlight the many misrepresentations, biases and inequities that can be reproduced through such forums and anonymous commentary. As Philip Stark and Richard Freishtat argue in their statistical perspective on student ratings of teaching, relying on averages of teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned. They point to strong evidence that student responses do not measure teaching effectiveness, even though such responses can be valuable if the surveys ask the right questions, report response rates and detailed score distributions, and are balanced by a more holistic approach drawing on a variety of other sources and methods to evaluate teaching.[4] Another study explores how survey responses are not always products of pure truth, showing that students can ignore or falsify answers in light of variables they consider more important, give subjective impressions in response to objective questions, and may give purposefully misleading and false responses.[5]

Of even greater concern is the number of studies showing that student opinion surveys can be highly influenced by factors such as gender, sexuality and racialization, rather than by learning outcomes and teaching effectiveness.[6] Indeed, anonymous student comments can include completely inappropriate content amounting to human rights violations.

Studies also indicate student opinion surveys can be a much better measure of student expectations and/or grade inflation than teaching effectiveness. As Scott Carrell and James West illustrate, student evaluations reward professors who increase short-term achievement, not deep, long-term learning, calling into question the value and accuracy of using student evaluations in promotion and tenure decisions.[7] Another analysis argues such “methods fail to encourage, guide, or document teaching that leads to improved student learning outcomes.”[8] And as one anonymous academic aptly expresses it, creativity, a love of knowledge and a thirst for discovery are not easily measured.[9]

Along with a review of some of the general literature, the Association has spent some time surveying its members on student opinion surveys at UOIT, which culminated in a teaching evaluation report and recommendations in April 2014[10]. We hope this, along with an understanding of the literature in this area, will be a good springboard from which we can navigate the muddy waters of student opinion surveys and find agreement on best practices which will foster a fair, reasonable and equitable use of such opinion surveys in combination with other measures of teaching effectiveness.

[1] “Do Student Evaluations of Teaching Really Get an ‘F’?” Rice Centre for Teaching Excellence.

[2] Letter of Understanding #4 re Student Course Evaluations Working Group, UOIT-UOITFA Tenured and Tenure-Track Collective Agreement.

[3] 8.4 Course Evaluations, approved by UOIT Academic Council January 2004.

[4] Philip B. Stark and Richard Freishtat “An Evaluation of Course Evaluations,” University of California, Berkeley (26 September 2014).

[5] Dennis E. Clayson and Debra A. Haley, “Are Students Telling Us the Truth? A Critical Look at the Student Evaluation of Teaching,” Marketing Education Review 21:2 (December 2014), 101-112.


[6] Anne Boring, Kellie Ottoboni and Philip B. Stark, “Student evaluations of teaching are not only unreliable, they are significantly biased against female instructors,” LSE Impact Blog (4 February 2016); see also Colleen Flaherty, “Bias Against Female Instructors,” Inside Higher Ed (11 January 2016); Lillian MacNeil, Adam Driscoll and Andrea N. Hunt, “What’s in a Name: Exposing Gender Bias in Student Ratings of Teaching,” Innovative Higher Education 40:4 (August 2015), 291-303; Silke-Maria Weineck, “Viewpoint: Student evaluations – treat with caution,” Michigan Daily (6 December 2015).

[7] Scott E. Carrell and James E. West, “Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors” Journal of Political Economy 118:3 (2010), 409-432.

[8] Dan Berrett, “Can the Student Course Evaluation Be Redeemed?” The Chronicle of Higher Education (29 November 2015).

[9] Anonymous Academic, “Our obsession with metrics turns academics into data drones,” Guardian (27 November 2015).


[10] UOITFA Teaching Evaluation Report, April 2014, available online: https://uoitfa.wpengine.com/course-evaluations/
