Teaching Effectiveness

School of Science, Technology, Engineering & Mathematics Evaluation of Teaching Effectiveness

1. PURPOSE & CONTEXT

This document defines the School of STEM's policies and practices for assessing teaching effectiveness, in compliance with two sections of UW Faculty Code Chapter 24. Section 24-32, Subsection C describes the scope of teaching, the elements of effective teaching, and the evidence that should be included in evaluating teaching effectiveness. Section 24-57, Subsection A defines the minimum requirements for student feedback and peer teaching evaluations.

The policies and guidance in this document are intended for all faculty supplying information on their teaching effectiveness for personnel decisions, as well as for reviewers evaluating that teaching effectiveness.

2. ASSESSING TEACHING EFFECTIVENESS

Anyone reviewing the teaching effectiveness of a School of STEM faculty member should begin by reading the faculty member’s narrative. This narrative provides the context needed to understand the artifacts submitted by the faculty member, including peer observations and student evaluations of teaching (SETs). SET refers to any school-, campus-, or tri-campus-approved method of collecting student feedback on teaching. The School of STEM expects its faculty to engage in a continuous process of assessing and improving their teaching over time. As such, reviewers should look for improvement over the time period under review. The rest of this document provides details about the narrative reflection, peer observations, SETs, and other artifacts that may be provided.

3. NARRATIVE REFLECTION

The School of STEM expects faculty members to actively improve their teaching on an annual basis by understanding the scholarship associated with their teaching, by applying best practices appropriate to their content domain, goals, teaching styles, and inclusive pedagogy, and by using evidence as a basis for reflection on improving their professional practice.

The School of STEM emphasizes narrative reflection in assessing teaching effectiveness for personnel decisions. We ask colleagues to discuss aspects of their teaching such as (a) pedagogical goals as they relate to student learning outcomes, (b) what teaching practices they employ to achieve those goals, (c) what evidence they use to assess teaching effectiveness, (d) how their students engage with the course content, and (e) how they will adapt and improve in the future. Faculty are encouraged to develop rich, course-tailored sets of evidence (student feedback, assignments, activities, etc.), discuss how that evidence demonstrates defined student learning outcomes, and describe how they use the evidence to modify their teaching and pedagogy.

We encourage faculty to participate in mentorship, learning communities, collaboration, and scholarship to improve teaching at the personal, division, school, and professional level. We recommend faculty share experiences, methods, and best practices and report how these interactions have helped them improve their teaching.

The School of STEM expects reviewers to focus their evaluation of teaching effectiveness primarily on the contextualized narrative reflection, evidence cited, and the elements of assessing effective teaching as suggested in Section 24-32 Subsection C.

4. COLLEGIAL TEACHING EVALUATIONS

The School of STEM requires faculty to have collegial evaluations of teaching effectiveness, in alignment with UW Faculty Code Chapter 24, Section 24-57, which states that evaluations shall be conducted prior to recommending any renewal of appointment or promotion of a faculty member.

Peer observations of teaching are opportunities to discuss teaching practices more deeply with colleagues and to support improvement of and reflection on teaching. UW Faculty Code Section 24-32, Subsection C indicates that “The assessment of teaching effectiveness shall include student and faculty evaluation.” Peer evaluators are requested to include in their reports a description of the teaching methods and activities observed, a summary of student engagement with the material and the instructor, and any collegial recommendations for improvement. The School of STEM recommends that peer observations include pre-work in which the peer observer closely examines course materials and discusses class expectations and context with the instructor.

The School of STEM practice is to have reports of peer observations returned to the instructor in a timely manner. It is recommended that the faculty member meet with the peer observer to discuss findings and suggestions for enhancing teaching. It is the instructor’s responsibility to include peer observations as needed in their materials for personnel decisions.

5. STUDENT FEEDBACK

The School of STEM expects faculty members to gather and reflect on student feedback in their courses to improve their teaching effectiveness. We recognize that student evaluations of teaching (SETs) are highly subjective and should never be treated in a decontextualized or uncritical manner. To help reviewers use the information in SETs for personnel decisions, we provide observations and recommendations based on a review of measurement and equity bias research (Kreitzer and Sweet-Cushman, 2022), summarized in Table 1 below, to guide reviewers as they consider student feedback.

a. Student Evaluations Indicate Student Perceptions

Student evaluations represent students’ perceptions of, and experiences in, a course. Current scholarship indicates that SET scores are highly correlated with factors such as students’ reasons for enrolling in the course, their expected grades, class size, and response rate, and hence do not always accurately reflect student learning or teaching effectiveness (Hornstein 2017; Uttl et al. 2017; Boring et al. 2016; Spooren et al. 2013; Kreitzer and Sweet-Cushman 2022). Nevertheless, we value the voice of students in assessing their learning experience and expect faculty to be responsive to students’ perceptions of the courses they teach.

b. Bias in Student Evaluations

SETs may be biased against women, under-represented minorities, and LGBTQ+ faculty depending on the instrument used, the delivery method, and how they are employed. Gender bias may prevent SETs from measuring teaching effectiveness accurately and fairly. SETs are more strongly related to an instructor’s perceived gender and to students’ grade expectations than they are to learning, as measured by performance on anonymously graded, uniform final exams (Boring et al. 2016). These biases persist across disciplines and instructor characteristics. Professors who present as white receive higher SETs than non-white faculty, and non-native-speaking instructors receive lower SETs than native speakers. While this is not true of every individual case, pervasive patterns of bias exist. This bias sometimes takes the form of insults, abusive comments, and ad hominem attacks expressed in racist or misogynist terms in written comments. This manifestation of bias adds an extra emotional burden for colleagues who are put in the position of having to experience this material and to include it in materials for personnel decisions.

While the School of STEM welcomes constructive feedback on teaching, it also encourages chairs to share with all faculty — but especially faculty from groups historically excluded from STEM — School of STEM procedures for dealing with threatening, racist or abusive student comments. Comments that are clearly based on gender or gender identity, race, ethnicity, primary language, or other known bias will not be considered in any review of faculty for any process.

c. Quantitative Data in Student Evaluations

SET scores also have several statistical problems related to sample size and statistical power. If not corrected for class size and response rate, comparisons of scores between different classes taught by the same instructor can imply differences in teaching effectiveness that are not statistically valid.
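To illustrate the sample-size point, consider a rough, hypothetical sketch (not School policy, and assuming SET scores can be treated as simple sample means with standard deviation s on a 5-point scale): the sampling uncertainty of a class mean shrinks only with the square root of the number of respondents n.

% Illustrative back-of-the-envelope calculation; the values of s and n below are
% hypothetical and are not drawn from any School of STEM data.
\[
  \mathrm{SE}(\bar{x}) \approx \frac{s}{\sqrt{n}},
  \qquad
  \mathrm{SE}(\bar{x}_1 - \bar{x}_2) \approx \sqrt{\frac{s_1^{2}}{n_1} + \frac{s_2^{2}}{n_2}}.
\]
% Example: with s = 1.0 and n = 12 respondents per section, each class mean has
% a standard error of about 0.29, and the difference between two section means
% has a standard error of about 0.41; a gap of 0.3-0.4 points between the two
% sections is therefore indistinguishable from sampling noise, before even
% accounting for non-response bias at low response rates.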

Given the high sensitivity of SET data to student response rate, we recommend that faculty monitor student response rates and encourage students to participate. Many faculty have implemented various incentives to encourage students to respond. We find this practice acceptable provided it is not punitive and maintains anonymity. We also encourage faculty to use scheduled class time for students to complete the online forms.

d. Use of Student Evaluations

Taking the above factors into consideration, the School of STEM does not use SETs to draw conclusions about any faculty member’s teaching effectiveness without considering other sources of evidence (narrative reflection, collegial teaching evaluations, assessment of student learning outcomes, etc.). This applies to faculty members who receive both “high” and “low” student evaluation scores. In particular, SET scores should not be used to compare teaching effectiveness between faculty members (e.g., when comparing student learning outcomes across multiple sections of the same course), but rather should be used to demonstrate the trajectory of teaching effectiveness over time for a single faculty member. Regardless of the instrument used (e.g., IASystem, SGID, SALG, etc.), SET scores and student written comments should be contextualized within the entire history of teaching by the faculty member under evaluation. Table 1 summarizes some of the acceptable and unacceptable uses of SET data.

Table 1. Acceptable and unacceptable uses of SETs for personnel processes.

Numerical values (such as OSR and CEI values from IASystem)
  Acceptable use: Demonstrate the teaching effectiveness trajectory of a single faculty member over time; identify areas of growth for faculty members.
  Unacceptable use: Compare faculty members to one another; determine that a teacher is effective or not effective without further evidence.

Written comments
  Acceptable use: Determine trends in overall student perceptions; identify areas for mentoring opportunities.
  Unacceptable use: Reference isolated comments to determine overall teaching effectiveness; utilize any comments based on gender or gender identity, race, ethnicity, primary language, or other known bias.

(After Kreitzer and Sweet-Cushman, 2022)

e. Collection Expectations

UW Faculty Code Section 24-57, Subsection A requires that each faculty member obtain student evaluations of teaching, using the standardized instrument or an approved alternative, from at least one course each academic year; however, the School of STEM values giving students a way to provide feedback in every course. Therefore, the School of STEM expects faculty to collect anonymous student feedback each time a course is taught (excluding sections with fewer than five students and individualized student courses). Approval of additional methods by the School of STEM will be based on the use, validity, and reliability of the SET method.

f. Reporting Expectations

Reflection on formal evidence, such as SET data, plays an important role in the assessment of a faculty member’s teaching effectiveness in personnel decisions, so faculty should be prepared to submit such evidence. Faculty may also be expected to share collected evidence with others providing mentoring and professional development. Since reporting requirements may be defined in other School of STEM policies and guidelines and may vary over time, they are not provided here.

6. OTHER EVIDENCE OF TEACHING EFFECTIVENESS

The School of STEM Creating Promotion and Tenure Dossiers document provides guidance on artifacts that may be included in promotion and tenure dossiers to demonstrate teaching effectiveness. Faculty may choose to incorporate any evidence of teaching effectiveness for personnel decisions as they see fit.

7. REFERENCES

  • Boring, Anne, Kellie Ottoboni, and Philip Stark. 2016. “Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness.” ScienceOpen Research.
  • Fich, Faith E. 2003. “Are Student Evaluations of Teaching Fair?” Computing Research News 15 (3): 2, 10.
  • Hornstein, Henry A. 2017. “Student Evaluations of Teaching Are an Inadequate Assessment Tool for Evaluating Faculty Performance.” Cogent Education.
  • Kreitzer, Rebecca, and Jennie Sweet-Cushman. 2022. “Evaluating Student Evaluations of Teaching: A Review of Measurement and Equity Bias in SETs and Recommendations for Ethical Reform.” Journal of Academic Ethics 20 (1): 73–84.
  • “Student Assessment of their Learning Gains (SALG).” https://salgsite.net/.
  • Spooren, Pieter, Bert Brockx, and Dimitri Mortelmans. 2013. “On the Validity of Student Evaluation of Teaching: The State of the Art.” Review of Educational Research 83 (4): 598–642.
  • Uttl, Bob, Carmela White, and Daniela Wong Gonzalez. 2017. “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning Are Not Related.” Studies in Educational Evaluation 54: 22–42.
  • “Assessment of Teaching Effectiveness.” UW Faculty Code, Chapter 24, Section 24-57, Subsection A. https://www.washington.edu/admin/rules/policies/FCG/FCCH24.html#2457
  • “Scholarly and Professional Qualifications of Faculty Members.” UW Faculty Code, Chapter 24, Section 24-32, Subsection C. https://www.washington.edu/admin/rules/policies/FCG/FCCH24.html#2432

Version 1 Approved by the STEM Faculty Council on April 23, 2021.
Version 2 Approved by the STEM Faculty Council on May 14, 2021.
Version 3 Approved by the STEM Faculty Council on May 21, 2021.
Updated Version 3 Approved by the STEM Faculty Council on November 16, 2021.