by Dennis Selder
A wonderful example of Foucault’s observations about how power and knowledge interact is teaching evaluation. Teaching evaluation is a special sort of knowledge that requires a difference in the power relationship among participants for it even to be produced. By its nature, the person being assessed is acted upon, and the assessor is empowered to act on the other. Crucially, institutional conditions establish the relationship as a partial enactment of the power invested in it, and the exercise of this power then functions to reify the structure of power on which it is based. In this sense, whether anything is actually “learned” from a particular evaluation is beside the point. In effect, a given evaluation can be used to justify any decision someone in a position of power wishes to enact. Anecdotes about such abuse among part-timers are legion.
Historically, teaching evaluation arose in the late seventies as one of the primary tools that sought to reshape higher education in accordance with so-called neoliberal thinking, also called managerialism or market-based principles. This ideology, which has successfully shaped global business practices, has also sharpened the divisions between a managerial class (administrators, deans, and the like) and faculty. Likewise, teaching evaluation has been promulgated as a way to provide efficiency and accountability to the public, just as managers are expected to be held accountable to shareholders. But this link is more than analogy. To implement assessment as reform in line with neoliberal thinking, adherents funneled corporate capital into foundations, think tanks, and research; published literature in professional journals and books; and created conferences and workshops, promoting a rhetoric of metrics, efficiency, accountability, student learning outcomes, and continuous improvement. The line of reasoning argued that these tools, which had allowed corporate businesses to flourish, would do the same for higher education.
While university and college administrators were dazzled by the apparent benefits assessment would bring to the management side of higher education, faculty were persuaded that teaching evaluation would improve the quality of instruction. That hope appears, however, to have been misguided. Instead of improving pedagogy, assessment practices have given administrators a way to legitimize dispossessing faculty of academic freedom, full-time work, and tenure-track jobs, while at the same time appearing to exercise the control over them necessary to maintain standards. According to the Department of Education, non-tenured, tenure-track faculty are at an all-time low (7.4%), while non-tenured faculty now teach 76% of all classes on college campuses across the United States (US Department of Education report, 2011). The quality of education for students has suffered because dispossessed teachers cannot engage in high-impact teaching practices, that is, the sort of one-on-one teaching relationships shown to be critical in helping students identify with a particular discipline (Reference). This reality is belied, however, by the data administrators provide to the public, based as it is on teacher evaluation. In fact, the AAUP notes in its Statement on Teaching Evaluation that some institutions use evaluation data as marketing.
And department chairs, in some instances, are attempting to exercise similar managerial control over their instructional staff. One current English chair, for example, describes her program as “highly planned, highly structured, carefully-scaffolded, highly student-centered, integrated Reading and Writing instruction” (personal communication). Programs described in such terms do not appear to provide breathing space for authentic interactions with students or the much-needed development of critical thinking skills, much less for academic freedom. They do, however, move student writing performance toward a more consistent product that can be easily tested through surface-level features of written Standard English.
The other major damaging influence teaching evaluation has had, as it is currently practiced, has been to promote a simplistic model of human learning. Instead of providing feedback on teaching within disciplines, assessment sees only “subjects.” And conveniently, a subject by itself can be measured according to the performance of a set of skills. Consider a first-year Calculus course: according to the “Student Learning Outcomes” model, if a student who could not previously integrate and differentiate a set of equations can now do so and demonstrate those skills on a test, then the teaching has been successful.
But a subject does not account for what higher education actually does for students, which is to introduce and socialize them into disciplines. So, in the example above, the apparent skill in applying a new set of algorithms associated with first-year calculus cannot account for how the student is being socialized into the complex set of practices, social norms, methodologies, and ways of thinking that inform the field of Mathematics. Is the student prepared to solve problems that go beyond applying a particular algorithm? Is the student developing the critical consciousness about the field that may lead him or her to produce original ideas within it?
In spite of the damage teaching evaluation has done to higher education, it should not be abandoned, nor does it seem likely that it could be: it has become institutionalized throughout U.S. colleges and universities, and the public priorities that provided its original impetus have not gone away. Moreover, evaluation, used appropriately, offers insights into good pedagogy. But in order to use teaching evaluation constructively, one needs to understand the nature and mechanisms by which it creates knowledge and the uses to which that knowledge is put. As Angelo and Cross point out in their famous Classroom Assessment Techniques: A Handbook for College Teachers, the purposes for which assessment is used include “to appoint, to award tenure, to inform salary and promotion decisions, to terminate, and to help teachers improve” (319). What Angelo and Cross do not include in this list, though one can argue that by “improve” they refer to it, is that assessment is also a way to learn.
To this end, if full-time faculty are really serious about improving the quality of instruction among part-time faculty, then they can counteract the Foucauldian effects of teaching evaluation by allowing part-timers to conduct observations of other teachers without regard to hierarchy. This “reciprocal assessment,” as I have dubbed it, could be particularly important given that the real learner in a teacher evaluation is not the one being evaluated but the one who has to write the evaluation, taking the agreed-upon pedagogical criteria and applying them to an actual performance in a classroom.
I advocate bringing a reciprocating function into assessment as a partial antidote to the power structure that current assessment reinforces. In this context, reciprocity refers to two aspects of the knowledge-making process: reciprocating the roles of assessor and assessee, and sharing the knowledge generated with the parties involved in making it. I argue that such reciprocation would undermine hierarchy rather than reinforce it. More importantly, it would shift attention away from the reification of power itself toward actually learning about the situation to which assessment applies. Given that teaching evaluation occurs in academic settings, it seems reasonable to assert that learning should be its primary focus, and that the other functions of assessment noted by Angelo and Cross above will be more efficacious if learning is kept in mind. I argue that this is best accomplished through reciprocity, and that reciprocity should be institutionalized to ameliorate the more toxic effects of assessment as it is currently practiced.
You can follow Dennis Selder on Twitter at @erasmusonline