Maybe it’s the fact that to earn a doctorate one must research and write about an innovative, previously unexplored aspect of one’s discipline. The mindset that permits one to succeed in that environment may also be a mindset that prevents one from simply adopting another’s practices. Maybe it’s also the fact that each institution believes its students and environment are so unique that what works for one institution will not necessarily work for another.
It is the latter belief in each institution’s uniqueness that is the topic of Josh Fischman’s article Beating the ‘Not Invented Here’ in the Chronicle’s Wired Campus. In it, Fischman summarizes a panel presentation: “There are plenty of good ideas, the two said, but colleges are reluctant to adopt solutions that did not arise from their own campuses.”
One example on our campus is student evaluations. At the end of each semester, students complete evaluation forms for every course taught by adjunct and tenure-track faculty. Each college in the university has a different evaluation form, and many of the forms were developed by a group of faculty within each school. Commercial instruments composed of validated, reliable questions are available, yet faculty choose not to use them because, in part, our campus is so unique.
Student course evaluations can have an inordinate impact on faculty retention and promotion. This is true whether or not the evaluations are composed of rigorously tested questions, and even though students may not be entirely honest in their answers. In my post Another A Word-Course Evaluations, I discuss a study that found, among other things, that students lie in course evaluations. Even though that is probably true, and even though faculty can (and may have an incentive to) manipulate course evaluations, faculty committees and administrators continue to place inordinate weight on them when making hiring, promotion, and tenure decisions. The point is that if course evaluations are to inform such decisions, they should be based on reliable, validated questions created by experts.
The point of the example is that universities should embrace practices that have proven successful elsewhere, focusing on upgrading the wheel rather than reinventing it. That would be more efficient and more effective, and it would free faculty to focus on improving teaching and learning.