Archive for the ‘course evaluations’ Category

You Work Only 12 Hours Per Week, Do Not Work in the Summer & Have a Sabbatical Every 7 Years?!

In course evaluations, Education, faculty responsibilities, institutional responsibilities, teaching, universal design for learning on June 11, 2012 at 10:58 am

Consummatory Scholarship!?! Sounds like someone eating books and articles!

In the Chronicle article Just Because We’re Not Publishing Doesn’t Mean We’re Not Working, Bruce Henderson argues that faculty work is inadequately recognized by the public and by legislators who make demands for accountability. He also notes that teaching, as an activity in higher education, is not respected. As an example, he points out that those who do the most teaching (adjunct faculty) receive lower pay.

I agree.

We do not honor teaching as we should. Universities usually measure and reward teaching by counting publications (research), looking for a key number on student evaluations (e.g., 4.0 on a 5-point scale, or meeting the department average) and relying on peer evaluations. Publications in one’s area of expertise do not necessarily translate to good teaching, student evaluations are notoriously unreliable (see my latest post on student evaluations), and peer evaluations reflect only one or two colleagues’ attendance at one or two classes. Adjunct faculty’s jobs are at risk if they have low student evaluations, even though the link between student evaluations and teaching is tenuous. So, let’s begin measuring teaching effectively: let’s show students, faculty and legislators how and what students learn. Let’s do that using evidence-based teaching practices, explaining how innovations can help improve learning, and rewarding faculty who do their part (while reminding learners that they must do theirs).

University administration should reward faculty for their teaching accomplishments. And that means ALL teaching faculty, not only tenure-track faculty. Then the public can begin to see that many teachers not only work hard, and work far more than 12 hours per week, but also provide a substantial benefit to society.

I sometimes wonder whether there’s an element of classism, anti-feminism and racism in the continual demands for accountability. University faculty and administrators were overwhelmingly middle-class white males in the 60s. Now, they are much more diverse. The increase in diversity parallels the increased demands for accountability. And while I know correlation doesn’t mean causality (and accountability demands have complicated causes), it is frustrating to know that for years, higher education faculty faced no obligation to justify their existence. During those times, faculty presented material in a way in which only certain types of learners (those you might call read-write learners) could succeed. Tenure was awarded based on a handshake (at least according to some of the faculty who retired just as I came on board) or solely on the school from which the faculty member obtained his Ph.D. And while I was successful in that environment, I recognize that my success shouldn’t be the only measure of whether anyone else can garner educational success. I have met students and others who were just as intelligent but who learn in different ways. So I recognize that this system of teaching is not the only means of communicating.

I also wonder whether the accountability demands reflect an attack on intellectualism; that the demands represent an attack on those who want to explore and learn. In his blog posts, The Real Ken Jones discusses this in more depth in his “Celebrating Stupidity” series. He focuses on some of the contradictions between science and what some want to believe. Whether the attack on education is related to an attack on intellectualism in general is open to debate, but there does continue to be a significant attack on education: justified on some grounds but not on others.

So this discussion returns to the topic line: what should we as educators do to let the public and legislators know what we do in the classroom? Regardless of the cause of the controversy, we need to figure out how to address it–how to rebuff the attacks and to go on the offensive. We provide an invaluable service to the community, yet that gets lost in the rhetoric about accountability. Is coining and defining the term “Consummatory Scholarship” a way to address it? I think not; the essence is in the details. But to the core question I do not yet have an answer.

Do you?

Still Adrift in Education

In assessment of learning, course evaluations, critical thinking, faculty responsibilities, institutional responsibilities, teaching on February 15, 2012 at 1:19 pm

In his essay ‘Academically Adrift’: The News Gets Worse and Worse, Kevin Carey explains that there is further evidence that not only do college students fail to learn in college, but also that students who perform lower on the CLA (Collegiate Learning Assessment) fail to find financial security after graduation.

In an earlier post, I discussed some of the conclusions I reached from the sections of the book which I had read. Those conclusions were:

  • There is an inverse relationship between the number of faculty publications and a faculty orientation toward students.
  • The higher students’ grades in the course, the more positive the student evaluations.
  • Grade inflation probably exists.

In a later post, I discussed critical thinking as a concern: that students don’t “enjoy” the challenge of traditional problem solving the way I (and other faculty) do and that has an impact on whether students learn. If students do not see tackling and solving problems as a challenge (and we as educators should do as much as we can to make problem-solving interesting), then there will be a significant impact on student learning.

A Not So Radical Transformation in a Core Business Course

In the introductory business law course that is required for all business majors, all the faculty teaching the course agreed to make substantial changes in the way the course was taught in order to acknowledge and address perceived deficiencies: students’ lack of college-level reading ability, their lack of college-level writing ability, and their need to improve critical thinking. Students complained a great deal about the additional work.

Assessing and Working to Improve Reading Skills

Although my own experience with students confirms that more practice reading and writing would help them, the students did not agree. When asked whether My Reading Lab (a publisher-created product) helped them, students said no:


Note that this response reflects only the students’ perceptions. We have not yet completed an analysis to determine whether those who performed better on My Reading Lab performed better on the tests or in the course; we will work on analyzing that data later. This also does not include longitudinal data, i.e., whether students, upon reflection, would decide that they had learned more than they thought through the additional reading practice. However, what this data does show is that students did not embrace the additional reading practice and testing requirement.

Reading the Textbook

Student preparation for class is a concern. Many students do not read before attending class; they attend class and then read afterward. In addition, students did not study. As part of the course redesign, we required quizzes before students attended class. Most students (74.2%) agreed that the quizzes helped them keep up with the reading. Even so, many still didn’t read everything. The following graph lists students’ responses, collected at the end of the semester, about whether they had read the textbook:


Note that 40/202, or 19.8%, read 90% or more of the readings, and 80/202, or 39.6%, read 80–89% of the readings. That means that nearly 60% of the class read 80% or more of the readings. These results were obtained after faculty required students to read and take a quiz on the material before attending class, so students were more motivated to keep up with the reading. How would these results differ if students had not been required to take a quiz before attending class?
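As a quick sanity check on the arithmetic (a minimal sketch; the counts of 40 and 80 respondents out of 202 are the survey figures cited above):

```python
# Verify the reading-survey percentages reported above.
total = 202          # students who answered the survey
read_90_plus = 40    # read 90% or more of the readings
read_80s = 80        # read 80-89% of the readings

pct_90_plus = round(read_90_plus / total * 100, 1)
pct_80s = round(read_80s / total * 100, 1)
pct_80_plus = round((read_90_plus + read_80s) / total * 100, 1)

print(pct_90_plus)   # 19.8
print(pct_80s)       # 39.6
print(pct_80_plus)   # 59.4 -- "nearly 60%" of the class
```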


Student preparation and studying. The following graph includes information on the hours that students studied.


According to these self-reports, 21.2% of students studied between 1 and 3 hours per week, 27.7% of students studied between 3 and 5 hours per week, and 21.7% of students studied between 5 and 7 hours per week. Students should have studied nearly 8 hours per week (2 hours per week outside class for each hour of class; this was a 4-unit course). In Chapter 4 of Academically Adrift, the authors note that students report spending 12 hours per week on their courses outside of class. According to figure 4.2 of the book, in a 7-day week, students spent approximately 7% of their time studying.
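The expectations and percentages above follow from simple arithmetic (a minimal sketch, assuming the conventional rule of two hours of outside study per hour in class):

```python
# Expected weekly study time for this course vs. what Academically Adrift reports.
class_hours_per_week = 4      # a 4-unit course meets roughly 4 hours per week
study_rule = 2                # conventional rule: 2 hours outside class per class hour
expected_study_hours = class_hours_per_week * study_rule
print(expected_study_hours)   # 8 hours per week ("nearly 8 hours")

reported_study_hours = 12     # Academically Adrift, Chapter 4: hours/week outside class
hours_in_week = 7 * 24        # 168 hours in a 7-day week
pct_of_week = round(reported_study_hours / hours_in_week * 100, 1)
print(pct_of_week)            # 7.1 -- the "approximately 7%" in figure 4.2
```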

Conclusions so far

The educational process requires that both the faculty member and the student participate; if students have not completed their share, then education and learning will not necessarily take place. I don’t know how this data compares to other studies on student reading, but it is difficult to foster learning if both parties are not fully invested. Students have a variety of reasons for that lack of involvement, but if the investment in education is relatively small, then the improvement in learning will be small.

In addition, this past semester my student course evaluations were much lower (partly due to a change in the institution’s survey instrument). Because I am tenured, I do not face losing my job over the change in my evaluations. Adjunct faculty face a different reality: they depend on good student evaluations in order to be rehired, so adding rigor to a class could cost an adjunct faculty member his or her job.

Reinventing the Wheel in Academia

In course evaluations, innovation in teaching, teaching with technology on January 7, 2011 at 9:28 am

Why is it that in academia we do not routinely adopt “best practices” created by other institutions? Why is it that we prefer to reinvent the wheel?

Maybe it’s the fact that to earn a doctorate one had to research and write about an innovative, previously un-researched aspect of one’s discipline. The mindset that permits one to succeed in that environment may also be a mindset that prevents one from merely adopting another’s practices. Maybe it’s also the fact that each institution believes its students and environment are so unique that what works for one institution will not necessarily work for another.

It is the latter belief in each institution’s uniqueness that is the topic of Beating the ‘Not Invented Here’, an article by Josh Fischman in the Chronicle’s Wired Campus. In the article, the author summarizes a panel presentation by stating, “There are plenty of good ideas, the two said, but colleges are reluctant to adopt solutions that did not arise from their own campuses.”

One example of that on our campus is student evaluations. At the end of each semester, students complete evaluation forms for every course taught by adjunct and tenure-track faculty. Each college in the University has a different evaluation form, and many of the forms were developed by a group of faculty within each school. There are commercial instruments available composed of validated, reliable questions, yet faculty choose not to use them because, in part, our campus is so unique.

Student course evaluations can have an inordinate impact on faculty retention and promotion. This is true whether or not the course evaluations are composed of rigorously tested questions. And this is true even though students may not be entirely honest in their answers to the questions. In my post Another “A” Word-Course Evaluations, I discuss a study that found, among other things, that students lie in course evaluations. Even though that is probably true, and even though faculty can (and may have an incentive to) manipulate course evaluations, faculty committees and administrators continue to place inordinate weight on those evaluations when making hiring, promotion and tenure decisions. The point here is that if course evaluations are to be used to make such decisions, those evaluations should be based on reliable, validated questions created by experts.

The point of the example is that universities should embrace best practices that have been successful elsewhere, and should focus on upgrading the wheel rather than reinventing it. That would be more efficient, more effective and would free faculty to focus on improving teaching and learning.

Another “A” word-Course Evaluations

In course evaluations, teaching on December 15, 2010 at 5:14 pm

In an article titled “Students Lie on Course Evaluations,” the Chronicle summarized a forthcoming study in which students admitted lying on course evaluations in ways that harm faculty. That’s the “A” word that relates to faculty promotion and evaluation: assessments students make of faculty.

Faculty who study the field know that course evaluations should be only one of many items considered when evaluating faculty performance for retention, tenure or promotion. Our University’s policy is that many factors should be factored into a decision about how well faculty encourage student learning. Yet many who serve on faculty committees will spend an inordinate amount of time developing and applying complex formulas that incorporate the results of course evaluations in a way that makes those evaluations the pre-eminent determinant of faculty performance.

Why is it that we are so comfortable with using course evaluation numbers as the primary factor to determine whether someone is a good teacher? Those numbers can be so easily manipulated. I know of some faculty who give students treats before the evaluations are administered. That has an impact on perception.

A couple of times I’ve returned exam results immediately before evaluations were administered. Since not everyone was happy with those results, my course evaluations declined. Although my overall course evaluation numbers have been good, I know that I have done things that have an impact on the evaluations and that those things are not directly related to teaching.

Perhaps the study referenced in the article will help us agree on the appropriate weight for student evaluations so that faculty can work on creating a learning environment instead of pleasing students.