What Can We Do About Low Returns for Online Student Evaluations?

By Lynn Zimmerman
Editor, Teacher Education

In my previous article, “Are Low Returns the Norm for Online Student Evaluations?” (10.7.10), I asked several questions about return rates of online student evaluations. Two people responded, a lower return rate than I had hoped for. However, both offered information that I decided to work into this follow-up article, which addresses these issues in more depth.

As I reported then, my institution piloted a commercial online student evaluation system for evaluating faculty and courses in spring 2010. Five instructors participated, collecting feedback for nine courses, taught both face-to-face and online. The response rate on the online evaluations was 44%. I have also spoken with other faculty who have used various online evaluation systems, including Blackboard, our learning management system, which students are accustomed to using. They reported that about 50% of the students who are asked to complete online evaluations do so. That has been my experience as well.

Even though my university (and I) would like to move toward online evaluations, these low return rates are a major concern because student evaluations are a critical part of promotion and tenure reports and annual reviews. Until higher response rates on online evaluations can be assured, faculty will be reluctant to use them. In response to the questions in my initial essay, I was directed toward several articles that address low response rates and how to solve the problem.

Miller (2010) states in “Online Evaluations Show Same Results, Lower Response Rate” (Chronicle of Higher Education) that there are advantages to using online evaluations, including saving class time, reducing paper use, and easier and more efficient record-keeping. However, she cites an analysis of data from Kansas State University’s IDEA Center that showed a large gap between response rates on paper evaluations (78%) and online evaluations (53%).

In “In-class Course Evaluations Ditched for Online Surveys,” Smilovitz (2008) also points to the almost 20% difference in response rates. He was reporting on the University of Michigan’s move from paper to online evaluations, which was being touted as a green initiative that would also save money. Low response rates were a major concern for university faculty and administration, who felt they were linked to a lack of incentives. Based on data analysis, it appears that students tend to take the time to respond to online evaluations only if they really like or really dislike the instructor (not the course, but the instructor). University officials are counting on good communication and marketing to solve the low response rate issue.

In addition to the reasons stated above for using online evaluations, some people reported that when students do respond, they tend to provide longer and more thoughtful responses to open-ended questions than they do in traditional paper-and-pencil evaluations.

Because using online evaluations makes sense for the reasons mentioned, how can we move forward in using them and getting response rates that satisfy faculty and administrators?

Studies by Dommeyer et al. (2004) and Ballantyne (2003), as reported by Laubsch (2006) in “Online and In-person Evaluations: A Literature Review and Exploratory Comparison,” showed that faculty actively promoting the completion of online evaluations had higher response rates.

Thorpe (2003), in “Online Student Evaluation of Instruction: An Investigation of Non-Response Bias,” suggests incentives as the way to increase student participation. He specifically suggests two approaches: blocking access to grades until the evaluation is submitted, and telling students that the evaluation is a mechanism for sharing information with other students about courses and instructors. The first seems more a punishment than an incentive, although he notes that the block can be easily overridden by the student. The second would require that the evaluation system be capable of making this information public.

Because Miller’s article was published online, readers were able to make comments and suggestions. Several suggested ways to encourage higher participation, including offering students early access to final grades, locking students out of their courses if they do not participate, and providing a monetary reward. The system my university piloted had a function that sent students reminder emails at specified intervals if they had not submitted an evaluation.

When a university makes a decision about student evaluations, these issues must be carefully considered. The university’s attitude and approach toward the use of such instruments for faculty assessment may need to be re-evaluated and revised. Creating a climate in which students understand their responsibility in the process is important. Whether this is done through active and successful marketing or through a variety of incentives, universities have to realize that, if they want to use online evaluations, they cannot assume the transition will be successful without some background work and effort on the part of all the stakeholders: administrators, staff, faculty, and students.

6 Responses

  1. I still communicate with many of my former teachers (the ones still living, anyway). I think great teaching should be like any other great service or product, where the producer is proud enough to warranty their work. So I would suggest trying something like this:

    Dear student: Thank you for being a part of yet another great educational opportunity. If you’re ever not happy with the mental models I have helped you to carefully create during this course, then write me at any time during my life and I will do my best to service them for free. But this warranty comes with a small price; I would appreciate it if you would fill out the online evaluation form. If for any reason you still don’t think the warranty is worth a few minutes of your time, then I would like you to think of all the time I spent accepting your late work, answering your silly questions over and over again, spending hours trying to correct what you only spent ten minutes trying to write, and, last but not least, continuing to believe you could do it even after you had long given up hope. You owe me big! So just take a few minutes from all the time you waste on the internet anyway, and fill out the dumb evaluation! Thank You…

  2. When learning is a process, recursive rather than linear, formative evaluations make much more sense than summative ones. Instead of a single end-of-year survey with a one-size-fits-all set of questions, instructors need the kind of feedback that can inform practice and outcomes as they’re occurring.

    Teachers can easily develop and implement these IF they can be included as part of the instructional process and considered an alternative to the traditional end-of-course survey.

    However, they’ll need release time to do this. It’ll take time, and a full teaching load pretty much precludes it. But the returns would be much more useful.

    This move, however, assumes that the purpose of the survey is to improve instruction and not to serve an administrative need. -Jim S

  3. Good point, Jim. I would think that in a true formative environment the ongoing dialogue between teacher and student would naturally include this type of evaluative feedback. For this reason, evaluation forms at the end of a term are hopefully more useful to administrators. WZ

  4. . . . and, IF the purpose is administrative, then it should not be dumped on teachers. The responsibility for implementing the entire process ought to be in their hands. The return rate should then be their concern — not the teachers’. Teachers have their hands full. They have their own mission. -Jim S

  5. Lynn: “We are held accountable for getting maximum returns.”

    And there’s the rub. There’s no incentive except fear of punishment — punishment for failing to do someone else’s job.

    If administrators were truly interested in surveys as instruments to improve instruction, then they ought to be looking for alternatives with built-in incentives. For example, as I mentioned earlier, give teachers time (release from one class per term) to develop and test, on an ongoing basis, their own formative surveys — surveys that will help them continuously build their strategies.

    And ask them to share the ongoing process in blogs that are part of a college-, system-, nation-, or world-wide network. This network would be self-sustaining with extremely rich resources.

    Instructors could share plans, strategies for achieving and evaluating them, data from informal formative surveys, conclusions, revised plans, etc. in a continuous recursive cycle. And they could receive feedback from colleagues interested in the same topic.

    In official reports etc., instructors could point to their blogs as a measure of their success in using surveys to improve instruction. In the end, we empower teachers to become responsible for their own growth. This empowerment, alone, is a priceless reward, resulting in increased confidence and a sense of purpose in managing one’s professional growth. -Jim S
