Are Low Returns the Norm for Online Student Evaluations?

By Lynn Zimmerman
Editor, Teacher Education

At our faculty senate meeting this month, a member presented data she had collected from a new online student evaluation system that was piloted in spring 2010. Students use this system to evaluate faculty and courses. Five instructors participated, collecting feedback in nine courses. The one statistic that struck me most forcefully was the 44% response rate. In my experience and that of other faculty I have talked to, about 50% of the students who are asked to complete online evaluations do so.

My colleague’s report dismayed me. For faculty at my university who are on the promotion/tenure track, student evaluations are a critical part of their documentation. In my department, they are also a critical part of the annual review process. Therefore, when I teach face-to-face and hybrid classes, I feel compelled to give my students paper-and-pencil evaluations to complete so that I have a higher rate of return. My university (and I) would like to move toward online evaluations, but until higher rates can be guaranteed, faculty will be reluctant to use them.

Does anyone out there have research that indicates these low return rates for online evaluations are the norm? What reasons have been identified for students’ lack of response? I know that there are “tips” for how to improve rates. Which ones have you found to be the most effective?

[Note: Please see Lynn’s follow-up article, “What Can We Do About Low Returns for Online Student Evaluations?” (10.12.10). -js]

2 Responses

  1. Lynn, here are a few results from a quick search:

    Mary Helen Miller, “Online Evaluations Show Same Results, Lower Response Rate” (Chronicle, May 6, 2010). URL
    Excerpt: “Seventy-eight percent of students enrolled in classes with paper surveys responded to them, but only 53 percent of students enrolled in classes with online surveys responded.”

    Jacob Smilovitz, “In-class course evaluations ditched for online surveys” (Michigan Daily, Sep. 11, 2008) URL

    Paulette Laubsch, “Online and In-person Evaluations: A Literature Review and Exploratory Comparison” (JOLT, June 2006). URL
    Excerpt: “In 1999, high level of computer literacy and access to computer equipment for 24 hours per day were needed for one school to initiate a pilot program of an online evaluation system for students enrolled in in-person classes (Ballantyne, 2003). The online process resulted in 30% response rate compared to the 65% response rate for the paper surveys. This raises the issue of reliability of the responses (Ballantyne, 2003). Various researchers have provided parameters of the percentage of responses needed for a valid response. Dommeyer, et al. (2002) cited Centra, who stated at least two-thirds of the student responses were needed, and Gilmore, Kane, and Naccarato, who argued that a smaller number could be acceptable if the course were taught multiple times during the period of faculty review.”

    “When faculty actively promote completion of online surveys, there is an increase response (Dommeyer et al, 2004; Ballantyne, 2003). In the pilot process reported by Ballantyne (2003), responses rose to 54% and then 72% in 2000. The following two years, the response rates dropped to the 50% level. Ballantyne also reported on the use of online evaluations for online classes that provided wide ranging response rates. Rates for paper surveys were approximately 40% in one school, and when online surveys were implemented, the rates stayed the same initially. It was determined that there were significant response rates for online courses, with the rates varying between 30 and 95 percent.”

    -Jim S

  2. Lynn, here is another paper: Stephen Thorpe, “Online Student Evaluation of Instruction: An Investigation of Non-Response Bias.” You can download the PDF for free.

    I also found this: Online Course Evaluations at Princeton: FAQ. The 4th question:
    Q: Why will response rates go up?
    A: Students will have two incentives to complete the online evaluations. In the first instance, they will not be able to view their grades online until they have completed the evaluation. The bar is set fairly low for completion. In fact, if they enter the evaluation and indicate that they decline to evaluate the course, they will have immediate access to their grades. Experience suggests, however, that very few students actually select this option. Our completion rates to date have been above 80%, which is substantially higher than our completion rates on the paper forms.
    In the second instance, students will have the opportunity to provide advice to other students about their courses, and those responses will later be made available to students. At present, the only formal mechanism for students to share advice with each other about courses comes through the online Student Course Guide (SCG). Because there is no link between the SCG and enrollment data, there is no way to ensure that comments are posted by students who actually took the course in question. In addition, because the number of responses in the SCG is so small, one or two students can skew the distribution of responses. With the data we collect moving forward, the information shared with students will be much more accurate and useful.

    (I included the second paragraph even though it is less relevant because I knew people would wonder what the second incentive was).

    Personally, I think incentives are the way to go. Some of the courses that I work with are “on demand” for the corporate sector. Feedback is important here too, and we always tie the certificate to completion of the course evaluation; no surprise that we have a very high rate of compliance!
