By Jim Shimabukuro
Editor
[Updated 10/27/13: see footnote 1.]
If we can get past the magic of “massive” and “free,” we begin to see Coursera/edX-type MOOCs for what they really are — online courses, no different from the ones your college or university is offering right now. Open up registration to anyone anywhere, don’t charge a fee, and you’re MOOCing.
The MOOCs that we’re now associating with Stanford, Harvard, and MIT are actually based on video lectures, and this alone should give us a hint as to why they’re being embraced by so many higher ed institutions. In a very real sense, it’s like having your cake and eating it, too. You can serve a theoretically infinite number of students with a single course featuring lectures by a single instructor. And the best part is, you don’t need to provide prohibitively costly infrastructure for the course. Digitize the lectures in video, store them somewhere, post links, and you’re good to go — as far as content goes.
Besides content, the basic foundation of any course is learning strategy, or a process (“instruction”) that’s designed to help students achieve course objectives. For MOOCs, traditional classroom approaches become less effective as the teacher-to-student ratio (1:X) grows ever more lopsided. Simply put, when X reaches a certain point, methodology must change.
The change of least resistance is peer facilitation. Nothing new, really. Teachers, especially in writing, have been using peer feedback strategies for decades. Results are mixed, as can be expected, depending on any number of variables in instructional quality. Still, when done right, students can learn to provide effective feedback on their classmates’ performance. The key is in the rubrics.
In courses with enrollments so massive that a teacher is unable to monitor student mastery (understanding and application) of rubrics, relatively “simple” technical innovations are needed to ensure quality peer feedback. I say “simple” because we already have the technology to automatically gather, integrate, and report data from and across a wide range of different online activities. For example, students can be quizzed on their understanding of rubrics, and the results could be used in different contexts, e.g., to rate their comments on the quality of their classmates’ work.¹ With this rating, authors would be able to determine how much weight to assign reviews received from classmates. A low rating would mean the critic doesn’t have a clue about the requirements for a particular assignment.
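To make the idea concrete, here is a minimal sketch of one way such a weighting scheme might work. Everything here is hypothetical, not part of any actual MOOC platform: each peer review is paired with the reviewer’s score on a rubric-comprehension quiz, and that score serves as the weight when the reviews are averaged, so feedback from reviewers who clearly understand the rubric counts for more.

```python
def weighted_review_score(reviews):
    """Combine peer reviews, weighting each by the reviewer's
    rubric-quiz rating.

    reviews: list of (review_score, reviewer_quiz_rating) pairs,
    both on a 0-1 scale. Returns the weighted mean review score,
    or None if no reviewer has any rubric credibility.
    """
    total_weight = sum(rating for _, rating in reviews)
    if total_weight == 0:
        return None
    weighted_sum = sum(score * rating for score, rating in reviews)
    return weighted_sum / total_weight

# Example: two reviewers who did well on the rubric quiz, and one
# who scored poorly -- the low-rated reviewer's harsh 0.2 barely
# moves the combined score.
reviews = [(0.9, 1.0), (0.7, 0.8), (0.2, 0.1)]
print(round(weighted_review_score(reviews), 3))  # → 0.779
```

The point of the sketch is simply that a reviewer who “doesn’t have a clue about the requirements” contributes almost nothing to the aggregate, which is exactly the signal an author would want when deciding which feedback to act on.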