Encounters: USDE 2009 Report on Effectiveness of Online Learning

Introduction: This encounter begins with a bump from Judith McDaniel (ETC editor, web-based course design), who posted a comment to Steve Eskow re Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies (Washington, D.C., 2009), conducted by the U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.

After reviewing the excerpts or the complete report, please post your extended comments re the findings. Some or all of the comments will be appended to this article as they are submitted.

Here are some of the key findings:

• Students who took all or part of their class online performed better, on average, than those taking the same course through traditional face-to-face instruction.

• The observed advantage for online learning in general, and blended learning conditions in particular, is not necessarily rooted in the media used per se and may reflect differences in content, pedagogy and learning time.

• Most of the variations in the way in which different studies implemented online learning did not affect student learning outcomes significantly.

• The effectiveness of online learning approaches appears quite broad across different content and learner types.

• Studies in which analysts judged the curriculum and instruction to be identical or almost identical in online and face-to-face conditions had smaller effects than those studies where the two conditions varied in terms of multiple aspects of instruction.

• When a study contrasts blended and purely online conditions, student learning is usually comparable across the two conditions.

• Elements such as video or online quizzes do not appear to influence the amount that students learn in online classes.

• Online learning can be enhanced by giving learners control of their interactions with media and prompting learner reflection.

• Providing guidance for learning appears less successful with groups of students than with individual learners.
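The findings above are expressed as meta-analytic effect sizes pooled across studies. As a rough illustration of how such pooling works (this is not the report's actual method or data — the study values below are invented), a fixed-effect meta-analysis averages per-study standardized mean differences, weighting each study by the inverse of its variance:

```python
# Illustrative fixed-effect meta-analysis: pool standardized mean
# differences (e.g., Cohen's d) across studies, weighting each study
# by the inverse of its variance. The numbers below are hypothetical
# and are NOT taken from the USDE report.

def pooled_effect(studies):
    """studies: list of (d, var) pairs, one per study.
    Returns the inverse-variance-weighted mean effect size."""
    weights = [1.0 / var for _, var in studies]
    weighted_sum = sum(w * d for (d, _), w in zip(studies, weights))
    return weighted_sum / sum(weights)

# Hypothetical studies: (effect size d, variance of d)
studies = [(0.35, 0.02), (0.10, 0.01), (0.24, 0.04)]
print(round(pooled_effect(studies), 3))  # → 0.191
```

Weighting by inverse variance means larger, more precise studies dominate the pooled estimate — which is why, as several commenters note below, a summary effect can mask the considerable variation among the individual studies behind it.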

encounters: ideas that go bump

John Thompson, editor, green computing, on 17 August 2009, at 5:43 am, said:

This discussion on F2F, blended, and online learning reminds me of Matthew Arnold’s quote:

Wandering between two worlds, one dead,
The other powerless to be born.
With nowhere yet to rest my head
Like these, on earth I wait forlorn.

F2F proponents (right up there with the Luddites supporting print publications against digital encroachment) refuse to acknowledge a broken system. F2F served educational purposes well in another age (in a galaxy far, far away). And while there are still excellent F2F synchronous instructors (e.g., Randy Pausch), by and large the student audience has moved on. This government study merely confirms the obvious — almost anything is as good or even better than F2F instruction, at least as too many F2F instructors practice it. For all intents and purposes, F2F instruction is dead. Yet, online still remains in the wings, albeit with one foot on stage.


Harry Keller, editor, science education, on 17 August 2009, at 7:53 am, said:

I also noted the small coverage of K-12 learning.

I did not see any discussion of the self-selection effect. Students in online or blended learning may have chosen to do so. Such students may be more motivated to do well, on average.

I believe that the instructor remains the key to success. Really good teachers manage to get good results regardless of the surroundings. By providing excellent tools to instructors, we can make the good ones very good. Perhaps, these same tools can help us identify and weed out the poor ones — emphasis on perhaps.

Even if online learning does not, by itself, make learning better, it has and will continue to provide incentives for new ideas in education from which these important new tools will arise.

Of course, as a creator of such a new tool, I have a bias.


Jim Shimabukuro, editor, on 17 August 2009, at 11:15 am, said:

We tend to forget that communication is at the heart of learning, and that schools and classrooms are basically a medium or form of communication. The problem is that we’ve become so accustomed to the classroom that we no longer view it as a medium of communication but equate it with learning. The danger of this equation is the tendency to dismiss other critical media such as the web.

Another way of viewing this dichotomy is the notion of formal and informal learning. For many educators, the distinction is clear: formal happens in the classroom, and informal, outside. Since the web appears to be clearly “outside” the classroom, it’s informal and irrelevant.

Fortunately, students don’t buy into the belief that learning is limited to what happens in the classroom. They understand, intuitively, that the web is a natural medium for communication and learning, and that the distinctions between formal and informal learning are all too often arbitrary and meaningless.

John Thompson, in his comment above, says, “By and large the student audience has moved on” to online modes of communication. I agree. For them, traditional F2F classrooms are becoming, like telephone landlines, anachronisms, sharing the same fate as typewriters, newspapers, and horse-drawn carriages. The web’s instant, anywhere, anytime communication with anyone or with any information source in the world is a given in their daily lives.

Increasingly, for students today, the question isn’t “Online or F2F?” but “Why limit learning to classrooms?” And increasingly, they’ll want to know, “Why do we have to gather in a classroom for instruction that could be delivered much more effectively and efficiently via the web?”

In their lives outside the classroom, students have become expert at informal learning, or learning that’s not guided by an instructor. They use their mobile electronic communication devices to get information instantly on the latest news, entertainment, products and sales. If they need information, they automatically turn to the web simply because it’s there and they have access to it from anywhere at any time. More importantly, it’s a way to keep in touch with friends, allowing for unprecedented social networking. Through the web, they can stay in touch with all their friends 24-7. They’re never more than a few seconds apart, regardless of the physical distances between them.

Replacing some of their F2F class meetings with online activities is a way for educators to acknowledge the undeniable impact of web technology in the lives of their students. This adjustment is considered “blending,” and the result is blended instruction. It seems to be working very well, and many if not most claim that it’s superior to both completely F2F and completely online methods. The USDE report seems to support this contention, but the gap between online and blended is apparently closing.

My concern with the term “blended” is its inclusiveness. It includes such a wide range of practices that it has little or no power to define an actual pedagogy.

Like a storm building at sea, online learning is gradually making its way to landfall, and all indications are that it’s strengthening rather than weakening. When it hits shore, the impact will change the educational landscape.

The significance of the USDE report is not so much in telling us where we are but in showing us where we’re headed. There’s a trend, and its direction is unmistakable and unavoidable. In the meantime, as Harry Keller says above, “Even if online learning does not, by itself, make learning better, it has and will continue to provide incentives for new ideas in education from which these important new tools will arise.”

The coming years will be exciting, but we can’t really see the dramatic changes that are coming. However, we can read the signs and imagine.


John Sener, ETC writer, on 17 August 2009, at 10:41 am, said:

There are inherent dangers and limitations to these studies, even meta-analyses such as this one. In particular, the danger lies in absorbing the report’s summary findings (e.g., “the use of video and online quizzes…does not appear to enhance learning”) and applying them in a blanket fashion, when in reality the report itself describes findings which indicate that a more nuanced interpretation/response is needed. (Why reports like this one are so schizoid about this is one of the things that bugs me about them.)

For example, the actual language of the report states that the existing research on online quizzes “does not provide evidence that the practice is effective,” which means that:

1) The research does not indicate that the practice of using online quizzes is ineffective either.
2) As the report indicates, each study looked at slightly different things. The above comment was based on very few studies.
3) There are several important but unstated qualifiers. For example, one study found that discussions worked just as well as quizzes; that doesn’t mean that the quizzes weren’t effective.
4) Effectiveness depends on other variables. (Duh!) Interestingly, one study found that one LMS platform was better than another (WebCT vs. IDLE), suggesting that “details of their user interfaces” may have been the key variable in that case. As this example shows, there are LOTS of elements that can explain differences — elements that IMO are impossible to control using (quasi-) experimental designs.

Likewise, the Media Elements section of the report provides clues about possible practices related to using video effectively. For example, the Zhang study “found that the effect of video on learning hinged on the learner’s ability to control the video.” Now, read that sentence juxtaposed with the report’s summary paragraph for this section:

‘In summary, many researchers have hypothesized that the addition of images, graphics, audio, video or some combination would enhance student learning and positively affect achievement. However, the majority of studies to date have found that these media features do not affect learning outcomes significantly.’

Do you see the same disconnect that I do? On one level, this is simply an echo of Clark’s findings from 25+ years ago, as the report itself notes:

“Clark (1983) has cautioned against interpreting studies of instruction in different media as demonstrating an effect for a given medium inasmuch as conditions may vary with respect to a whole set of instructor and content variables.”

On another level, the report’s summary findings do NOT point out significant findings such as the Zhang study because, as one of my colleagues has put it, they are asking the wrong questions. But if you take the summary findings at face value, it’s easy to lose the more important and useful findings such as Zhang’s.

Also IMO, here is the report’s real message:

“That caution applies well to the findings of this meta-analysis, which should not be construed as demonstrating that online learning is superior as a medium. Rather, it is the combination of elements in the treatment conditions, which are likely to include additional learning time and materials as well as additional opportunities for collaboration, that has proven effective. The meta-analysis findings do not support simply putting an existing course online, but they do support redesigning instruction to incorporate additional learning opportunities online.”

To me, that means that you’re much better off in looking at the “combination of elements in the treatment conditions” than in taking the report’s summary findings as stated and running with them.

One other important point about this report: it apparently fails to take differences in learning outcomes assessment methods into account. In some cases, the authors simply report that learning outcomes were the same (or not) without telling us what methods were used. This is a clear yellow flag IMO.

