‘Academically Adrift’ Redux: The Memes Have Spoken

By John Sener

As everyone knows, college has become less demanding, students don’t learn much in college, and students spend much less time studying in college than they used to. At least that’s what most everyone thinks they know — thanks to the unfortunate residue of the study Academically Adrift, whose legacy has endured long after the storm of attention and controversy which accompanied its initial publication in early 2011 has faded.

An article by New York Times columnist David Brooks about this “landmark study” is our latest reminder of what happens when important opinion makers trade in catchy memes as received truth. As Brooks asserts (citing Academically Adrift’s findings, e.g., p.69):

Colleges today are certainly less demanding. In 1961, students spent an average of 24 hours a week studying. Today’s students spend a little more than half that time…

“Certainly”?? It’s understandable that informed laymen such as Brooks equate time spent studying with level of intellectual demand, since professional educational organizations such as ASCD do the same — but that doesn’t make it any less specious.

The real problem is that the destructive memes which Academically Adrift has spawned are based on a seriously outdated perspective on what constitutes a “demanding” college education. For starters, nothing says “welcome to the 20th century” quite like equating rigor or effectiveness with time spent studying. In almost any other endeavor, spending less time to accomplish a task would be called “improved productivity.” Back when students spent an average of 24 hours a week studying, a lot of that “study time” involved sifting through card catalogs, browsing book stacks, writing papers by hand or on typewriters, and doing many other tasks which can now be done much more efficiently by other means.

But instead of giving higher education any credit for this, critics both inside and outside higher education persist in treating the act of studying as if it were analogous to punching a time clock: study time is work; socializing time is play and doesn’t count. This is not surprising, since Academically Adrift reflects an almost total lack of regard for the value of student time; the notion of optimizing student time is nowhere to be found in its memes.

Then there are the methodological problems which riddle Academically Adrift. Alexander Astin’s February 2011 Chronicle of Higher Education commentary describes these in detail, perhaps most notably that the test instruments used have low reliability and that “student-level reliability coefficients [were] not computed for this study.” Astin’s criticisms are damning enough, and there are more: Academically Adrift based its findings on a sample of first-time, full-time students, who now make up less than 25% of the total undergraduate population, and only on those at four-year institutions, which means that community college students were excluded as well. Yet the Academically Adrift memes are applied to all colleges and all college students in general, including the vast majority who were not represented in the study’s sample.

Academically Adrift also relied on student self-reporting of time spent. I don’t know whether the comparative studies from the past (the 1961 data, etc.) used the same method, but self-reporting is not exactly the most reliable one. It becomes even more suspect when one considers what “time spent” really means in an age of social media which irretrievably blurs the line between studying and socializing. A student on Facebook can be socializing one moment, studying the next, or doing both at the same time. Do you really believe that students parse these activities out carefully enough to provide accurate estimates? Of course, students have long mixed studying and socializing in their dorm rooms, student unions, and elsewhere on campus as well: a rich and sophisticated means of learning which has evolved well beyond the “bull session” and has been recognized as such for decades. However, the Academically Adrift survey treated this form of learning as if it did not exist; there is no category for “learning from my peers,” for example. The study’s survey instrument essentially says that if you’re not studying, you’re not learning. Countless college-educated people know better from firsthand experience, yet on this the Academically Adrift memes also remain deceptively silent.

Even worse, the survey instrument might as well have also said that if you’re using a computer, you’re not learning or studying. The survey included a category called “using computer for schoolwork,” but as Michelle Flinchbaugh’s February 2011 critique observed, it is not clear whether this category is included in the hours-spent-studying totals. My reading of the survey methodology is that it is not, since it is not mentioned in the coding descriptions for the variables “Hours studying alone” and “Hours studying with peers” (p.159). If so, this omission by itself invalidates the study’s findings; in any event, the study’s failure to clarify this variable illustrates how out of touch its notions about the role of studying in higher education are.

Then we get to the quality of the assessments themselves. Academically Adrift reduces improvements in critical thinking and other skills to Collegiate Learning Assessment (CLA) scores and other standardized tests (pp.35-37), and thus fails to measure the learning that is actually going on. As for testing writing objectively, I’m with those writing experts who assert that it’s not really a very good idea. As for critical thinking skills, what the CLA really measures best is tractability; presumably, students score better when they are more willing to play along with what is otherwise a low-stakes or meaningless test for them. When I read the CLA questions cited in Academically Adrift that purport to test critical thinking (pp.21-22), I imagined thousands of test takers saying, “Who the heck cares about DynTech? I’ve got better ways to spend my time,” and then blowing off the test. Which, of course, is an exemplary application of critical thinking and time management skills that will never be reflected in the CLA scores. The numerous college students whom I’ve met while doing college visits over the past year clearly have more important things to do than perform in standardized test dog-and-pony shows.

Still, all of the above feels like shouting into a windstorm, to be honest. Millions of Americans believe college has become less demanding, students don’t learn much in college, and students spend much less time studying in college because the memes from a fatally flawed study told them so. The actual truth of the matter, or the degree to which these assertions are real issues, has become beside the point. The insidious power of the Academically Adrift memes has already coursed through the body politic, and otherwise thoughtful commentators continue passing them thoughtlessly through the system, becoming little more than meme carriers in the process.

As a result, practitioners will continue to be asked to figure out new ways to use educational technology to solve nonexistent or ill-defined problems, such as how to enable students to pass standardized tests more efficiently or to write essays that will satisfy the tests’ makers and the stakeholders who buy into them. We already see this happening in the rush to develop effective tools for computer-graded essays. One recent study reported that computer scoring is now just as accurate as human scoring. But a researcher who looked more closely at one such system (e-Rater) figured out how to game it, writing essays that earned perfect scores but were nonsensical garbage.

We can remind ourselves that, as is always the case with such technologies, the problem is not with using them but with how they are used. But whatever we do will be shaped in part by having to respond to notions about how educational technologies should be used, notions based on distorted perceptions which we cannot control and can only hope to manage. The memes have already spoken.

__________
John Sener recently published a book, The Seven Futures of American Education: Improving Learning and Teaching in a Screen-Captured World (CreateSpace, 21 March 2012). A brief summary and reviews by Burks Oakley (4.11.12) and Gary R. Brown (4.14.12) are available at the Amazon site. -Editor

2 Responses

  1. Wow. This is amazing food for thought. How do/can we measure rigor, especially given technological advance? Great piece!

    • Thanks, Jessica! I’ve been working on a series of follow-up blog posts on my web site (www.senerknowledge.com) which deal with the issue you raise about how to measure rigor. I’m working on one more related to the assessment piece, which IMO is key.
