Writing Centers and Academically Adrift: Why We Might Have to Start Reading These Books

By Rebecca Lorimer.
Are college students today academically adrift?

Perhaps this question might make more sense if I had typed the last word in all caps or added a prolonged shriek sound effect after the question mark. You are correct if you feel an implied “yes” in the question and a not-so-implied eye-roll in my own reaction.

I’ve been reading Academically Adrift, the newest installment in the narrative that college students have gone emphatically, tragically awry in their pursuit of higher education. This book-length empirical report follows in the tracks of Mark Bauerlein’s The Dumbest Generation and Derek Bok’s Our Underachieving Colleges by asking, again, why college students these days are so lazy, busy, distracted, and lacking in their reading, writing, and thinking abilities.

But here I will attempt to halt my snarky tone. Certainly we should regard such a pattern with fair curiosity—why are administrators, academics, and policy makers so intent on investigating college students’ abilities and the educational context that shapes them? And why is the public so interested? My advisor Deborah Brandt once said that the more books published about a subject, the more anxiety such activity reveals. (I’m sure she doesn’t remember saying that, but I do since she was talking about freshman composition readers.)

I think a better question is why we should be interested. What are the implications of these books for those of us who teach and tutor writing, especially in the uniquely interdisciplinary and collaborative spaces of writing centers?

Academically Adrift, written by sociologists Richard Arum and Josipa Roksa, has been reviewed in the Chronicle of Higher Ed and Inside Higher Ed, and has entered public conversations via news articles like the recent New York Times piece on low-achieving business majors. In other words, people are beginning to make claims about college learning based on this book. And when politicians and policy makers also begin to pick up on these claims, this is generally our cue to take interest, since as we’ve learned so well in the last few months, their decisions affect us in very real ways.

Most relevantly for our work in writing centers, Arum and Roksa claim that,

  • at least 45% of students in their sample of 24 colleges did not improve in a statistically significant way during their first two years of college on the skills assessed by the Collegiate Learning Assessment (CLA): writing, critical thinking, and complex reasoning skills;
  • 25% of students have never taken courses that required either 20 pages of writing over the course of a semester or more than 40 pages of reading per week and only 42% had experienced a course that required both; and
  • collaborative learning is not associated with gains on the CLA.

Already we can see the potential positive and negative effects on our work of such research-based claims: Students are not reading and writing enough in their courses. We agree! Collaborative learning is not associated with college learning. Uh-oh.

But first things first with any instructional claims made by research based on test results: we need to know what the test is and what it claims to assess. This helps us understand the test’s validity (whether it measures what it purports to measure) and its ability to support any claims about “college learning,” a phrase used regularly by the authors. The CLA consists of three open-ended assessment tasks: a performance task and two analytical writing tasks (to make an argument and to break an argument). The performance task asks students to respond to a problem in writing using a set of background documents. As the authors say, “The CLA thus attempts to identify ‘real-world tasks that are holistic and drawn from life situations’” (23).

To be honest, at first glance, this sounds pretty good to me. Since most large-scale assessments of college achievement and engagement are multiple-choice tests or self-reported surveys, a test that asks students to perform the skills it claims to measure is a vast improvement. However, we must remain cautious about its claims for a few reasons, whose nuances I will only summarize, due to space and my own shallow knowledge of psychometrics.

First, the CLA was not given as part of a course or as part of a “real-world” context—it is a test on hypothetical writing scenarios and thus is not a valid measure of student reading/writing/thinking skills in their disciplinarily distinct college courses. (See Richard Haswell, Brian Huot, or Kathleen Blake Yancey for work on such problems of validity in writing assessment.) Second, the authors are measuring change over a span of the first two years of college, but then making claims about college learning in general. Third, the claims about learning alone vs. learning in groups are based on self-reported studying habits, without explaining what is meant by “studying” or where or with whom this out-of-classroom studying takes place.

These are not my critiques alone, since the authors themselves admit, “The CLA measures a specific set of skills—namely critical thinking, complex reasoning, and writing—that is far from the totality of learning or the full repertoire of skills acquired in higher education” and thus students “will perform well on the CLA to the extent that their disciplines emphasize the skills assessed” (108). They acknowledge in their conclusion that “our findings here can do little more than identify factors associated with improvement in critical thinking, complex reasoning, and written communication” and thus “there are limitations to its precision at the individual level that should caution policy makers from imposing high-stakes accountability schemes based on it or similar assessment indicators” (141).

So. Given these complications we would hope Arum and Roksa would make very cautious, very hedged claims about their findings. But even if they do—and they did, in the conclusion—it’s worrisome that various decision-makers and budget-crunchers might not take the time to look for nuance or get to this book’s conclusion. We are back, then, to the question of why we should be interested. If readers miss the whole part about how this test, on which many of the book’s claims stand, might not be so valid, they might take at face value the following conclusions:

  1. “There is a positive association between learning and time spent studying alone, but a negative association between learning and time spent studying with peers. Thus, the more time students spend studying alone, the more they improve their CLA performance” (100).
  2. “The results from our work show that learning is related first and foremost to academic activities, and particularly to individual studying. Social activities, including studying with peers, have either no consequences or negative consequences for learning” (135).
  3. “While these social experiences [student-student and student-faculty interaction… talking with faculty outside of class, and being a guest in a professor’s home] may yield higher graduation rates, it is not clear that they would also facilitate students’ cognitive development. In our analyses, interactions neither with peers nor with faculty outside the classroom had positive consequences for learning” (135).

Is writing an essay studying? Reading a novel in preparation for class? Working with a tutor outside the classroom on a paper? We don’t know. So the implications of these conclusions could be a real bummer for those of us who work so hard under collaborative models.

There’s much more to be said about this book (they often slide between reporting on “20 pages written per semester” and “a 20-page paper written per semester”—oh man are those different animals), but I can’t. This is already too long and you can see that I’ve downshifted into listing block quotes for you.

So yes, we must be feeling anxious about higher education. Rising tuition and a sinking economy play a role in this, as do the increasing power and presence of our old friendly scapegoats technology and the media. But while these books importantly suggest academia rethink its teaching practices, they simultaneously supply decision-makers with conclusions that may endanger the writing/reading/thinking practices they promote. And we might want to start paying attention to that.

Arum, Richard and Josipa Roksa. Academically Adrift: Limited Learning on College Campuses. Chicago: U of Chicago P, 2011. Print.
