AI for effort

I don’t really want to read another article about how generative AI is changing everything. But I do read many of them out of a sense of obligation. Working in higher education, overseeing several academic programs, I feel I would not be doing my job if I weren’t at least familiarizing myself with the discourse developing around gen AI and its impact on education and the work of teaching. But it’s impossible, I find, to keep up with it all.

Many of the faculty teaching in the professional master’s programs I oversee are integrating gen AI tools and topics into their courses, as they should. We have programs in data science, information systems, and several areas of healthcare, and our job is to prepare students for careers in these fields. To be a good administrator, I have to help my faculty and staff adapt.

When my university hosted TeachX this year, I went to hear keynote speaker José Antonio Bowen, a higher ed leader turned consultant, talk about “Educating Humans to Thrive in an AI World.” He said 100% of jobs will change and there will be a sorting out of which tasks should be left for humans; AI is a new form of labor. And the labor of teaching is no different. His was a flashy, funny, engaging presentation offering lots of ways to adapt assignments and classroom activities to get students to use various AI tools creatively and collaboratively, rather than simply to cheat. He talked about ways that AI could be a thought partner, and a way to generate many more ideas than you could think of alone, because AI isn’t hampered by inhibition or conventional thinking. Bowen presented AI in education as an inevitability – all assignments are now AI assignments, he said: AI inclusive, AI resistant, or AI transparent. Part of his message was that we shouldn’t be afraid of AI; that, with the right approach, these tools can be extremely helpful and raise the bar on what we expect of ourselves and our students.

I really am trying to stay open-minded. But if I’m being honest, I remain skeptical about gen AI as a tool for learning, especially when you are still learning how to learn. So far, what I have heard in many of these presentations about how to teach with AI is that the real learning can come from students having to evaluate the outputs from AI, that the critical thinking happens when students have to assess the accuracy of the facts and the soundness of a thesis, pick out the most innovative of the dozens of “brainstormed” ideas, or find the errors in the hundreds of lines of code. What no one addresses, however, is the question of how a student would develop these skills of discernment in the first place.

The work of writing is different from the work of editing. And where does the work of writing begin? Is it only when you start to compose the words? Or does it begin with choosing the appropriate topic to fulfill an assignment or sifting through one’s sources to identify a problem worth addressing? Is asking a machine for research ideas or sources or possible thesis statements okay because you are still responsible for picking the best one of the bunch, and that counts as writing?

Let’s take, for example, summarizing other people’s ideas. In Bloom’s taxonomy, summary is considered a “lower order thinking skill.” But this hierarchizing of cognitive skills is deceptive. Perhaps summary is “lower order” not because it’s less difficult to do, or because it doesn’t involve developing one’s own ideas, but because it is foundational. There is no invention in a vacuum; one’s own thoughts emerge in relation to the thoughts of others. This is especially true in academic writing and research. Learning how to accurately summarize the work of another person, without bias or misrepresentation, is a critical skill, and as anyone who has taught first-year writing knows, it’s a skill that needs to be taught and practiced again and again in order to be learned. So, what happens when students use AI to summarize articles, chapters, or whole books and then stitch together those summaries into an “argument” that supports a thesis also suggested by AI? How would they evaluate these summaries for accuracy if they haven’t read and digested the sources? And why take the time to read those sources if AI has done the work for you?

Jane Rosenzweig, a writer and Director of the Harvard Writing Center, was an early humanist voice in the conversation, with an essay published in the Boston Globe in 2022 and picked up by other media outlets. In “What We Lose When Machines Do the Writing,” she warns, “If a machine is doing the writing, then we are not doing the thinking,” and that neatly encapsulates my main concern. Jane and I started at the Harvard Writing Program at the same time, in the early 2000s, where we were trained in the same writing pedagogy premised on the idea that writing is a form of thinking. That is, writing is a process of figuring out what you think, of arriving at ideas and insights that you could not have had without having written. Thus, the emphasis was on drafting, feedback, revision, and ongoing reflection on one’s writing process. Writing is often an arduous, frustrating, time-consuming process, and part of our job as writing instructors was to show our students that engaging in this kind of work was the path to truth and deeper understanding, that moving from vagueness, confusion, incoherence, and uncertainty to greater clarity and precision of thought was worth the effort. As Jane elaborates, “If the end point rather than the process were indeed all that mattered, then there might be good reason to turn to GPT-3. But if, as I believe is the case, we write to make sense of the world, then the risks of turning that process over to AI are much greater.”

Now, several years later, as the tools have gotten better and been fully integrated into word processing software and search engines, are we well past trying to assess the risks? Is it too late? In a follow-up essay in 2024, Jane points out that the ease and frictionlessness promised by AI may be welcome in some contexts, but can very well undermine learning, especially learning how to write and think. As with resistance training in sports, the amount and kind of effort in the process matter. Bowen, too, made a similar point in his talk: “the one who does the work gets the benefit.” His focus was on how to get students to do more of the work, and therefore reap the benefits of using AI as an assistant to thinking rather than a substitute for it, but “only if we use it well.” That’s a big “if”! He didn’t address what happens when we don’t use these tools well.

Despite the optimistic face that people like Bowen are putting on AI and education, it’s clear that students are already using AI in all sorts of ways that don’t facilitate their learning, or at least not what we’re trying to get them to learn. In contrast to the evangelizers, there’s also a certain fatalism creeping into the headlines in places like The Chronicle of Higher Education and mainstream media. For example, contemplating the demise of the college essay in The New Yorker, Hua Hsu (a staff writer who also teaches at Bard) seems resigned to students’ use of AI, because “college is all about opportunity costs.” Having interviewed a group of college students, all of whom use AI to varying degrees, Hsu concludes that it’s hard to blame the students for trying to succeed in a system not of their making, with incentive structures that reward not so much learning as the performative activities required of them: “None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.”

Hsu seems to admire this resourcefulness, sympathetic to the challenges their generation faces: “Their pressures are so different from the ones I felt as a student. Although I envy their metabolisms, I would not wish for their sense of horizons.” I can understand the admiration, but I’m also sad and afraid for students like them. And also sad and afraid for educators like me. Yes, the consumer model of education has been problematic for a long time; and yes, AI is simply exploiting the incentive structures already in place that reward performance rather than learning, product more than process. This latest disruptive technology will force us to rethink education for the better, educators say, trying to put a positive spin on a detonation that no one asked for.

Right now, my faculty report that the graduate students who are using AI (with permission) are already the strongest ones, exploring the tools as a way to challenge themselves, curious about their capabilities. They are generally working adults who have had years of practice learning how to learn. But again, I wonder what the future holds for younger students who will be increasingly immersed in these tools, with no choice but to use them.

In a recent New York Times piece, Meghan O’Rourke articulates my fears most clearly, drawing on her own experience of using AI and finding herself increasingly reliant on its seductive ease, until it “began to interfere with [her] own thinking.” Interestingly, the essay was originally titled “The Seductions of A.I. for the Writer’s Mind,” but was later retitled “I Teach Creative Writing. This Is What A.I. Is Doing to Students,” I suppose to strike a more alarmist tone, losing some of the nuance. The following passages resonated with me the most:

The uncanny thing about these models isn’t just their speed but the way they imitate human interiority without embodying any of its values. That may be, from the humanist’s perspective, the most pernicious thing about A.I.: the way it simulates mastery and brings satisfaction to its user, who feels, at least fleetingly, as if she did the thing that the technology performed. [...]

I’ve spent decades writing and editing; I know the feeling — of reward and hard-won clarity — that writing produces for me. But if you never build those muscles, will you grasp what’s missing when an L.L.M. delivers a chirpy but shallow reply? What happens to students who’ve never experienced the reward of pressing toward an elusive thought that yields itself in clear syntax? [...]

What we stand to lose is not just a skill but a mode of being: the pleasure of invention, the felt life of the mind at work. I am a writer because I know of no art form or technology more capable than the book of expanding my sense of what it means to be alive.

Not everyone needs to be a writer, of course, but what will it mean when people can outsource their writing, and thus “outsource thinking”? It’s worth asking: if there is much to be gained from AI use (speed, efficiency, affirmation, confidence), what will be lost in exchange (depth, contemplation, self-critique and self-knowledge)? I acknowledge the great potential of these tools to accelerate medical breakthroughs, for example. But I’m also going to hang on to my hard-earned skepticism.
