Illustration from The Cubies’ ABC (1913) | Earl Harvey Lyall / Internet Archive / The Getty
In the last 18 months, publicly available artificial intelligence (AI) programs have turned conversations about the humanities on college campuses into conversations about cheating.
Do you remember when life was easy, and all you had to detect was whether a student had purchased a paper or plagiarized portions of it? I do. It was called cheating. In the age of AI, it seems, it’s hard to even know what “cheating” is. At the end of 2024, otherwise known as Year One of ChatGPT, “academic-integrity boards realized they couldn’t fairly adjudicate uncertain cases,” tech journalist Ian Bogost wrote. “Students who used AI for legitimate reasons, or even just consulted grammar-checking software, were being labeled as cheats. So, faculty asked their students not to use AI, or at least to say so when they did, and hoped that might be enough. It wasn’t.”
In July, Elsie McDowell, a student, warned that “students are cheating en masse in our assessments or open-book, online exams using AI tools, all the while making ourselves stupider.” Hua Hsu, a faculty member at Bard College, believes that these students are not “lazy or passive,” but “resourceful,” mixing and matching different AI tools to cobble together written work. And yet, what is to be done to restore the central role that writing, as a form of humanities thinking, used to have across a college curriculum? “Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism,” Hsu wrote recently in The New Yorker, “none of the potential fixes could turn back the preconditions of American youth. Professors can reconceive of the classroom, but there is only so much we control.”
But wait—why are we talking about cheating, or the endless creativity that students will exercise when they prefer not to do the work they have been assigned, and not the role that writing plays in our curricula and systems of evaluation? Why aren’t we talking about the role that writing and critical thinking play in a twenty-first-century global economy? Why aren’t we talking about whether the forms of cultural production that proliferate, and are monetized, on social media are a form of what we used to call “writing”?
Just to let you know, I am actually writing this post—by which I mean making it up in my head in response to something another smart person wrote—although I was hella tempted to go to one of the new services and type in “Give me a thoughtful post about how AI is disrupting university teaching in the voice of Claire Potter.” This week, Meghan O’Rourke, the editor of The Yale Review and a creative writing teacher at Yale, wrote about her own forays into machine-generated text and weighed its pros and cons. She did it so well that—honestly?—I will never again know whether I am hearing from Meghan or from ChatGPT.
For the massive amount of email that anyone employed by a contemporary university must write, O’Rourke admits that AI is pretty good. For creative work? Not so much, and AI has the habit—unnervingly like an undergraduate pretending to have done that day’s reading, or a member of the Trump administration—of making up elaborate lies rather than admitting that it doesn’t know its ass from its elbow. O’Rourke found that AI was passably good at generating responses to assigned readings, writing memos, offering suggestions, and generating emails—all of which she could then tweak and put into her own voice. However, when asked to “generate a poem in the style of Elizabeth Bishop,” O’Rourke writes, “it fumbled the sestina form, apologized when I pointed that out, then failed again while announcing its success.”
In other words, AI cannot (yet) do properly scholarly or uniquely creative writing. It cannot make a new argument about well-known facts or perceive new possibilities in vocabulary, grammar, and style. The things it cannot do are what we call, variously, art, imagination, insight, and perception—all of which are elements of great writing. Instead, it regurgitates (one hopes in the right combinations and without making shit up).
On the other hand, not all writing has to be great. Most of the writing that adults do just has to be good enough, and O’Rourke reveals what all of us who have spent our lives in universities know: 85 percent of the writing academics do is in service of bureaucracy, and it is enormously time-consuming and soul-sapping. In O’Rourke’s view, AI does not diminish creativity; it saves the energy we need for creativity in a university world where, stripped of secretarial help and pressed by an ever-accelerating need to document everything, many faculty feel chained to their keyboards for much of the day and the evening. “Formerly overtaxed, I found myself writing warmer emails simply because the logistical parts were already handled,” O’Rourke writes. “I had time to add a joke, a question, to be me again. Using A.I. to power through my to-do lists made me want to write more. It left me with hours—and energy—where I used to feel drained.”
Yet such work isn’t exactly writing, in the sense that what I am doing right now is writing, and I suspect that is one reason why O’Rourke has let the beast into her own life. When she is really writing, she is that unique thing called Meghan O’Rourke, just as I am the historian Claire Potter, or the Political Junkie, or, if you are old enough, the Tenured Radical—all of which are slightly different voices, all of which can probably be replicated by AI, but not created or further developed by it (at least for now).
And here’s the thing: Most students will need to produce writing, but they will never actually be writers in the sense that those of us who devote ourselves to craft, voice, argument, and audience are writers. So why do we insist that they act like aspiring writers and treat writing as anything but a tool, when their most creative selves may value Instagram, product development, fashion, science, or other forms of creative expression?
“The context here is that higher education, as it’s currently structured, can appear to prize product over process,” O’Rourke points out:
Our students are caught in a relentless arms race of jockeying for the next résumé item. Time to read deeply or to write reflectively is scarce. Where once the gentleman’s C sufficed, now my students can use A.I. to secure the technocrat’s A. Many are going to take that option, especially if they believe that in the jobs they’re headed for, A.I. will write the memos, anyway.
Students often turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn’t read. Then—tentatively—to help them outline, say, an essay on Nietzsche. The bot does this, and asks: “If you’d like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?”
“At that point,” O’Rourke notes, “students or writers have to actively resist the offer of help.”
Ethically, she is 100 percent correct, but here’s the part that’s missing: Most courses at most colleges and universities are now building blocks toward majors and requirements. Why wouldn’t students write memos in response to curricula that ask them to achieve specific “learning outcomes,” rather than make sense of the materials in their own way? We say that we ask students to write because writing is thinking, but that is only partially true. What students know is that writing allows us to give them a grade and lets them complete the building block on which majors, degrees, and professional school will be built.
Let me take this a step further. If we didn’t think that students’ writing was product, we wouldn’t read it like product, giving each assignment specifications such as page length and citation style, or grading rubrics listing the multiple criteria that must be met for the assignment to win a high grade. If we didn’t think writing was a product, we wouldn’t believe that there was a fake version of the product.
It’s a conversation that would be fun to have with Walter Benjamin, isn’t it? He, at least, saw where technology was going; he might even point out that students had disinvested in writing long before AI came along—before expressive technology and social media became more urgent locations for creativity and fame, before writing became unnecessary to many cultural careers. The up-and-coming creative classes don’t even think in words anymore—or, at least, not in words alone. Look at Instagram: They think in images, motion, colors, and style. Most importantly, they imitate, mix genres, and mash them up, often using borrowed words to support other forms of creativity that properly express how they see the world.
So, here’s what I wonder: If AI wrecks the humanities as we know them, and higher education as we know it, could that be a good thing? Could it create new and vivid forms of “writing” that students can teach us?
Because, as O’Rourke discovers by putting herself in the place of the student, there is something profound that is not working anymore in higher education. Let’s keep that conversation going. Because until faculty stop fretting and recirculating old narratives about cheating and start trying to understand how and why students are so attracted to AI, we can’t build a humanities practice that will withstand the onslaught of technology—much less one that connects the creative future to our cultural past.
This essay was first published on the author’s newsletter, Political Junkie, on July 22, 2025.