Photograph with black background of crawling white baby doll with realistic head and limbs and wooden mechanical body

Creeping baby doll patent model (1871) | George P. Clarke / American History Museum / CC0


Is it ok to use AI for spell check? What about grammar? Writing image captions? Smoothing transitions between paragraphs? Translating from other languages? Search Engine Optimization? Is my email autoresponder some kind of AI? If I accidentally click an ad for a Large Language Model (LLM) targeted to me by Anthropic, am I feeding the war machine?

Listening to the ways that writers are haggling with ourselves about AI reminds me of what Claire Dederer once wrote on the art of monstrous men:

How can one watch The Cosby Show after the rape allegations against Bill Cosby? I mean, obviously it’s technically doable, but are we even watching the show? Or are we taking in the spectacle of our own lost innocence? … Do we vote with our wallets? If so, is it okay to stream, say, a Roman Polanski movie for free? Can we, um, watch it at a friend’s house?

By all means, let us define the distinctions. There is surely a spectrum of ethical AI use. A lot of it is obviously, hyperbolically, materially bad. Some is closer to watching Rosemary’s Baby at a friend’s house. The latter end is what your average writer is likely up to when checking their grammar. But the only way for a person to stay away from it completely is to take an abstinence policy. And yet if you are reading this on the internet, you are not 100 percent abstaining. You are strapped in a chair, eyes squinting but not closed, watching Roman Polanski. (This is where the comparison ends. I know that advanced AI is not an exiled film director.)

Arguing against AI use on individual case-by-case grounds is largely missing the point of how it works and what it is doing to us. It’s a misunderstanding of the way certain types of automation are already embedded in most facets of our lives and of how exploitation happens. Why are writers so hung up on the question?

LLMs are scary to writers in particular because they are language models and language is our medium. We are supposed to be the experts on language and maybe even the arbiters on its usage (nah). As Nina Beguš said in a talk I saw recently, “literature often serves as a proxy for culture”—it’s a broad way of evaluating what’s happening. But beyond the medium-specificity, it’s because writers, like other intellectuals and artists, tend to see our labor as exceptional. So we become obsessed with a) artistic exceptionalism as a way of proving b) human exceptionalism.

When it comes to human exceptionalism (we are special, machines threaten our specialness) there is endless ethical and historical and theoretical and legal territory to be dealt with. For my purposes: When the idea of the human is threatened by technology, people, including artists, are wont to see creativity as the last bastion of human exceptionalism, and so the conversation about “how do we defend ourselves from becoming robots” often (still!) gets funnelled into what are essentially Turing tests for art. Preserving human exceptionalism becomes dependent on the preservation of artistic exceptionalism.

We know that the supposed exceptionalism of artmaking is an obstacle for class solidarity. When artists/intellectuals see their work as pure, untainted, singular, sui generis, and superior to other kinds of work, we lose sight of our own complicity in structures of power. (I’m talking about work as verb, not work as noun.) And the more we obsess about whether machines can write like humans—instead of the more interesting question: how they are not like humans—the easier it is to lose sight of the big questions and real threats.


Does it write good

I cannot tell you how many times this month I’ve heard arguments for and against whether AI can write good by human standards, and I feel like I’m reliving a past age. Language models have advanced very quickly of late, but the obsession with whether the output is as “good” as a human writer’s keeps proceeding along the same lines of argument, which become less and less relevant the more advanced the tech gets.

Journalist Ben Mauk wrote an excellent, solid, and smart reckoning on the creative capacity of LLMs, which I’m choosing as a straw man because I liked it so much:

I have many moral, humanistic, and aesthetic qualms with using GenAI for most of the tasks that fill my day, and like Kang I’m skeptical that it will ever produce creative work that thinking people will want to spend time with. I’m willing to be proven wrong, and I admit that the technology has use cases, even elements that can seem, on first blush, magical. And I don’t think writing books or making movies are, in a literal sense, mystical activities that require a human soul. Just as poetic meter and rhyme scheme are patterns that GenAI can reproduce in doggerel verses, plots and dialogue are fundamentally based on patterns—albeit much more complicated ones. From what I can see, they lie beyond the abilities of GenAI, which cannot tell a story without a human at the wheel. Prove me wrong!

But since he asked …

I don’t want to spend hundreds of hours fiddling with Claude to prove anyone wrong. I want to figure out why we have been asking to be proven wrong since at least 1952!

Perennially asking to be proven wrong is just asking to be Turing-ed. It is an uninteresting experiment with an outsized cost. Global AI will soon require as much energy as Japan. It is possible that AI can pass as a human writer, and that is definitely happening. I see it happening all the time (and I, ha, don’t see it happening all the time). Can it be made to imitate or produce virtuosity? I don’t care. This is not about an individual talent vs. an LLM. It’s about a collective population vs. tech and the Pentagon.

Like Mauk, I am also “skeptical that AI will ever produce creative work that thinking people will want to spend time with”—but that’s not because AI “cannot tell a story without a human at the wheel.” It can. It’s because of the other thing: “thinking people don’t want to spend time with it.” I want to decide whether to spend time with, say, a memoir based partly on who made it. Most people, when given the chance, also say they want to read human-made stuff. The issue is in the reception, not the authorship.

What a machine writes might be bad or good (and if it’s good, imo, it’s not because it’s writing like a human but writing like an alien) yet at this precise moment in history, I’d prefer to read books largely written by people. Therefore: I just want to be told whether something is written by/with AI so I can judge whether I want to engage with it—not to be tricked. I want to evaluate its merit myself; every reader will evaluate it differently. But trying to convince readers that AI is a great writer (see? see?) is a boring and bad use of kilowatts and H2O.

The aesthetic goodness of a work is as always up to interpretation; the context the work is displayed in is always part and parcel of the interpretation; the transparency of the artist about their working methods is what allows us to calibrate our interpretations. I used to be a little interested in how humanlike AI could write, but now what I’m focused on is whether AI (of any kind) can scramble broad social consensus about authorship, take away my rights, and drink my water. All things that I need in order to be a writer. I would rather have those things than $3k from Anthropic for scanning my book.

Instead of figuring out Whether AI Writes Good and Whether We Can Use it at a Friend’s House, we have to figure out what kinds of writing a) matter b) need contextualization c) are nothing but food for machines and d) are valued on the labor market. We need to be openly debating attribution and IP. We need to figure out how we let our work be coopted and microwaved. We need to choose our Red Lines based on our means.

Everyone has to set their boundaries. For instance, there are several things I will use an LLM for, but as long as I am beholden to no employer except ELVIA INC, I will never use one to edit someone else’s creative work. Big Red Line. Not because I think it would do a bad job. Even though I think it could do a good job. Because I don’t want to, and for now, I can afford not to.


Back to work

There is something repugnant about the idea of using Claude to generate your memoir. It is gross. It is grosser if you lie about it. But the grossness of the possibility that Claude is turning writers into unthinking unfeeling robots is overtaken by the reality that LLMs are decimating rights: yours and everyone else’s, with whom your work is interdependent.

Use of AI in the art, writing, publishing world is—still—a labor issue. We have to stop Turing-ing about whether AI can write a good (aka humanlike) book. Our creative genius was never really ours. We were always using tools and technologies and other people’s ideas. There is a huge amount of crap work that we shouldn’t have to do in the first place. The horror here is that these tools—in the long run—are extracting more from us than we can ever extract from them.

Is OpenAI part of the war machine? Yes. Is a food writer using GPT to “make my lasagna recipe sound saucier” going to make a material difference in the number of tons of bombs dropped tomorrow, or the number of server farms added in the Nevada desert? Hm. What about 100,000 writers? Well. Amazon has 1.5 million employees, Google has nearly 200k, Microsoft over 200k, Apple over 150k, and AI is embedded into every single process of their workflows.

Does that mean we shouldn’t boycott it? No, boycott away. Do not give it a drop of brainpower if you don’t want to! If there is any chance of making a difference in the financing of military models, or stopping the words you generate from contributing to the efficiency of those models, draw Red Lines. Vote with your wallet, if you have enough in your wallet. If you can halt the slide down the slippery slope of normalization, do that. Unfortunately, AI is way too useful a tool, and too embedded a tool, for most people to ignore, given the speed of work demanded of us, which only gets faster the more we use AI. But if you are boycotting Amazon’s LLM … perhaps consider also boycotting … buying shit on Amazon?

It’s worthwhile to avoid using shortcuts that erode your intellectual capacity, flatten your prose to a lowest common denominator, or outsource your joy. But purity politics about the exceptionalism of certain types of labor is not helpful. A writer is not tainted for using an LLM. No one will win rights by insisting on the sanctity of the offline Microsoft Word document. Shaming people for using AI in, for example, humanities contexts probably means they’re more likely to use it covertly—when the very least we should demand is transparency. If concepts of authorship are changing this rapidly, we need to be able to debate them openly. Normalization happens as much through secrecy as through disclosure. The argument I often hear against writers using it—that a person is bad/cheap/lazy/fake for opening that browser tab—is a distraction.

You can understand why a freelance writer with no contract, no insurance, no workplace, much less workplace protections, who’s getting paid 6 cents per word to write an art review—I’m using that example instead of something seemingly more innocuous like generating image captions and less bombastic than writing a whole novel—would ask Claude to turn their notes into a four-paragraph document (which their editor has no time to revise). I don’t think that person is lacking conviction, doesn’t understand the way the world economy works, doesn’t appreciate creativity, doesn’t understand why we are doing “art” in the first place. I don’t think that person is desecrating the industry. Because what I would be saying, functionally (and this is true), is that this person cannot afford to be a writer. Surely that is the substrate of the problem, not the individual writer’s laziness. Much less the issue of whether AI wrote a Good Enough review that nobody noticed.

I don’t want to read reviews touched by no human hand. My point is that this is a labor issue that only labor solidarity can ever begin to confront. We can’t talk about AI threatening the meaning and soul of art or undermining our intellect unless, in the same breath, we say that writers, like everyone, need to be compensated so they have the choice to draw Red Lines.

And not just writers: the people working in the Amazon warehouse distributing those books that we wrote, whose movements are being tracked by surveillance cameras using AI to determine whether they’ve taken too long a bathroom break. And also: the people coding the camera software. (A lot of tech workers are thinking deeply and with great nuance about ethics and best practices!) We have to see our labor as connected to theirs, and theirs. Completely dependent on theirs, actually. Also, wouldn’t it be great if all those other workers had enough time between their shifts to read the books that we wrote?

This week I’m mostly thinking about the idea that “Mass surveillance, while very scary, is like the 10th scariest thing the government could do with control over the AI systems with which we will interface with the world.”

I’m not saying Don’t fuss about books during the End Times—the opposite. I hope to be around to write books about what’s happening.

When it comes to writing: I’m scared that the infrastructure that supports writers is disintegrating. I’m scared of the energy cost and the military complicity that are so outsized they’re hard to think about. I’m scared that we increasingly take for granted that AI is everywhere. I’m scared that young people aren’t learning to write anymore, and are therefore not learning to think in the way that writing enables, because AI does it for them.

This will, yes, lead to a cratering of the intellectual capacity of a generation, and (in tandem with other repressive forces) a cratering of the intellectual class. But those fears are not addressed by handwringing over whether AI can write a humanish novel. They are not addressed by indicting people who use a given tool to work at a tech company or write art reviews or generate resumes so they can get a better-paid job than lifting boxes. The fears are addressed by organizing with them.


This essay was first published on the author’s Substack, Fast Writing, on May 16, 2026.