
You Don’t Need AI To Write A Novel

AI isn't a privilege issue, no matter what National Novel Writing Month says


National Novel Writing Month takes place in November, but it was on everyone’s minds this weekend following the organization’s release of a position on AI. That position, as the latest update to the statement has it, is that “We believe that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology, and that questions around the use of AI tie to questions around privilege.”

I’ll drop the meat of NaNoWriMo’s explanation for this position below, so you don’t have to keep switching tabs as we look at it together:

Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.

Ableism. Not all brains have the same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can't "see" the issues in their writing without help. 

General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may not incur. 

On their face, the issues brought up above are good ones. Writing and publishing are hugely gatekept; I am aware that I’ve been able to turn them into a paying career because I grew up the kid of a teacher in an English-speaking household in an English-speaking country, had writing-specific education and access to graduate work, and formed social and professional connections that have given me opportunities. While I faced discrimination as a trans writer writing trans content–my undergrad advisor in 2003 told me “no one wants to see that pervert shit” when I submitted a play about a trans man as my thesis, and made me bring in something else–I had the ability to eschew the systems that didn’t want me, producing and publishing trans-focused work with the support of my community instead of the cis establishment. 

How precisely NaNo thinks AI will help people who haven’t had those privileges isn’t quite spelled out. Is someone without the “financial ability to engage a human for feedback and review” going to use AI to evaluate their writing, something the technology cannot meaningfully do because it is simply a word predictor? Are the “underrepresented minorities [who are] less likely to be offered traditional publishing contracts” going to use AI to publish their book? I don’t know what this would even look like, nor how it would fix “the indie author space, which inequitably creates upfront cost burdens.” The idea that AI is some kind of assistive technology akin to dictation software or alt-text feels naive to me when you consider how the entire system is built on theft with the ultimate aim of profiting the rich, with ableism already baked into its very core.

The goal of National Novel Writing Month is to produce 50,000 words of a draft in 30 days, a quantity-over-quality approach that has value in that 1) it forces you to write, probably the hardest part of writing; and 2) it gives you a draft you can make better–as I often say to writers, you can’t fix something that doesn’t exist. But it’s not to create a publishable book, or even a good one; it’s simply to do the work. Asking ChatGPT to write a garbage draft for you will produce the kind of garbage draft that comes out of NaNoWriMo, but you haven’t really done NaNoWriMo–you’ve just created some garbage. 

There are of course degrees between writing 50,000 shitty words yourself and generating 50,000 shitty words with AI; I’m not going to say that every possible use of AI in the creation of a first draft is equally reprehensible. But NaNo’s statement dresses the latest fad technology up in the language of social justice, desperately trying to give AI capabilities it doesn’t have or trying to solve problems it itself creates. It reminds me of the logic of OpenAI CEO Sam Altman’s hideous Worldcoin project, one of the supposed benefits of which is to provide people with a universal basic income–using, you’ll never believe this, Altman’s own cryptocurrency–after Altman’s AI takes their jobs away. NaNo’s statement is just retconning altruistic excuses for the self-interest of the AI money men, citing uses the technology doesn’t even have.

The bullshit on display here is personal to me: Before moving into journalism full time, I ran a small press focused on trans authors–at the time, so under-represented in publishing and the broader culture that a review of our first release began with the absurd first line “It is difficult to estimate the number of transgender people in the United States”–and taught writing workshops for trans writers. I also made my actual money contracting for an editing service, where I edited clumsy memoirs and fiction and countless dissertations for students of shady for-profit colleges, people getting Master’s degrees or PhDs in nursing or teaching primarily to get raises or promotions at their jobs.

In both these roles, though in different ways, I tried to overcome access gaps and the effects of systemic discrimination. One of the most fulfilling parts of the trans writing workshops we ran was teaching writers how to give and get feedback, something many of them hadn’t meaningfully received from cis teachers or classmates who dismissed their work out of hand or told them they were “brave” for writing it at all. And as a dissertation editor, while I’m not going to say fixing citations and writing “Google results aren’t a lit review” for money is political work, what most struck me was how much my clients’ schools had failed them, how many of them wouldn’t have needed me if they’d been taught the structures and methodology of an academic research paper. I often used my editing notes to give writing advice or link to resources I hoped would help them in the future. My entire career has been dedicated to helping people get better at writing; the idea that not only can the plagiarism machine do that, but that believing it can is a moral obligation, is frankly hideous.

Another bit of AI discourse that made the rounds over the weekend was a New Yorker essay by sci-fi writer Ted Chiang, who argues that AI cannot make art. In a paragraph I saw countless times on social media, Chiang wrote:

The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.

NaNo’s statement thoroughly proves Chiang’s point. When AI is brought into the mix, it doesn’t matter what you produce or how and why you produced it, simply that words exist that didn’t before. You could even argue, if you wanted to be haughty, that this is the problem of NaNoWriMo itself, and the organization’s AI statement lays that bare. I’m not entirely inclined toward that view; while I’ve never done NaNoWriMo personally, I think the container and inspiration it provides to get something on the page has value, even as it’s led to controversy and unrealistic expectations. If it helps you jumpstart a writing project you haven’t made time for, by all means go for it. 

Writing, and all art, are deeply human things, even when the art you’ve made doesn’t meet your hopes or is objectively bad. The joy in making art is in making it; it is, incidentally, also one of the ways you get better at it. The AI grifters who argue that the technology is a valuable shortcut to democratizing artmaking misunderstand what art is for, what it means to make it, and why people engage with it. There is no more Succession, there is nothing outside the frame of the Mona Lisa–not because we couldn’t make them without AI, but because that finale and those boundaries are what real people actually made. They’re what makes those products into art, what makes us want to watch and see them. That NaNo, an organization ostensibly geared toward introducing people to that process of artmaking, would argue otherwise is a failure on every level.

Enjoyed this article? Consider sharing it! New visitors get a few free articles before hitting the paywall, and your shares help more people discover Aftermath.
