The Stillborn God: the Age of AI is (Un)Dead

SHODAN, the evil AI from the System Shock series of games. Fortunately, we’re probably not going to have to deal with this. Unfortunately, we’re probably going to have to deal with something much, much dumber.

I’ve written about Large Language Models here, and further reading has evolved my thoughts somewhat, to the point where I would like to update what I said previously. I will be putting a link to this piece in that one. This is unavoidable: discourse around so-called “Artificial Intelligence” has continued apace, and we’re just now seeing it tip over from the Peak of Inflated Expectations into the Trough of Disillusionment.

This piece was kicked off by last week’s somewhat sensationally-named episodes of Behind the Bastards, which its host, Robert Evans – a former tech reporter and war correspondent (for Cracked, of all things; apparently they didn’t know he was doing that at the time) – reproduced on his Substack, Shatterzone. The title of this piece is the same: “AI is Coming For Your Children.”

The latest edition of Literature as Exploration. It’s in its 5th edition, which came out a decade after the author died. They should probably put some more names on the cover.

In this, Evans details a number of grifters using ChatGPT and Midjourney to produce “children’s books.” Evans correctly identifies this as a troubling development, and even does a fair amount of research to identify why this is a problem – he uses a framework taken from Louise M. Rosenblatt’s Literature as Exploration to explain why. Rosenblatt was a pioneer of reader-response theory, and while I generally think that’s a solid theory, there are some small problems with framing it as if it were settled science backed up by field research and robust cognitive theories – that’s not generally how things work in the Liberal Arts. There are, of course, other, competing theories with alternative ideas about how literacy is acquired and how texts function. This is not necessarily a major problem for the argument Evans is making from a rhetorical standpoint, and I’ll get to why in a moment. I’m still in the summary portion of things.

The principal issue raised here is that a work of literature is viewed as a conversation of sorts, and a work written by a Large Language Model is, generally speaking, the same as a conversation with a chatbot. There isn’t another consciousness there; it’s fundamentally the same as the Chinese Room thought experiment that forms a major portion of Peter Watts’s Blindsight. It’s a statistical system that works by predicting the most likely next token (not even the next word, mind you – these models generally work on sub-word fragments a few characters long).
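To make that mechanism concrete, here is a minimal sketch of statistical next-token prediction: a toy bigram model in Python, trained on a made-up three-sentence corpus. It is enormously cruder than an actual transformer, but it makes the same basic move – look at what came before, then emit whichever continuation the statistics favor.

```python
import random
from collections import Counter, defaultdict

# A hypothetical toy corpus; a real model is trained on terabytes of scraped text.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)
tokens = corpus.split()

# Count how often each token follows each other token (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Sample a next token in proportion to how often it followed `prev` in the corpus."""
    counts = following[prev]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate text by repeatedly predicting the next token. There is no plan and no
# meaning behind any individual choice -- just statistics rolled like dice, one step at a time.
token = "the"
output = [token]
for _ in range(12):
    token = next_token(token)
    output.append(token)

print(" ".join(output))
```

A real model conditions on thousands of preceding sub-word tokens rather than a single word, and its probabilities come out of a neural network rather than a lookup table, but the generation loop – predict, sample, append, repeat – is the same shape.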

This creates the Uncanny Valley effect that many people have pointed out. It’s difficult to point to any one feature that marks something as LLM-generated, but you can always feel that something is off. That’s largely because it’s not the presence of something new and weird, but the absence of something: there is no vision behind it, so there can be no intentionality behind its decisions, and – as a result – those decisions are essentially made with a roll of the dice (see our oft-repeated passage from Mark Fisher’s The Weird and the Eerie: it’s not that AI is weird, it’s eerie – something is lacking).

An aside: this same sort of eeriness appears in student papers from time to time. Generally speaking, English composition classes are unfairly biased against students who come from non-anglophone backgrounds, and addressing this asymmetry is a major issue that my colleagues and I work on. I generally try to work with students in these situations: I catch them before they leave the room and talk to them about the process. I point out that I couldn’t take college courses in Spanish or French or Arabic myself, and that while I won’t factor grammatical errors into their grades beyond whether the message gets across, I will mark them so that the student can improve.

However, in the past few years, I have noticed the phenomenon of students using AI-assisted tools to work around this: they write in their own language, then pass it through Google Translate and a gauntlet of assistive services – Grammarly and Quillbot among them. Some students, I am given to understand, have five or six such automatic steps that they put their work through. This flattens something essential out of the paper and results in something that may communicate an idea, but has the side effect of infecting any English teacher who reads it with the rage virus. It sets off every sense I have developed for whether something is plagiarized, because it doesn’t match the style and voice of any of the lower-stakes or in-class writing that the student has done.

Look, if English were easy, then there would be a lot fewer books making fun of how difficult it is.

The question remains, of course, of who did this writing. The only mind behind it is the student’s, but they have not developed the skills the work was supposed to build.

What I tend to do here is take the student aside and talk to them again, explaining the issue and connecting them with resources that can help – every university worth its salt has a writing center, and there are people there who can take the time to help with this. What is most necessary, though, is convincing them to take the long road to improving their work, rather than simply attempting – badly – to shortcut around it. I’d go deeper into this question, but the approach depends largely on the student’s character, the language they’re coming to English from, and a number of other issues. Besides, going into specifics would be a FERPA violation.

An autobiographical aside: this summer, I am working for a nonprofit that is planting trees around the Kansas City metro. My job largely consists of watering trees, but it occasionally involves pruning, removing dead trees, or staking young ones. Not every young tree is staked in place, but many are. My trainer explained why, pointing at a stand of trees put in by another service in an adjacent part of one park: staking improves a tree’s survival rate for the first four years or so, but beyond that, a tree that was staked unnecessarily will actually be weaker. A young tree made to stand on its own flexes with the wind and weather, and its roots strengthen and dig deeper into the soil. A staked tree has weaker, shallower roots.

Obviously, people aren’t trees, and it would be possible to extend this lesson too far and use it to justify painful and unnecessary treatment of children. But young students need brushes with failure to grow their skills. A sense that one marches from victory to victory simply leads to a lack of resilience in the face of actual failure.

The goal of an educational institution isn’t to certify completion, it’s to teach skills and knowledge that are considered necessary.

Here we come to the crux of my critique.

The hype around so-called “AI” – Large Language Models and machine learning – doesn’t say anything about the future. It says things about the present and the immediate past. One of the most important things it says is, “we have undervalued the liberal arts to a dangerous and foolish degree.” Because no one who has paid attention to the study of literature or writing will think that there is any value in a purely AI-generated text, any more than a serious engineer will consider a description of the hyperdrive from a Star Wars tie-in novel a legitimate avenue of exploration for spaceship propulsion.

This is part of why – while I am sympathetic to Evans’s argument, described above, and believe that he is almost completely correct in his assessment – I point out that treating Rosenblatt’s Literature as Exploration as “settled science” is a potential problem. It is a useful tool for solving the immediate problem, but it preserves a grain of what I think the underlying problem is: the idea that all knowledge functions like science, where a theory can simply be settled. The Liberal Arts are a zone of indeterminacy and flexibility where you occasionally have to behave as if three mutually incompatible theories are all true in order to understand what is going on. The best explanation for this might come from the Russian-Estonian critic Juri Lotman’s theory of poetry, in which a poem is not a single system but a “system-of-systems,” and the places where these different patterns reinforce or interfere with one another are part of the point (an aside: I’ve never read Lotman directly; I encountered him in Terry Eagleton’s much more approachable book How to Read a Poem).

New game: Mark Twain or Eastern European Academic?

This “zone of indeterminacy and flexibility” issue is why students using all of these different tools is a problem. The most common comparison I’ve heard people use is to handwave it and say it’s like the introduction of calculators into the math classroom. That comparison fails on its own terms: math is – by and large – deterministic, and there is a single right answer to any given problem. The question in liberal arts classes is how one deals with indeterminacy, and “I banish it with a machine” isn’t really a valid answer. It’s a refusal to engage with the question.

Some might argue that this can eventually be solved through some other approach – that perhaps there is a way to enforce consistency somehow.

I don’t worry about this, because I am convinced that the age of generative AI is already dead. The problem is that it will continue to shamble along like a zombie reciting nonsense listicles, because the money people remain convinced that keeping it moving makes sense.

It is already dead because of the phenomenon of “model collapse.” According to “The Curse of Recursion: Training on Generated Data Makes Models Forget,” a paper published to arXiv by Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson – computer science scholars working in the UK and Canada – models trained on data sets that include AI-generated content become more and more deranged as time goes on.

One of these authors, Ross Anderson, summarized the situation in a blog post:

“Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data.

After we published this paper, we noticed that Ted Chiang had already commented on the effect in February, noting that ChatGPT is like a blurry jpeg of all the text on the Internet, and that copies of copies get worse. In our paper we work through the math, explain the effect in detail, and show that it is universal.” (Links in original)

I think that prions are an apt metaphor here: a mistake in the process of creation leads to a cascade of failures. Pictured: the protein responsible for BSE, or “Mad Cow” disease.

What this means is that we had one chance to take a complete snapshot of the internet, and as soon as AI-generated content began to sneak in, that data became poison. All future sets of training data, unless they are hand-picked by humans, contain the statistical equivalent of prion disease. The only fix would be an accurate way of judging whether something contains LLM-generated content – which would also solve the issue for educators: such a tool would make it possible to filter out AI-generated junk entirely, and I imagine it would be used alongside plagiarism-detection software as well.
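As a rough illustration of the dynamic the paper describes – not a reproduction of its math – here is a sketch of recursive training at its absolute simplest: fit a one-dimensional Gaussian to some data, sample “synthetic” data from the fit, fit again, and repeat. Because each generation only ever sees a finite sample of the previous generation’s output, estimation error compounds; in the paper’s analysis the estimated spread eventually collapses, and the tails of the original distribution – the rare, interesting stuff – are the first to go. The sample size and generation count below are arbitrary choices for the demo.

```python
import random
import statistics

random.seed(0)

SAMPLES = 50        # the finite sample size at each step is what drives the effect
GENERATIONS = 30

# Generation 0: "human" data drawn from a known distribution.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES)]

for generation in range(GENERATIONS):
    # "Train" a model: here, just estimate a mean and a standard deviation.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")

    # Publish the model's output to "the internet," then scrape that output as the
    # next generation's training set: sample entirely from the fitted model.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES)]
```

A language model is not a Gaussian, and the paper works through the cases that actually matter (discrete distributions, autoencoders, real LLMs), but the shape of the failure is the same: each generation learns a slightly lossy copy of the last, and copies of copies get worse.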

The problem, though, is that these tools will remain cheap, and will stay cheap even deep into their derangement. The people generating cognitive poison for children up above spend almost no money and almost as little time on their work. They can repeat the process hundreds of times in the time it takes a legitimate children’s book to be made. It produces garbage, but the margins are incredibly friendly. Why do six months of work that involves thought and skilled labor when you can spend the same period churning out digital slop for the same payout?

There’s a common saying, in regard to things like this, that a genie can’t be put back in its bottle. Once a world-changing force has been unleashed, it will continue and it will be part of the world forever.

I do not think this is true. Talk of genies and bottles and bells that cannot be unrung is ideological. These are declarations not of how the world is, but of how those who hold power in our society want it to work.

We’re getting the streetcars back but, man, is it a slog.

Back in the early 20th century, American cities had robust networks of inter-urban rail and streetcars – we had the functional public transportation that you have to go to Europe to experience these days – and it was systematically destroyed by GM and associated companies to bolster the sale of automobiles. Now, of course, American cities are choking on smog and you can’t walk a mile in them without being forced to wait at least once for a line of explosion-powered death machines to roar past.

But this wasn’t a superior technology defeating an inferior one, this was an inferior technology being made to supplant a superior one through the application of capital.

Now, the people who hate AI the most don’t really have GM-levels of capital, but capital is just one kind of social force. If enough people refuse, reject, and resist the functioning of the machine, it can be prevented from doing what it is designed to do.

That is what is necessary here.

Refuse to buy anything made by large language models, refuse its demands upon your attention.

Reject anything you have previously accepted when you see it gesture towards use of AI.

Resist efforts to incorporate these tools into your life and fight back to the best of your abilities.

Perhaps one day, it may be possible for a machine to be conscious and to qualify as a person. I do not believe this day will come, and if it does I do not believe it will come in my lifetime. If this happens, then obviously creative expression will be part of its experience of the world – that is part of what it means to be a person.

But what is happening now is not that. There is no “AI.” There is no artificial consciousness. There are only more robust chatbots than there were previously. Do not be fooled.

If you enjoyed reading this, consider following our writing staff on Twitter, where you can find Cameron and Edgar. Just in case you didn’t know, we also have a Facebook fan page, which you can follow if you’d like regular updates and a bookshop where you can buy the books we review and reference (while supporting both us and a coalition of local bookshops all over the United States.) We are also restarting our Tumblr, which you can follow here.

Also, Edgar has a short story in the anthology Bound In Flesh; you can read our announcement here, or buy the book here.