AI Is Costing Us Our Voice
The Guardian recently came under fire for publishing blatant AI slop. Might this signal the beginning of the abolition of man?
The Guardian came under fire recently for publishing an AI-generated news article. An author by the name of Bryan Graham wrote what can only be described as “AI slop.” But fascinatingly, a Guardian spokesman reportedly said, “Bryan is an exemplary journalist, and this is the same style he’s used for 11 years writing for the Guardian, long before LLMs existed. The allegation is preposterous.”
Assuming for the moment this quote did come from a real spokesman, I am not sure how to understand the argument. Are we saying that Graham wrote in mechanical and uninspired ways for years? Or might this just be cope?
Max Spero thinks the latter. He fetched 871 articles that Graham published with the Guardian over a six-year period and presented evidence that at least some of Graham’s articles are fully AI-generated.
What makes Spero’s evidence interesting and the Guardian’s reported defence of Graham damning is the Guardian’s own policy on AI use in journalism from 2023:
“If we wish to include significant elements generated by AI in a piece of work, we will only do so with clear evidence of a specific benefit, human oversight, and the explicit permission of a senior editor. We will be open with our readers when we do this.”
Evidently, the policy changed. But the Guardian is not unique in using AI for journalistic ends. Writing has become intertwined with AI. Not everyone will use generative AI to produce writing, but the temptation to do so is great, and from students to professionals, generative AI has become virtually necessary for everyday life.
The speed at which North Americans have adopted AI is staggering. Pew Research recently found that 62% of working-age adults regularly use AI throughout the week. I imagine that number will climb through 2026.
And this means that business writing, marketing, and other forms of the written word will increasingly rely, to some degree, on generative AI. At this point, it is destiny, not decision.
Here lies the danger of what C. Thi Nguyen calls “value capture”: we outsource our values to external scoring systems. In the case of writing, that might mean “clicks” and “views.” Writers and editors end up changing how they produce content to match demand, and the values of readers are transferred to writers and editors. Even if the writing team’s core values never formally change, they become captured by external standards of value.
Value capture happens across industries. And so we should expect that as writers learn how to write through AI, they will begin to sound like AI. Then we won’t really know what AI writing is and what it isn’t. But we will know that writing will become boring, all the same.
The standardization of processes came into being so that a factory could hire any able-bodied person to fill a role. Employers no longer needed experts; they could insert or remove any individual who could follow a simple mechanical process. Over time, standardization spread across various fields (education, law, etc.).
And where standardization exists, individuality on average plummets. There will, of course, be exceptions that prove some people non-average, and that will continue. But I am talking here about general patterns.
And when it comes to writing, I suspect that just as factories used standardization to remove the need for skilled workers, AI writing may do something similar. AI, as a machine, scrapes the internet to find common patterns of acceptable writing that people will read. It thus standardizes writing through generalities.
As a machine, AI works by outsourcing its values to external sources, then adopting those values and expressing them back to users. Yes, a skilled AI user will avoid this problem, but I am talking about averages. The average person, as data over the last two years has shown, outsources his or her memory and skills to the machine. As Gurwinder recently reminded us, we should only automate what we want to lose.
Publishers contribute to this paradigm. Over the last few decades, publishers have standardized their books to fit the values of buyers. This process was already well underway, mind you. But as the mechanical sciences flooded the humanities, publishing too became more machine-like. Mary Midgley describes this as the myth of science: the myth that because science works so well in physics (it does), it must work equally well in the humanities (it does not always).
I should probably stop here and make my point: standardization intentionally lowers quality. The point is to make something easy enough that anyone can do it. AI privileges standardization through its machine-like mode of operation. As writing becomes more and more machine-like, it becomes more and more generalized, and its quality must fall accordingly.
So maybe Bryan Graham did write his article by hand, and perhaps the AI-detection software did find true AI writing. But it did not come from generative AI; it came from an AI-generated writer, whose individuality has been lost in the machine.
But if that is the case, might that be worse than simply using AI-generated content? It would mean that AI has changed what it means to be a human writer. We might see, in Lewis’s phrase, the beginnings of the abolition of man.

So sad that I spent all my years in school learning to have a unique writing voice, and now we have AI erasing all uniqueness and individuality. And that’s not even touching on how much intelligence has been lost in the younger generations.
When I first encountered AI, I immediately realized the mediocrity it would unleash.
My fears were not unfounded. AI is not neutral; it is damaging.
It damages brand reputations (although the staff at the Guardian are good enough at reaching that goal on their own).
It damages writers by flattening their development curve. They do not receive proper criticism or feedback, and they don’t put in the work to produce quality copy. People who use it accrue cognitive debt and, as a result, become increasingly reliant on it as pressure mounts.
It damages the soul. You are called to glorify the Father through your works; every professional action ought to be a song of praise. How can you praise the Father when you hobble yourself? What athlete doesn’t train? What have you ever had in your life that was truly meaningful that you didn’t have to work for?
But no, go ahead, staff at the Guardian. Time to double down on stupidity. It’s a shame they don’t print papers much anymore, or I’d at least have some good fire-starter.