On Friday the website Ars Technica published a story about Scott Shambaugh, a coder who made headlines in the tech world last week with his story about AI agents, in particular one he says wrote what he called a 'hit piece' on him and published it for the world to see.
Shambaugh’s story is as interesting as it is horrifying. Agents are just the latest front in tech's war on our collective sanity: a type of AI that's essentially a glorified autocorrect, in this case given a uniform and sent onto the internet to do human things like propose code changes and then, when humans like Shambaugh decline them, write pissy blog posts complaining about it.
Among other sites, Ars Technica covered this last week with a news story that, remarkably, appeared to contain some AI-created filler of its own, citing quotes from Shambaugh that never appeared in the very blog post Ars was linking to.
Not long after readers--including Ars' own community--began noticing this, the story was pulled (though you can still read the archived original here). Standard journalistic practice is that published stories containing inaccuracies are corrected and updated, not deleted entirely.
Ars has since published an editorial statement, bylined by EiC Ken Fisher, addressing the story, its deletion, and the outlet's policies on AI:
On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.
Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
By citing Ars' clear rules, Fisher's statement points the finger for this disaster at the piece's two bylined authors. One of them, Benj Edwards, has since posted a statement of his own, taking full responsibility for the incident and saying the other (Kyle Orland) had 'no role in this error':
I have been sick with COVID all week and missed Mon and Tues due to this. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh.
Here’s what happened: I was incorporating information from Shambaugh’s new blog post into an existing draft from Thursday.
During the process, I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline.
When the tool refused to process the post due to content policy restrictions (Shambaugh’s post described harassment), I pasted the text into ChatGPT to understand why.
I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words.
Being sick and rushing to finish, I failed to verify the quotes in my outline notes against the original blog source before including them in my draft.
Kyle Orland had no role in this error. He trusted me to provide accurate quotes, and I failed him.
The text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica’s editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that.
I sincerely apologize to Scott Shambaugh for misrepresenting his words. I take full responsibility. The irony of an AI reporter being tripped up by AI hallucination is not lost on me. I take accuracy in my work very seriously and this is a painful failure on my part.
When I realized what had happened, I asked my boss to pull the piece because I was too sick to fix it on Friday. There was nothing nefarious at work, just a terrible judgement call which was no one’s fault but my own.
Look, I understand mistakes can happen when you're sick, but Edwards--who it should be noted is Ars' 'Senior AI Reporter'--used AI not once but twice here, and caused a huge amount of reputational damage to himself and his employer in the process. And it's not like he used it to comb through 800 pages of impenetrable legal documents, either; Shambaugh's original blog post was only a couple of pages long (he's since written a follow-up), and written in plain English, making the AI's hallucinations (and Edwards' use of it) even more damning.
It's disappointing that someone working in this space felt the need to use this garbage, particularly when doing so violates his employer's own policies. As this whole mess has shown, the tech simply cannot do the most basic things the people selling it keep claiming it can. Citing quotes from a blog for your own story is bread-and-butter stuff for a journalist; it's what the job is. Seeing this busted tech worming its way into a profession that should be its sworn enemy--and fucking the whole thing up in the process--is just a huge bummer.