Ars Technica fires reporter after AI controversy involving fabricated quotes
danso
130 points
63 comments
March 03, 2026
Related Discussions
Found 5 related stories in 54.9ms across 3,471 title embeddings via pgvector HNSW
- Senior European journalist suspended over AI-generated quotes Brajeshwar · 84 pts · March 21, 2026 · 74% similar
- BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI jsheard · 21 pts · March 13, 2026 · 55% similar
- Wikipedia bans AI-generated content in its online encyclopedia Brajeshwar · 76 pts · March 28, 2026 · 55% similar
- Elon Musk's xAI sued for turning three girls' real photos into AI CSAM nobody9999 · 19 pts · March 16, 2026 · 55% similar
- Elon Musk pushes out more xAI founders as AI coding effort falters merksittich · 385 pts · March 13, 2026 · 55% similar
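The related-story list above is produced by a nearest-neighbor search over title embeddings using a pgvector HNSW index. A minimal sketch of what that query could look like follows; the `stories` table, column names, and the query-builder function are assumptions for illustration, not the site's actual schema.

```python
# Sketch of the similarity lookup behind the "Related Discussions" list,
# assuming a pgvector-backed `stories` table with an HNSW index on `embedding`.
# Table and column names here are hypothetical.

def related_stories_sql(limit: int = 5) -> str:
    """Build a pgvector similarity query.

    `<=>` is pgvector's cosine-distance operator, so `1 - distance`
    yields the "% similar" figure shown in the list above.
    """
    return (
        "SELECT title, author, points, "
        "1 - (embedding <=> %(query_embedding)s::vector) AS similarity "
        "FROM stories "
        "ORDER BY embedding <=> %(query_embedding)s::vector "
        f"LIMIT {limit};"
    )

# The HNSW index that lets Postgres serve the ORDER BY ... LIMIT
# with an approximate nearest-neighbor walk instead of a full scan:
CREATE_INDEX_SQL = (
    "CREATE INDEX ON stories USING hnsw (embedding vector_cosine_ops);"
)
```

With the HNSW index in place, the `ORDER BY embedding <=> ... LIMIT n` pattern is answered approximately in milliseconds, which is consistent with the sub-55 ms timing reported for 3,471 embeddings above.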
Discussion Highlights (19 comments)
ab_testing
So they fired the author after he had publicly apologized on Bluesky.
add-sub-mul-div
> senior AI reporter

A true "senior" AI reporter should be more skeptical of LLM output than anyone else.
Revanche1367
So the original blogger got slandered by an LLM agent, then got slandered again by a human journalist who used an LLM agent to write the article about him getting slandered by an LLM agent? How ironic. But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.
JumpCrisscross
“Edwards also stressed that his colleague Kyle Orland, the site’s senior gaming editor who co-bylined the retracted story, had ‘no role in this error.’” Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.
sl0pmaestro
> while working from bed with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him

Oh right, being ill is what caused the error. I'd bet that if you start verifying this author's past content, you will see similar AI slop. Either that or he has always been ill with very little sleep.
geerlingguy
Context from earlier discussion of the article being pulled: https://news.ycombinator.com/item?id=47009949
vadansky
Good time to watch Shattered Glass. Imagine what he could have gotten up to with LLMs.
jackyli02
The role of "reporter" deserves very little credence in AI coverage now. The public might be better off getting their information on AI from ChatGPT.
sl0pmaestro
Happy to see some accountability here. Although it's unclear why the other co-author who stamped their name on that article was retained. Maybe they just stamped their name to meet their quota of articles. In any case, this follow-up action makes me take Ars Technica's standards a bit more seriously.
aizk
I have a story with Benji. Last year I went viral, and Benji was the first person to interview me. It was a really cool experience: we chatted via Twitter DMs, and he wrote a piece about my work. Overall he did a decent job. Then, six months later, a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response. Then TechCrunch wrote an article on our project. I reached out to Benji again, saying, "Hey, would you like to chat again, now that we have some coverage?" He finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?). I thought that was rather strange, especially since we had already built up a relationship. I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes. Oh, one other tip for anyone reading this: if you ever get contacted by journalists, communicate in writing, not on a phone call, so you can be VERY precise in your wording.
jmyeet
The crazy part to me is that even here on HN there are people who still insist that LLMs don't fabricate things or otherwise lie. I wonder if these are the same people who, 3-4 years ago, were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion-dollar business. Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.
rahimnathwani
The headline says Ars fired the reporter, but AFAICT the article doesn't include any facts that indicate this. All we know is that he no longer works there, and that Ars refused to provide any additional information.
aidenn0
I don't know that this is what happened here, but any time there is a push to do more with less, you end up rewarding people who take shortcuts over those who do a proper job, and from the outside, it looks like journalism has a push to do more with less.
raincole
I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it went from "practically useless" to "the actual Google search" in less than two years. I really don't know where the internet is heading or how any content site can survive.
Barrin92
People have said enough about the ethics of all of it, but what I found even sadder is that the story made me curious enough to look at the actual piece he "investigated" with AI. It's this one: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on... It is, by the way, a bit more than 1k words, which takes the average American reader, never mind a senior journalist, ~5 minutes. This whole story involved asking Claude to mine that text for quotes (which it refused to do because the text included harassment-related content), then asking ChatGPT to explain that refusal, and so on. That entire ordeal probably generated more text from the chatbots than just reading the few paragraphs of the blog post. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as people who go "grok what does this mean" under every Twitter post. It's like a schoolchild who cheats and expends more energy cheating than just learning what they're supposed to.
lich_king
I clicked through the author's earlier stories when this first made waves. I obviously had no proof, but I was pretty certain that he'd been using LLMs to generate stories for a good while. When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did look, in the end?
bragr
The headline is a bit sensational, considering that all we know from the reporting is that he isn't working there anymore. Fired? Likely, sure, but not known for a fact.
AnonC
Journalists and bloggers usually write about others' mess-ups and apologies, dissecting which apologies are authentic and which are non-apologies. In this incident, Aurich Lawson of Ars Technica deleted the original article (which had LLM-hallucinated quotes) instead of updating it with the error. He then published a vague non-apology, just like large companies and politicians usually do. And now we learn that this reporter was fired, and yet Ars Technica doesn't publish even a snippet of an article about it. There's something to be said about the value of owning up to issues and being forthright about actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable due to honesty, I would've thought that Ars would be, or could've been, a beacon for how these things should be talked about. It's sad to see Ars Technica at this level.
breput
As much as I respect the site and gladly financially support it, this is ultimately a failure of Ars Technica and its editors. If there are any. If this were just some random blogger, then yes, the blame would be totally theirs. But this was published under the Ars Technica masthead, and there should have been someone or something double-checking the veracity of the contents. That said, a number of Ars Technica contributors are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham, amongst many others, so one f'up shouldn't really impugn the entire organization.