Senior European journalist suspended over AI-generated quotes
Brajeshwar
84 points
74 comments
March 21, 2026
Related Discussions
Found 5 related stories in 54.8ms across 3,471 title embeddings via pgvector HNSW
- Ars Technica fires reporter after AI controversy involving fabricated quotes danso · 130 pts · March 03, 2026 · 74% similar
- Wikipedia bans AI-generated content in its online encyclopedia Brajeshwar · 76 pts · March 28, 2026 · 59% similar
- Wikipedia officially bans AI-generated content 1vuio0pswjnm7 · 14 pts · March 29, 2026 · 55% similar
- India's top court angry after junior judge cites fake AI-generated orders tchalla · 343 pts · March 03, 2026 · 52% similar
- Meta planning layoffs as AI costs mount: Reuters nova22033 · 13 pts · March 14, 2026 · 52% similar
Discussion Highlights (14 comments)
Chinjut
Good lord, even the apology is AI generated: "That was not just careless—it was wrong." https://pressanddemocracy.substack.com/p/i-am-admitting-my-m...
phreack
> “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author. Of course, I should have verified them. The necessary ‘human oversight’, which I consistently advocate, fell short.” What? Irresistible quotes? This betrays a terrible way of thinking as a journalist. Basically an admission of wanting to fake news that'd sound good. At that point just write fiction.
abaieorro
> I wrongly put words into people’s mouths, when I should have presented them as paraphrases Journalists were doing this for decades. Stitching and editing words out of context, to put words into people's mouths! I will take AI hallucinations over journalist hallucinations anytime; at least the machine has no hostile intent and is making a genuine error!
mmooss
They said earlier that they didn't verify the quotes. I understand them to mean that the LLM output included quotes. They assumed the output was accurate and found it so appealing, on an emotional level, that they just went with it without checking.

The most valuable lesson here, by far, is not about other people but about ourselves. This person is trained, takes it seriously, and advocates for making sure the AI is supervised, and still got caught in the emotional manipulation of LLM design [0]. We all are at risk. If we look at the other person and mock them, and think we are better than them, we are only exposing ourselves to more risk. If we instead think - oh my goodness, look what happened, this is perilous - then we gain from what happened and can protect ourselves.

(We might also ask why this valuable tool includes such a manipulative interface. Don't take it for granted; it's not at all necessary for LLMs to work, and they could just as easily sound like a-holes.)

[0] I mean that obviously they are carefully designed to sound appealing
PeterStuer
"Journalism" over here seems to have died a long time ago. Most if not all of the former "quality newspapers" unfortunately seem to have devolved into what could be more accurately described as "pro regime activist blogs".
camillomiller
I have witnessed in person what LLMs have done to the mind of seemingly intelligent people. It’s a disaster.
intended
Looking at the media ecosystem at large gives me a case of gallows humor. In some sections of the ecosystem, firms still penalize journalists for errors. In other sections, checking reduces the velocity of attention-grabbing headlines. The difference in treatment is… farcical. We need more good journalists, and more good journalism - but we no longer have ways to subsidize such work. Ads / classifieds are dead, and revenue accrues to only a few. I have no idea how we square this circle.
ashwinnair99
The tool didn't fail here, the person did. An experienced journalist should know better. Editorial review exists for exactly this reason; if you skip it, this is what happens.
maxrmk
Ironic coming from the Guardian. One of their journalists consistently publishes ai slop and the paper is in denial about it. https://x.com/maxwelltani/status/2023089526445371777?s=46
crop_rotation
HN is full of people saying ABCD should know better, and honestly I thought the same, but when I look at almost all of my friends working in critical domains - as judges, engineers, lawyers, or even doctors - they seem to trust ChatGPT more or less blindly. People get defensive when I point out to them that ChatGPT will make things up and that this is widely known, and some even tell me it is the fault of "tech people" for not fixing it and that they can't be expected to double-check every ChatGPT conversation. So I am very sure this problem is more prevalent than what we see, and that it is going to keep increasing.
shahbaby
> That was not just careless – it was wrong lol
tobr
Interesting to note how similar this seems to what happened with Benj Edwards at Ars Technica. AI was used to extract or summarize information, and quotes found in the summary were then used as source material for the final writing and never double checked against the actual source. I’ve run into a similar problem myself - working with a big transcript, I asked an AI to pull out passages that related to a certain topic, and only because of oddities in the timestamps extracted did I realize that most of the quotes did not exist in the source at all.
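The check tobr is implicitly describing can be automated: before an extracted quote is used anywhere downstream, confirm it actually appears verbatim in the source transcript. A minimal sketch in Python (the `verify_quotes` helper, the sample transcript, and the sample quotes are illustrative, not from the thread):

```python
import re

def verify_quotes(quotes, transcript):
    """Flag quotes that do not appear verbatim in the source transcript.
    Whitespace is normalized so line-wrapping differences in the
    transcript don't cause false alarms."""
    norm = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    source = norm(transcript)
    return {q: norm(q) in source for q in quotes}

transcript = "We are investing heavily in safety. The rollout will be gradual."
quotes = [
    "The rollout will be gradual.",           # present in the source
    "Safety is our single highest priority."  # fabricated
]
print(verify_quotes(quotes, transcript))
```

An exact-substring check like this would have caught both the Ars Technica case and tobr's transcript problem; it fails only when the model lightly rewords a real quote, which is where fuzzier matching comes in.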
smcin
Already posted this yesterday: https://news.ycombinator.com/item?id=47449126
skygazer
Out of curiosity, if you asked for the same text extraction multiple times, each inside fresh contexts, is it likely to fabricate unique quotes each time? And if so, a) might that be a procedure we train humans to do to better understand LLM unreliability, and b) could we instrumentalize the behavior to measure answer overlap with non-LLM statistical tools? Also, quote-presence testing/linking against source would seem to be a trivial layer to build on a chat interface, no LLM required. Just highlight and link the longest common strings.
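The "highlight the longest common strings" layer skygazer suggests can be prototyped with the standard library alone, via `difflib.SequenceMatcher.find_longest_match`. A minimal sketch (the function name and sample texts are mine, not from the thread); a fabricated quote would share only short runs with the source and score a low coverage:

```python
from difflib import SequenceMatcher

def longest_common_run(quote, source):
    """Return the longest run of characters the quote shares with the
    source text, plus the fraction of the quote that run covers.
    Coverage near 1.0 means the quote exists in the source; a low value
    suggests fabrication or heavy rewording."""
    m = SequenceMatcher(None, quote.lower(), source.lower(), autojunk=False)
    match = m.find_longest_match(0, len(quote), 0, len(source))
    run = quote[match.a:match.a + match.size]
    return run, match.size / max(len(quote), 1)

source = "The minister said the budget would be reviewed next quarter."
run, coverage = longest_common_run("the budget would be reviewed", source)
print(run, round(coverage, 2))  # full match, coverage 1.0
```

In a chat UI this would drive the highlighting directly: render the matched run as a link into the source document and flag anything below some coverage threshold for manual review.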