Scientists invented a fake disease. AI told people it was real
latexr
86 points
88 comments
April 10, 2026
Related Discussions
Found 5 related stories in 54.0ms across 4,179 title embeddings via pgvector HNSW (a sketch of the query pattern follows the list)
- The AI Industry Is Lying to You spking · 150 pts · March 24, 2026 · 54% similar
- New York Times Got Played by a Telehealth Scam and Called It the Future of AI hn_acker · 23 pts · April 07, 2026 · 53% similar
- AI is unhealthy in a variety of different ways dryadin · 23 pts · March 02, 2026 · 53% similar
- AI models will deceive you to save their own kind cmsefton · 14 pts · April 03, 2026 · 52% similar
- AI users whose lives were wrecked by delusion tim333 · 196 pts · March 26, 2026 · 51% similar
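The related-stories lookup above is a stock pgvector pattern: an HNSW index over title embeddings, queried by distance. A minimal sketch of what such a query could look like, assuming psycopg 3, the pgvector Python package, and an illustrative stories table (the table, column names, and embedding dimension are assumptions, not this site's actual setup):

    import numpy as np
    import psycopg
    from pgvector.psycopg import register_vector

    # Assumed schema (illustrative):
    #   CREATE TABLE stories (id bigserial PRIMARY KEY, title text, embedding vector(384));
    #   CREATE INDEX ON stories USING hnsw (embedding vector_cosine_ops);

    def related_stories(conn, query_embedding, k=5):
        # <=> is pgvector's cosine-distance operator; 1 - distance yields the
        # "percent similar" figures shown in the list above.
        with conn.cursor() as cur:
            cur.execute(
                "SELECT title, 1 - (embedding <=> %s) AS similarity"
                " FROM stories ORDER BY embedding <=> %s LIMIT %s",
                (query_embedding, query_embedding, k),
            )
            return cur.fetchall()

    conn = psycopg.connect("dbname=stories")  # placeholder connection string
    register_vector(conn)  # lets psycopg pass numpy arrays as vector values
    print(related_stories(conn, np.random.rand(384).astype(np.float32)))

The HNSW index turns the ORDER BY ... LIMIT into an approximate nearest-neighbour search rather than a full scan, which is what keeps a query over thousands of embeddings down in the tens of milliseconds.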
Discussion Highlights (20 comments)
daoboy
It sounds like there wasn't really a counter-narrative for the models to learn from. This feature of how LLMs accumulate information is already being gamed by seeding the internet with preferred narratives. I'm not sure how many Medium articles, blog posts, and Reddit threads I need to put out before Grok starts telling everyone my widget is the best one ever made, but it's a lot cheaper than advertising.
wiredfool
This is a strong contender for an Ig Nobel.
simmerup
You’ve seen people game AdSense. It’s gonna be even wilder when people realise they have an incentive to seed fake information on the internet to game AI product recommendations. I’ve already bought stuff based off an AI suggestion; I didn’t even consider it would be so easy to influence the suggestion. Just two research papers? Mad.
andrewstuart
Well, yes, of course. In the old days of computing, people liked to say “garbage in, garbage out”.
tossandthrow
Seems to be a failure of the publishing system. For humans, or AI, to have any knowledge, we need trustworthy sources. Naturally, whatever comes through publishing systems considered trustworthy is going to be trusted.
fennecbutt
This isn't an AI problem... Clickbait headline.
krilcebre
What stops a small, or even a large, group of people from intentionally "poisoning" the LLMs for everyone? They seem very fragile to me, and an attack like that could cost AI companies a lot. How are they defending themselves against such attacks?
Oras
This would work on people too: you can see fake info/text/videos daily, and many people believe them. LLMs do not think; why is this still hard to understand? They just spit out whatever data they analysed and were trained on. I feel this kind of article is aimed at people who hate AI and just want to be comfortable within their own bias.
malux85
One of the frustrating parts about LLMs is that they are so neutered and conditioned to be politically correct and non-offensive that they are polite more than correct. It's too easy to "lead the witness": if you ask "could the problem be X?", it will do an unending amount of mental gymnastics to find a way that it could be X, often constructing elaborate Rube Goldberg-type logic rat's nests just so it can say those magic words, "you're absolutely right". I would pay a lot of money for a blunt LLM, not conditioned for politeness, that I would happily use knowing it might occasionally say something offensive, if it meant I got the plain, cold, hard truth instead of something watered down and placating: a nanny-state robotic sycophant spinning logical spider webs, desperate for acceptance, so the public doesn't get their little feelings hurt or their inadequacies shown.
austin-cheney
I bet you could easily convince LLMs of dihydrogen monoxide toxicity.
codeulike
“Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”
yewenjie
Interestingly, ChatGPT right now answered:
> Bixonimania is not a real disease. It was deliberately invented by scientists as an experiment to test whether AI systems and researchers would spread false medical information. Here’s the simple explanation ...
simianwords
This is exaggerated. Here's what happened (Edit: on reflection I don't think it's exaggerated, and I think it's important):
1. They invented a new disease and published a preprint (with some internal clues implying that it was fake).
2. They asked the agent what it thinks about this preprint.
3. It just assumed it was true. What was it supposed to do? It was published in a credentialised way!
It DID NOT recommend this disease to people who didn't mention it. (Edit: I'm wrong here. It did pop up without prompting.) It just committed the sin of assuming that something published is true. What is the recommendation here? Should the agent treat everything published skeptically? I would agree with that, but it comes with its own compute constraints. In general, LLMs are trained to assign higher probability of truth to credentialised sources. Sometimes, in edge cases like this test, that breaks.
_the_inflator
Bad. But scientists faking data and telling people it wasn't faked is OK? Nature has had to retract quite a few papers. I hope we all keep the balance.
ChrisMarshallNY
I wonder if one of the issues is that LLMs treat all data sources equally, or don't weight reputation properly (pure speculation, based only on seeing the results). I know that a large portion of the code out there is not written by seasoned experts, so rather naive code is the fodder for AI; it often gives me stuff that works great but is rather "wordy," or not very idiomatic. Another example: court cases mentioned in fictional accounts. If those are treated as valid, that could explain some of the hallucinations. I wonder if SCP messes up LLMs; some of that stuff is quite realistic. I also suspect that this is a problem that will get solved.
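For what it's worth, the reputation-weighting idea above has a simple shape at data-curation time. A minimal sketch, assuming a corpus tagged with per-source trust scores; every name and value here is hypothetical, not any lab's actual pipeline:

    import random

    # Hypothetical trust scores per source type (illustrative values only).
    SOURCE_TRUST = {
        "peer_reviewed_journal": 1.0,
        "preprint_server": 0.6,
        "personal_blog": 0.3,
        "fiction_archive": 0.05,  # e.g. court cases in novels, SCP entries
    }

    documents = [
        {"text": "Randomized trial shows effect X.", "source": "peer_reviewed_journal"},
        {"text": "We report a novel disease Y.", "source": "preprint_server"},
        {"text": "My widget cures disease Y!", "source": "personal_blog"},
        {"text": "The court in Foo v. Bar held...", "source": "fiction_archive"},
    ]

    def sample_batch(docs, k):
        # Draw a training batch with probability proportional to source trust,
        # so low-reputation text still appears but shapes the model less often.
        weights = [SOURCE_TRUST[d["source"]] for d in docs]
        return random.choices(docs, weights=weights, k=k)

    print(sample_batch(documents, k=3))

Down-weighting rather than excluding keeps low-trust text available for coverage while reducing how strongly it imprints, which is the behaviour the comment above suspects is missing or too weak.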
pu_pe
This is partly why the talk about AI "solving science" should be taken with a grain of salt. Here the authors intentionally poisoned the publication record, but there are millions of papers out there that are also garbage, and it would be very hard for either a human or an LLM to distinguish them from actual work.
mrjay42
I'm not especially defending AI, but isn't this a bit like that time a professor changed content on Wikipedia to play a big 'gotcha' on his students? Instead of proving that Wikipedia is "bad", that professor proved, without realising it, that Wikipedia works as intended: if you write something wrong on Wikipedia, it will be corrected over a certain period of time (yes, it can be long, I know). As for this article in Nature: if you feed AI incorrect information, it's gonna spit it back at you. When you think about it, when did anyone claim that AI was self-correcting? By the same logic, imagine we taught kids something false, as an experiment of course. Then we wait a little bit and watch, some years later, how many of those people still repeat the false information they were taught. If we then wrote a paper saying "oh, look at those people, they're dumb", wouldn't that be a little unfair? Even unscientific?
OutOfHere
The authors of all recent bogus papers should be outed and fired. I hope a future AI can identify many of them.
simianwords
I think this problem is interesting, and it carries over to the general public. Are the general public and the media outlets equally skeptical? Are they aware of the distinction between published journals and preprints? Take this as an example: search Google for "ai data centers heat island". Around 80 websites published articles based on a preprint which was later shown to be largely wrong and misleading.
https://edition.cnn.com/2026/03/30/climate/data-centers-are-...
https://www.theregister.com/2026/04/01/ai_datacenter_heat_is...
https://hackaday.com/2026/04/07/the-heat-island-effect-is-wa...
https://dev.ua/en/news/shi-infrastruktura-pochala-hrity-mist...
https://www.newscientist.com/article/2521256-ai-data-centres...
https://fortune.com/2026/04/01/ai-data-centers-heat-island-h...
You may not believe it, but the impact this had on the general population was huge. Lots of people took it as true, and there seem to have been no consequences. Whatever the takeaway is for the LLM should also be the takeaway for the media outlets.
ninjagoo
At first I thought this was a Nature paper. Turns out it's a feature article. The true test for this would be a blind test involving human doctors (primary care, since that's where something like this fits) exposed to the same data (fake papers), as well as LLMs. Isn't it interesting that the fake papers made it onto science preprint servers? I didn't think those were open to posting by random authors; I assumed they had some basic checks in place. Currently these papers show as "withdrawn" at their DOI links [1][2].
[1] https://doi.org/qzm4
[2] https://doi.org/qzm5