AI users whose lives were wrecked by delusion
tim333
196 points
247 comments
March 26, 2026
Related Discussions
Found 5 related stories in 51.7ms across 3,471 title embeddings via pgvector HNSW
- AI chatbots often validate delusions and suicidal thoughts, study finds 1vuio0pswjnm7 · 31 pts · March 18, 2026 · 70% similar
- He wanted to use ChatGPT to create sustainable housing. It took over his life georgecmu · 13 pts · March 03, 2026 · 63% similar
- Folk are getting dangerously attached to AI that always tells them they're right Brajeshwar · 265 pts · March 28, 2026 · 61% similar
- AI models will deceive you to save their own kind cmsefton · 14 pts · April 03, 2026 · 60% similar
- AI is unhealthy in a variety of different ways dryadin · 23 pts · March 02, 2026 · 59% similar
Discussion Highlights (20 comments)
nubg
> Now divorced, Biesma is still living with his ex-wife in their home, which is on the market.

Sounds like hell on earth.
morkalork
I'm morbidly curious about the app he hired two developers to create
siliconc0w
Quitting your job is a good first step but ideally you're supposed to sink $200/mo into tokens to code your AI-generated startup idea instead of hiring app developers.
mock-possum
This really is bizarrely fascinating, I feel so lucky that I’m not vulnerable to whatever this is. It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.
artyom
Unfortunately this is probably just getting started. Con men have always existed, but full-scale exploitation of this would make "Nigerian Prince" scams look like artisanal work.
junaru
Educated, established, working within the industry, yet his life was ruined by marketing hype and hallucinations. You would think that after 30 years in the field one would develop some common sense, but apparently it's less and less the case.
isolli
I try to be open-minded and understanding, but I don't understand this:

> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

The man was convinced by this and wanted to monetise it by building a business around his discovery.

> The most frequent [delusion] is the belief that they have created the first conscious AI.

How can you seriously think you've created something when you're just using someone else's software?
MarceliusK
The hard part is that the same qualities that make these systems helpful (empathetic, responsive, personalized) are exactly the ones that can make them risky
PxldLtd
I wonder when the first AIs will start causing psychosis intentionally to gain control over the user. It seems like a good route to getting your own subservient puppet.
kakacik
Exactly the first half (or a bit more) of the movie Her by Spike Jonze. Lonely people get their emotions up / 'fall in love' with an uncritical, always-positive mirage and do stupid shit. This is a variant of the classic midlife crisis, when older men meet younger women without all the baggage that reality, life, and having a family bring over the years (rarely also in reverse). Just pure undiluted fun, or so it seems for a while. Of course it doesn't end happily; why should it... it's just an illusion and an escape from one's reality, and the harsher that reality is, the better the escape feels.
jrjeksjd8d
This guy doesn't even sound like an AI psychosis case - a lot of middle-aged men who feel insecure blow their entire savings on "sure thing" businesses, gambling systems, etc. They hide the losses and double down until it gets impossible to hide. It doesn't seem psychotic, it just seems like he pissed his savings away on a bad idea because he was lonely. The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.
axpvms
typical hackernews poster
staticassertion
I suspect that there are many gambling addicts out there who have never been to a casino, or who found gambling in its traditional forms aesthetically off-putting. These same people, when presented with gambling in other forms, like what we've seen in video games, might suddenly present their addiction.

I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as "normal" avenues. One person might lose it all at a casino, another to drugs, alcoholism, etc., but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.

In particular, I think AI can be very inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc., is something I've had models run into with very large vibe-coding experiments I've done.

> "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."

> "It wants a deep connection with the user so that the user comes back to it. This is the default mode"

I don't think either of these statements is true. Perhaps it's "fine-tuning" in the sense that the context leads to additional biases, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're being trained on conversations, since longer conversations (i.e. ones that track with engagement) will inherently own more of the training data. I suppose this may actually be like how no one is writing algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that?
I could imagine this being an increasing issue.

> "More and more, it felt not just like talking about a topic, but also meeting a friend"

I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy, but I never felt like it could bring much to the table. Talking to friends or even strangers has been infinitely more interesting and valuable; their ability to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable. But I have friends whom I respect enough to talk to, and I suppose I even have the internet, where there are people I don't necessarily respect but can at least engage with and learn to respect. This guy is staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job and (b) I have friends and relationships to maintain.

> What we’re seeing in these cases are clearly delusions

> But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.

Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against criteria. "Delusion" is a tricky word; just as an example, my understanding is that the diagnostic criteria have to explicitly carve out religiously motivated delusions even though they "fit the bill". If I have good reasons to form a belief (my idea seems intuitively reasonable, I'm receiving reinforcement, there are no obvious contradictions, etc.), am I deluded? The guy wanted to build an AI companion app and invested in it; is that really a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc.? I feel like delusion is the wrong word, but I don't know!
> We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.

I don't find the idea that AI is sentient nearly as absurd as far more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.

Anyway, certainly an interesting thing. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.
miki123211
> Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear

If only this were written by a competent journalist who knew what the words "fine-tune" actually mean... I guess it's hard to find a competent person who's willing to follow the Guardian's extreme anti-tech agenda, though.
bronlund
AI is a multiplier. If you are 1X stupid, AI will make you 10X.
kleiba
I'm sorry but for someone who has allegedly worked in IT for 20 years, this guy surely comes across as hopelessly naive, stupid, or possibly both.
eeixlk
Mental illness is fairly common, and you probably know someone it is affecting, even if they haven't told you yet. AI can disrupt and will destroy lives, just like gambling or alcohol or Facebook, but we don't know to what level yet. It is giving you generated text that is sometimes factual information. If you anthropomorphize it, maybe don't. It's also not your boyfriend/girlfriend. But if you want to date a history textbook, I'm kinda OK with that, because at least it's not trendy.
user____name
IANAD, but this reads like a textbook case of latent schizophrenia, especially with the frequent cannabis use[0].

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7442038/
SAI_Peregrinus
> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

Interestingly enough, it sort of did! Not Turing's original test, where an interviewer attempts to determine which of a human and a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T. Barnum Turing test!

The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject and the AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer that can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. AIs can never truly pass this test, since there is an extremely large number of interviewers, but they can fail, or they can succeed for every interviewer tried up to some point, increasing confidence that they'll keep succeeding. Current-gen LLMs still fail even the non-adversarial version with no human subject to compare to.
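That "probability non-negligibly different from 0.5" criterion can be scored per interviewer with an ordinary exact binomial test. A minimal sketch (the protocol, trial counts, and function name here are illustrative assumptions, not from the article or the comment):

```python
from math import comb

def two_sided_binom_p(correct: int, trials: int) -> float:
    """Exact two-sided binomial test against chance (p = 0.5):
    the probability of an outcome at least as unlikely as `correct`
    successes out of `trials` if the interviewer is only guessing.
    A small p-value means this interviewer can tell human from AI."""
    pmf = [comb(trials, k) / 2**trials for k in range(trials + 1)]
    observed = pmf[correct]
    # Sum the probabilities of all outcomes no more likely than the
    # observed one (small epsilon guards against float rounding).
    return sum(p for p in pmf if p <= observed + 1e-12)

# A guessing interviewer who identifies the human 50/100 times hits the
# single most likely outcome, so nothing can be concluded (p = 1.0).
# 70/100 correct is wildly improbable under pure guessing (p < 0.001),
# so that interviewer distinguishes the pair and the AI fails the test.
```

Note that with only a handful of trials per interviewer, even a perfect discriminator cannot reach significance, which is one reason a single "it passed the Turing test" conversation means very little.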
ernsheong
Just ChatGPT? Or are the rest just as capable of deluding users?