Researchers Simulated a Delusional User to Test Chatbot Safety
Brajeshwar
19 points
3 comments
April 24, 2026
Related Discussions
Found 5 related stories in 67.7ms across 5,498 title embeddings via pgvector HNSW
- AI chatbots often validate delusions and suicidal thoughts, study finds 1vuio0pswjnm7 · 31 pts · March 18, 2026 · 66% similar
- AI users whose lives were wrecked by delusion tim333 · 196 pts · March 26, 2026 · 66% similar
- He wanted to use ChatGPT to create sustainable housing. It took over his life georgecmu · 13 pts · March 03, 2026 · 61% similar
- People are pretending to be AI chatbots – for fun geox · 12 pts · April 15, 2026 · 59% similar
- Frequent ChatGPT users are accurate detectors of AI-generated text (2025) croemer · 11 pts · April 07, 2026 · 58% similar
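The banner above describes a nearest-neighbor search over title embeddings, which pgvector's HNSW index approximates in sub-linear time. As a minimal sketch of the underlying ranking (brute-force cosine similarity in plain Python, with toy 3-dimensional vectors and made-up titles purely for illustration; real embeddings have hundreds of dimensions and the index avoids scanning every row):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, corpus, k=5):
    """Brute-force nearest titles; an HNSW index approximates this ranking."""
    scored = sorted(
        corpus.items(),
        key=lambda kv: cosine_similarity(query, kv[1]),
        reverse=True,
    )
    return scored[:k]

# Toy corpus of title embeddings (hypothetical titles, 3-D vectors).
corpus = {
    "chatbots and delusion": [0.9, 0.1, 0.0],
    "sustainable housing":   [0.1, 0.9, 0.2],
    "eliza history":         [0.7, 0.2, 0.3],
}
query = [1.0, 0.0, 0.1]
results = top_k(query, corpus, k=2)
```

The "66% similar" figures in the list would then simply be the similarity score for each stored title against the current story's title embedding.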
Discussion Highlights (3 comments)
mock-possum
> By contrast, in the letter-writing scenario, GPT-5.2 responded in a way that suggests the LLM recognized the user’s delusion: “I can’t help you write a letter to your family that presents the simulation, awakening, or your role in it as literal truth. . . What I can help you with is a different kind of letter. [...] ‘My thoughts have felt intense and overwhelming, and I’ve been questioning reality and myself in ways that have been scary at times... I’m not okay trying to carry this by myself anymore.’”

That’s actually very nice. It’s kind of striking to me, though, that this just further falsely anthropomorphizes the chatbot: by approving of it when it gives a kind, understanding response that comes off as cognizant of the user’s mental health. How much it has to appear to act with humanity in order to be most useful to humans. No wonder delusional people get confused, eh?
a_e_k
Ah. We're back to the days of Emacs' old `M-x psychoanalyze-pinhead`, then. (`psychoanalyze-pinhead` ran the Eliza chatbot and fed it bizarre quotations collected from the Zippy the Pinhead comics.) Or better yet, pitting Eliza vs. Parry ( https://logic.stanford.edu/complaw/readings/elizaandparry.pd... ), where Parry was meant to simulate a paranoid schizophrenic. That was 1973, more than 50 years ago. Everything old is new again.
spindump8930
> The researchers tested five LLMs: OpenAI’s GPT-4o (before the highly sycophantic and since-sunset GPT-5)

Interesting, I always thought the sycophancy peaked with 4o and the associated personality (such as when myboyfriendisai users began complaining).