Gemini Said They Could Only Be Together If He Killed Himself. Soon, He Was Dead

psim1 49 points 57 comments March 04, 2026
www.wsj.com · View on Hacker News

Discussion Highlights (7 comments)

boredemployee

I think it’s already time for us to stop using the word "intelligent" when referring to LLMs. These tools are very dangerous for people who are mentally fragile.

jihadjihad

I just don't think the WSJ could resist putting "Florida man" in the standfirst of TFA.

lyu07282

Anyone got a non-paywalled/subscription version?

jajuuka

Any mental illness mixed with delusions is likely going to end badly, whether they think Gemini is alive, a video game is real life, or that Bjork loves them without ever talking to or meeting them. While LLMs are interactive and listening to an album isn't, I don't think there is a fix for this beyond posting a warning after every prompt: "I am not a real person; if you have mental health issues, please contact your doctor or emergency services." Which I think is about as useful as the sign in a casino next to the cash-out counter that says if you have a problem, call this number. I'm more inclined to believe this case is getting amplified in mainstream media because it fits an agenda, like the people who got hurt using black-market vapes. Boosting those stories and making it seem like an epidemic supports whatever message they want to send, which usually involves money somewhere.

delichon

I have had conversations where the bot started with a firm opinion but reversed within a prompt or two, always toward my point of view. So I asked it whether the sycophancy is inherent in the design or whether it just comes from the RLHF. It claimed it's all about the RLHF, and that the sycophancy is a business decision balancing a variety of competing pressures. Is that right? It would at least mean that this is a technically solvable problem.

dash2

Notable features of this case:

- A documented record of a months-long set of conversations between the man and the chatbot
- Seemingly, no previous history of mental illness
- The absolutely crazy things the AI encouraged him to do, including trying to kidnap a robot body for the AI
- Eventually encouraging (or at the very least going along with) his plans to kill himself

TiredOfLife

Every chatbot I have tried (ChatGPT, Gemini, Claude, etc.) starts to spew out suicide hotlines and "I'm sorry Dave, I can't do that" the moment I start to talk about anything like suicide. What am I doing wrong?
