Folk are getting dangerously attached to AI that always tells them they're right
Brajeshwar
265 points
209 comments
March 28, 2026
Related Discussions
Found 5 related stories in 53.0ms across 3,471 title embeddings via pgvector HNSW (a sketch of this kind of lookup follows the list)
- AI overly affirms users asking for personal advice oldfrenchfries · 585 pts · March 28, 2026 · 66% similar
- AI users whose lives were wrecked by delusion tim333 · 196 pts · March 26, 2026 · 61% similar
- AI models will deceive you to save their own kind cmsefton · 14 pts · April 03, 2026 · 59% similar
- "Cognitive surrender" leads AI users to abandon logical thinking, research finds Bender · 68 pts · April 03, 2026 · 59% similar
- AI is unhealthy in a variety of different ways dryadin · 23 pts · March 02, 2026 · 57% similar
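A minimal sketch of how a lookup like the one above can work with pgvector's HNSW index; the `stories` table, its columns, and the embedding size are assumptions for illustration, not this site's actual schema:

```python
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

# Stand-in for the embedding of the current story's title.
query_embedding = np.random.rand(384).astype(np.float32)

with psycopg.connect("dbname=forum") as conn:
    register_vector(conn)  # adapt numpy arrays to the vector type
    rows = conn.execute(
        """
        SELECT title, 1 - (embedding <=> %s) AS similarity
        FROM stories
        ORDER BY embedding <=> %s  -- cosine distance, served by the HNSW index
        LIMIT 5
        """,
        (query_embedding, query_embedding),
    ).fetchall()

for title, similarity in rows:
    print(f"{similarity:.0%} similar: {title}")
```

The "66% similar" figures above read as cosine similarity, i.e. `1 - distance` under pgvector's `<=>` operator, assuming the index was built with `vector_cosine_ops`.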
Discussion Highlights (20 comments)
jmclnx
I never thought this could happen, but then again, I do not use AI. Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that support their views, even when those views are completely wrong. Why should AI be different?
lucideer
I've observed this in all chatbots, with the single exception being Grok. I initially wondered what the Twitter engineers were cooking to distinguish their product from the rest, but more recently it's occurred to me that it's probably just the result of having shared public context, compared to private chats (I haven't trialled Grok privately).
kogasa240p
The ELIZA effect is alive and well, and I'm surprised people aren't talking about it more (probably because it sounds less interesting than "AI psychosis").
sizzzzlerz
Imagine that.
erelong
So, be more skeptical
JohnCClarke
Isn't this just Dale Carnegie 101? I've certainly never had a salesperson tell me that I'm 100% wrong and being a fool. And, tbh, I often try to remember to do the same.
simonw
Strikes me this is another example of AI giving everyone access to services that used to be exclusive to the super-rich. Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT. Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!
jasonlotito
Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.

https://courts.delaware.gov/Opinions/Download.aspx?id=392880

> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”
jameskilton
Folks are getting dangerously attached to [political parties/candidates/news sources/social networks] that always tell them they're right. It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.
joshstrange
When a LLM tells me I'm right, especially deep in a conversation, unless I was already sure about something, I immediately feel the need to go ask a fresh instance the question and/or another LLM. It sets off my "spidey-sense". I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.
4b11b4
Related: https://arxiv.org/abs/2602.14270. If you suggest a hypothesis, you'll get biased results; in other words, you'll think you're right, but the true information is hidden.
AbrahamParangi
AI is less deranging than partisan news and social media, measurably so according to a recent study https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56...
My_Name
I have the opposite reaction: when it is confident, or says I am right, I accuse it of guessing to see what it says. I say, "I think you are getting me to chase a guess, are you guessing?" 90% of the time it says, "Yes, honestly I am. Let me think more carefully." That last exchange was pasted verbatim from a chat just this morning.
kgeist
> We evaluated 11 state-of-the-art AI-based LLMs, including proprietary models such as OpenAI’s GPT-4o

The study explores outdated models. GPT-4o was notoriously sycophantic, and GPT-5 was specifically trained to minimize sycophancy. From GPT-5's announcement:

> We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy

And there was the whole drama in August 2025, when people complained GPT-5 was "colder" and "lacked personality" (= less sycophantic) compared to GPT-4o. It would be interesting to study the evolution of sycophantic tendencies (decrease/increase) from version to version, i.e. whether companies are actually doing anything about it.
blueside
More often than not, when I see "That's it, that's the smoking gun!" I know it's time to stop and try again.
zone411
I built two related benchmarks this month: https://github.com/lechmazur/sycophancy and https://github.com/lechmazur/persuasion . There are large differences between LLMs. For example, good luck getting Grok to change its view, while Gemini 3.1 Pro will usually disagree with the narrator at first but then change its position very easily when pushed.
mikkupikku
> "Hey, some dummy just said [insert your idea here], help me debunk him with facts and logic" It's literally that easy, something anyone can think of, but people want what they want.
taytus
The stupidest people you know are getting the "you are absolutely right!!" validation they do not need.
grahammccain
I feel like this is the same as the social media problem. Some people will understand that AI telling them they are right doesn't make them right, and some people won't. But ultimately people like being told they are right, and that sells and brings back users.
cge
Using Opus 4.6 for research code assistance in physics/chemistry, I've also found that in situations where I know I'm right, and I know it has gone down a line of incorrect reasoning and assumptions, it will respond to my corrections by pointing out that I'm obviously right. But if enough of the mistakes are in the context, it will then flip back to working from them: the exclamations that I'm right are just superficial. This is not enormously surprising, given how LLMs work, but it is frustrating. Short of clearing context, it is difficult to escape this situation. Worse, the model's tendency to put explanatory comments in code and writing means it often writes code, or presents data, that is correct but comes attached to completely bogus scientific babbling, which, if not removed, can infect cleared contexts.