Ask HN: How do you deal with people who trust LLMs?

basilikum 98 points 105 comments March 19, 2026
View on Hacker News

A lot of people use LLMs as the source of their objective truth. They have a question that would be very well answered with a search leading to a reputable source, but instead they ask some LLM chat bot and just blindly trust whatever it says. How do you deal with that? Do you try to tell them about hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they do that in a conversation with you or encounter LLMs being used as a source for something that affects you?

Discussion Highlights (20 comments)

chipgap98

Is this any different than people who believe random things they read on sketchy news sites or social media?

sodapopcan

Are you talking about people who will still insist the LLM was correct even after being presented with evidence to the contrary, or people who don't EVER bother double checking answers they get out of said software since they assume it to be true?

renewiltord

There are two kinds of fools in the world: the kind who ask a search engine and believe the first reputable source they see, and the kind who ask an LLM and believe the first response that has a reputable citation.

paul_n

Accept this this is going to continue to happen, ask yourself if it’s something in your control or not, and try to find a way to enjoy the ride. It’s going to be bumpy, as we’re going through trust issues outside of just LLMs as a society right now. However, if I notice a friend is about to harm themselves in some way I’ll pull open their ChatGPT and show them directly how sycophantic it is by going completely 180 on what they prompted. It’s enough to make them second guess. I also correct people who say “he or she” when referring to an LLM to say “it” in dialog, and explain that it’s a tool, like a calculator. So gentle reframing has helped. Sometimes I’ll ask them to pause and ask their gut first, but people are already disconnected from their own truths. It’s going to be bumpy. Save your mental health.

max8539

asking ChatGPT to read and tell me what this post is about

kace91

Honestly, the kind of people doing that is probably better served by AI (currently). I'm saying that because they were not going to be critical of the search results, and google is not exactly showing objective truth in the first positions nowadays.

fxtentacle

Yesterday, I was praying to ChatGPT and asking for guidance on my car washing problem. Through its holy scripture, it suggested me to walk to the car wash to improve my fitness. When I arrived and found the absence of my car to be a true hindrance for washing, it occurred to me that I should have pondered the scripture more carefully to identify its true meaning. I treat the LLM like a diety. Every sane person understands well enough that the Bible is not to be taken literally. And then when someone talks about using LLMs, I always rephrase that as prayer.

ddawson

I'm going to hold them to the same standard no matter if they use crappy sources, plagiarize, or hallucinate on their own. If someone asked, when and if I am in a position where I have to tell them, I would remind them that LLMs prioritize their own confidence over correctness. LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day but we've been giving and getting bad advice forever. The person needs to take ownership for the output and getting it right, no matter the source, is their responsibility.

PaulKeeble

Its everywhere now its becoming a real problem in every corner of the internet and in the real world. People are using hallucinated legal cases in lawsuits, they are generating images to create fake events, they are using AI to write their CVS and just about everything you can imagine. People are having to wade through all this slop professionally, calling it out and pointing out the mistakes doesn't seem to help, the people using this stuff believe the AI is correct no matter what you say or do. Like most things that go mainstream it will take a good while before people understand, by which point they will have learnt a lot of things that aren't true and they will never let them go. We might get a healthy use of current AI at some point in the future or if the product drastically improves. All you can do now is hold them to the same standard you normally would, if you catch them lying whether an AI did it or not its their responsibility and you treat them accordingly.

ggm

I have a feeling this is like telling people "don't touch a live wire" and the more direct experiental "I won't touch a live wire again" lesson: People need to experience being hallucinated at, within their comprehension, and at best can be told about Gell-Mann Amnesia. I doubt you can stop them from asking machines for answers. What you can do is aide them to learn how to distrust the answers competently, but outside their field of knowledge, applying skepticism is hard. The irony of Gell-Mann Amnesia is that Michael Chrichton, who is said to have named it, suffered from it badly: Wrote well within his field, misapplied sciences to write well outside it, and said things which were indefensible.

maxdo

same way as i deal with people who trust other people.

eranation

Ask them to tell the LLM it's wrong... then when it goes "You are absolutely right!" to challenge it and say that it was a test. Then when it replies, ask it if it's 100% sure. They'll lose faith pretty quick.

keithnz

tell them what to prompt the AI with to get the correct results. I've seen a number youtube shorts lately doing this, where some scientist gets "refuted" by some random person based on an LLM result, they then sit with the LLM and ask the same question, get the same wrong answer, then follow it up with a clarifying question, which then the LLM realizes its mistake and gives a better answer.

heliumtera

They do not have a soul, they are NPCs incapable of reasoning. I don't mean lazy, incapable is literally what they are. Logic escapes them. When they say llms are conscious, and fully intelligent, them are comparing to themselves. If you think about it, they are right to say AGI is here, if the bar is the average human being. If you contemplate this fact for a moment, and start pondering it could be true, your life would change forever. Most beings just do not have a singular perspective, cannot reason, do not have a taste, cannot appreciate someone else's singular perspective. They also do not appreciate art for the same reason. I am sorry, truly. Just let them be. They would kill you before admitting they are forever stuck in Plato's cave.

roguechimpanzee

I think LLMs are fine for a "first pass" on a topic, but if I am researching something, I want a primary source rather than just the LLM-generated output. Do they have the primary source?

spacecadet

Introduce them to jailbroke LLMs.

userbinator

Show them https://old.reddit.com/r/aifails/

fallinditch

Simple: tell them to ask their LLM about it ... "Tell me about all the potential pitfalls of blindly trusting LLM output, and relate a couple or three true stories about when LLM misinformation has gone badly wrong for people."

uyzstvqs

The same way that I handle anyone who blindly trusts anything on the internet. Could be an LLM, TikTok or YouTube video, Wikipedia article, news article, whatever. It usually involves some form of "well, no, hold on..."

sbinnee

A friend of mine severely injured her leg, especially knee, and went through a surgery. She said that she had a rehabilitation plan for the next 6 months. Guess what, from Gemini. I just told her just listen to her doctor. I didn't tell her why LLMs can make mistakes or hallucinate because I thought that she would not appreciate my mansplaining. Looking forward though, my boring answer would still be education. It is going to take time. But without understanding LLMs, they will not be easily persuaded.

Semantic search powered by Rivestack pgvector
3,471 stories · 32,344 chunks indexed