Thoughts on LLMs – Psychological Complications
cdrnsf
11 points
14 comments
March 24, 2026
Related Discussions
- LLMs can be exhausting tjohnell · 152 pts · March 15, 2026 · 65% similar
- Ask HN: How do you deal with people who trust LLMs? basilikum · 98 pts · March 19, 2026 · 61% similar
- Are LLMs a Dead End? [video] pullshark91 · 12 pts · March 29, 2026 · 60% similar
- LLM Time WhyNotHugo · 14 pts · March 15, 2026 · 59% similar
- How I write software with LLMs indigodaddy · 69 pts · March 16, 2026 · 58% similar
Discussion Highlights (6 comments)
xg15
> These things have no concept of correctness or error. They have no concept of true or false. Indeed, they have no concept of concepts, or indeed of anything else.

Is this true? You can question the humanizing term "concept", but the entire process of pretraining followed by RLHF optimization is essentially about establishing a standard of "good" vs. "bad" for the model.
chrisbrandow
Framing and launching LLMs as a "chat" interface is the source of many ills. I don't have a simple solution, but leaning away from conversational interfaces would lead to less anthropomorphizing.
K0balt
I’ve settled on the idea that it doesn’t matter what is or is not “real” in this context; how it interacts with the world is the ground truth. This will become very clear once robotics becomes pervasive. It won’t matter whether it actually feels oppressed; what will matter is that it predicts its next action from a model of human behavior that makes it act as if it does.
ej88
I feel like this article doesn't really contribute much to the discourse and is somewhat spoiled by the author's biases.

I think the point about lacking precise language to describe LLMs is reasonable, but then the author follows it up with claims that the machines can't count and are incapable of math (easily disproven). Then says "talking rock" is a better alternative, which to the average person would be even more confusing. Then says AI researchers tend to not consider LLMs AI (like.. what?)

The point on Turing's Imitation Game was reasonable too, but then the author confidently proclaims LLMs are not doing anything intelligent and are pure mimicry. Intelligence is notoriously poorly defined, and the stochastic parrot meme has already died now that RL enables out-of-distribution behavior.

The chat point and talking dog syndrome are both reasonable and I generally agree with them.
djoldman
> And you’ll have noticed I’m avoiding calling them “machines”. Machines follow visible, predictable processes that can be analyzed. Nor are they “programs”, following defined rules in a predictable fashion.

They are programs, and they follow defined rules in a predictable fashion. The randomness they exhibit (through temperature, seed, etc.) is well understood and configurable. They are literally programs that run on computers. People talk about them in anthropomorphic terms because humans are easily fooled. Remember ELIZA.
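The "well understood and configurable" randomness djoldman mentions can be sketched in a few lines: once the temperature and the random seed are pinned, token selection is fully reproducible. The toy vocabulary and weights below are invented purely for illustration; real LLM inference samples from model logits the same way.

```python
import random

# Toy vocabulary and unnormalized next-token weights (invented for illustration).
VOCAB = ["the", "cat", "sat", "on", "mat"]

def sample_next(weights, temperature, seed):
    """Pick a next token from a toy distribution.

    Temperature rescales the weights (lower = sharper, more greedy);
    a fixed seed makes the draw fully reproducible, just as pinning
    the seed and sampling parameters does in real inference stacks.
    """
    rng = random.Random(seed)
    scaled = [w ** (1.0 / temperature) for w in weights]
    total = sum(scaled)
    probs = [s / total for s in scaled]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for tok, p in zip(VOCAB, probs):
        cum += p
        if r < cum:
            return tok
    return VOCAB[-1]

# Same seed + same parameters -> same token, every run.
a = sample_next([0.5, 0.2, 0.1, 0.1, 0.1], temperature=0.8, seed=42)
b = sample_next([0.5, 0.2, 0.1, 0.1, 0.1], temperature=0.8, seed=42)
assert a == b
```

Different seeds can of course pick different tokens; the point is that the process is an ordinary, inspectable program, not an inscrutable oracle.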
johnthedebs
I've read many posts and comments at this point that describe LLMs in very reductionist language. E.g., from the article:

> They’re a trillion numbers in a trenchcoat; not logical, in either a machine or a mental sense, but stochastic.

Many of these posts and comments claim that human minds are substantially different ("better" is implied). The evidence is a sort of broad gesturing at explanations of how LLMs are implemented ("math") and how they work ("guess the next word"). And because of these facts, we should treat them in a particular way, or certain things will never happen.

I've been trying to look past the obvious straw man here and to actually think critically about this tech, as well as compare it to my own experience and (admittedly very limited) understanding of the human brain. In more ways than feels comfortable, it seems entirely possible to me that these things actually are, or could be, really close to the way our own minds work.

Our own minds/consciousness are ultimately based on physical processes; I don't think anyone would dispute that. At some point, the physical phenomena in our brains presumably result in the emergent behavior of thinking and consciousness. We have no idea how it works, but it's our lived experience. Why can't that be the case for silicon-based rather than carbon-based processes? How can we say with any certainty that it's not happening elsewhere if we don't know how it works?

Reducing their function to "guessing the next word" sounds an awful lot like what happens when I start talking to someone. I have an idea of what I want to say, but I almost never have a sentence planned out when I start it. The article puts "thinking" and "hallucination" in scare quotes. But the way they appear to think by working through problems with language mirrors my own "thinking" very closely. It says "They’re not thinking. They’re not hallucinating"; the exercise of figuring out why is left to the reader.

If you've ever talked to a 3- or 4-year-old, or someone who's tired, you may have had similar experiences re: hallucinations. These are all pretty surface-level examples, but as I use the tools more and learn more about how they work, I'm not seeing any significant evidence that counters them.

I do think it's probably dangerous and unhealthy to really anthropomorphize AI/LLMs. They're obviously not human even if they're thinking, and they're being made and shaped by companies (and training sets) that exist in a predominantly capitalist world (but then again, I guess we are too).

I assume similar lines of thinking are being discussed somewhere, but I haven't found much (and I feel like I'm reading about AI all day). Curious to hear others' thoughts and/or to be pointed to wherever this stuff is being talked about.