The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness
LopRabbit
28 points
40 comments
April 20, 2026
Related Discussions
Found 5 related stories in 73.0ms across 5,126 title embeddings via pgvector HNSW
- If AI has a bright future, why does AI think it doesn't? JCW2001 · 15 pts · March 06, 2026 · 59% similar
- Why AI systems don't learn – On autonomous learning from cognitive science aanet · 73 pts · March 17, 2026 · 55% similar
- Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning Anon84 · 116 pts · March 21, 2026 · 54% similar
- AI Is Not About to Become Sentient measurablefunc · 11 pts · March 29, 2026 · 54% similar
- "Cognitive surrender" leads AI users to abandon logical thinking, research finds Bender · 68 pts · April 03, 2026 · 54% similar
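The related-story lookup above ranks titles by embedding similarity, served by a pgvector HNSW index. As a hedged miniature of the idea: the sketch below does a brute-force cosine-similarity ranking over invented toy vectors, standing in for the approximate HNSW traversal that a real `ORDER BY embedding <=> $1 LIMIT k` query would perform; the titles and embeddings are made up for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 4-dimensional "title embeddings" (a real pgvector column would
# hold hundreds of dimensions and be indexed with HNSW for speed).
corpus = {
    "AI Is Not About to Become Sentient": [0.9, 0.1, 0.2, 0.0],
    "Why AI systems don't learn":         [0.5, 0.5, 0.3, 0.3],
    "A recipe for sourdough bread":       [0.0, 0.1, 0.9, 0.4],
}

# Invented embedding of the current story's title.
query = [0.85, 0.2, 0.15, 0.05]

# Rank titles by similarity, highest first -- the brute-force
# equivalent of an approximate nearest-neighbor index lookup.
ranked = sorted(corpus, key=lambda t: cosine(corpus[t], query), reverse=True)
print(ranked)
```

An HNSW index trades a little recall for sub-linear query time, which is how a lookup over 5,126 embeddings can return in tens of milliseconds.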
Discussion Highlights (8 comments)
mstank
Glad to see Searle's Chinese Room mentioned early in the paper. "Syntax is not sufficient for semantics," no matter how much compute we throw at the problem. My very amateur view is that until the underlying compute architecture and substrate resembles artificial biology more than silicon, we won't get there. The latest advances in AI have given me even more appreciation of biology and evolution. It's incredible what the human brain can do with about 20 watts of power, barely enough to power a lightbulb, compared with what it takes to run even our most basic LLMs.
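The power gap this comment gestures at can be put in rough numbers. A back-of-the-envelope sketch; the 20 W brain figure is the commonly cited estimate from the comment, while the GPU board power and server size are illustrative assumptions, not measurements of any particular model:

```python
BRAIN_WATTS = 20       # oft-cited estimate for the human brain
GPU_WATTS = 400        # assumed board power for one datacenter-class GPU
GPUS_PER_SERVER = 8    # assumed: one inference server for a large model

serving_watts = GPU_WATTS * GPUS_PER_SERVER
print(f"one inference server: {serving_watts} W")
print(f"ratio vs. brain: {serving_watts / BRAIN_WATTS:.0f}x")
```

Even under these generous assumptions the ratio is two orders of magnitude, and it ignores host CPUs, cooling, and the far larger energy cost of training.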
yogthos
The paper makes a huge assumption that only thermodynamic constitutions can produce consciousness. That assumption seems completely unsubstantiated, given that thermodynamic processes are just states, and states are replicable. The whole Chinese Room idea is pure sophistry as well; both Dennett and Hofstadter address it quite well, in Consciousness Explained and I Am a Strange Loop respectively.
diablozzq
Consciousness is a property of human biology, and quite clearly not a prerequisite for intelligence. I say clearly because at some point we reach proof by construction: we have already built intelligence, because the system already completes tasks that require intelligence. We are so far into what would have been science fiction five years ago, and the goalposts have moved so far. For anyone who disagrees, I challenge you to prove deep learning systems cannot solve <task with specific outcome humans can solve but not AI> given sufficient data and compute. I think the strongest sign we already have true intelligence is that no one has built a benchmark AI cannot solve.

Yes, our current robotics lags behind our AI, so we don't have the equivalent of the human body to give our deep learning systems; it's expected that AI will be limited in physical scenarios. Second, hallucinations are present in humans too. We are highly biased to ignore all the misspoken words in everyday life because we have error correction built into normal conversation. How often do you have to ask someone to repeat or rephrase something?

It just doesn't make sense to me; it's like there are people out there whose belief systems are incompatible with this tech existing. Sure, it has limitations due to training data. It has limitations with no physical body. It cannot combine training and inference the way a human does. But none of those are measures of intelligence, or required to be intelligent.
jdmoreira
This is the complete opposite of Hofstadter's "Strange Loop" hypothesis, which intuitively makes much more sense to me.
Kim_Bruning
I'm partial to bioinformatics per Paulien Hogeweg's definition, which explicitly treats computation as a property of life. This approach actually makes testable (and tested) scientific predictions. That makes Searle-derived papers super-weird for me, since from my perspective they seem to disprove the existence of life (and it makes the name of the philosophy, "biological naturalism," very ironic to me :-P). For extra irony, Turing actually went into biology late in his life; see Turing 1952, "The Chemical Basis of Morphogenesis".
in-silico
From my observations, there are generally four camps in the machine consciousness discussion:
1. People who haven't really thought about it, and assume AIs are conscious because they talk like a human.
2. People who haven't really thought about it, and assume AIs can't be conscious because humans are obviously somehow special. This appears to be the largest group, and is linked to our religiously rooted culture in which human exceptionalism is the default.
Those first two groups comprise the majority of people, and are not worth engaging with.
3. People who have thought about it, and came to the conclusion that AIs might be conscious, usually for computationalist/functionalist reasons. This is the group I place myself in.
4. People who have thought about it, and came to the conclusion that AIs can't be conscious, usually for biological naturalist reasons. This seems to be the predominant group on Hacker News (among those who discuss it).
tmvphil
> To fully understand the difference between the embodied robot running an algorithm on a chip and the biological mapmaker, we need to remember that for the latter, subjective experience is a given, not because of abstract information processing, but because of a specific, metabolically constituted physical reality.

Total drivel. Consciousness in biological systems is "a given" because of metabolism?
jwpapi
I think the question says more about ourselves than it does about AI: we don't know exactly how our own intelligence and consciousness work, so it's very hard, maybe impossible, to compare them to AI intelligence and consciousness. Are we just autocomplete machines with sufficiently variable pseudo-randomized input?
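The "autocomplete with pseudo-randomized input" framing in this last comment maps onto how language models actually sample: a softmax over next-token scores, sharpened or flattened by a temperature parameter. A minimal sketch; the three-word vocabulary and its scores are invented for the example, where a real model scores tens of thousands of tokens:

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=random):
    """Softmax over logits / temperature, then draw one token.
    Temperature near 0 approaches greedy argmax; higher values
    flatten the distribution ("more pseudo-randomized" output)."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits), weights=probs, k=1)[0]

# Invented next-token scores after a prefix like "the brain is a ..."
logits = {"machine": 2.0, "mystery": 1.5, "sandwich": -3.0}

rng = random.Random(0)  # fixed seed so the sketch is reproducible
print(sample_next(logits, temperature=0.7, rng=rng))
```

At very low temperature this collapses to always picking the highest-scoring token; at high temperature even "sandwich" occasionally surfaces, which is one way to think about the variability the comment asks about.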