There's yet another study about how bad AI is for our brains
speckx
49 points
61 comments
April 16, 2026
Related Discussions
Found 5 related stories in 68.7ms across 4,783 title embeddings via pgvector HNSW
- AI is unhealthy in a variety of different ways dryadin · 23 pts · March 02, 2026 · 66% similar
- When Using AI Leads to "Brain Fry" dracula_x · 18 pts · March 06, 2026 · 65% similar
- AI-assisted cognition endangers human development? i5heu · 221 pts · April 15, 2026 · 61% similar
- We will come to regret our every use of AI paulnpace · 22 pts · March 12, 2026 · 60% similar
- Lawyer behind AI psychosis cases warns of mass casualty risks mentalgear · 13 pts · April 12, 2026 · 59% similar
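The header above says the related stories were found via pgvector HNSW search over title embeddings. As a rough illustration only (not the site's actual code), here is a minimal Python sketch of the cosine-similarity ranking those "% similar" scores presumably reflect; the function, the toy vectors, and the titles used as keys are all made up for the example:

```python
import math

# Hypothetical sketch: pgvector's cosine distance operator (<=>) ranks rows
# by 1 - cosine_similarity; the "% similar" figures above presumably report
# the similarity itself, scaled to a percentage.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "title embeddings" (real embeddings have hundreds
# of dimensions and come from a language model, not hand-picked numbers).
query = [0.9, 0.1, 0.4]
candidates = {
    "AI is unhealthy in a variety of different ways": [0.8, 0.2, 0.5],
    "Unrelated launch announcement": [-0.3, 0.9, 0.1],
}

# Rank candidate titles by similarity to the query title, highest first.
ranked = sorted(candidates.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
```

An HNSW index exists to avoid scanning all 4,783 embeddings like this sketch does; it walks a small-world graph to find approximate nearest neighbors, which is how the lookup stays under 100ms.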
Discussion Highlights (19 comments)
austin-cheney
So I guess when employers force AI use on their developers, those developers progress toward worthlessness: they will produce wrong code, not know the difference, not care about the resulting harm, and finally not even try to course-correct if AI is removed. This sounds like something I have seen before: jQuery, Angular, React. What the article misses is the consequence of destroyed persistence. Once persistence is destroyed, people tend to become paranoid, hostile, and irrationally defensive to maintain access to the tool, as in addiction withdrawal.
aqme28
Working with AI just feels like having a team of junior employees. Is this the same effect that causes managers and people in power to sometimes become... (for lack of a better phrase) stupid and crazy? edit: Everyone is responding to the "junior" part of my comment without addressing the actual question I'm asking. I should have just said "employees" -- Sorry.
bayarearefugee
Great news for the AI providers, turns out they are automatically turning their audience into captives who end up increasingly dependent on their product to get anything done.
gjsman-1000
All fun and games until the first time someone successfully sues an employer who mandated it and wins a mental health claim. The moment that happens, insurance flips tables, OSHA starts asking if they need exposure controls, and employers back down. And that’s the good scenario! The bad scenario is an employer mandated it, and someone mentally declined to the point they committed a public act of violence.
desecratedbody
This is why everyone needs to implement "Rawdog Thursdays" as I call it, in which you write code without the assistance of AI (i.e., you are "rawdogging" your professional output).
keysersoze33
Link to the preprint paper: https://arxiv.org/pdf/2604.04721 Worth reading the conclusion - it makes a good point or two about the cumulative effect of using AI: not only the loss of learning through struggle and time, but also the loss of a reference point for how long tasks should take without AI (e.g. we are no longer willing to afford the time to learn the hard way, which will notably impact the younger generation).
forinti
I already had the impression that auto-complete was bad for programmers, since I've many times seen coders brute-force their way through suggestions until they found something that looked like it would do. With AI I've also witnessed people go crazy, going back and forth without even looking carefully at the code (or the compiler messages) to figure out what was missing. I'm pretty sure nobody will read the docs now.
tokai
And the amount of people that can recite Homer by heart has collapsed since writing came along.
fumar
It feels like every other convenience in modern life. We trade off some value for lack of human ability. Should you drive or walk or bike? In the US, most people drive and sit all day. Now we have fenced off part of our week for dedicated physical exercise to counteract physical atrophy.
boplicity
> "People’s persistence drops." Has anyone else noticed this, as they've scaled up their AI coding use? I've found it harder to stay on task, and it's affected a broad range of my personal activities. I'm able to make incredible things happen with AI tools, but do worry about the personal costs.
SubiculumCode
Reminder: Human cognition is complex, and determining whether something is "good" or "bad" won't come from 1 or 2 studies. Point for discussion: We know that task and context switching imposes substantial cognitive costs, leading to lower and slower performance for a time. I think it may be reasonable to hypothesize that interacting with an LLM to solve tasks tends to focus the brain at a more strategic level: What do I want to solve? What is my goal? Actually solving individual problems is very different - it is more concrete and mechanistic, requiring a different mode of thought. Switching from the former to the latter is a cognitive task switch, where the context changes, and resetting into the new context takes time and imposes costs. Unless they had a control arm that imposed a task-switching cost...
LLMCodeAuditor
Related: Unfortunately, given participant feedback and surveys, we believe that the data from our new experiment gives us an unreliable signal of the current productivity effect of AI tools. The primary reason is that we have observed a significant increase in developers choosing not to participate in the study because they do not wish to work without AI, which likely biases downwards our estimate of AI-assisted speedup. ( https://metr.org/blog/2026-02-24-uplift-update/ ) This was a huge red flag! Within a year a large majority of devs became so whiny and lazy that METR couldn't fill the "no AI" bucket for their study - it's not like this was a full-time job, just a quick gig, and it was still too much effort for their poor LLM-addled brains. At the time I thought it was a terrible psychological omen. I am so glad I don't use this stuff.
_moof
Interesting. Seems analogous to the atrophy of navigation abilities caused by over-reliance on GPS. I wonder if there's a common underlying mechanism.
Bridged7756
I personally find that LLMs help me conserve my mental energy to later put into more (personally) fruitful endeavors. Instead of being too tired to contribute to OSS, write, or do other things at the end of the day, I find I can leave more juice for after work hours; or, at work, I can move faster and put that extra time and energy into stuff like Anki, upskilling, etc. As with anything, the dose makes the poison. I still find myself thinking about the high-level decisions, but I spend less cognitive load on library and implementation specifics, which I can put elsewhere.
SamHenryCliff
This directly contradicts the statements made by Sal Khan. Children are being harmed by his push. This is very troubling. HN Discussion Here: https://news.ycombinator.com/item?id=47788845
fragmede
I sent the study to ChatGPT for analysis and it told me not to worry about it so I'm not gonna.
ChrisArchitect
Source: https://arxiv.org/abs/2604.04721 ( https://news.ycombinator.com/item?id=47682908 )
KevinMS
I can't wait to be one of the last thinking humans.
m_w_
Obviously the discussion here is mostly about writing code. In that domain, I’m always of two minds on this sort of thing. Although I think everyone would agree that material cognitive decline is bad, I also think we have to be precise about what that means. During university, for an exam in a graduate databases course, I had to manually calculate the number of operations for a query, down to the ones place. We were given an E-R diagram, the schema, and the query. So we had to act as the query planner - build out the B+ tree, check what was most efficient, and do it. This is, by any practical measure, a pointless endeavor - no one has had to do this by hand in literally decades. It was also among the hardest cognitive tasks I've ever had to do. After being one of two people to complete the exam in the three allotted hours, I sat outside the lecture hall on a bench for a little while because I thought I might faint if I went any further. I’m beginning to feel the same about writing code by hand. If I can design systems that are useful, performant, and largely maintainable, but the code is written by an LLM, is this harmful? It feels that I spend more time thinking about what problems need to be solved and how best to solve them, instead of writing idiomatic TypeScript. It’d be hard to convince me that’s a bad thing.