The human cost of 10x: How AI is physically breaking senior engineers
forks
65 points
59 comments
April 13, 2026
Related Discussions
Found 5 related stories in 106.0ms across 4,562 title embeddings via pgvector HNSW
- AI Isn't Killing Developers–It's Creating a $10T Maintenance Crisis rakiabensassi · 33 pts · March 19, 2026 · 65% similar
- Why developers using AI are working longer hours birdculture · 62 pts · March 07, 2026 · 61% similar
- AI and remote work is a disaster for junior software engineers gpi · 17 pts · April 09, 2026 · 61% similar
- AI Made Writing Code Easier. It Made Being an Engineer Harder saikatsg · 380 pts · March 01, 2026 · 60% similar
- AI is unhealthy in a variety of different ways dryadin · 23 pts · March 02, 2026 · 60% similar
Discussion Highlights (11 comments)
solomatov
Is there any publication which demonstrates that the improvement is really 10x?
kakacik
Somebody doesn't know how to regulate their pace, and then various burnout symptoms happen. Not everybody pushes themselves like that, nor should they; it's anything but healthy and sustainable. In my experience it takes... rather obsessed people, OCD or similar traits, maybe 2 out of 10 intensity of their disease. Highly functional, smart, yet unbalanced. LLMs just allow this spiral to go further, while human limits remain the same. Each of us creates our own path; don't mess it up just because you can. Your employer doesn't care much about you in the end, you're just another cog in the machine, but health, once damaged, may not bounce back, ever.
aanet
I feel this is not discussed enough. I can attest to this 100%. Just this past weekend, I was talking with a very senior engineer (~distinguished engineer at a very large tech co) who basically said he's working 8-8-6 (8 am - 8 pm, 6 days/week), "writing code" (more like supervising 8-15 agents) for a product demo in 2 weeks, which otherwise would have taken at least a quarter's worth of time with a small team. He's zonked out, fwiw. There are no junior engineers on the team ¯\_(ツ)_/¯, most having been laid off a few months ago. The toll it takes, and the expectations of AI-driven productivity, have only increased dramatically. At some point, the reality will hit the remaining engineering team. Not sure if the company or its leadership realizes, but so far, it's all-AI, all-the-time, human cost of productivity be damned.
rvz
> The industry calls this “10x productivity.” I call it what it is: a system that generates output at machine speed and forces humans to process it at biological speed.

The question is: can you tolerate the number of PRs thrown at you per day, on top of reviewing the exponentially growing mess of code that doubles every hour, while being paid less for it? Just learn to say no and leave. Why do you tolerate the increasing comprehension debt loaded onto you? You will never get that time back. Just give it to someone else who thinks that slop is worth maintaining, for less.
zthrowaway
Can definitely attest to this. The frequency of outages at my company has increased drastically over the past year, especially since we incorporated agentic development. I’m seeing all of the dev best practices go out the window. We have a few vibe coders who are posting 15-30 PRs per day. It’s way too much for us to review. We’re not a big shop. I think companies across the industry are going to have to hire more people just to review code. And those people will have to know how to actually write software; otherwise, what are they even reviewing? Maybe the models will get so good they never make a mistake. Doubt it.
aetherspawn
… how are you getting actually usable output at that scale? I have to baby my AI in one-minute increments or it just doesn’t arrive at the correct solution at all. Using Codex 5.2.
TuringNYC
I can attest to this. Ultimately I don't think it is possible to 10x output with AI and actually keep the traditional quality controls (yet). IMHO you just need two stacks: systems where you can play fast and loose and 10x output, and systems where quality matters, where you can perhaps get 1.5x or 2x. That is still a lot of output.
hgoel
Using vibe coding for frequent PRs seems insanely reckless. In my scientific computing environment, the majority of my vibe-coded output goes to one-off scripts, stuff that is not worth committing (correcting outputs, one-off visualizations, consistency checks), and anything worth committing gets further refined to the extent that it pretty much can't be considered vibe coded anymore. It's simply too risky: any bugs would propagate down to decision making for designing new, expensive instruments. I imagine that the cost and trust risks in enterprise environments are similar. AI agents have helped boost my productivity, but that's specifically because I can focus on the science and delegate the auxiliary things to AI. I also believe I get this productivity out of them because my supervisor really drove home how hard I need to go on consistency checks, and years of having my visualizations nitpicked mean I am able to do the same to AI and recognize when results are suspicious.
ok_dad
I love it. I was getting burnt out due to ADHD or autism burnout, but with AI tooling I’m able to work a full week without burning out. I think the kind of burnout I get is helped by these tools, but since I’m not neurotypical it’s different from the burnout people are getting from doing too much. I do see “task expansion” happening often, though. If I can do the full feature rather than taking baby steps, I’ll often do that now, because wrangling code is easier.
Incipient
I'm a mostly solo dev, and I'm finding that acting purely as a code reviewer for an AI is sub-optimal. Too often the AI runs off down bad paths which you only realise later, and unpicking the mess is most likely a net productivity loss. Working more as a pair, essentially doing code review as you go in small chunks, is significantly better. I personally don't have the token budget to say "go build this entire thing" and then review 15k loc. I also find even Opus is poor at coming up with tests that justify the business logic it's meant to be implementing.
cadamsdotcom
You can write your own linters for every dumb AI mistake, add them as pre-commit checks, and never see that mistake in committed code again. It’s really empowering. You don’t even have to code the linters yourself: the agent can write a Python script that walks the AST of the code, uses regexes, or tries to run or compile it. Given a non-zero exit code and a line number, the agent will fix the problem, rerun the linter, and loop until it passes. Lint your architecture: block any commit that directly imports the database from a route handler. Whatever the coding agent thinks, ask it for recommendations on an approach! Get out of the business of low-level code review. That stuff is automatable and codifiable, and it’s not where you are best poised to add value, dear human.
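The architecture-linting idea above can be sketched as a small pre-commit script. This is a hypothetical illustration, not the commenter's actual tooling: the module name `app.db` and the idea of passing route-handler files as arguments are assumptions for the example.

```python
# Hypothetical sketch of "lint your architecture": a pre-commit check
# that fails if a route handler imports the database layer directly.
# The module name app.db is an assumed example, not a real project.
import ast
import sys

FORBIDDEN = "app.db"  # module that route handlers must not import directly


def find_forbidden_imports(source: str) -> list[int]:
    """Return line numbers where the forbidden module is imported."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            # Catches: import app.db, import app.db.models
            if any(a.name == FORBIDDEN or a.name.startswith(FORBIDDEN + ".")
                   for a in node.names):
                violations.append(node.lineno)
        elif isinstance(node, ast.ImportFrom):
            # Catches: from app.db import session
            mod = node.module or ""
            if mod == FORBIDDEN or mod.startswith(FORBIDDEN + "."):
                violations.append(node.lineno)
    return sorted(violations)


if __name__ == "__main__":
    # Pre-commit passes the staged files as arguments. A non-zero exit
    # code plus a file:line message is all an agent needs to fix the
    # problem and loop until the check passes.
    status = 0
    for path in sys.argv[1:]:
        with open(path) as f:
            for lineno in find_forbidden_imports(f.read()):
                print(f"{path}:{lineno}: direct import of {FORBIDDEN}")
                status = 1
    sys.exit(status)
```

Wired into a pre-commit hook (or just a CI step), this rejects the commit with an exact location, which is the tight feedback loop the comment describes.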