Thoughts on slowing the fuck down

jdkoeck 761 points 362 comments March 25, 2026
mariozechner.at · View on Hacker News

Discussion Highlights (20 comments)

0xbadcafebee

> it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services

As somebody who has been running systems like these for two decades: the software has not changed. What's changed is that before, nobody trusted anything, so a human had to manually do everything. That slowed down the process, which made flaws happen less frequently. But it was all still crap. Just very slow-moving crap, with more manual testing and visual validation. There were still plenty of failures, but it doesn't feel like a system fails a lot if the failures are spaced far apart on the status page. The "uptime" is time-driven, not bugs-per-lines-of-code driven.

DevOps' purpose is to teach you that you can move quickly without breaking stuff, but it requires a particular way of working, one that emphasizes building trust. You can't just ship random stuff 100x faster and assume it will work. This is what the "move fast and break stuff" people learned the hard way years ago. And breaking stuff isn't inherently bad, if you learn from your mistakes and make the system better afterward. The problem is, that's extra work that people don't want to do. If you don't have an adult in the room forcing people to improve, you get the disasters of the past month. An example: Google SREs give teams error budgets; the SREs act as the adult in the room, forcing a team to stop shipping and fix its quality issues once the budget is spent.

One way to deal with this in DevOps/Lean/TPS is the Andon cord: famously, a cord introduced at Toyota that allows any assembly worker to stop the production line until a problem is identified and a fix worked on (not just the immediate defect, but the root cause). This is insane to most business people, because nobody wants to stop everything to fix one problem; they want to quickly patch it up and keep working, or ignore it and fix it later. But as Ford/GM found out, that just leads to a mountain of backlogged problems that makes everything worse. Toyota discovered that if you take the long, painful time to fix it immediately, you get the opposite effect: more and more efficiency, better quality, fewer defects, and faster shipping. The difference is cultural. This is real DevOps. If you want your AI work to be both high quality and fast, I recommend following its suggestions. Keep in mind, none of this is a technical issue; it's a business process issue.
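The uptime figures in this thread map directly onto the error-budget idea: an availability SLO fixes how much downtime is allowed per period, and in the SRE model a team stops shipping features once that budget is spent. A minimal sketch of the arithmetic (the function name is mine, not Google's):

```python
def downtime_budget_minutes(slo: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Allowed downtime (the error budget) over one period for a given SLO."""
    return (1.0 - slo) * period_minutes

# 98% uptime over a 30-day month is ~864 minutes (~14.4 hours) of downtime
print(round(downtime_budget_minutes(0.98)))

# A "three nines" (99.9%) SLO leaves only ~43 minutes per month
print(round(downtime_budget_minutes(0.999)))
```

The point of the exercise: "98% uptime" sounds close to "99.9%", but it is a twenty-fold difference in how often users see the service down.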

gchamonlive

I think before we can even entertain the thought of slowing the fuck down, we need to seriously consider divorcing productivity. Or at least asking for a break, so you can go for a walk in the park, meet some friends, and reflect on how you are approaching development.

I think this is a very good take on AI adoption: https://mitchellh.com/writing/my-ai-adoption-journey . I've had tremendous success roughly following the ideas there.

> The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation.

That's partially true. I've also had instances where I could very well have done a simple change myself, but by running it through an agent first I became aware of complexities I wasn't considering, and I gained documentation updates for free. Oh, and the best part: if in three months I'm asked to compile a list of things I did, I can just look at my session history, cross-reference it with the development history in my repositories, and paint a very good picture of what I've achieved. I can even rebuild the decision process behind designing the solution. It's always a win to run things through an agent.

ex-aws-dude

Eh, I think it's a self-correcting problem. Companies will face the maintenance and availability consequences of these tools, but it may take a while for the feedback loop to close.

badlibrarian

I suppose everyone on HN reaches a certain point with these kinds of thought pieces, and I just reached mine. What are you building? Does the tool help or hurt? People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era. After five or six cycles it does become a bit fatiguing.

Use the tool sanely. Work at a pace where the reality of the mess you and your team are actually building does not outrun your understanding of it, budgets allowing. This seldom happens, even in solo hobby projects, once you cost everything in.

It's not about agile or waterfall or "functional", or abstracting your dependencies via Podman or Docker or VMware or whatever that nix crap is. Or using an agent to catch the bugs in the agent that's talking to an LLM you have next to no control over, one that deleted your production database while you slept, then asking it to make illustrations for the postmortem blog post you ask it to write, the one you think elevates your status in the community but probably doesn't.

I'm not even sure building software is an engineering discipline at this point. Maybe it never was.

ketzo

I think the core idea here is a good one. But in many agent-skeptical pieces, I keep seeing this specific sentiment that “agent-written code is not production-ready,” and that just feels… wrong! It’s just completely insane to me to look at the output of Claude Code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.” Yes, there are still issues, and yes, keeping mental context of your codebase’s architecture is critical, but I’m sorry, it just feels borderline archaic to pretend we’re going to live in a world where these agents have to have a human poring over every single line they commit.

ontouchstart

I am "playing" with both pi and Claude (in docker containers) with local llama.cpp and as an exercise, I asked both the same question and the results are in this gist: https://gist.github.com/ontouchstart/d43591213e0d3087369298f... (Note: pi was written by the author of the post.) Now it is time to read them carefully without AI.

bluGill

I only have so long on earth. (I have no idea how long) I need things to be faster for me. Sometimes that means I need to take extra time now so they don't come back to me later.

markus_zhang

If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent: you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term but in the short term. Integration is the key to the agents. Individual usage doesn't help AI much, because it is confined to the domain of that individual.

sjkoelle

i just wish someone would explain why i prefer cline to claude code so much

jaffee

> You installed Beads, completely oblivious to the fact that it's basically uninstallable malware. Did I miss something? I haven't used it in a minute, but why is the author claiming that it's "uninstallable malware"?

gedy

It's not even the complexity, which, you have to realize, many managers and business types think is just fine: code no one understands, because AI will handle it. I don't agree, but the bigger issue to me is that many/most companies don't even know what they want or think about what the purpose is. So whereas in the past, devs coding something provided some throttle or sanity check, now we just throw shit over the wall even faster. I'm seeing some LinkedIn lunatics brag about "my idea to production in an hour", and all I can think is: that is probably a terrible feature. No one I've worked with is so good or visionary that that speed even matters.

simonw

Useful context here is that the author wrote Pi, which is the coding agent framework used by OpenClaw and is one of the most popular open source coding agent frameworks generally.

shevy-java

> While all of this is anecdotal, it sure feels like software has become a brittle mess

That may be the case wherever AI leaks in, but not every software developer uses or depends on AI. So not all software has become more brittle. Personally, I try to avoid any contact with software developers using AI. This may not be possible, but I don't want to waste my own time "interacting" with people who aren't really the ones writing code anymore.

SoftTalker

> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes

One thing about the old days of DOS and original MacOS: you couldn't get away with nearly as much of this. The whole computer would crash hard and need to be rebooted, all unsaved work lost. You also could not easily push out an update or patch; stuff had to work out of the box. Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it. Not that I want to go back to DOS, but WordPerfect 5.1 was pretty damn rock solid as I recall.

gmuslera

This assumes that only (AI/agentic) stupidity comes into play, with no malice in sight. But if things go wrong because you didn't notice the stupidity, malice will pass through too. And there is a big profit opportunity, and a broad vulnerable market, for malice. It's not just correctness or uptime that comes into play, but bigger risks of vulnerabilities or other maliciously injected content.

caldis_chen

hope my boss can see this

rglover

Nature will handle this in time. Just expect to see a "Bear Stearns moment" in the software world if this spirals completely out of control (and companies don't take a hint from recent outages).

profdevloper

It's 2026, the "fuck" modifier for post titles by "thought leaders" has been done already ad nauseam. Time to retire it and give us all a break.

jschrf

I for one look forward to rewriting the entirety of software after the chatbot era

trinsic2

> And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.

This is a great point. I have been avoiding LLMs for a while now, but realized that I might want to try working on a small PDF-book-to-Markdown conversion project[0]. I like Claude Code because it's command line. I'm realizing you really need to architect with very precise language to avoid mistakes. I didn't try to have one prompt do everything at once. I prompted Claude Code to do the conversion process section by section of the document. That seemed to reduce the mistakes the agent would make.

[0]: https://www.scottrlarson.com/publications/publication-my-fir...
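The section-by-section workflow described above can be sketched mechanically: split the source document into heading-delimited chunks, then feed each chunk to the agent as its own prompt rather than pasting the whole thing at once. A hypothetical sketch (`split_sections` is my name for it, not anything built into Claude Code):

```python
import re

def split_sections(text: str) -> list[str]:
    """Split a document at top-level '# ' headings so each section
    can be reviewed and converted as its own agent prompt."""
    # Zero-width lookahead split keeps the heading with its section body.
    parts = re.split(r"(?m)^(?=# )", text)
    return [p for p in parts if p.strip()]

doc = "# Intro\nSome front matter.\n# Chapter 1\nBody text.\n"
for section in split_sections(doc):
    # In practice, each section here would become one prompt to the agent,
    # and you review its output before moving to the next section.
    print(section.splitlines()[0])
```

The smaller the unit you hand over, the easier it is to actually review the output, which is the whole point of the rate limit the article proposes.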
