We automated everything except knowing what's going on

kennethops 85 points 67 comments March 03, 2026
eversole.dev

Discussion Highlights (18 comments)

afry1

"The future belongs to whoever understands what they just shipped." Perfect summary. It's like we invented a world where you can finally, _finally_ speedrun an enormous legacy codebase, and we all patted ourselves on the back like that was a good thing.

climike

In a similar fashion, it appears the article itself was automated - did the author read every word of their own article?

frisia

Actually unreadable

bena

I keep seeing the canard that "Anyone with an idea and access to an AI agent can ship a product. What used to take a team of twenty and six months now takes one person and a weekend. That's not hype. It's happening right now, everywhere, all at once." But I don't see it. Where is this glut of software?

Rygian

The title reminds me of the single lesson I retained from a training for upcoming people managers: "You can delegate everything except accountability."

andai

The whole point of AI is that we don't have to think anymore. Knowing what's going on is the AI's job. Not saying that's how it should be, that's just the world I predict in the not too distant future. I love thinking, but most people I know seem to experience it as a form of physical pain.

Traubenfuchs

This person has never worked with several-decades-old government, bank, or tax (return) code where all that's ever done is edge cases, with implementations of new laws and capabilities forever bolted onto each other. Systems that were half migrated from a PL/I / COBOL mess to Java 7 by Accenture until the money ran out, with the result that both systems now exist forever and have to be integrated with each other for years.

In the end you have decades-old code bases maintained by people with less than 10 years of total work tenure, who will leave for greener pastures soon. No one to ask but some grumpy old greybeard with a royal salary who barely does any work but has some ancient wisdom to share.

No one understanding what's going on inside complex systems, in financially constrained environments, built and maintained by average (at best) engineers, is the norm, and it is what keeps the world running. None of that is a symptom of AI. The only change AI brings is that even the developers themselves no longer know what the fuck they just deployed.

bluetomcat

We went from expressing computation via formal, mostly non-ambiguous languages with strict grammar and semantics, to a fuzzy and flaky probabilistic system that tries to mimic code that was already written. What could go wrong?

gwynforthewyn

Honestly, the post itself reads very generated, very rage bait. I have so much more faith in us and our hobby/industry than this blog post. There are reports of industries trying to use these tools to generate as much as possible, sure. There are also people generating bad art and unpleasant prose and using LLMs to generate nonsense they don't pay attention to. I don't see why that implies that you or I lost interest in tinkering with the toys we build. If I want to spend 4 weeks understanding OAuth a little better by implementing a client, I still can and I still do. Automating our builds absolutely didn't create a cathedral of complexity while nobody noticed. It did mean I can open a Free Software project, read the build file, and understand how to build the thing. That's the opposite of generating complexity. I worry about our future generations as much as the next person, but this low-effort pabulum doesn't represent the thoughtful industry and hobby that I love.

DaedalusII

Eventually I realised it was cheaper just to vibecode and buy put options against my own company, managing the risk of failure and technical debt with a financial instrument. I have a lot more freedom to move fast, break things, and scale aggressively.

andai

There's a funny angle to all this. There was an article last year where the author asked an AI for a web app. It installed a gigabyte of node modules and crashed on startup. He told it to calm down and just use PHP, and it gave him 100 lines with no dependencies that worked the first time. The Pieter Levels stack :) Of course, this is ideal for a solo entrepreneur. If you are employed, then you cannot finish it in 100 lines. How would you get paid to maintain it for the next ten years, and hire all your friends to help you? I think this difference in incentives explains most of what we've been complaining about for the last twenty years.

iammjm

Should the goal really be to build a system that we completely understand, or to build a system that solves a problem? We don't fully understand quantum physics, yet we understand it well enough to build helpful systems on top of it. We don't know exactly what every bee in a hive does at any moment, yet we still reliably harvest honey in the end. I think people have this modernist desire for absolute truths and certainty, when the world we live in is clearly postmodern. There are no certainties, only probabilities. So embrace the chaos, try to build systems that help contain entropy for some useful purpose, and accept that all of them will eventually fail in some way and you will need to course-correct. Faulkner is dead, long live Pynchon.

leecommamichael

Whoa a CEO writing about why their product is especially important in this very moment!

bluGill

You cannot understand everything. That has been the case since long before AI. I have a vague idea how the Linux kernel works, and I could figure it out (I once found and fixed a bug in FreeBSD device drivers) - but I don't, I just trust it works. I've never looked at SQLite to understand how it works - I know enough SQL to be dangerous and trust it works. I know very in depth how the logging framework of my project works - maintaining that code is part of my day job, so I need to know - but the hundreds of other developers in the company who use it trust it works. Meanwhile my co-workers are writing code that I don't understand; I trust they do it well until proven otherwise. AI is very useful, but so far it doesn't write the kind of code I can trust. Thus I use it, but I carefully review everything it does.

whynotmaybe

I'm still undecided on whether we "need" to know what's happening. Very few people deeply understand what happens inside the computer, between the CPU and the bridges and the rest. The FDIV bug in 1994 took us all by surprise because we assumed bugs couldn't exist in hardware - it either works or it doesn't. When I'm using Firebase or AWS, I don't know the underlying system; I don't know why some resources can be created with an underscore while others can't start with a number. Yet it works. We're working in layers, and usually we only touch the last one. Yes, understanding the others is great for debugging. I'm even wondering whether we need tests when they are written by the same LLM that wrote the code.
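The FDIV flaw mentioned here had a famous one-line reproduction: dividing 4195835 by 3145727 on an affected Pentium returned a visibly wrong result. A minimal sketch of the classic check, in Python standing in for the original FPU test:

```python
# Classic Pentium FDIV check: x - (x / y) * y should be essentially zero.
# On a flawed 1994 Pentium FPU the division was slightly wrong,
# so this expression came out near 256 instead.
x = 4195835.0
y = 3145727.0
residual = x - (x / y) * y
print(residual)  # ~0 on correct hardware (up to float rounding)
```

On any correctly implemented IEEE 754 FPU the residual is zero or within rounding error of zero, which is exactly why the buggy result was such a shock.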

seethishat

Abstraction has been happening since the 1970s, when ASM was replaced by the C programming language. From there we got C++ (look, it actually has a string type that most humans understand!), then memory-safe managed languages like Go, which is almost human-readable, runs almost everywhere, and doesn't have buffer overflows. ASM was machine-specific. C was portable but required expert programmers. C++ was more user-friendly, but still very hard for normal people. Today, almost anyone can write a program in Go. The more we abstract, the less knowledge and expertise is needed. So yes, programs are being built by people who don't really understand what they are doing. That is intended.

philipstorry

Not a bad article - thanks!

Others are pointing out that you cannot understand everything - and that's true enough. But you only need to understand what's important, and the experience of a good expert helps you find that out. As a systems administrator, the AWS outage in the Middle East is the best recent example. There will be roughly three types of companies, separated by their understanding:

- Don't Understand - these companies thought the cloud would handle this kind of thing for them, and are probably going to be doing a lot of finger-pointing in the near future.

- Do Understand, Don't Care - these companies did understand that high availability meant going multi-region, but decided against it for whatever reason, probably cost vs. perceived likelihood. These companies know they've made a mistake. Short term they're wondering how to survive it; long term they'll be re-assessing their risk acceptance. Many may decide to stay single-region, but at least they understand why.

- Do Understand, Do Care - these companies will simply be checking that their procedures worked for any manual parts of their failover, plus looking at any improvements they can make given the real-life experience they've gained.

An LLM is just going to tell you how to implement it. It's not going to ask "what sort of availability do we require?" - it may never start that conversation unless explicitly prompted. And even then it's going to return consensus opinions, which may not be what you want when evaluating risk.

I'd love to think a lot of companies will be looking at this event and updating their own risk registers, or justifying their existing risk decisions for hosting. But let's be honest - most won't even have thought about it, and won't until it goes wrong.

lowsong

> Software is being democratized.

No. Software is being centralized. If the snake oil the AI companies are selling about the coming agentic age were true, then the end result is not "anyone can produce software", it is "anyone stupid enough to rent the ability to run their business from an AI vendor can produce software".
