AI didn't simplify software engineering: It just made bad engineering easier
birdculture
118 points
101 comments
March 14, 2026
Related Discussions
Found 5 related stories in 55.7ms across 3,471 title embeddings via pgvector HNSW
- AI Made Writing Code Easier. It Made Being an Engineer Harder saikatsg · 380 pts · March 01, 2026 · 74% similar
- AI is great at writing code. It's terrible at making decisions kdbgng · 12 pts · March 13, 2026 · 67% similar
- AI is making junior devs useless beabetterdev · 162 pts · March 01, 2026 · 63% similar
- AI is a tool. Don't try to make it a teammate lubos76 · 26 pts · March 02, 2026 · 62% similar
- AI Isn't Killing Developers–It's Creating a $10T Maintenance Crisis rakiabensassi · 33 pts · March 19, 2026 · 59% similar
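The related-stories list above comes from a nearest-neighbor search over title embeddings (pgvector's HNSW index accelerates exactly this kind of query). A minimal sketch of the underlying idea in plain Python, using an exact linear scan with cosine similarity; the story titles and toy 3-dimensional vectors are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def related_stories(query_embedding, stories, k=5):
    """Return the k stories whose title embeddings are most similar
    to the query. An HNSW index approximates this in sub-linear time
    instead of this exact O(n) scan."""
    scored = [
        (cosine_similarity(query_embedding, emb), title)
        for title, emb in stories
    ]
    scored.sort(reverse=True)
    return scored[:k]

# Toy 3-dimensional "embeddings" (real ones have hundreds of dims).
stories = [
    ("AI made writing code easier", [0.9, 0.1, 0.0]),
    ("Rust borrow checker explained", [0.0, 0.2, 0.9]),
    ("AI is terrible at decisions", [0.8, 0.3, 0.1]),
]
top = related_stories([1.0, 0.0, 0.0], stories, k=2)
```

The "74% similar" figures in the list are this similarity score, expressed as a percentage.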
Discussion Highlights (20 comments)
sshine
It also made good engineering easier. AI is an amplifier of existing behavior.
sunir
Simplify? It’s like saying a factory made chair building… what? It’s not simpler. It’s faster and cheaper and more consistent in quality. But way more complex.
groundzeros2015
I’m an AI skeptic, and in no sense is it taking my peers' jobs. But it does save me time. It lets me do research much better than Google, explore a code base, spit out helper functions, and review for obvious mistakes.
a_void_sky
"Coding was never the hard part. Typing syntax into a machine has always been the least interesting part of building a system." I think these people are benefiting from it the most: people with expertise, who know their way around and knew what and how to build but did not want to do the grunt work.
staticassertion
> Code Was Never the Hard Part

I can't believe this has to be said, but yeah. Code took time, but it was never the hard part. I also think it is radically understated how much developers contribute to UX and product decisions. We are constantly having to ask "Would users really do that?" because it directly impacts how we design. Product people obviously do this more often, but engineers do it as a natural part of their process as well. I can't believe how many people do not seem to know this.

Further, in my experience, even the latest models are terrible "experts". Expertise is niche, and niche simply is not represented in a model that has to pack massive amounts of data into a tiny, lossy format. I routinely find that models fail when given novel constraints, and the constraints aren't even that novel. I was writing some lower-level code where I needed to ensure things like "a lock is not taken" and "an allocation doesn't occur" because of reentrancy safety, and it ended up being the case that I was better off writing it myself because the model kept drifting over time. I had to move that code to a separate file and basically tell the model "Don't fucking touch that file" because it would often put something in there that wasn't safe. This is with aggressively tuning skills and using modern "make the AI behave" techniques. The model was Opus 4.5, I believe.

This isn't the only situation. I recently had a model evaluate the security of a system that I knew to be unsafe. To its credit, Opus 4.6 did much better than previous models I had tried, but it still utterly failed to identify the severity of the issues involved or the proper solutions, and as soon as I barely pushed back on it ("I've heard that systems like this can be safe", essentially) it folded completely and told me to ship the completely unsafe version.

None of this should be surprising! AI is trained on massive amounts of data, and it has to lossily encode all of this into a tiny space. Much of the expertise I've acquired is niche, borne of experience, undocumented, etc. It is unsurprising that a "repeat what I've seen before" machine cannot state things it has not seen. It would be surprising if that were not the case. I suppose engineers maybe have not managed to convey this historically? Again, I'm baffled that people don't seem to know how much time engineers spend on problems where the code is irrelevant.

AI is an incredible accelerator for a number of things, but it is hardly "doing my job". AI has mostly helped me ship trivial features that I'd normally have to backburner for the more important work. It has helped me in some security work by helping to write small html/js payloads to demonstrate attacks, but in every single case where I was performing attacks I was the one coming up with the attack path - the AI was useless there. Edit: actually, it wasn't useless, it just found bugs that I didn't really care about because they were sort of trivial. Finding XSS is awesome, and I'm glad it would find really simple stuff like that, but I was going for "this feature is flawed" or "this boundary is flawed" and the model utterly failed there.
woeirua
How many model releases are we away from people like this throwing in the towel? 2? 3?
agentultra
100%. There are cases where a unit test, or a hundred, aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.

Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.

Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.

There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.
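One concrete version of the point above: a handful of example-based unit tests can pass while an invariant still fails. Checking the invariant across many inputs catches what the examples miss. A minimal sketch with a hypothetical buggy sort:

```python
def broken_sort(xs):
    """A buggy 'sort' that happens to pass a few spot checks:
    converting to a set silently drops duplicate elements."""
    return sorted(set(xs))

# Two example-based tests that pass and prove nothing:
assert broken_sort([3, 1, 2]) == [1, 2, 3]
assert broken_sort([]) == []

def preserves_elements(sort_fn, xs):
    """Invariant: a sort must be a permutation of its input."""
    return sorted(xs) == sort_fn(xs)

# Checking the invariant over more inputs exposes the bug.
inputs = ([1, 1, 2], [5, 5, 5], [2, 1])
violations = [xs for xs in inputs
              if not preserves_elements(broken_sort, xs)]
```

Property-based testing tools automate this: generate inputs, assert the invariant, shrink counterexamples. The tests a model writes from looking at the code tend to be the example-based kind.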
xg15
> Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in in the hopes that the market will respond positively. As a software engineer, I'd love if the industry had an actual breakthrough, if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity. But not if the only reward for this would be to be laid off. So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
BoredPositron
A lot of times bad engineering is all you need.
tyleo
I disagree with the premise. It made all engineering easier. Bad and good. I believe vibe coding has always existed. I've known people at every company who add copious null checks rather than understanding things and fixing them properly. All we see now is copious null checks at scale. On the other hand, I've also seen excellent engineering amplified and features built by experts in days which would have taken weeks.
zer00eyz
"AI" (and calling it that is a stretch) is nothing more than a nail gun. If you gave an experienced house framer a hammer, hand saw and box of nails and a random person off the street a nail gun and powered saw who is going to produce the better house? A confident AI and an unskilled human are just a Dunning-Kruger multiplier.
arty_prof
In terms of tech debt, it obviously lets you create a lot of it. But this is controllable if you analyse in depth what the AI is doing. I feel I'm becoming more of a Product Engineer than a Software Engineer, constantly reviewing AI code that satisfies my needs. And the benefits AI provides are too good. It lets you prototype nearly anything in a short time, which is superb. Like any tool, in the right hands it can be a game-changer.
dgxyz
Not easier but faster. It’s really hard to catch shit now.
water_badger
So somewhere here there is a 2x2 or something based on these factors:
1. Programmers viewing programming through a career and job security lens
2. Programmers who love the experience of writing code themselves
3. People who love making stuff
4. People who don't understand AI very well and have knee-jerk cultural / mob reactions against it because that's what's "in" right now in certain circles.
It is fun to read old issues of Popular Mechanics on archive.org from 100+ years ago because you can see a lot of the same personality types playing out. At the end of the day, AI is not going anywhere, just like cars, electricity and airplanes never went anywhere. It will obviously be a huge part of how people interact with code and a number of other things going forward. 20-30 years from now the majority of the conversations happening this year will seem very quaint! (and a minority, primarily from the "people who love making stuff" quadrant, will seem ahead of their time)
sega_sai
When I see this: "One of the longest-standing misconceptions about software development is that writing code is the difficult part of the job. It never was." I don't think I can take it seriously. Sure, 'writing code' is often not the difficult part, but when you have time constraints, 'writing code' becomes a limiting factor. And none of us have infinite time on our hands. So AI not only enables things you simply could not afford to do in the past, it also lets you spend more time on 'engineering', or even try multiple approaches, which would have been impossible before.
lowbloodsugar
> and ensuring that the system remains understandable as it grows in complexity. Feel like only people like this guy, with 4 decades of experience, understand the importance of this.
jazz9k
Juniors that are relying too heavily on AI now will pay the price down the line, when they don't even know the fundamentals because they just copy-pasted everything from a prompt. It only means job security for people with actual experience.
hyperbovine
I’m seeing a real distinction emerge between “software engineering” and “research”. AI is simply amazing for exploratory research — 10x ability to try new ideas, if not more. When I find something that has promise, then I go into SWE mode. That involves understanding all the code the AI wrote, fixing all the dumb mistakes, and using my decades of experience to make it better. AI’s role in this process is a lot more limited, though it can still be useful.
rvz
Is that why there are so many outages across companies adopting AI, including GitHub, Amazon, Cloudflare and even Anthropic, despite all that usage? Maybe if they "prompted the agent correctly", their infrastructure would be above at least five nines. If we continue down this path, not only will so-called "engineers" be unable to read or write code at all, their agents will introduce seemingly correct code and cause outages like the ones we have already seen [0]. AI has turned "senior engineers" into juniors, and juniors back into "interns" who cannot tell what maintainable code is and waste time, money and tokens reinventing a worse wheel. [0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
furyofantares
AI Didn't Simplify Blogging: It Just Made Bad Blogging Easier I was hopeful that the title was written like LLM-output ironically, and dismayed to find the whole blog post is annoying LLM output.