We might all be AI engineers now

sn0wflak3s 188 points 303 comments March 06, 2026
yasint.dev · View on Hacker News

Discussion Highlights (20 comments)

bitwize

The phrase "shape up or ship out" is an apt one I've heard. Agentic AI is a core part of software engineering. Either you are learning and using these tools, or you're not a professional and don't belong in the field.

noemit

Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.

The catch about the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working in startups in California. Even your comment about curiosity highlights this.

It's the beginning of an even more K-shaped engineering workforce. Even people who were previously not great engineers, if they are curious and always enjoyed the learning part - they are now supercharged to learn new ways of building, and they are able to try it out and learn from their mistakes at an accelerated pace. Unfortunately, this group, the curious ones, is IMHO a minority.

ChrisMarshallNY

> The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.

That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes the AI catches it; sometimes it goes into the ditch, and I need to call for a tow.

The big deal is that I can considerably “up my game,” and get a lot done, alone. The velocity is kind of jaw-dropping. I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).

amelius

> Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.

I'm sure some people are having fun that way. But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break at unexpected moments, even though overall they are a net win. It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.

Bukhmanizer

This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out. I use AI every day for coding. But if someone so obviously puts this little effort into the work they put out into the world, I don’t think I trust them to do it properly when they’re writing code.

roli64

Lost me at "I’m building something right now. I won’t get into the details. You don’t give away the idea."

duggan

Very much on the same page as the author: I think AI is a phenomenal accelerant. If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too. There's a baseline value in going fast.

CrzyLngPwd

It sounds a bit no-true-Scotsman to me.

bambax

I agree wholeheartedly with all that is said in this article. When guided, AI amplifies the productivity of experts immensely. There are two problems left, though.

One is, laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested in the topic at all, and think developers are interchangeable.

The other problem is, how do you develop those instincts when you're starting out, now that AI is a better junior coder than most junior coders? This is something we need to think hard about as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die). What comes after? How do we produce experts in the age of AI?

rimmontrieu

> But guided? The models can write better code than most developers. That’s the part people don’t want to sit with.

When guided. Where do you draw the line between just enough guidance vs too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and mental model for future projects, just like tons of regular devs have done in the past)?

jruz

I find it really sad how stubbornly people dismiss AI as a slop generator. I completely agree with the author: once you spend the time building a good enough harness, oh boy, you start getting those sweet gains. It takes a lot of time and effort, but it is absolutely worth it.

yanis_t

They will never admit it, but many are scared of losing their jobs. This threat, while not yet realized, is very real from a strictly economic perspective. AI or not, any tool that improves productivity can lead to workforce reduction.

Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people. You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves? To make matters worse, all your competitors also have the same semi-automatic ovens...
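The bakery arithmetic can be made concrete. This is a minimal sketch of the commenter's hypothetical numbers (they are illustrative, not real data):

```python
# The commenter's hypothetical: a productivity gain forces a choice
# between cutting staff and raising output.

def loaves_per_worker(loaves: int, workers: int) -> float:
    """Monthly output per worker."""
    return loaves / workers

# Before the semi-automatic ovens: 10 workers bake 1,000 loaves/month.
before = loaves_per_worker(1000, 10)   # 100 loaves per worker

# After: the same 1,000 loaves need only 5 workers.
after = loaves_per_worker(1000, 5)     # 200 loaves per worker

gain = after / before                  # 2x productivity

# The owner's two options at the new productivity level:
workers_if_same_output = 1000 / after  # keep output at 1,000 -> 5 workers
output_if_same_staff = 10 * after      # keep all 10 workers -> 2,000 loaves

print(gain, workers_if_same_output, output_if_same_staff)  # 2.0 5.0 2000.0
```

The open question in the comment is exactly the last branch: doubling output only works if demand doubles too, and every competitor faces the same choice.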

holyra

What about the environmental impact of AI, especially agentic AI? I keep reading praise for AI on the orange site, but its environmental impact is rarely discussed. It seems everyone has already adopted this technology, which is destroying our world a little more.

v3xro

The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.

thefounder

The issue is that you become lazy after a while and stop “leading the design”. And I think that’s OK, because most of the code is just throwaway code. You would rewrite your project/app several times before it’s worth paying attention to “proper” architecture. I wish I had these AIs 10 years ago, so that I could have focused on everything I wanted to build instead of becoming a framework developer/engineer.

jwr

Finally a take that I can agree with.

jjmarr

I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization. That would've taken me 3 months a year ago, just to learn the syntax and evaluate competing options. Now I can get sccache working in a day, find it doesn't scale well, and replace it with recc + buildbarn. And ask the AI questions like whether we should be sharding the CAS storage.

The downside is that the AI keeps pushing me towards half-assed solutions that don't solve the problem, like just setting up distributed caching instead of distributed compilation. It also keeps lying, which requires me to redirect and audit its work. But I'm also learning much more than I ever could without AI.

wk320189

Strangely we never hear gushing pieces on how great gcc is. If you have to advertise that much or recruit people with AI mania, perhaps your product isn't that great.

egl2020

"You can learn anything now. I mean anything." This was true before LLMs. What's changed is how much work it is to get an "answer". If the LLM hands you that answer, you've foregone learning that you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity.

nickstinemates

I've been programming for literally my entire life. I love it, it's part of me, and there hasn't been more than a week in 30 years that I haven't written some code. This is the first time that I feel a level of anxiety when I am not actively doing it. What a crazy shift that I am still so excited and enamored by the process after all of this time. But there's also a double-edged sword: I'm having an even harder time moderating my working hours, which I naturally struggle with anyway. Partly because I am having so much fun and being so productive, but also because it's just so tempting to add one more feature, fix one more bug.
