I don't use LLMs for programming

ms7892 68 points 98 comments March 12, 2026
neilmadden.blog · View on Hacker News

Discussion Highlights (18 comments)

baCist

I mostly use AI as an assistant, not as a replacement. I actually enjoy the process of programming and learning new things along the way. I’m not really interested in outsourcing that to an LLM just to save a few minutes.

shanjai_raj7

I've gone the other direction completely. I've run Claude Code basically unsupervised on my codebase for months now, and honestly I write way less by hand. I still understand everything it does, but I spend time on what actually matters — the architecture, the decisions. For me the fun part was always shipping things, not the syntax.

voidUpdate

I don't see the point in supporting the hoovering up of anything anyone has ever written online, without attribution, just so I don't get to do the thing I actually like doing: programming.

imdsm

I've written code for almost 30 years, and over the last 4 years I've slowly used AI more and more, starting with the GitHub Copilot beta, then ChatGPT, Cursor, Windsurf, Claude, Gemini, Jules, Codex. Now I mostly work with Claude, and I don't write any code myself. Even configuring servers is easier with Claude. I still understand how everything works, but I've changed how I work so I can do a lot more, cover a lot more, and rely less on people. It isn't much different to how it works with a team. You have an architect who understands the broader landscape, you have developers who implement certain subsystems, you have a testing strategy, you have communication, teaching, management. The only difference now is that I can do all this with my team being LLMs/agents, while I focus on the leadership stuff: docs, designs, tests, direction, vision. I do miss coding, but it just isn't worth it anymore.

mattmanser

I am really finding this. By the time I've specced out a feature properly to an LLM, I could have just written most of it quicker myself. But I often find that with jobs I want to give to other people, so maybe I over-specify? There are some tasks where it's pretty clear what you want, though, and they're just boring jobs that are totally not worth speccing well, and an LLM will blaze through them. Things like:

- add OAuth support to this API
- add a language switcher in this menu, an API endpoint, save it to the UserSettings table
- make a 404 page

w-m

I don’t think learning and understanding is hard-coupled to performing all low level steps yourself. The LLM can be a developer, sure. But it can also take on the role of rubber duck, architect, teacher or pupil. Have a large LLM-written change set that works but that you’re not sure you fully understand? Make the coding agent quiz you on the design and implementation decisions. This can be a lot more engaging than trying to do a normal code review. And you might even learn something from it. Probably not the same amount as if you did this yourself fully. But that’s just a question of how much effort you want to invest in the understanding?

bambax

The quote from Douglas Adams is perfectly consistent with using AI for programming. The difficulty of programming, indeed the whole point, is to understand the problem; it's not typing if-then-else cases for the millionth time. Explaining the problem to an LLM and having it ask pointed questions is helpful IMHO, as well as being able to iterate fast (output new versions fast). As an example, I'm currently making simple Windows utilities with the help of AI. Parsing config files in C is something the AI does perfectly. But an interesting part of the process is: what should go into a config file, or not, what are the best defaults, what should not be configurable: questions that don't have a perfect answer and that can only be solved by using each program for weeks, on different machines / in different contexts.

mikkupikku

This morning while sitting on the shitter, claude wrote me a complete plugbox interface for wiring together A/V filters, rendered with libASS subtitles to be embedded in an mpv video player.

NewEntryHN

This assumes you always learn something new with every new program you write.

ivanvoid

I genuinely don’t understand how anyone (with a technical background) can see LLMs as anything more than fancy autocomplete. If you know anything about NNs and about average code quality, you know that LLMs will never be able to generate high-quality code. I'm ready to get downvoted again for my takes, but as a person who writes and trains DL models, I will die on this hill: people need to produce high-quality data. It can be code, it can be art, but we can't rely on those models and trust the things that they provide.

nananana9

I may start using LLMs to filter out these kinds of posts. At this point it's worth considering a permanent, pinned "HN Flamewar: Will LLMs turn you into the next Ken Thompson or are you just a poser who can't write code" thread. We're having this same discussion, constantly, on 5 different threads on the frontpage.

timonoko

Gemini parsed a 5000-line assembly program. And it understood everything. I wanted to port it from 32-bit MSDOS to 64-bit Linux. But it realized that the segmented memory model cannot be reproduced in a flat 64-bit address space without massive changes that break everything else. It was willing to construct a new program with seemingly the same functionality, but the assembly code was so incomprehensible that the whole project was useless as a learning tool. And a C version would have been faster anyway. Sorry to say, but less talented humans like me are already totally useless at this.

prohobo

Seeing a lot of "ok boomer" reactions to posts like this, and honestly I think I kind of agree - but more accurately the author hasn't considered the current landscape properly. Grady Booch (co-creator of UML) has this to say about AI: this is a shift of the abstraction of software engineering up a level. It's very similar to when we moved from programming in assembly to structured languages, which abstracted away the machine. Now we're abstracting away the code itself. That means specs and architectural understanding are now the locus of work - which is exactly what Neil is claiming to be trying to preserve. I mean, yeah you can give that up to the AI as well but then you just get vibecoded garbage with huge security/functionality holes.

adamddev1

> By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself. I love these quotes. I got a much deeper, more elegant understanding of the grammar of a human language as I wrote a phrase generator and parser for it. Writing and refactoring it gave me an understanding of how the grammar works. (And LLMs still confidently fail at really basic tasks I ask them for in this language.)

mixtureoftakes

Ironically, the author doesn't realize that not everyone wants to learn everything. There is nothing wrong whatsoever with just getting things done.

7777332215

A large issue is that I am not going to give up my private source code/IP to be trained on. As an individual, not a billion dollar enterprise.

yunseo47

Coding without AI will likely take on the nature of leisure activities like cycling, jogging, horseback riding, or swimming. The invention of cars, trains, and ships didn't eliminate them. It's clear the latter are overwhelmingly more efficient, while the former now remain in the realm of hobbies or exercise. I also deliberately avoid using AI for some small projects and code them myself, but I consider this purely a hobby now, not work. As the original author pointed out, the advice to jog or ride a bike because driving all the time is bad for your health is sound, but the Red Flag Act has proven to be a foolish endeavor. I believe the same phenomenon will occur.

axegon_

I don't either. I'm genuinely considering registering an NGO dedicated to anti-slop. I tried AI and it didn't work on any account: bugs, edge cases that are never covered, horrible security, slow and over-complicated code.

The reason people keep saying that it does work is just the perception they had of programming: a lot of people were led to believe that anyone can be a programmer. Much like everyone believes they can be an artist, and spoilers - that's not true. I am saying this as the child of two artists: I am incapable of creating art, despite numerous swings in that direction when I was a child. It was just not for me. People looking from the outside saw the tons of apps pouring out over the years, some making billions, and thought "well, if those losers can do it, so can I". A 20-hour course on web development did not cut it, even though the hiring spree around COVID made many think that it did, without attributing it to the instant rise in demand for online services. But that, for better or worse, did not last.

So the alternative came in the form of AI slop, and now there is an active generation in their mid-20s thinking that seemingly functioning slop and stable software are the same thing, completely brushing off a century of collective knowledge in what we know as computer science. The metric became lines of code, although those of us who started off coding as children, when MySpace was a thing and GoTo was the best-performing search engine, are well aware that lines of code is the stupidest metric you can come up with. But slop machines produce so much of it, it's easy to see why many people are like "see? see this? it works! And you were gonna be doing this for 2 days as a caveman". Gladly, because two data pipelines that do the exact same thing take 4 days to run on slop code, whereas my caveman approach takes single-digit hours and does not produce several billion rows of unusable garbage.

Not to mention the countless times when someone has asked me to help them when they are stuck, and a simple question such as "where do you define the path to the output directory?" leads to 10 minutes of scrolling through a project that contains a total of 10,000 lines of code. The good news for us mortals is that this approach is starting to bite people back, and the companies that manage to survive the inevitable head-on collision will have to dig deep in their pockets to get people to clean up the mess.
