I'm a 30-year veteran SWE, and my industry is currently overrun with addicts

farmerbb 16 points 7 comments April 09, 2026
old.reddit.com · View on Hacker News

Discussion Highlights (3 comments)

sjdv1982

Interesting to hear the industrial SWE perspective; it is very different. I am a scientific research engineer (bioinformatics), and here no one cares much about covering all the possible code paths. What we care about is whether the code computes "the correct thing", i.e. that it represents the underlying science. No such guarantee with LLMs. But no such guarantee without LLMs, either (the "code growing above our heads" has happened already, a long time ago). Still, I would say that LLMs are a big net positive for us: they are better at checking such things than we are.
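The distinction above — validating against the science rather than against code paths — can be sketched with golden-value tests: values worked out by hand from the underlying definition, independent of how the code was written. The function and test values here are hypothetical illustrations, not anything from the thread.

```python
# A minimal sketch of "correctness against the science" rather than path
# coverage: check a toy bioinformatics function against hand-computed
# golden values. The function and values are illustrative assumptions.

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    if not seq:
        raise ValueError("empty sequence")
    return sum(base in "GC" for base in seq) / len(seq)

# Golden values derived by hand from the definition of GC content,
# so they hold whether the implementation was LLM-written or not.
assert gc_content("GGCC") == 1.0
assert gc_content("ATAT") == 0.0
assert abs(gc_content("ATGC") - 0.5) < 1e-12
```

The point is that such checks exercise the scientific contract, not the branches, so they survive a rewrite of the implementation.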

aurareturn

For some context around this sub, r/BetterOffline follows Ed Zitron, who is an AI denier, AI skeptic, or whatever you want to call him. He's on the extreme end. They basically deny everything positive AI brings. If AI cures cancer, they'll say AI hasn't cured aging yet, so it's still useless. If AI solves a math conjecture, they'll say AI hallucinated that one time, so we can't use it. The goalposts keep moving. It's the opposite of r/accelerate. When I read the sub, I can't help but get a cult-following feeling. They'll twist facts and bend them to their beliefs. It's not much different from people who seriously think the Earth is flat, in my opinion.

cyanydeez

Too much lightning rock. Determinism is what CPUs excel at; when they don't, it's because of bugs in either the hardware or the software. In theory, those can be contained, but the illusion that there's some third magic is silly.

LLMs are problematic because of the addiction, yes, but more so because they read Stack Overflow and don't discard all the hacks, so those will surface. When you tell it "don't make mistakes," you might be pointing it at those underlying comments that say "OP, you forgot to..." or "the current API is.." or "it's 2025, we can now do x". The point on the nose is that the LLM is working inside the branching logic, but it's not going to follow control flow. It has zero idea how anything will compile beyond a few ifs. And only if you can slice and dice the proper context like a DAG, where it's not poisoned by irrelevant but badly named methods or constants polluting its scope.
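The "slice context like a DAG" idea can be sketched concretely: build a call graph from a module's source and keep only the functions reachable from the one being edited, so similarly named but unrelated helpers never enter the prompt. This is a rough illustration using the stdlib `ast` module, not anyone's actual tooling; the module source and function names are invented for the example.

```python
# Sketch of DAG-based context slicing: extract a call graph with `ast`,
# then select only functions reachable from the edit target.
import ast

SOURCE = '''
def parse(raw):
    return clean(raw).split(",")

def clean(raw):
    return raw.strip()

def parse_config(path):   # badly named near-duplicate we want excluded
    return open(path).read()
'''

tree = ast.parse(SOURCE)
funcs = {n.name: n for n in tree.body if isinstance(n, ast.FunctionDef)}

# Edges: function name -> names it calls directly (simple-name calls only).
calls = {
    name: {c.func.id for c in ast.walk(node)
           if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
    for name, node in funcs.items()
}

def reachable(root: str) -> set:
    """Depth-first walk of the call DAG from `root`, staying in-module."""
    seen, stack = set(), [root]
    while stack:
        fn = stack.pop()
        if fn in seen or fn not in funcs:
            continue
        seen.add(fn)
        stack.extend(calls[fn])
    return seen

# Context for an LLM asked to edit `parse`: only parse and clean survive;
# the badly named parse_config never pollutes the prompt.
print(sorted(reachable("parse")))  # ['clean', 'parse']
```

This only follows direct calls by name; real slicing tools also track imports, attribute access, and data flow, but the principle — reachability over a dependency graph, not keyword similarity — is the same.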
