I am leaving the AI party after one drink

speckx 109 points 121 comments March 27, 2026
lara-aigmueller.at · View on Hacker News

Discussion Highlights (20 comments)

peacebeard

Pretty insubstantial, high-level tour of broad AI pushback. Goes from "It was just not 'elegant.'" [sic] to "I don't want to give up my brain" [sic].

haolez

I don't have a stake in AI, but more and more I see the following patterns:

- people who give in to AI do so because the technical merits suddenly became too big to ignore (even seasoned developers who were previously against it)
- people who avoid AI center their arguments on principles and personal discomfort

Just from that, you can kind of see where this is going.

mstank

While I applaud her and wish her well, writing like this reminds me of a couple of things. First, my aging father insisting on navigating with his unfortunately fading memory instead of Google Maps. Some people just won't pick up technology, out of habit or spite, even if it hinders them. Second, a quote I read here that I'll paraphrase: "you can be the best marathon runner in the world and still lose a race to a guy on a bike." Know the race you're racing; it often changes. I think it's valid and commendable to keep the old ways alive, but also potentially dangerous to not realize they're old ways.

legitster

> I don’t want to feel this kind of “addiction.”

> I don’t want to depend on something doing the work I earn money with.

> I don’t want to give up my brain and become lazy and not think for myself anymore.

There are a lot of good reasons to be skeptical of AI and not give up essential skills. But sometimes I want to shake these people by the shoulders. Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?

The entire point of civilization and society is that we are all "addicted" to technology and progress. But the invention of the plow did not, in fact, make us lazier or stop us from using our brains. We just moved on to the next problems. Maybe the Amish have it right and we should just be happy with a certain level of technology. But none of us have "lost" the ability to go backwards if we really wanted to.

You can finally ask a computer to think and solve problems, and it will! People act like this is a brave new world, but this is literally what computers were supposed to be doing for us 50 years ago!

If somebody finally came out with a fusion reactor tomorrow, I would half expect people to suddenly come out and say, "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."

cowlby

Who else struggles with both sides of this? My engineer side values curiosity, brain power, and artisanship. My capitalist side says it's always the product, not the process. My formula is something like this: product = money, process = happiness, money != happiness, no money = unhappiness. I think the optimal solution is min-maxing this thing: find the AI process that minimizes unhappiness and maximizes money.

trinsic2

Yeah, I think this article put a finger on what I was feeling after using Claude Code for the first time to convert a PDF to a Markdown document[0]. I think I will update my article with these thoughts. Thanks for touching on something I had been feeling. It also felt like I was cheating. I also used CC to update the version of my SSG, and that was fine, because I did not want to spend my time dealing with that. But there are certain projects I can see myself not feeling good about if I used the tool to help with them. [0]: https://www.scottrlarson.com/publications/publication-my-fir...

basket_horse

Why can’t people just acknowledge that AI is good at some things and bad at others? Why does every post say AI is either groundbreaking or terrible? Get a grip, people. It’s a tool.

jr3592

I love posts like this. AI is easily the most disruptive thing to hit our industry in over a decade, and it feels like one of those "this changes everything" moments. Reading how it's impacting others is cathartic and helps shape my own understanding. Here are some thoughts I have from reading this article:

> The AI can’t “see” the output, so some responsive refinements were just not correct. Within one CSS rule block there were redundant declarations.

This, 1,000%. Vibe coding has its issues, and for me personally, frontend polish, responsiveness, and overall quality is the #1 most glaring of them, one that simply re-prompting often can't solve. Even the ability to screenshot your UI hasn't solved things like glitchy animations. If you want to do anything even remotely above a junior level, like scroll animations, page transitions, etc., good luck. AI will certainly try to do it for you, but inevitably it will not work perfectly and you will need to manually refine or even re-write code. When the code base isn't yours, that makes these re-writes a lot less fun.

> The guilty conscience at the same time, like I was cheating. I realized that when I move on like this, my project will never truly feel like my own.

I've wrestled with this over the last year, and still do to some extent. I'm trying to shift my perspective and envision myself as a brand new developer, maybe 16 or 17 years of age. Would I think this isn't my work? I doubt it. I'd probably just (correctly) assume that this is the state of the art, this is how you do it. Unfortunately this doesn't fix a bigger problem: I just don't enjoy vibe coding as a craft. There's something special about sitting down in the morning with your coffee and taking on a difficult programming problem. You start writing some code, the solutions start to formalize in your mind, there's a strong back-and-forth effect where, as you code, the concepts crystallize further. Small wins fuel a wonderful dopamine-hit experience: intellisense completions, compilations, page refreshes, etc. These are now all replaced with dull moments, often waiting for the agent to return its response, which you then read.

> I’m curious (and a little bit scared) to see where we will go from here. I hope that in the end I can be part of a community that values craftsmanship, individuality and honest, high-quality work.

I really hope so too... But speaking honestly, I think this ship is sailing away quite quickly. Time is money, and it always has been this way. Very few organizations can afford the luxury of time when building, designing, etc. I see no chance of this genie going back in the bottle, and I believe it has fundamentally changed (and will continue to change) the nature of our work. Over time, as these models improve, there's a chance this could dramatically reduce the overall need for developers. It will start with low-level teams, as we're seeing already, but could expand. I have been saying this to everyone: what's your exit strategy? I'm not saying you need to panic, but you need a plan for what happens if/when salaries tank dramatically. I hate to be "that guy," but in life I've found that expecting the worst isn't always a bad thing. Keep your mood up, prepare for the worst possible outcome, and be pleasantly surprised if that's not what happens.

JPKab

"I'm not going to use this technology that obviously enhances my productivity because <insert emotional subjective reasoning that no customer would ever care about here>." I think a lot of people have forgotten why we actually get paid to write code. The person who wants an automated billing system doesn't care if you hand-typed it or not, or if the CSS that would have taken 2 hours to write took 8 seconds via an AI plus 60 seconds of you tweaking a border you didn't like. They just want their billing system. And if you are the person that takes 20x longer to build it, you're going to quickly get outcompeted. Sorry.

rspoerri

Human thinks eating food from fire is bad because it looks charred and you might burn your fingers doing so. /s

I remember the time when people insisted that they would never use a mobile phone. I remember the time when people didn't understand my presentation about the magical "internet" (8th-grade school, '94).

yomismoaqui

I would pay money to read rants on a forum like Hacker News, but from 190X, when cars started sharing the roads with horse carriages. I bet they would look something like the posts we are seeing today from developers about agentic AI.

andai

Reposting a comment (and one of the replies) since it's relevant:

---

It occurred to me on my walk today that a program is not the only output of programming. The other, arguably far more important output, is the programmer: the mental model that you, the programmer, build by writing the program. And -- here's the million-dollar question -- can we get away with removing our hands from the equation?

You may know that knowledge lives deeper than "thought level"; much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah, that makes sense," and expect to do well on the exam. You need to be able to produce it. (Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you were still able to punch it into the dialpad, because the muscle memory was still there!)

The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.

See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc) https://www.youtube.com/watch?v=ZSRHeXYDLko

---

Munksgaard 1 day ago: Peter Naur had that realization back in 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf

TaupeRanger

> I don’t want to feel this kind of “addiction.”

Getting a feeling of "wanting to keep going" with something does not automatically make it an "addiction."

> I don’t want to depend on something doing the work I earn money with.

A tale as old as time, and a valid feeling, though not particularly helpful to dwell on, since the technology will never go away and will never get worse than it is right now.

> I don’t want to give up my brain and become lazy and not think for myself anymore.

Your brain will think about other important things, and you don't need to become "lazy" just because a machine is doing something that used to require more effort on your part.

> I enjoy technical discussions with (human) co-workers.

So what? You can still have those discussions.

> I enjoy reading blog posts and tutorials and learning from other developers.

So what? You can still read those blogs, but the subjects might shift away from coding minutiae to other topics.

> I want to learn and grow and become better at what I am doing by trial and error and mistakes I make all by myself.

Going forward, that trial-and-error process will start to happen more at the product/project level rather than the source-code level.

> I don’t want to be part of a trend/hype destroying our planet even faster than we already do without it.

It isn't. https://blog.andymasley.com/p/the-ai-water-issue-is-fake

allenrb

For what it’s worth (i.e., absolutely nothing), I agree with her 100%. I didn’t get into this field in order to prompt an AI to take care of the details; I got into it because I love the details. I’m a strong performer on a good team at a company many people would want to work at… and I know the clock is ticking. Sooner or later, I will be too slow. I’m not going to claim that this is the wrong way to go. It’s obviously the future, and the future doesn’t care what allenrb does or does not want. I’m somewhat hopeful that power and cooling requirements will come down by orders of magnitude over time, reducing the environmental damage. The fact is, I love what I’ve been able to do “the old way” and just don’t feel the urge to move on. So it goes.

par

Any sufficiently advanced technology is indistinguishable from magic. -Arthur C Clarke

api

I'm a little tired of the environmental argument against AI. It feels contrived, like people are fishing for a "problemism" to use to oppose it and avoid harder discussions.

Let's compare AI to one typical 20-mile round-trip commute. I asked Gemini and Claude and compared the results to see if they looked reasonable, but feel free to check. One ~20-mile round-trip commute: about 5,700 Wh in an EV, about 27,000 Wh in a gas car (due to thermal efficiency). Compared to the EV, that's about 1,400 ChatGPT queries, 2,800 AI code completions, or 380 AI image generations. Ordering lunch on DoorDash uses the same power as days and days of very heavy AI usage, and that's if the dasher is driving a very efficient car. If they're driving an inefficient gas car, it's like weeks of heavy AI usage.

Ultimately, what matters is where we get our power. If we are getting it from CO2-emitting sources, what we do with it after that is not relevant. Make AI memes? Order burritos? Boil spaghetti? Who cares. The solution is to replace CO2-emitting sources with cleaner ones.

I also think people are avoiding the big fat elephant: wealth inequality. The whole problem with AI that bothers people is loss of jobs and possible wage suppression. The problem isn't AI; it's inequality, and the fact that our system is basically regressive at this point, with wealth being actively transferred upward. But that's a hard, complicated discussion and involves confronting powerful forces. It's easier to make stuff up about AI being some uniquely bad energy or water waste when it's not. This is really what "problemism" is all about: using a contrived, exaggerated, or mis-attributed problem to avoid a hard or complicated conversation.
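The commute comparison above can be sanity-checked with quick arithmetic. A minimal sketch; the per-use Wh figures are rough assumptions back-derived from the commenter's own ratios, not measurements:

```python
# Back-of-envelope check of the commute-vs-AI energy comparison.
# All per-use energy figures below are assumptions, not measurements.

COMMUTE_EV_WH = 5_700    # ~20-mile round trip at ~285 Wh/mile
COMMUTE_GAS_WH = 27_000  # same trip in a gas car (lower thermal efficiency)

# Assumed per-use costs, chosen to be consistent with the comment's ratios:
USAGE_WH = {
    "ChatGPT queries": 4.0,
    "AI code completions": 2.0,
    "AI image generations": 15.0,
}

for name, wh in USAGE_WH.items():
    per_commute = COMMUTE_EV_WH / wh
    print(f"{name}: ~{per_commute:,.0f} per EV commute")

print(f"Gas/EV energy ratio: ~{COMMUTE_GAS_WH / COMMUTE_EV_WH:.1f}x")
```

At these assumed figures, one EV commute corresponds to roughly 1,425 chat queries, 2,850 completions, or 380 image generations, in line with the comment's 1,400 / 2,800 / 380, and a gas car multiplies everything by roughly another 4.7x.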

slumpt_

To be clear, the work you're doing is only relevant to the extent it produces product. Nobody cares if you use your brain or not; they do care if you're efficiently delivering reliable product.

firmretention

I've found two personal use cases for LLM-generated code:

(1) I have an idea for some app, but either I feel it won't be useful enough or save me enough time to justify developing it, or I simply don't find the problem interesting enough to be motivated by it. In that case, a vibe-coded tool is perfect. It generally does one simple thing, and I don't care about long-term maintenance, because it just needs to keep doing that thing.

(2) Adding a feature to an open-source project. Again, it's a case of "I want this feature, but am not willing to spend the time needed to implement it." Even a relatively simple open-source project can take a day or two just to gain a basic understanding of the code and where I need to make the changes. Now I can often get a functioning vibe-coded implementation within a few hours.

(2) leaves me with some unsettling feelings about how this will affect the future of open-source software. Some of the features I've implemented this way may very well be useful to other users, but I can't in good conscience just dump a vibe-coded pull request on a project and expect the maintainers to do the work of vetting it. And if I didn't have the energy to implement the change myself, I'm definitely not going to bother going through all the LLM-generated code and cleaning it up to the standards of the project. Whereas before I didn't have a choice, and the idea of getting the change ready for a PR was much less daunting, since I understood the problem space and solution well.

So, at least for myself, I can see a future where many of the apps I use are bespoke forks of popular applications. Extrapolate that to many, many people, and an interesting landscape emerges.

rickdeaconx

The idea of getting information from other engineers relies on the idea that the other engineers aren't already complacent with AI.

aeternum

A lot of this is just wrong. AI can now see the output. It's interesting that highly flawed opinion pieces like this are so popular.
