The future of everything is lies, I guess: Where do we go from here?

aphyr 567 points 608 comments April 16, 2026
aphyr.com · View on Hacker News

Discussion Highlights (20 comments)

airza

I agree with the general sentiment that the structure of society is going to change, but I don't know what the satisfying solution is. It's hard to imagine not participating will work, or even be financially viable for me, for long.

poszlem

From the article: "I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis."

"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking - there's the real danger" - Frank Herbert, God Emperor of Dune

dfxm12

The idea that Claude might be able to help you change the color of your LED lighting, as a legitimate counter to things like a less usable world wide web, worse government services, the loss of human ability, etc., is excellent parody.

willrshansen

If there's too many lies, "source or gtfo" becomes more important

voidUpdate

> "Unavailable Due to the UK Online Safety Act. Now might be a good time to call your representatives." Having the "call your representatives" link be to your website as well isn't particularly helpful... I already can't get to it

catapart

The epilogue is what speaks to me most. All of the work I've done with LLMs takes that same kind of approach. I never link them to a git repo, and I only ever ask them to make specific, well-formatted changes so that I can pick up where they left off.

My general feeling is that LLMs make the bullshit I hate doing a lot easier: project setup, integrated theming, preparing/packaging resources for installability/portability, basic dependency preparation (vite for js/ts, ui libs for c#, stuff like that), ui layout scaffolding (main panel, menu panel, theme variables), auto-update fetch-and-execute loops, etc. And while I know they can do the nitty-gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than with them. With them it's a lot of "no, not that, you changed too much/too little/the wrong thing"; without them I just execute, because it's a domain I'm familiar with.

So my general idea of them is that they are "90% machines": great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it. Of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win.

So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? Does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly perfect text-matching, the better? Can I keep using these things and still use them well (direct them on architecture/structure) if I lose a grounded sense of what the underlying work is?

Good questions, as far as I'm concerned.

lukev

This is a must-read series of articles, and I think Kyle is very much correct. The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society. That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy. But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.

egonschiele

I've been thinking about this a lot recently, and I don't know if it is possible to stop. I've been thinking the most impactful thing would be to create open-source tools to make it easier to build agents on top of open-source models. We have a few open-source models now, maybe not as good as Gemini, but if the agent were sufficiently good, could that compensate? I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?

chungusamongus

Complaining about AI slop is starting to become its own kind of slop. There isn't anything novel in this little essay. It might as well have been written by AI, because I've seen this type of dude complain about this exact type of thing countless times at this point, and none of them have a solution other than empty moralizing or "call your representative" or whatever. None of that's going to work. Fortune, Gizmodo, The Verge, Ars Technica, etc. all circulate the same negative headlines and none of them have a solution, and their writers are probably going to be totally replaced by AI, so what difference does it make? They're just capitalizing on the negative sentiment and they have no intention of coming up with a solution. At that point it's just complaining, and I'm sick of it.

nipponese

The conclusion was the takeaway. Everyone is getting bumped up a skill notch, not just bozo liars.

SilverBirch

Frankly I think it's kind of childish to just put up a massive UK-wide block on your website. "Call your representatives" - ok dude, can I give you a list of things I want to change about your country's policies?

cm2012

This article is a good example of how ideology can lead people down irrational paths.

yanis_t

I read a couple of articles in the series and I still couldn't get what point the author was trying to make. Reads like, "let me give you 100 arguments why I think this is bad". Do LLMs lie? Of course not, they are just programs. Do they make mistakes or get the facts wrong? Of course they do, but no more often than a human does. So what is the point of the article? Why is my future particularly bad now because of LLMs?

analog8374

We've recreated pre-enlightenment intellectual culture. Authority and logical consistency matter. Reality doesn't.

nfornowledge

Rudolph built his engine, Henry built his car, Popular Mechanics published it. 2000 biofueling stations across the nation. All made illegal by special interests months before the article was published. Information didn't move fast enough to let the editors know that innovation was illegal.

gmuslera

The epilogue looked weak to me. The previous sections explored why it is essentially wrong to use current LLM technology (the answers can be wrong, or not even wrong) and why it has to be that way. The epilogue focuses more on (our) obsolescence in a paradigm shift toward widespread LLM use, not on whether they do their work right or wrong. And that should be the core. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it should be taken with big warning labels? Avoiding them because they do their work too well may be a global-system approach, but decision makers optimize locally, for their own budget/productivity/profit. If the objection is instead perceived risk, because they are not perfect, that is another thing.

yubblegum

I fear that outside of cataclysmic global warfare or some sort of Butlerian jihad (which amounts to the same) this genie is not going back into the bottle. This tech is 100% aligned with the goals of the 0.001% that own and control it. Almost all of the negatives cited by Kyle and the like-minded (such as myself) are in fact positives for them, in the context of massive population reduction to eliminate "useless eaters" and technological societal control over the "NPCs" of the world that remain, since they will likely be programmed by their peered AI that will do the thinking for them. So what to do entirely depends on whether you feel we are responsible to future generations or not. If the answer is no, then what to do is scoped to personal concerns. If yes, we need a revolution, and it needs to be global.

skyberrys

The reasons laid out in this article are why it's so important to share how we are using AI and what we are getting in return. I've been trying to contribute toward a positive outcome for AI by tracking how well the big AI companies are doing at being used to solve humanitarian problems. I can't really do most of the suggestions in the article; they seem like a way to slow progress. I don't want to slow AI progress, I want the technology we already have to be deployed for useful and helpful things.

abricq

> ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis.

Imagine starting university now... I can't imagine having learned what I did at engineering school if it weren't for all the time lost on projects, on errors. And I can't really believe I would have had the mental strength not to use LLMs on course projects (or side projects) when I had deadlines and exams coming, yet also wanted to be with friends and enjoy those years of my life.

ori_b

Some people like roasting marshmallows. Others think that setting the house on fire may have downsides.
