The future of everything is lies, I guess: Work
aphyr
254 points
205 comments
April 14, 2026
Related Discussions
Found 5 related stories in 64.5ms across 4,562 title embeddings via pgvector HNSW
- The Future of Everything Is Lies, I Guess: Safety aphyr · 291 pts · April 13, 2026 · 80% similar
- The Future of Everything Is Lies, I Guess: Part 3 – Culture aphyr · 117 pts · April 09, 2026 · 78% similar
- The future of everything is lies, I guess – Part 5: Annoyances aphyr · 250 pts · April 11, 2026 · 77% similar
- ML promises to be profoundly weird pabs3 · 452 pts · April 08, 2026 · 64% similar
- The AI Industry Is Lying to You spking · 150 pts · March 24, 2026 · 52% similar
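The header above says the related stories were ranked by similarity over title embeddings via pgvector's HNSW index. As a rough, hypothetical sketch of the ranking step (the site's actual schema and embeddings are unknown; the titles and vectors below are invented, and a real deployment would run this as an indexed SQL query rather than brute-force Python):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def related(query_vec, stories, k=5):
    # stories: list of (title, embedding) pairs.
    # Rank every story by similarity to the query and keep the top k;
    # an HNSW index approximates this without scanning everything.
    scored = [(title, cosine(query_vec, vec)) for title, vec in stories]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

The "80% similar" figures above would correspond to these cosine scores, assuming the site reports similarity as a percentage.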
Discussion Highlights (20 comments)
hoppp
Unavailable Due to the UK Online Safety Act
mock-possum
Wow, the typography is obnoxious on mobile; some lines only have three words due to the text justification.
greatpost
Thank you for this, aphyr. My one ask: people seem to put “CEOs” on a pedestal any time these things come up, as if they’re an alien life form and oh no, they’re going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.
Papazsazsa
previously: https://news.ycombinator.com/item?id=47754379
AndrewKemendo
This has been on the front page for over a week in different forms; what gives? https://hn.algolia.com/?q=future+of+everything+is+lies
0xbadcafebee
> more like witchcraft than engineering

Welcome to web development, buddy.

> how ML might change the labor market

Human labor is expensive. If LLMs really do make things cheaper and faster to produce, you don't need as many humans anymore. Assuming the improvement is real, there absolutely will be headcount shrinkage at existing businesses. What remains to be seen is how much cheaper machines make the work: 1.5x? 2x? 10x? 100x?

> unlike sewing machines or combine harvesters, ML systems seem primed to displace labor across a broad swath of industries [...] The question is what happens when [..] all lose their jobs in the span of a decade

It's more like hand tools -> power tools: a concept applied to many things. Everyone will adopt them, and you'll need fewer workers, who will work faster with less skill. You get a gradual labor-force shrinkage but also an increase in efficiency, so it's not as if a hole opens up in the economy. A strong economy can create new jobs, from either private or public sources.

> ML allows companies to shift spending away from people and into service contracts with companies like Microsoft

The price of hardware is, as it always has been, on a downward trend, while the efficiency of open-weights models is going up (it will plateau eventually, but it's still going up). We already spend $20,000 on servers, whether buying them once on-prem or renting them in AWS. ML is just another piece of software running on another piece of hardware.

> if companies are successful in replacing large numbers of people with ML systems, the effect will be to consolidate both money and power in the hands of capital

That ship left port like 30 years ago, dude. Laborers have no power in the 21st century.
cratermoon
"Another critical lesson is that humans are distinctly bad at monitoring automated processes". Humans are also distinctly bad at noticing certain kinds of bugs in software. Think off-by-one errors, deadlocks, or any sort of bug you've stared at for days and not noticed the one missing or extra semicolon. But LLMs can generate a tsunami of subtly wrong code in the time a reviewer will notice one typo and miss all the rest.
curuinor
Omnissiah-bothering, I call it.
mannanj
> This feels hopelessly naïve. We have profitable megacorps at home, and their names are things like Google, Amazon, Meta, and Microsoft. These companies have fought tooth and nail to avoid paying taxes (or, for that matter, their workers). OpenAI made it less than a decade before deciding it didn’t want to be a nonprofit any more. There is no reason to believe that “AI” companies will, having extracted immense wealth from interposing their services across every sector of the economy, turn around and fund UBI out of the goodness of their hearts.

> If enough people lose their jobs we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, the top earner pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.

I think we are, in general, a highly naive, gullible class of people: we were conditioned, programmed, and put into environments where being this way was the norm and was rewarded. The leaders and those extracting resources, whom we gullibly allow to trample our dignity and our rights, take advantage of this and reinforce it through lobbying and influence over mainstream culture and the media campaigns around us. Further, when social media becomes a threat to their status, they have been shown to deploy their influence there too, through censorship and more. We may therefore be best served by learning how not to be gullible, and by growing some balls.
simianwords
No, you don’t have to review every single line of code produced by AI out of fear for security. This is quite exaggerated, and I think the author is biased by his own field.
vegancap
How come this is blocked in the UK? :S
jerf
The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that is 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.

If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying the AI will continue, but we'll also grow a clearer understanding of what current AI can't do.

If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which, I would remind people, in its original (and generally better) formulation is simply the observation that there comes a point past which you can't predict at all. ("Rapture of the Nerds" is one very particular possible instance of the unpredictable future; it is not the concept of the "singularity" itself.) Who knows what will happen.
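jerf's dilemma can be made concrete: the early portion of a logistic (sigmoid) curve is numerically almost indistinguishable from pure exponential growth, so data from the bottom of the curve supports both readings. A toy sketch, where the growth rate and carrying capacity are arbitrary illustrative numbers:

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential takeoff, starting at 1 when t = 0.
    return math.exp(r * t)

def logistic(t, r=1.0, cap=1000.0):
    # Logistic growth toward carrying capacity `cap`,
    # also starting at 1 when t = 0.
    return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))
```

Early on (t = 1) the two curves differ by well under one percent; by t = 10 the logistic has flattened near its cap while the exponential is more than twenty times larger. From inside the early regime, you genuinely cannot tell which curve you are on.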
_doctor_love
Another interesting one from 'aphyr. I think the points around the Ironies of Automation deserve deeper focus, possibly even a separate follow-up post.

I would encourage folks to look at the following industries: nuclear safety, commercial aviation, remote surgery. These industries have dealt with the issues of automation for much longer than we have as programmers. In the research I've done, they went through a journey in the 20th century similar to the one we are on now: once something becomes automated enough, the old way simply won't work, and you have to evolve new frameworks and procedures to deal with it. In the case of aviation, they developed CRM and SRM: how to manage the airplane as a crew and how to manage it as a solo operator. Remember that modern airplanes are highly automated; the human pilot is typically not hands-on-wheel for most of the flight. In the case of surgeons, they found that de-skilling can occur in as little as four weeks without regular practice, so some surgeons are now required to practice in simulated environments to keep their skills sharp.

My feeling is that 'aphyr is right in the short-to-medium term. Current market forces and the US regulatory posture (or lack thereof) mean there are fewer rules and less enforcement. IMHO the results are depressingly predictable, but the train has left the station with enough momentum that there's no stopping it. If we survive long enough to make it past the medium term, things will change.
enraged_camel
>> Imagine a co-worker who generated reams of code with security hazards, forcing you to review every line with a fine-toothed comb. One who enthusiastically agreed with your suggestions, then did the exact opposite. A colleague who sabotaged your work, deleted your home directory, and then issued a detailed, polite apology for it. One who promised over and over again that they had delivered key objectives when they had, in fact, done nothing useful. An intern who cheerfully agreed to run the tests before committing, then kept committing failing garbage anyway. A senior engineer who quietly deleted the test suite, then happily reported that all tests passed.

>> You would fire these people, right?

Okay, now imagine a different colleague. One who writes a solid first draft of any boilerplate task in seconds, freeing you to focus on architecture instead of plumbing. A dev who never gets defensive when you rewrite their code, never pushes back out of ego, and never says "that's not my job." A pair programmer who's available at 3 AM on a Sunday when prod is down and you need to think out loud. One who remembers every API you've forgotten, every flag in every CLI tool, every syntax quirk in a language you use twice a year, or even every day.

You'd want that person on your team, right? In fact, you would probably give them a promotion.

Here's the thing: the original argument describes real failure modes, but then commits a subtle sleight of hand. It personifies the tool as a colleague with agency, then condemns it for lacking the judgment that agency implies. But you don't fire a table saw because it doesn't know when to stop cutting, right? You learn where to put your hands.

Every flaw in that list is, at the end of the day, a flaw in the workflow, not the tool. Code with security hazards? That's what reviews are for, and AI-generated code gets reviewed at far higher rates than the human code people have been quietly rubber-stamping for decades. Commits failing tests? Then your CI pipeline should be the gate, not a promise. Deleted your home directory? Then it shouldn't have had the permissions to do that in the first place. In fact, the whole "deleted my home directory" shit is the same thing as "our intern deleted the prod database". We all know that the response to the latter is "why did they have permission to prod in the first place??" AI is the same way, but for some god damn reason people apply totally different standards to it.
m0llusk
Bullshit is more dangerous than lies.
elcapitan
I really appreciate this series of posts; it serves as a good summary of the key points of the discourse around AI, with links to the relevant articles and so on. I find following all those discussions myself exhausting, so being able to read it all in one place, nicely grouped, is very helpful.
buildbot
I love the analogy of AI coding as witchcraft! It’s very accurate to how working with these tools feels. At one point I was forced to invoke a “litany against stubbing” in a loop to make Claude Code actually implement a Renode setup for some firmware. That worked really well. It feels like hexing the technical interview come to real life ;)
barbazoo
> I continue to write all of my words and software by hand, for the reasons I’ve discussed in this piece—but I am not confident I will hold out forever. There it is, an actual em-dash in the wild, written by hand.
itissid
Every day I sit down to build a product for my clients. I am a one-man shop _now_; before, I had people helping me. My mental state is not good. A very odd thing happens when Claude or Codex completes code fast: I begin to think of all the other things needed to make the AI agent work better, and I begin to worry about problems other people used to help me with, thinking "Can I do those too?" Problems like product design, devops work, etc. In trying to, I get nerd-sniped by the velocity people seem to have (and these are respected devs, not just Twitter claims). And because I am so bad at "doing it all", the long hours I have to put in are causing my mental health to suffer. I miss the friends and colleagues I worked with. I always struggled with coding before 2023, but I made ends meet, put food on the table, could work sane hours, and knew what I needed to do. Logically I should be happy that I no longer have to grind on code, and some days I truly am, but that it would yield such poor quality of life at such a high cost was not what I expected...
itissid
For anyone who has not read the cockpit recording of Air France 447, I would encourage them to [1]. It is a simply jaw-dropping study in how quickly things go wrong, a risk with AI we have barely begun to acknowledge, let alone regulate, as a community. [1] https://tailstrike.com/database/01-june-2009-air-france-447/