We will all work for AGI
ajax33
16 points
15 comments
April 02, 2026
Related Discussions
Found 5 related stories in 54.0ms across 3,471 title embeddings via pgvector HNSW
- We might all be AI engineers now sn0wflak3s · 188 pts · March 06, 2026 · 54% similar
- Maybe the G in AGI stands for Gemini speckx · 13 pts · March 10, 2026 · 53% similar
- What if AI just makes us work harder? paulpauper · 42 pts · March 06, 2026 · 50% similar
- A.I. Helped One Man (and His Brother) Build a $1.8B Company jbredeche · 50 pts · April 02, 2026 · 49% similar
- What 81,000 people want from AI dsr12 · 22 pts · March 19, 2026 · 49% similar
Discussion Highlights (8 comments)
mkdelta221
Fascinating article. Everyone knows they will be replaced by AI but nobody wants to talk about it.
maplethorpe
> What Moravec was describing was a difference in how skills are stored, not how complex they are. Physical skills are encoded in the body, almost impossible to put into words. But knowledge work, the analysis, the diagnosis, the strategy, the legal argument, is stored in text. Humans wrote it all down. Every framework, every protocol, every insight accumulated across every profession for centuries, captured in documents, papers, books, case files, and reports.

I don't think this is true. Text is a lossy form of communication. There's no way to get the sum of my knowledge from my brain over to your brain purely through text. Also, anyone who has ever had to deal with incomplete documentation knows that humans did not, in fact, write it all down.
bamboozled
There is a jarring assumption in this article, which is that LLMs are performing much, much better than they are. They are awesome tools, but they aren't so great that I'd be replacing my accountant with anything like an LLM. Personally, as a software engineer, the more I use these tools, the more I realize I need to understand software better than I ever have before to actually be proficient with them. Maybe we're agreeing to some degree, because the author seems to think there will still be a need for certain skill sets, even with AGI, but I think we're still in the figuring-shit-out phase. If anything, they've made my job much, much more stressful: I'm dealing with 10x the amount of code to reason about than before, the expectation to deliver faster keeps growing, and people are smashing out code without properly understanding the business problems because the cost of implementing a feature is so low.
effable
The core idea of ASI arriving before AGI seems to be true: we have already seen that through chess programs, LLMs, etc. However, what caught my eye, and what to me does reflect the lens through which the author sees the world (unless I am completely misunderstanding their point), is this: "Most of the world's important problems have never been modelled at the precision AI requires to act on them. Pollution, traffic, healthcare, taxation, public infrastructure, water distribution." Pollution, traffic, healthcare, and public infrastructure, however, are not really problems that require "clever" solutions; rather, they are problems of political will, regulating industry, and moving to cleaner energy sources. For example, we have known about human-caused climate change for decades, and carbon emissions are only just hitting their peak now.
guillego
There might be a really good conclusion in this article, but I had to give up halfway through. The LLM writing, chapter after chapter, is unbearable: short sentences leading into paragraphs that read like LinkedIn posts.

> AlphaFold solved protein structure prediction, a fifty-year problem, not in decades but in a fraction of the time traditional research would have required. Not by thinking like a biologist. By finding patterns at a scale no human could reach. That is a domain detonation. Not progress. A before-and-after. The same logic is now moving through radiology, legal research, financial analysis, drug discovery, software engineering.

If you have good ideas, good insights, and good stories, they deserve your own words. If you can't respect your own ideas enough to spend time writing them down and forming them into paragraphs and sentences, why should I respect them any more?
rembal
I love the water/ice metaphor, but the author tends to completely ignore the physical world. Take the cardiologist example: we all know what happened to the radiologist prediction. Or the example of defence (or war) becoming mostly a case of having a better AI model: well, try to win without solid, distributed production capabilities, energy access, and safe supply chains, from a geographical disadvantage. Embodiment is coming, but it will require moving a lot of atoms. Also, even in text-heavy domains, a lot of knowledge is not written down, often on purpose (especially in legal), and that's the juicy part...
effed3
AGI will earn a salary, pay taxes, buy goods? So this is maximizing for efficiency and production, but who will buy all of it? Maybe pushing everything possible to AI/AGI will end in a global deep recession or worse; it seems that course is unfolding before the eyes of us all.
beej71
"She is also one person, available to perhaps fifteen clients a day, charging accordingly." Sounds like this pays nothing. Are we all to be in charge of billion dollar companies with no employees? Just sitting around thinking? I have an alternate future in mind where that happens to a relatively few people and then the guillotines come out.