The AI Great Leap Forward

jodah 89 points 43 comments April 08, 2026
leehanchung.github.io · View on Hacker News

Discussion Highlights (13 comments)

supliminal

What’s the story with Klarna? Any details around it?

mholm

Decent sentiment and analogy, but writing this with AI, complete with hackneyed examples, undercuts the point.

IanCal

> Today’s backyard AI looks like AI. It is not AI.

Getting real tired of people new to AI thinking only recent LLMs are AI somehow. BoW was a pretty solid technique, and that only requires you to learn how to count to one.

arisAlexis

The outputs were wrong two years ago, maybe.

348asGaq7

This is a great comparison. The US-dominated software industry is centrally planned and in many ways run like a communist country, taking into account the whims of the current chairman in Washington. If the chairman dictates DEI, DEI it is. Most software developers put up the proper flags in their Twitter "bios" and purged opponents. The same developers now queue to work for Zuckerberg's "male energy" company. If the chairman and the industry dictate AI, AI it is. The same people who said girls and coal miners have to code now talk about efficiency and products and rationalize layoffs. This is the product of an industry that has been dominated by bullshitters for at least two decades.

skybrian

If you want to show that there's a risk of disaster, you need to do better than a silly analogy. Companies often start expensive projects that fail, then pick themselves up and move on. Big, profitable companies can afford bigger failures. Google has had a slew of failed projects, and Meta's metaverse stuff tanked, and they're still fine. They can afford to experiment. So which companies are betting so big that it might actually threaten them? Oracle, maybe?

vingilot

Note that the author of this blog post is also the author of a soon-to-be-published Manning book on safely implementing AI systems.

gbnwl

Liked the article in general, but:

> These apps will win awards at the next all-hands. In two years they’ll be unmaintainable tech debt some poor soul inherits and rewrites from scratch.

That's a huge assumption/prediction that I think is actually just wrong. There's this weird assumption from a certain crowd, never justified or explained, that tech debt accrued by AI is now, and will forever be, impossible for AI to address, and will for some reason require humans to fix.

Working at pace with agents, I accrue tech debt every day, then go through the code nightly, again with agents, to clean and tidy everything up. The more I see this view espoused, the more bizarre it seems. People's assumption seems to be "if AI couldn't one-shot this perfectly the first time, then it's useless to have it go back over the codebase and identify and address issues." This doesn't match my personal experience at all; second or third passes over code with CC or Codex are almost always helpful and weed out critical issues. But I'm open to hearing from the rest of the HN crowd on their experiences with this.

deltamidway

Great rant! The claw-based propaganda posters make me smile.

cynicalsecurity

Oh god, don't get me started on this. The article goes full opera-level tragedy, like we're all marching into some corporate gulag where AI eats our souls and the lights go out forever. "The famine comes later" my ass. It's peak doomer porn, written to make you feel like the sky is falling instead of just another round of executive circle jerking.

The corporate world has always been 80% lies, fake KPIs and theatre. "Synergies", "disruptive innovation", "digital transformation": same shit since the 90s. Managers don't give a flying fuck about your clever moat. They wake up one day, get a spreadsheet from McKinsey saying "cut 15%" and boom - your undocumented wizardry gets deleted along with your badge. Nothing personal, just Excel doing what Excel does.

Yes, the corporate bullshittery has been turbocharged with AI now. But it's nothing new and nothing all that tragic. At the very least, the same AI can help me finally release personal projects that have been collecting dust for years. Who knows what the future will bring. I'd be much more worried about an oil supply chokehold than about the AI turbo circus in the corporate world. No oil means not having enough food tomorrow, or medical supplies. My child might die because of that. But AI temporarily causing perturbations at work is just another round of corporate theatre. Been there many times.

The employment danger is real, but not apocalyptic. Some jobs will evaporate, sure. But as the same article states, one thing ("AI know-how") has simply replaced another ("domain knowledge siloing"). The corporate machine still needs warm bodies for the messy human parts: sales, talking to customers (customers hate talking to a robot, what a fucking surprise), covering ass. I would say covering ass is the most important one, along with delegating project management to someone lower on the corporate hierarchy, so upper management wouldn't have to work and could just keep asking for status updates.

They will always need someone to type the actual AI requests. It's not like top management or a VP would ever do that, nor would they ever run it automatically, since AI can delete production (it's happened many times), and they don't want to be the scapegoats. So yeah, the article is overdramatic trash for clicks. AI is just another round of that circus. The "famine" won't be real; it'll be a bunch of overpromises, just as usual. Same as it ever has been.

tim333

Comparing AI to steel production in the Great Leap Forward seems unfair. It's not some communist plan - it's a capitalist free-for-all, similar to the industrial revolutions in the UK and US. It won't lead to a famine; it'll lead to the chaotic creative destruction capitalism usually produces.

operatingthetan

> A prompt template behind a REST endpoint is not a model.

Not pulling any punches there. It does feel like 95% of the "AI industry" consists of wrappers and associated tools.

sinuhe69

The organisation where my wife works ordered all mid-level leaders and above to take a mandatory AI course, mostly remote, with two days on-site to present their capstone projects, costing 2x K USD (not including flights). The capstone projects sounded impressive, and the course was celebrated as a resounding success. However, one year later, as far as I know, none of the capstone projects have been implemented, including my wife's and others I know of. Having looked at some of the project descriptions, I realised that they would need far more manpower, specialist expertise and time to implement them with a moderate chance of success. I believe this is not uncommon in large organisations worldwide.

BTW, it's great that somebody has drawn a comparison with China's Great Leap Forward. Not many people know about it, and it always serves as a stark reminder of how crazy state-ordered "progress" can be.
