How the AI Bubble Bursts

martinvol 355 points 482 comments March 30, 2026
martinvol.pe

Discussion Highlights (20 comments)

256BitChris

I could see OpenAI hitting financial issues, which triggers a media-induced panic and leads people to claim the AI bubble has popped. However, the core utility of the best AI (read: Anthropic's, ATM, by miles) will still exist and be leveraged by those who have learned to use it well. I could also see exponentially declining power requirements offsetting the exponential-but-slower growth in AI compute demand, which would leave a lot of unused capacity in these massive data centers. I think of it like the old mainframes of the '70s, which took an entire city block to run; now we have the equivalent of millions, if not billions, of them in our pockets.

monegator

> How this affects you?

*checks list* ... nope, nothing will affect me either directly or indirectly. Let it happen sooner rather than later, and unleash the mobs on the tech bros who set the world on a course to make everybody's life more miserable. We'll still be here to grab the scrapped RAM and GPUs to train and infer local models, thank you very much.

general_reveal

HN is no longer a reliable place for the truth. Quite frankly, unless you are utterly self-educated, you are terribly vulnerable to this place. At this rate, I'd almost prefer to talk on a private mailing list with vetted resumes.

elorant

I feel that even if the bubble bursts, hardware prices will still take years to normalize. So no clear benefit for the average consumer here.

jqpabc123

Another possibility not really addressed here: local LLMs. AI on hardware you own and control, instead of a metered service provider. In other words, a repeat of the "personal computing" revolution, but this time focused on AI. TurboQuant could be a key step in this direction.

franze

.... so what? The technology exists, the models exist. Even when the bubble bursts, things will not revert to the state "before AI". Even if model development stopped today (not the worst thing that could happen), it would still be the most impactful invention since the printing press.

positron26

When will this concern farm end? The internet is ant-milling harder than a model gone psychotic on synthetic data. Call me when it's over. Back to the mines. The Vulkan only writes itself when prompted with well-conditioned problem statements.

nopinsight

> nobody is sure if even their metered pricing is profitable

This is most likely wrong. Lab executives insist that serving tokens is profitable. It's the cost of training next-gen models that requires them to keep raising ever-larger rounds. More importantly, many independent providers price tokens of open-weight models at a fraction of Anthropic's prices.

qoez

History doesn't have to repeat. There's barely anything else going on in terms of innovation, and AI is a real step-function technology. We might be overspending, but there's no way we're getting another AI winter like last time (remember that last time, investment in '90s AI had to compete for resources with the internet boom).

Chance-Device

From the beginning of this I've wondered the same thing: how do these companies justify spending such massive amounts now (and 3 or 4 years ago) when software and hardware efficiencies will bring down the cost dramatically fairly soon? They basically decided that scaling at any cost was the way to go. This only works as a strategy if efficiency can't work, not if you simply haven't tried. Otherwise, a few breakthroughs and order-of-magnitude improvements later, people are running equivalent models on their desktops, then their laptops, then their phones. Arguably, the costs involved mean that our existing hardware and software are simply non-viable for what they were and are trying to do, and a few iterations later the money will simply have been wasted. If you consider funnelling everything to Nvidia shareholders wasting it, which I do.

shubhamjain

> OpenAI is struggling to monetize. They turned to showing ads in ChatGPT, something Sam Altman once called a "last resort", while Anthropic is crushing them with the more profitable corporate customers and software engineers. Their shopping feature flopped and they shut down Sora, both supposed to be revenue drivers.

I don't think Sora was ever thought of as a "revenue driver", considering how notoriously expensive and unpredictable video generation via inference is. OpenAI is just a repeat of Uber, minus the scandals, in a different decade. Uber got itself into tons of transportation-related businesses on the assumption that it would all be viable "one day." Same stuff OpenAI is doing. I would say that once the bubble bursts, which is likely considering the geopolitical environment, OpenAI, Anthropic, and Alphabet are likely to be the winners, with a lot of small players at the tail end. Anthropic won over programmers, and OpenAI everyone else. For millions of people, AI = ChatGPT, so I would bet that OpenAI can still become profitable once they cut down their expenses.

schnitzelstoat

It's a winner-takes-all market, and everyone wants to be the next Google, not the next Lycos or AskJeeves. It'd be interesting to see what they spend all the money on, though, as we seem to be hitting diminishing returns, and I'm not sure the typical enterprise user really cares about small improvements on benchmarks. It seems like it'd probably be better to spend all that on marketing, free trials, exclusivity/bundle deals, etc. ChatGPT already has a strong advantage there, as it has so much brand recognition. I've seen lay people refer to all LLMs as ChatGPT, like my grandparents did with Nintendo and all video game consoles.

infecto

It's incredible how polarizing the AI rush is. I keep the perspective that the technology is an absolute step change, but I have no idea where the cards will fall. I take a lot of issue with this style of article; I get a sense that the authors are being overly defensive. The cost to serve tokens is absolutely profitable today, and that's been true for at least a year. What's unclear is how R&D and capex fit into the picture. I am not that pessimistic on this front either, though. For the data center build-outs, demand for tokens still exceeds supply. On the R&D front, most of us here on HN have benefited from decades of overinflated engineering salaries, often paid by companies that were not only unprofitable but usually had no plan for success. In this current rush, companies cannot keep up with demand; it's a much easier math problem when you have something people want (tokens) and you need to figure out profitability once R&D is included.

piker

> They lose a big customer for their cloud services. Even worse considering that now, using the AI they helped fund, everyone can compete with their sub-par products. GitHub is a good candidate for disruption, and that'd be just the start.

Look, I'm a Microsoft hater like the rest of us, but calling Microsoft's products sub-par discredits the author a good bit. I invite anyone who thinks this to try to compete with them. Go after something like Word, for example. Then prepare to be awed by what some of the most brilliant programming minds ever can produce after grinding for four decades.

joshstrange

> RAM prices are crashing because new models won't need as much

Reality begs to differ [0], and following the link for that text leads to an article [1] about Google's TurboQuant, which supposedly will lower RAM requirements. Whether that means RAM prices come down (as speculated, not reported, in the link) or the AI companies just do more things with their extra RAM is yet to be determined. The fact that this article links there with the text "RAM prices are crashing" throws the entire rest of the article into doubt for me. RAM prices are most certainly not crashing (yet), and treating it as a foregone conclusion because _one_ lab found gains could be made, without even reporting on the efficiency of their method, is just irresponsible. It's almost as bad as when LLMs link things to prove their point, you visit the link, and find it says nothing of the sort or even the opposite.

[0] https://pcpartpicker.com/trends/price/memory/

[1] https://tech.sportskeeda.com/gaming-news/how-google-s-new-tu...
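To be fair, the arithmetic behind "quantization lowers RAM requirements" is easy to check, even if the linked article never does. A back-of-the-envelope sketch (this is generic uniform quantization math, not TurboQuant's actual method, and the 7B parameter count is an arbitrary example):

```python
# Rough RAM needed just to hold model weights at various precisions.
# Real memory use also includes activations and KV cache; this is weights only.

def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    """Memory in GB to store `params` weights at the given bit width."""
    return params * bits_per_weight / 8 / 1e9

params = 7e9  # a hypothetical 7B-parameter model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weight_memory_gb(params, bits):.1f} GB")
```

So going from fp16 to 4-bit cuts weight memory by 4x (14 GB down to 3.5 GB for 7B parameters), which is the kind of gain that could matter for demand, but it says nothing about prices on its own.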

Aurornis

This article tries to build on a lot of half-truths or incorrect facts, like this:

> OpenAI is struggling to monetize. They turned to showing ads in ChatGPT

The ads aren't going into your paid plans (except maybe a highly discounted tier, depending on the market). The ads are a play to offer a free version, and having an ad-supported free tier isn't new. The discussion about being unprofitable also repeats the reductionist view that these companies are losing money and therefore the business model doesn't work. This happens in every VC cycle where writers don't understand that funded companies are supposed to lose money while they grow. That's what the investment money is for. We have very strong indicators that inference is not a money loser for these companies and is likely very profitable. They should be spending large amounts of money on R&D to get ahead and try new things while they're serving up tokens. The "but they're losing money" argument never seems to be brought out against competitors that literally give away their models for free, and for which we can calculate the cost of serving 400B-1T-parameter open-weight models.

EternalFury

If somehow recovering the capex is not counted, and somehow the cost of developing future models is not counted, then yes, the inference costs of current leading models allow a profit. But those things are tied together. Even xAI, which now has a reasonably competitive model, is struggling to achieve PMF. Meta is in shambles because their models have underperformed for years now.

Havoc

A government bailout seems like the only way out.

richard___

Complete bs.

agentultra

It sounds like most of the data centers promised in 2025 and 2026 are not even built yet, and most of the GPUs bought haven't even been installed. If it does all go down in flames, even the floor value of the hardware isn't going to be much. I can't predict the future, but it already smells a lot like a recession under way that is bigger than the sub-prime crash.
