Anthropic Update on Session Limits

chunkycapybara 39 points 12 comments March 26, 2026
old.reddit.com

Discussion Highlights (7 comments)

xvector

Anthropic is probably the only AI company actually trying to stop hemorrhaging money and reach profitability. They aren't far off: they burn a tiny fraction of the cash OAI does while achieving similar ARR. But as they tighten the belt, it's inevitable that companies like OAI come in and offer more heavily subsidized (unsustainable) inference to get people to switch, and then inevitably pull the same rug themselves. It'll be interesting to see how this plays out.

sunnybeetroot

I love the interface of Claude Code but with these limits I’d be willing to use Codex. Anyone know if it’s possible to use Claude Code with other provider subscriptions (not API usage costs)?

cyanydeez

This will just get worse, particularly when you consider just how shitty the US government is becoming at managing the basic necessities for stability. If you aren't planning a local LLM strategy, you're surely tying your lifeline to an anchor.

tim-star

That explains why I kept hitting limits this week.

_the_inflator

With the introduction of Opus 4.6, my bills went through the roof. I've never burned through budgets so fast with so few prompts. I'm using Codex more and more, because token usage is a black box, and I think over the next couple of months we'll see the usual three-tier model evolve: free, normal, luxury. 2027 will be the year of token regulation by administrations worldwide. Until then, take care not to get ripped off at the luxury tier.

lbreakjai

I cancelled my subscription two days ago. I'd been a customer since October last year. I could get a decent bit of work done on just the $20 subscription, but since this Monday I can barely get two prompts in before hitting my limit. Same codebase, same sort of prompt, same scale. I was already on the fence. Models like Qwen, Kimi, or GLM5 already go a very long way while being vastly cheaper, and the new OpenAI models feel equivalent but with higher limits. It's getting to the point where the right harness makes a bigger difference than the right model. I've been experimenting with a planner-executor-reviewer setup in opencode, and I'm starting to feel like multiple smaller models working together net me better results.

Insensitivity

Funny how, before the announcement, people who were experiencing this were being gaslit on various platforms into thinking they had a "skill issue" using Claude Code. And this was practically predicted and expected by so many people the second the off-hours increase was announced. Shoddy company.
