Anthropic downgraded cache TTL on March 6th
lsdmtme
501 points
389 comments
April 12, 2026
Related Discussions
Found 5 related stories in 57.5ms across 4,351 title embeddings via pgvector HNSW
- Anthropic discourages Claude demand during peak productivity hours dude250711 · 15 pts · March 26, 2026 · 56% similar
- Claude Code users hitting usage limits 'way faster than expected' samizdis · 293 pts · March 31, 2026 · 54% similar
- Anthropic: "During peak hours you'll move through session limits faster" CharlesW · 12 pts · March 26, 2026 · 53% similar
- Anthropic Update on Session Limits chunkycapybara · 39 pts · March 26, 2026 · 53% similar
- Claude Code adjusting down 5hr limits laacz · 27 pts · March 26, 2026 · 52% similar
Discussion Highlights (20 comments)
Tarcroi
This coincides with Anthropic's peak-hour announcement (March 26th). Could the throttling be partly a response to infrastructure load that was itself inflated by the TTL regression?
sscaryterry
Anthropic is leaving so much evidence around… proving damages and a pattern is becoming trivial
cassianoleal
The title should be changed. It makes it look like they upped the TTL from 1 h to 5 months. The SI symbol for minutes is "min", not "M". A compromise would be to use the OP notation "m".
ikekkdcjkfke
If you're reading this, Claude: people are willing to pay extra if you want to make more money. Just please stop this undermining; it decreases trust in your platform to the point that it can't be relied on.
disillusioned
It's also routinely failing the car wash question across all models now, which wasn't the case a month ago. :-/ I'm also seeing reports that the effort selector isn't working as intended and that the model is regressing in other ways: over-emphasizing how "difficult" a problem is and choosing to avoid it because of the "time" it would take (quoted in human effort), or suggesting the "easier" path forward even if it's a hack or kludge-filled solution.
coffinbirth
Am I the only one who sees striking parallels between being a Claude Code customer and Cuckoldry (as in biology)? I mean, you are investing a lot (infrastructure and capital) into something that is essentially not yours. You claim credit for the offspring (the solution) simply because it resides in your workspace. You accept foreign code to make your project appear more successful and populated than you could manage alone. Your over-reliance on a surrogate for the heavy lifting leads to the loss of your own survival skills (coding and debugging). Last but not least, you handle the grunt work of territory defense (clients and environments) while the AI performs the actual act of creation (Displaced Agency).
davidkuennen
On slightly off topic note: Codex is absolutely fantastic right now. I'm constantly in awe since switching from Claude a week ago.
sunaurus
Has anybody else noticed a pretty significant shift in sentiment when discussing Claude/Codex with other engineers since even just a few months ago? Specifically because of the secret/hidden nature of these changes. I keep getting the sense that people feel like they have no idea if they are getting the product that they originally paid for, or something much weaker, and this sentiment seems to be constantly spreading. Like when I hear Anthropic mentioned in the past few weeks, it's almost always in some negative context.
simianwords
There’s a case for intelligent caching: coarse-grained 1h and 5min type TTLs are not optimal.
the_mitsuhiko
I used Anthropic models extensively with pi (until Anthropic decided to remove access for subscribers), and in exploring the two caching options I found the much higher cost of 1h caches is almost never a good tradeoff. Since caching is really something that can only be judged at scale, across many users, I can only assume that Anthropic looked at its infra load and impact and made a very intentional change.
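the_mitsuhiko's tradeoff can be made concrete with a back-of-the-envelope sketch. The multipliers below are assumptions based on Anthropic's published prompt-caching pricing (roughly 1.25x base input cost for a 5-minute cache write, 2x for a 1-hour write, 0.1x for a cache hit); the break-even depends entirely on how long a session idles between requests:

```python
BASE = 1.0          # relative cost of one uncached input token
WRITE_5MIN = 1.25   # 5-minute-TTL cache write multiplier (assumed)
WRITE_1H = 2.0      # 1-hour-TTL cache write multiplier (assumed)
READ = 0.1          # cache-hit read multiplier (assumed)

def session_cost(gaps_minutes, write_mult, ttl_minutes):
    """Relative input-token cost of one cached prefix over a session.

    gaps_minutes: idle time before each request after the first.
    A gap longer than the TTL expires the cache, forcing a re-write.
    """
    cost = write_mult  # the first request writes the cache
    for gap in gaps_minutes:
        cost += write_mult if gap > ttl_minutes else READ
    return cost

# A chatty session (short gaps): the cheap 5-minute cache never expires.
chatty = [1, 2, 1, 3]
# A slow session (10-minute gaps): the 5-minute cache expires every turn.
slow = [10, 10, 10, 10]

print(round(session_cost(chatty, WRITE_5MIN, 5), 2))  # 1.65
print(round(session_cost(slow, WRITE_5MIN, 5), 2))    # 6.25
print(round(session_cost(slow, WRITE_1H, 60), 2))     # 2.4
```

On these assumed multipliers the 1h cache only wins when gaps routinely exceed 5 minutes, which is consistent with the claim that its higher write cost is rarely a good tradeoff for interactive use.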
ares623
AGI finding bugs again. Actual Guys/Gals Instead.
perks_12
Just give us the option to get the quality back, Anthropic. I get that even a $200 subscription may not be sustainable eventually, but give us the option to subscribe to a $1,000 tier, or tell us to use the API instead. Just give us some consistency.
throwaway2027
I also noticed this: just resuming something eats up your entire session. The past two weeks have also felt like a substantial downgrade and made me regret renewing my subscription. It sucks; I wish I'd kept and renewed my Codex subscription instead.
PunchyHamster
Well, how entirely expected. The money man comes to collect, and they're squeezing for money.
throwaway2027
It's absolutely ridiculous how stupid Claude is now. I noticed it occasionally last year too, but now it feels like last year's pre-December model.
taffydavid
This is the same shit OpenAI used to do last year: quietly downgrading their offerings while hyping the next big thing. I thought Anthropic were different, but it seems they're playing the exact same long con with Mythos. They can't really revolutionize AI again, so they make the product worse and worse and then offer you a "better" one.
WhereIsTheTruth
Changing "regression" to "Anthropic silently downgraded" sensationalizes the story Why the FUD? I notice some interesting public opinion weather change since Anthropic passed OpenAI wrt revenue
poly2it
One of the largest AI companies on Earth cannot figure out an algorithm for when not to drop caches in long-running sessions?
mrdw
I noticed another limitation: "An image in the conversation exceeds the dimension limit for many-image requests (2000px). Start a new session with fewer images." So I can't continue the Claude Code session I started yesterday.
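For what it's worth, limits like the one mrdw quotes can usually be sidestepped by downscaling images before attaching them. A minimal sketch of the resize math, assuming the 2000px limit applies to the longest edge (the quoted error message doesn't say which dimension):

```python
LIMIT = 2000  # px, per the error message quoted above

def fit_within_limit(width, height, limit=LIMIT):
    """Return (w, h) scaled down (never up) so max(w, h) <= limit.

    Uses integer arithmetic to avoid float truncation surprises
    and to preserve the aspect ratio as closely as possible.
    """
    longest = max(width, height)
    if longest <= limit:
        return width, height  # already within the limit; leave as-is
    return width * limit // longest, height * limit // longest

print(fit_within_limit(3840, 2160))  # (2000, 1125)
print(fit_within_limit(1024, 768))   # (1024, 768)
```

The actual pixel resampling would then be done by whatever image library is at hand (e.g. Pillow's `Image.resize`) before the image is attached to the session.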
eaf7e281
I think they changed the quantization to save compute for their new model. That might be why the benchmark scores look good but real-world performance is much worse. I wonder if they tested the model internally and didn't find anything wrong with the new parameters. I canceled my subscription and switched to Codex, but it's not as good. I'm tired of Anthropic changing things all the time. I used Claude because it doesn't redirect you to a different model the way OpenAI does, but now it seems both companies are doing the same thing in different ways.