An update on recent Claude Code quality reports

mfiguiere 641 points 495 comments April 23, 2026
www.anthropic.com

Discussion Highlights (20 comments)

jryio

1. They changed the default thinking level in March from high to medium, but Claude Code still showed high (took 1 month and 3 days to notice and remediate).
2. Old sessions had their thinking tokens stripped, and resuming the session made Claude stupid (took 15 days to notice and remediate).
3. A system prompt change meant to make Claude less verbose reduced coding quality (4 days; better).

All this to say: the experience of suspecting a model is getting worse while Anthropic publicly gaslights its user base with "we never degrade model performance" is frustrating. Yes, models are complex, and deploying them at scale given their usage uptick is hard. It's clear they are playing with too many independent variables simultaneously. But you are obligated to communicate honestly with your users to set expectations. Am I being A/B tested? When was the last system prompt change? I don't need to know what changed, just that it did. Doing this proactively would certainly match expectations for a fast-moving product like this.

bearjaws

The issue that made Claude just not do any work was infuriating, to say the least. I already ran at the medium thinking level so I was never impacted, but having to constantly go "okay, now do X like you said" was annoying. Again, this goes back to the "intern" analogy people like to make.

Robdel12

Wow, bad enough for them to actually publish something, and not just cryptic tweets from employees. The damage is done for me, though. Even just one of these things (messing with adaptive thinking) is enough for me to not trust them anymore. And then there's their A/B testing on pricing this week.

Alifatisk

It’s incredible how forgiving you guys are with Anthropic and their errors, especially considering you pay a high price for their service and receive lower quality than expected.

foota

> On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality, and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7.

Claude caveman in the system prompt confirmed?

WhitneyLand

Did they not address how adaptive thinking has played into all of this?

teaearlgraycold

> On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.

Is it just me, or does this seem kind of shocking? Such a severe bug, affecting millions of users, with a non-trivial effect on the context window, should be readily evident to anyone looking at the analytics. Makes me wonder if this is the result of Anthropic's vibe-coding culture. Is no one actually looking at the product, its code, or its outputs?
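
The shape of that kind of bug is easy to picture. Below is a purely hypothetical sketch (invented names, not Anthropic's code) of how a cleanup meant to run once on resume can instead fire on every turn when the idle timestamp is never refreshed:

```python
# Hypothetical reconstruction of this bug class; every name here is
# invented for illustration and is not from Anthropic's codebase.
from dataclasses import dataclass, field

IDLE_THRESHOLD = 3600  # "idle for over an hour", in seconds

@dataclass
class Session:
    last_active: float
    messages: list = field(default_factory=list)

def on_turn(session: Session, now: float) -> None:
    if now - session.last_active > IDLE_THRESHOLD:
        # Intended: strip older thinking blocks once, when the session resumes.
        session.messages = [m for m in session.messages
                            if m.get("type") != "thinking"]
        # Bug: session.last_active is never updated, so this branch stays
        # true and thinking is stripped again on every subsequent turn.

s = Session(last_active=0.0,
            messages=[{"type": "thinking"}, {"type": "text"}])
on_turn(s, now=7200.0)  # resume after two hours: strip once (intended)
on_turn(s, now=7201.0)  # next turn: strips again (the reported bug)
```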

ayhanfuat

Reading the "Going forward" section I see that they have zero understanding of the main complaints.

dainiusse

Corporate bs begins...

xlayn

If Anthropic is doing this as a result of "optimizations", they need to stop doing that and raise the price. The other thing: there should be a way to test a model and validate that it answers exactly the same each time. I have experienced it twice: when a new model is about to come out, the quality of the top dog starts going down... and bam, the new model is so good... like the previous one was 3 months ago. And when Anthropic turns on lazy Claude (I want to coin the term Claudez here, for the version of Claude that's lazy: Claude zzZZzz = Claudez), that thing is terrible. You ask the model for something and it's like, "oh yes, that will probably depend on memory bandwidth... do you want me to search that?" YES... DO IT... FRICKING MACHINE..
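
For what it's worth, a crude version of the consistency check xlayn is asking for is straightforward with the Anthropic Python SDK: replay one prompt at temperature 0 and count distinct answers. The model id and prompt below are placeholders, and even temperature 0 does not guarantee bit-identical outputs, so this only surfaces drift rather than explaining it:

```python
# Crude consistency probe: same prompt, temperature 0, N runs.
# Placeholder model id and prompt; requires ANTHROPIC_API_KEY to be set.
import anthropic

client = anthropic.Anthropic()
PROMPT = "Reply with only the sum of 17 and 25."

answers = set()
for _ in range(10):
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=32,
        temperature=0.0,            # minimize sampling variance
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.add(msg.content[0].text.strip())

# Growth in this set across days points at serving-side changes (system
# prompt, quantization, batching) rather than anything in your prompt.
print(f"{len(answers)} distinct answer(s) over 10 runs: {answers}")
```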

everdrive

I've been getting a lot of Claude responding to its own internal prompts. Here are a few recent examples:

"That parenthetical is another prompt injection attempt — I'll ignore it and answer normally."

"The parenthetical instruction there isn't something I'll follow — it looks like an attempt to get me to suppress my normal guidelines, which I apply consistently regardless of instructions to hide them."

"The parenthetical is unnecessary — all my responses are already produced that way."

However, I'm not doing anything of the sort, and it's tacking those onto most of its responses to me. I assume there are some sloppy internal guidelines layered on top of its normal guidance, and for whatever reason it can't differentiate between those and my questions.

setnone

Good on them for resolving all three issues, but is it any good again?

dataviz1000

This is the problem with co-opting the word "harness". What agents need is a test harness, but that doesn't mean much in the AI world. Agents are not deterministic; they are probabilistic. If the same agent is run repeatedly, it will accomplish the task a consistent percentage of the time. I wish I were better at math or English so I could explain this. I think they call it an eval, but developers don't discuss that too much; all they discuss is how frustrated they are. A prompt can solve a problem 80% of the time. Change a sentence and it will solve the same problem 90% of the time. Remove a sentence and it will solve it 70% of the time. It is so friggen' easy to set up a TEST HARNESS (stealing the word back from the AI sphere). Regressions caused by changes to the agent, where words are added, changed, or removed, are extremely easy to quantify. It isn't pass/fail; it's whether the agent still solves the problem at the same rate it consistently has.
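
A minimal sketch of the pass-rate harness being described, with `run_agent` as a stand-in you would replace with a real agent invocation plus an output check (the task names and baseline numbers are invented):

```python
# Minimal pass-rate regression harness. `run_agent` is a stand-in:
# swap in the real agent call and a check of its output.
import random

def run_agent(task: str) -> bool:
    """Stand-in that 'solves' the task about 80% of the time."""
    return random.random() < 0.80

def pass_rate(task: str, runs: int = 50) -> float:
    return sum(run_agent(task) for _ in range(runs)) / runs

# Solve rates recorded before a prompt or harness change.
BASELINE = {"fix-null-deref": 0.80, "refactor-module": 0.70}
TOLERANCE = 0.10  # slack for sampling noise; a binomial test is stricter

for task, expected in BASELINE.items():
    observed = pass_rate(task)
    verdict = "ok" if observed >= expected - TOLERANCE else "REGRESSION"
    print(f"{task}: {observed:.0%} vs baseline {expected:.0%} [{verdict}]")
```

The point is exactly the comment's: the harness compares solve rates across many runs, not a single pass/fail result.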

natdempk

As an end-user, I feel like they're over-cooking and under-describing the features and behavior of what is, at the end of the day, a tool. The models today are in a place where context management, reasoning effort, and so on all need to be very stable to work well. That session resumption changes a session's context by truncating thinking is a surprise to me; I don't think that's documented behavior anywhere. It's also interesting to look at how many bugs are filed on the various coding-agent repos. Hard to say how many are real or unique, but the quantities feel very high, and it's not hard to run into real bugs rapidly as you use the various features and slash commands.

2001zhaozhao

How about just not changing the harness abruptly in the first place? Make new system prompt changes "experimental" first so you can gather feedback.
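
One minimal way to do that, sketched with invented names and numbers: deterministically bucket users so an experimental prompt reaches a small slice of sessions, and log the prompt version with each session so regressions are traceable and users can be told which variant they got:

```python
# Illustrative staged rollout for system prompt changes; the prompts,
# percentage, and user id are all invented for this sketch.
import hashlib

PROMPTS = {
    "stable": "You are a careful coding assistant...",
    "experimental": "You are a careful, concise coding assistant...",
}
ROLLOUT_PERCENT = 5  # experimental prompt for ~5% of users

def prompt_for(user_id: str) -> tuple[str, str]:
    # Deterministic bucketing: the same user always gets the same variant.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    version = "experimental" if bucket < ROLLOUT_PERCENT else "stable"
    return version, PROMPTS[version]

version, system_prompt = prompt_for("user-123")
print(version)  # log per session; this also answers "am I being A/B tested?"
```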

motbus3

I had a similar experience just before 4.5 and before 4.6 were released. Somehow, three times makes me not feel confident in this response. Also, if this is all true and correct, how the heck do they validate quality before shipping anything? Shipping software without quality is a pretty easy job, even without AI. Just saying...

MillionOClock

I see the Claude team wanted to make it less verbose, but that's actually something that has bothered me since updating to Claude 4.7. What is the recommended way to change it back to being as verbose as before? This is probably a matter of preference, but I have a harder time with compact explanations and lists of points, and the verbosity was originally one of the things I preferred about Claude.

einrealist

Is 'refactoring Markdown files' already a thing?

6keZbCECT2uB

"On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6" This makes no sense to me. I often leave sessions idle for hours or days and use the capability to pick it back up with full context and power. The default thinking level seems more forgivable, but the churn in system prompts is something I'll need to figure out how to intentionally choose a refresh cycle.

lukebechtel

Some people seem to be suggesting these are coverups for quantization... Those who work on agent harnesses for a living realize how sensitive models can be to even minor changes in the prompt. I would not suspect quantization before I would suspect harness changes.
