Claude Opus 4.6 accuracy on BridgeBench hallucination test drops from 83% to 68%

bratao 56 points 11 comments April 12, 2026
twitter.com

Discussion Highlights (2 comments)

Reubend

Because the website doesn't seem to show a sample size for the runs, I assume they ran the suite once. The models are nondeterministic, so it's normal for different runs to give different results. I don't see this as evidence that Opus 4.6 has gotten worse.
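Whether a single run can explain the gap depends on the suite size, which the thread doesn't give. As a rough sketch, a normal-approximation confidence interval for accuracy measured in one run shows how much run-to-run noise to expect (the suite size of 200 below is purely hypothetical):

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Approximate 95% CI for accuracy estimated from a single benchmark run,
    treating each item as an independent Bernoulli trial."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p - half, p + half

# Hypothetical: 68% accuracy on a 200-item suite (real BridgeBench size unknown)
lo, hi = accuracy_ci(136, 200)
print(f"95% CI: {lo:.3f} to {hi:.3f}")
```

Under this (assumed) suite size, the single-run CI is roughly ±6.5 points, so the plausibility of the nondeterminism explanation hinges on how many items and iterations the benchmark actually uses.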

ehtbanton

Benchmarks like this one are designed to test the model thoroughly across several iterations. 15 percentage points is a MASSIVE discrepancy. Come on Anthropic, admit what you're doing already and let us access your best models unhindered, even if it costs us more. At the moment we all feel short-changed.
