1M context is now generally available for Opus 4.6 and Sonnet 4.6

meetpateltech 511 points 202 comments March 13, 2026
claude.com

Discussion Highlights (20 comments)

dimitri-vs

The big change here is:

> Standard pricing now applies across the full 1M window for both models, with no long-context premium. Media limits expand to 600 images or PDF pages.

For Claude Code users this is huge - assuming coherence remains strong past 200k tokens.

minimaxir

Claude Code 2.1.75 no longer delineates between base Opus and 1M Opus: it's the same model. Oddly, I'm on Pro, where the change is supposedly only for Max+, but I'm still seeing this behavior. EDIT: I don't think Pro actually has access to it; a typical prompt just hit the context limit. The removal of extra pricing beyond 200k tokens may be Anthropic's salvo in the agent wars against GPT 5.4's 1M window and the extra pricing for it.

convenwis

Is there a writeup anywhere on what this means for effective context? I think that many of us have found that even when the context window was 100k tokens the actual usable window was smaller than that. As you got closer to 100k performance degraded substantially. I'm assuming that is still true but what does the curve look like?

vessenes

This is super exciting. I've been poking at it today, and it definitely changes my workflow -- I feel like a full three- or four-hour parallel coding session with subagents now generally fits into a single master session. The stats claim Opus at 1M is roughly on par with 5.4 at 256k -- these needle-in-a-haystack long-context tests don't always track quality of reasoning, sadly -- but this is still a significant improvement, and I haven't seen a dramatic falloff in my tests, unlike the Q4 '25 models. P.S. What's up with Sonnet 4.5 getting comparatively better as context got longer?

zmmmmm

Noticed this just now - all of a sudden I have a 1M context window (!!!) without changing anything. It's actually slightly disturbing, because this IS a behavior change. Don't get me wrong, I like having longer context, but we really need to pin down how changes like this get deployed.

8cvor6j844qw_d6

Oh nice - does this mean less of the /compact, /clear, and update-CLAUDE.md game in Claude Code?

aliljet

Are there evals showing how this improves outputs?

johnwheeler

This is incredible. I just blew through $200 last night in a few hours on 1M context. This is the best news I've heard all year with regard to my business. What is OpenAI's response to this? Do they even have a 1M context window, or is it still opaque and "depends on the time of day"?

wewewedxfgdf

The weirdest thing about Claude pricing is that their 5X plan costs 5 times as much as the previous plan. Normally, buying the bigger plan gets you some sort of discount. At Claude, it's just "5 times more usage, 5 times more cost, there you go".

gaigalas

I'm getting close to my goal of fitting the entire source code of a bootstrappable-from-source system into context and just telling Claude "go ahead, make it better".

vicchenai

The no-degradation-at-scale claim is the interesting part. Context rot has been the main thing limiting how useful long context actually is in practice — curious to see what independent evals show on retrieval consistency across the full 1M window.

margorczynski

What about response coherence at longer context? In other models with windows this big, I usually see quality drop rapidly once it gets past a certain point.

pixelpoet

Compared to yesterday, my Claude Max subscription burns usage like absolutely crazy (13% of my weekly usage gone from a fresh reset today, with just a handful of prompts on two new C++ projects, no deps) and has become unbearably slow (as in an hour for a prompt response). GGWP Anthropic - it was great while it lasted, but this isn't worth hundreds of dollars.

dominotw

Can someone tell me how to make this instruction work in Claude Code: "put a high-level description of the change you are making in log.md after every change"? It works perfectly in Codex, but I just can't get Claude to do it automatically. I always have to ask, "did you update the log?"
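One approach that doesn't depend on the model remembering: Claude Code supports hooks, which run a shell command deterministically on events such as PostToolUse. A sketch of a settings file (the exact schema may vary by version, and the `echo` command is a placeholder - a hook can log *that* an edit happened, but not the model's own description of it):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "echo \"$(date -u +%FT%TZ) file modified\" >> log.md"
          }
        ]
      }
    ]
  }
}
```

This would live in `.claude/settings.json` in the project; the matcher restricts the hook to file-editing tools so read-only tool calls don't spam log.md.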

chaboud

Awesome.... With Sonnet 4.5, I had Cline soft trigger compaction at 400k (it wandered off into the weeds at 500k). But the stability of the 4.6 models is notable. I still think it pays to structure systems to be comprehensible in smaller contexts (smaller files, concise plans), but this is great. (And, yeah, I'm all Claude Code these days...)

arjie

This is fantastic. I keep having to save to memory with instructions and then tell it to restore to get anywhere on long running tasks.

swader999

I notice Claude steadily consuming fewer tokens every week too, especially with tool calling.

thunkle

Just have to ask. Will I be spending way more money since my context window is getting so much bigger?

aragonite

Do long sessions also burn through token budgets much faster? If the chat client resends the whole conversation each turn, then once you're deep into a session, every request already includes tens of thousands of tokens of prior context. So a message sent 70k tokens into a conversation is much "heavier" than one at 2k, at least in terms of input tokens. Yes?
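Yes - with a stateless chat API, input tokens compound over the session. A toy sketch of the arithmetic (hypothetical per-turn sizes, not a real API call):

```python
# Toy model of input-token cost when a chat client resends full history.
# Assumes a stateless API: every request includes all prior messages.

def session_input_tokens(turn_sizes):
    """Return (input tokens per request, session total) for each turn."""
    history = 0
    per_request = []
    for size in turn_sizes:
        history += size              # new message joins the context
        per_request.append(history)  # the whole history is sent as input
    return per_request, sum(per_request)

# Ten turns of ~2k tokens each: the final request alone carries 20k input
# tokens, and the session's cumulative input is 110k -- far more than the
# 20k tokens actually typed.
per_request, total = session_input_tokens([2000] * 10)
print(per_request[-1], total)  # 20000 110000
```

This is why prompt caching matters so much for long sessions: without it, billed input grows roughly quadratically in conversation length.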

alienbaby

Is this the market being played out in front of our eyes, slice by slice? OK, maybe not, but watching these entities duke it out is kinda amusing. There will be consequences, but we may as well sit back for the ride - who knows where we're going?
