DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence

cmrdporcupine 146 points 14 comments April 24, 2026
huggingface.co

Discussion Highlights (6 comments)

woeirua

Hmm. Looks like DeepSeek is just about 2 months behind the leaders now.

cmrdporcupine

Pricing: https://api-docs.deepseek.com/quick_start/pricing — "Pro" is $3.48 / 1M output tokens vs. $4.40 for GLM 5.1 or $4.00 for Kimi K2.6. "Flash" is only $0.28 / 1M and seems quite competent. (EDIT: Note that if you use the default endpoints that opencode etc. hit (deepseek-chat / deepseek-reasoner) on the DeepSeek API, you appear to get "flash".)
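A quick sanity check of the quoted output-token prices; the 10M-token workload below is a hypothetical example, not from the comment:

```python
# Output-token prices quoted in the comment, in USD per 1M tokens.
prices = {
    "DeepSeek V4 Pro": 3.48,
    "GLM 5.1": 4.40,
    "Kimi K2.6": 4.00,
    "DeepSeek V4 Flash": 0.28,
}

tokens_millions = 10  # hypothetical workload: 10M output tokens

for model, per_million in prices.items():
    cost = per_million * tokens_millions
    print(f"{model}: ${cost:.2f}")
# At this volume, Flash ($2.80) is roughly 12x cheaper than Pro ($34.80).
```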

taosx

So the R line (R2) is discontinued, or folded back into V4, right?

anonzzzies

From this thread [0] I assume that because, while it is 1.6T total parameters, only 49B are active (A49B), it can run (theoretically, maybe very slowly) locally on consumer hardware, or is that wrong? [0] https://news.ycombinator.com/item?id=47864835
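A rough back-of-envelope for that question, assuming 4-bit quantization (an assumption for illustration, not something stated in the thread): all 1.6T weights must be stored somewhere, but only the ~49B active parameters are read per token in a MoE forward pass.

```python
# Back-of-envelope memory estimate; 4-bit quantization is an assumption.
TOTAL_PARAMS = 1.6e12   # 1.6T total parameters
ACTIVE_PARAMS = 49e9    # 49B active per token (A49B, MoE)
BYTES_PER_PARAM = 0.5   # 4 bits = 0.5 bytes per weight

total_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
active_gb = ACTIVE_PARAMS * BYTES_PER_PARAM / 1e9

print(f"Full weights on disk/RAM: ~{total_gb:.0f} GB")   # ~800 GB
print(f"Weights touched per token: ~{active_gb:.1f} GB") # ~24.5 GB
```

So the full model needs on the order of 800 GB of storage or RAM at 4-bit, far beyond typical consumer boxes, but each token only streams ~25 GB of weights, which is why "theoretically, very slowly" (e.g. paging experts from SSD) is plausible.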

statements

The quality of this model relative to its price is an insane value.

gwern

Main discussion: https://news.ycombinator.com/item?id=47884971
