DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence
cmrdporcupine
146 points
14 comments
April 24, 2026
Related Discussions
Found 5 related stories in 63.9ms across 5,406 title embeddings via pgvector HNSW
- DeepSeek-V4 Technical Report [pdf] tianyicui · 19 pts · April 24, 2026 · 73% similar
- DeepSeek v4 impact_sy · 455 pts · April 24, 2026 · 71% similar
- DeepSeek by Hand in Excel teleforce · 13 pts · March 18, 2026 · 66% similar
- Reducto releases Deep Extract raunakchowdhuri · 46 pts · April 06, 2026 · 50% similar
- Intelligence is a commodity. Context is the real AI Moat adlrocha · 28 pts · March 01, 2026 · 50% similar
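The "% similar" figures above come from nearest-neighbor search over title embeddings (here via a pgvector HNSW index). As a rough illustration only, not the site's actual code, similarity between two normalized embedding vectors is typically cosine similarity:

```python
import math

def cosine_similarity(a, b):
    # cosine similarity = dot(a, b) / (|a| * |b|), in [-1, 1];
    # closer to 1 means the titles are semantically closer.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "title embeddings" (real ones have hundreds of dims).
query = [0.9, 0.1, 0.3, 0.2]
candidate = [0.8, 0.2, 0.4, 0.1]
score = cosine_similarity(query, candidate)
print(f"{score:.0%} similar")
```

An HNSW index approximates this search so the database can rank thousands of embeddings in milliseconds instead of scanning every row.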
Discussion Highlights (6 comments)
woeirua
Hmm. Looks like DeepSeek is just about 2 months behind the leaders now.
cmrdporcupine
Pricing: https://api-docs.deepseek.com/quick_start/pricing
"Pro" is $3.48 / 1M output tokens, vs. $4.40 for GLM 5.1 or $4.00 for Kimi K2.6.
"Flash" is only $0.28 / 1M and seems quite competent.
(EDIT: Note that the model IDs opencode etc. hit on the DeepSeek API (deepseek-chat / deepseek-reasoner) appear to be "Flash".)
taosx
So the R line (R2) is discontinued, or folded back into V4, right?
anonzzzies
From this thread [0] I gather that because, while it's 1.6T total parameters, only 49B are active (A49B), it can run locally on consumer hardware (theoretically, perhaps very slowly) — or is that wrong? [0] https://news.ycombinator.com/item?id=47864835
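For a back-of-envelope check of that claim (my own arithmetic, not from the thread): a mixture-of-experts model still needs all 1.6T weights resident in memory or fast storage, but each token's forward pass only touches the ~49B active parameters. A minimal sketch, assuming 4-bit quantization and ignoring activations, KV cache, and quantization overhead:

```python
def weight_gb(params, bits_per_param):
    # Approximate weight storage in GB: parameters * bits / 8 bits-per-byte.
    # Ignores activations, KV cache, and quantization format overhead.
    return params * bits_per_param / 8 / 1e9

total_params = 1.6e12   # 1.6T total parameters (from the thread)
active_params = 49e9    # A49B: ~49B parameters active per token

total = weight_gb(total_params, 4)    # memory to hold all experts
active = weight_gb(active_params, 4)  # weights touched per token
print(f"all weights ~{total:.0f} GB, active per token ~{active:.1f} GB")
```

So the full model at 4-bit is on the order of 800 GB, far beyond consumer RAM, while the per-token working set is closer to a single high-end GPU's memory; that gap is why "theoretically, very slow" offloading setups come up in these threads.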
statements
The quality of this model relative to its price is an insane deal.
gwern
Main discussion: https://news.ycombinator.com/item?id=47884971