NanoGPT Slowrun: 10x Data Efficiency with Infinite Compute
sdpmas
122 points
25 comments
March 19, 2026
Related Discussions
Found 5 related stories in 43.6ms across 3,471 title embeddings via pgvector HNSW
- NanoGPT Slowrun: Language Modeling with Limited Data, Infinite Compute sdpmas · 147 pts · March 04, 2026 · 80% similar
- GPT‑5.4 Mini and Nano meetpateltech · 217 pts · March 17, 2026 · 53% similar
- GPT‑5.3 Instant meetpateltech · 319 pts · March 03, 2026 · 53% similar
- TurboQuant: Redefining AI efficiency with extreme compression ray__ · 509 pts · March 25, 2026 · 52% similar
- GPT 5.4 Thinking and Pro twtw99 · 64 pts · March 05, 2026 · 52% similar
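The header above says the related list comes from a pgvector HNSW index over title embeddings. As a rough sketch of what such a lookup looks like (the table and column names `stories`, `title_embedding`, and `id` are my assumptions, not the site's actual schema), it is a single `ORDER BY` on pgvector's cosine-distance operator:

```python
# Hypothetical sketch of a pgvector HNSW title-similarity lookup.
# Table/column names (stories, title_embedding, id) are assumptions.

# One-time index build: HNSW over cosine distance.
create_index = """
CREATE INDEX ON stories
USING hnsw (title_embedding vector_cosine_ops);
"""

# <=> is pgvector's cosine-distance operator; 1 - distance gives the
# "similarity" percentages shown in the list above.
related_query = """
SELECT title, 1 - (title_embedding <=> %(q)s::vector) AS similarity
FROM stories
WHERE id <> %(story_id)s
ORDER BY title_embedding <=> %(q)s::vector
LIMIT 5;
"""
```

The `WHERE id <> %(story_id)s` filter keeps the story itself out of its own related list.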
Discussion Highlights (7 comments)
yorwba
Related: Discussion on the initial NanoGPT Slowrun announcement: https://news.ycombinator.com/item?id=47251259 (185 points 15 days ago, 39 comments)
littlestymaar
> Data efficiency matters because compute grows much faster than data [2] (referencing a paper from 2022)

I'm not convinced this is particularly true in today's world: if you have more compute, you can simply generate more, and higher-quality, artificial data. That's what all the labs have been doing since at least 2023. Also, the post uses Chinchilla-optimal training as a comparison baseline, but everyone has moved far beyond Chinchilla scaling; small models are routinely trained on 10-400 times more data (1-40T tokens) than the Chinchilla-optimal number, so the entire industry went in the complete opposite direction of what they are proposing. That doesn't mean the techniques presented here are useless or anything (I'm not qualified to judge), but you should take the introduction with a grain of salt.
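The 10-400x range in the comment is consistent with the usual Chinchilla rule of thumb of roughly 20 training tokens per parameter; the 5B-parameter model size below is my own illustrative choice, not a number from the comment:

```python
# Back-of-envelope check of the 10-400x figure using the Chinchilla
# rule of thumb of ~20 training tokens per parameter.
# The 5B-parameter model size is an illustrative assumption.
TOKENS_PER_PARAM = 20

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Chinchilla-optimal token budget for a given parameter count."""
    return TOKENS_PER_PARAM * n_params

optimal = chinchilla_optimal_tokens(5e9)  # ~100B tokens for a 5B model
low = 1e12 / optimal                      # 1T tokens  -> 10x optimal
high = 40e12 / optimal                    # 40T tokens -> 400x optimal
print(low, high)  # → 10.0 400.0
```

So a 5B model trained on 1-40T tokens sits at 10-400x its Chinchilla-optimal budget, matching the comment's range.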
nsnzjznzbx
We will get to the point where you can quickly bootstrap, i.e. an LLM can train a better LLM in a loop; leave it and it can really learn. Like learn learn. "Train yourself to solve this problem, see OBJECTIVE.md"
abeppu
In their little algorithm box on Chain Distillation, they have at step 2b some expression that involves multiplying and dividing by `T`, and then they say "where α = 0.5, T = 1.0". I think someone during the copy-editing process told them this needed to look more complicated?
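The expression abeppu describes sounds like a standard temperature-scaled distillation loss. A plausible reconstruction (my guess at step 2b, not the post's actual code) shows why T = 1.0 makes the temperature machinery a no-op, since both the division of the logits by T and the T² rescaling of the KL term cancel out:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; with T = 1.0 the logits pass through unchanged.
    z = [l / T for l in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, label, alpha=0.5, T=1.0):
    """Hypothetical reconstruction of a temperature-scaled distillation step:
    KL(teacher || student) on T-softened distributions, scaled by T**2,
    mixed with hard-label cross-entropy via alpha. The defaults match the
    values the comment quotes (alpha = 0.5, T = 1.0); at T = 1.0 the
    multiply-and-divide by T is a no-op, which is the comment's point."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)   # soft student predictions
    soft = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T
    hard = -math.log(softmax(student_logits)[label])  # cross-entropy on labels
    return alpha * soft + (1 - alpha) * hard
```

With identical student and teacher logits the KL term vanishes, so the loss reduces to `alpha`-weighted cross-entropy alone, which is a quick sanity check on the mixing.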
andai
What's the human baseline? How many cats does a human need to see to learn what a cat is, vs an AI? Maybe not quite a fair comparison since my human brain has been "learning" for half a billion years before I was born. I wonder if there's an equivalent of that for AI. Evolving the architectures?
naasking
Great project. On the matter of data efficiency and regularization, I'd love to see someone try scaling GrokAlign!
pastescreenshot
The result is interesting, but the practical question for me is where the compute bill lands once you include both training and serving. If a fixed-data regime pushes you toward ensembles plus chain distillation, is the endgame “serve the ensemble”, or do you expect most of the gain can be compressed back into a single deployable model later? That seems like the difference between a neat scaling result and a generally usable recipe.