AutoKernel: Autoresearch for GPU Kernels
frozenseven
44 points
10 comments
March 11, 2026
Related Discussions
Found 5 related stories in 50.3ms across 3,471 title embeddings via pgvector HNSW
- Autoresearch: Agents researching on single-GPU nanochat training automatically simonpure · 82 pts · March 07, 2026 · 66% similar
- Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster hopechong · 145 pts · March 19, 2026 · 64% similar
- Autoresearch on an old research idea ykumards · 325 pts · March 23, 2026 · 55% similar
- Show HN: Autoresearch@home austinbaggio · 55 pts · March 11, 2026 · 52% similar
- How Kernel Anti-Cheats Work davikr · 102 pts · March 15, 2026 · 49% similar
Discussion Highlights (7 comments)
NitpickLawyer
... and so it begins. For a bit of context, Google already did something like this two generations of models ago, as announced in this blog post[1] from May '25:

> AlphaEvolve is accelerating AI performance and research velocity. By finding smarter ways to divide a large matrix multiplication operation into more manageable subproblems, it sped up this vital kernel in Gemini's architecture by 23%, leading to a 1% reduction in Gemini's training time.

We are now seeing the same thing "at home", for any model. And with how RL-heavy the new training runs have become, inference speedups will directly translate into faster training as well.

[1] - https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...
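The AlphaEvolve result quoted above is, at its core, about decomposing a large matmul into tiles. Purely as a toy illustration (my own sketch, not AlphaEvolve's actual method and not code from this project): split C = A @ B into block-sized subproblems so each tile's working set stays in fast memory; real kernels then tune the block sizes per GPU.

```python
# Toy blocked matmul: the "divide into subproblems" idea in its simplest form.
# Hypothetical sketch for illustration; the block size and names are made up.
import numpy as np

def blocked_matmul(A, B, block=128):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, block):
        for j in range(0, N, block):
            for k in range(0, K, block):
                # accumulate one block x block output tile from one K-slice
                C[i:i+block, j:j+block] += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
    return C

A = np.random.rand(512, 512).astype(np.float32)
B = np.random.rand(512, 512).astype(np.float32)
assert np.allclose(blocked_matmul(A, B), A @ B, rtol=1e-4)
```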
sspehr
Have you benchmarked this against autoscheduling, e.g. with TVM's Ansor?
veselin
I guess we would get a lot more benefit if we could get this to work on something like llama.cpp - it has a lot of kernels for different quantizations, a lot of home users, and high hardware diversity - so it is probably the place with the highest bang for the buck. Maybe they could become a contributor there.
ademeure
This is very cool! I've been working on something somewhat similar over the last few weeks, but trying to be much more general and arguably over-engineered! I like the scope of this project; keeping it limited to Triton and specific kinds of kernels makes it quite simple and efficient.

I'm confused by the progress graph though: it looks like it's benchmarking a 4096x4096x4096 fp16 matmul rather than a full repo, and it claims a 1.31x improvement vs cuBLAS... while running at 187 TFLOPS, which is 18.9% of peak utilization? cuBLAS definitely gets much closer to peak than that - most likely the measurement is limited by CPU overhead or something else? Benchmarking is hard!

Either way, I'm excited to see other people working on this; I think it's an extremely promising area over the next 6 months.
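For anyone trying to reproduce numbers like these, here is a minimal timing sketch, assuming PyTorch and a CUDA GPU (the shape matches the comment above; the `bench` helper and iteration counts are mine, and the project's own harness may differ). Timing with CUDA events after a warmup excludes CPU launch overhead, which is one common way a cuBLAS baseline ends up looking slower than it really is.

```python
# Hedged benchmarking sketch (not the project's harness): time a 4096^3 fp16
# GEMM with CUDA events so CPU launch overhead isn't counted.
import torch

M = N = K = 4096
a = torch.randn(M, K, device="cuda", dtype=torch.float16)
b = torch.randn(K, N, device="cuda", dtype=torch.float16)

def bench(fn, iters=50, warmup=10):
    for _ in range(warmup):  # warm up clocks, caches, and lazy initialization
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters / 1e3  # seconds per call

t = bench(lambda: a @ b)           # fp16 matmul dispatches to cuBLAS/cuBLASLt
tflops = 2 * M * N * K / t / 1e12  # a GEMM does 2*M*N*K floating-point ops
print(f"{t * 1e3:.3f} ms/iter -> {tflops:.1f} TFLOPS")
```

Swapping the lambda for a Triton kernel launch gives an apples-to-apples comparison under the same timing discipline.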
aviinuo
Something seems off. For the 4kx4kx4k fp16 GEMM, CUTLASS is like 3x faster than this.
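A quick back-of-the-envelope check of how these two comments fit together (my arithmetic, not a measurement): if CUTLASS really is about 3x faster, the implied rate is roughly 560 TFLOPS, i.e. around 57% of the ~989 TFLOPS peak implied by the earlier 187 TFLOPS / 18.9% figures.

```python
# Back-of-the-envelope arithmetic for the claims in this thread; the 3x factor
# is the commenter's estimate, not a measurement of mine.
flops = 2 * 4096**3              # ~1.374e11 FLOP per 4096^3 GEMM
t_reported = flops / 187e12      # ~0.735 ms at the reported 187 TFLOPS
t_cutlass = t_reported / 3       # ~0.245 ms if CUTLASS really is ~3x faster
peak = 187e12 / 0.189            # ~989 TFLOPS peak implied by "18.9% of peak"
print(f"{t_reported*1e3:.3f} ms vs {t_cutlass*1e3:.3f} ms, "
      f"CUTLASS ~{flops / t_cutlass / 1e12:.0f} TFLOPS = "
      f"{flops / t_cutlass / peak:.0%} of implied peak")
```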
easygenes
Cool! I’ve been working on adding the same thing for Apple Silicon within my general “make autoresearch a serious tool” project here: https://github.com/Entrpi/autoresearch-everywhere
m3kw9
When will open-source codebases like Swift, Rust, etc. run these over their routines to squeeze the last bit of juice from the stone?