TorchTPU: Running PyTorch Natively on TPUs at Google Scale
mji
105 points
4 comments
April 23, 2026
Related Discussions
Found 5 related stories in 67.7ms across 5,406 title embeddings via pgvector HNSW
- The eighth-generation TPU: An architecture deep dive meetpateltech · 67 pts · April 22, 2026 · 63% similar
- Our eighth generation TPUs: two chips for the agentic era xnx · 427 pts · April 22, 2026 · 61% similar
- NanoGPT Slowrun: 10x Data Efficiency with Infinite Compute sdpmas · 122 pts · March 19, 2026 · 51% similar
- Google unveils chips for AI training and inference in latest shot at Nvidia wslh · 12 pts · April 22, 2026 · 50% similar
- Linux PTP mainline development war story and new features ahlCVA · 11 pts · April 20, 2026 · 46% similar
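The "% similar" scores above come from nearest-neighbor search over title embeddings, with pgvector's HNSW index accelerating the lookup. As a rough illustration (the actual embedding model and query are not shown here), the underlying similarity metric is typically cosine similarity between two embedding vectors:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of L2 norms; result lies in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real title embeddings have hundreds of dims).
v1 = [0.2, 0.8, 0.1]
v2 = [0.3, 0.7, 0.2]
print(round(cosine_similarity(v1, v2), 2))  # → 0.98
```

In pgvector, the equivalent search is an `ORDER BY embedding <=> query_embedding LIMIT k` query, where `<=>` is cosine distance and the HNSW index makes it approximate but fast.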
Discussion Highlights (3 comments)
in-silico
This is great to see. I trained some research models using the existing PyTorch/XLA on TPUs, and it was a mess of undocumented behavior and bugs (silent hangs eight hours into training!). If anyone is trying to use PyTorch on TPU before TorchTPU is released, check out the training pipeline I ended up building to support my research: https://github.com/aklein4/easy-torch-tpu
Reubend
Sounds good, but my main question is: is this a fork, or a new backend built into PyTorch itself (like MPS)?
noracists
Very excited for this.