Executing programs inside transformers with exponentially faster inference

u1hcw9nx 17 points 3 comments March 12, 2026
www.percepta.ai

Discussion Highlights (3 comments)

andy12_

This seems a really interesting path for interpretability, especially if a big chunk of a model's behavior occurs pseudo-symbolically. This is an idea I had thought about — integrating tools into the main computation path of a model — but I never imagined that it could be done efficiently with just a vanilla transformer. Truly, attention is all you need (I guess).

galsapir

One of the most interesting pieces I've read recently. I'm not sure I agree with all the statements there (e.g. that without execution the system has no comprehension), but it's extremely cool.

pennomi

It makes sense that a next-token predictor could execute assembly code. This is fascinating work, especially the memory implementation.
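The framing in this comment can be made concrete with a toy sketch (this is illustrative, not the article's actual method, and the machine, instruction set, and names are invented for the example): if each machine state is treated as a "token," then a perfect next-token predictor over execution traces is exactly an interpreter, since each state deterministically fixes the next.

```python
# Toy sketch: execution as next-token prediction over a trace of
# (pc, registers) states for a tiny invented register machine.
# Everything here is a hypothetical illustration, not the paper's design.

def step(prog, pc, regs):
    """Advance the machine one instruction; return the next (pc, regs)."""
    op, *args = prog[pc]
    regs = dict(regs)  # copy so each trace "token" is immutable
    if op == "mov":            # mov r, imm   -> r = imm
        regs[args[0]] = args[1]
    elif op == "add":          # add r, s     -> r += s
        regs[args[0]] += regs[args[1]]
    elif op == "dec":          # dec r        -> r -= 1
        regs[args[0]] -= 1
    elif op == "jnz":          # jnz r, tgt   -> jump to tgt if r != 0
        if regs[args[0]] != 0:
            return args[1], regs
    elif op == "halt":         # halt         -> fixed point
        return pc, regs
    return pc + 1, regs

def next_token(prog, trace):
    """The 'prediction' task: given the trace so far, emit the next state.
    It depends only on the last token, and is fully deterministic."""
    pc, regs = trace[-1]
    return step(prog, pc, regs)

# Sum 3 + 2 + 1 into r1 with a countdown loop in r0.
prog = [
    ("mov", "r0", 3),
    ("mov", "r1", 0),
    ("add", "r1", "r0"),
    ("dec", "r0"),
    ("jnz", "r0", 2),
    ("halt",),
]

trace = [(0, {})]
while prog[trace[-1][0]][0] != "halt":
    trace.append(next_token(prog, trace))

print(trace[-1][1])  # final register file
```

A learned model replaces `next_token` with a statistical approximation; the interesting claim in the article is how efficiently a vanilla transformer can play that role.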
