Show HN: Timber – Ollama for classical ML models, 336x faster than Python
kossisoroyce
85 points
8 comments
March 02, 2026
Related Discussions
Found 5 related stories in 51.1ms across 3,471 title embeddings via pgvector HNSW
- Show HN: PhAIL – Real-robot benchmark for AI models vertix · 20 pts · March 31, 2026 · 47% similar
- Executing programs inside transformers with exponentially faster inference u1hcw9nx · 17 pts · March 12, 2026 · 45% similar
- Show HN: I built Wool, a lightweight distributed Python runtime bzurak · 13 pts · March 14, 2026 · 45% similar
- Show HN: 1-Bit Bonsai, the First Commercially Viable 1-Bit LLMs PrismML · 182 pts · March 31, 2026 · 44% similar
- Google's 200M-parameter time-series foundation model with 16k context codepawl · 22 pts · March 31, 2026 · 44% similar
Discussion Highlights (7 comments)
jnstrdm05
I have been waiting for this! Nice
Dansvidania
Can’t check it out yet, but the concept alone sounds great. Thank you for sharing.
mehdibl
Ollama is quite a bad example here. Despite its popularity, it's a simple wrapper, and it's increasingly being pushed aside by the app it wraps, llama.cpp. I don't understand the parallel here.
tl2do
Since generative AI exploded, it's all anyone talks about. But traditional ML still covers a vast space in real-world production systems. I don't need this tool right now, but glad to see work in this area.
brokensegue
"classical ML" models typically have a more narrow range of applicability. in my mind the value of ollama is that you can easily download and swap-out different models with the same API. many of the models will be roughly interchangeable with tradeoffs you can compute. if you're working on a fraud problem an open-source fraud model will probably be useless (if it even could exist). and if you own the entire training to inference pipeline i'm not sure what this offers? i guess you can easily swap the backends? maybe for ensembling?
rudhdb773b
If the focus is performance, why use a separate process and have to deal with data serialization overhead? Why not a typical shared library that can be loaded in Python, R, Julia, etc., and run on large data sets without even a memory copy?
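For context, a minimal sketch of the shared-library route the commenter describes, via ctypes: a NumPy array's buffer is handed to native code by pointer, so no serialization or copy happens. The library name (libtimber.so) and the timber_predict signature are assumptions for illustration, not Timber's actual ABI.

```python
# Hypothetical sketch: loading an inference shared library with ctypes and
# passing NumPy buffers by pointer, so there is no serialization and no
# memory copy. libtimber.so and timber_predict are assumed names, not the
# project's real interface.
import ctypes

import numpy as np

lib = ctypes.CDLL("./libtimber.so")  # assumption: such a library exists
lib.timber_predict.argtypes = [
    ctypes.POINTER(ctypes.c_float),  # row-major feature matrix
    ctypes.c_size_t,                 # number of rows
    ctypes.c_size_t,                 # number of columns
    ctypes.POINTER(ctypes.c_float),  # output buffer, one score per row
]
lib.timber_predict.restype = ctypes.c_int

X = np.ascontiguousarray(np.random.rand(10_000, 32), dtype=np.float32)
out = np.empty(X.shape[0], dtype=np.float32)

# The arrays' existing buffers are handed to native code directly; no copy.
status = lib.timber_predict(
    X.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
    X.shape[0],
    X.shape[1],
    out.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
)
assert status == 0
```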
o10449366
Can you tell us more about the motivation for this project? I'm very curious whether it was driven by a specific use case. I know there are specialized trading firms that have built projects like this, but most industry workflows I know of still involve data pipelines where scientists do intermediate data transformations before feeding them into these models. Even the C-backed libraries like numpy/pandas still explicitly depend on the CPython API and can't be compiled away, and this data-feed step tends to be the bottleneck in my experience.

That isn't to say this isn't a worthy project - I've explored similar initiatives myself - but my conclusion was that unless your data source is pre-configured to feed directly into your specific model without any intermediate transformation steps, optimizing the inference time has marginal benefit in the overall pipeline. I lament this as an engineer who loves making things go fast but has to work with scientists who love the convenience of Jupyter notebooks and the numpy/pandas APIs.
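One way to sanity-check that claim on a given workload is to time the feature-preparation step against the model call before optimizing either; a rough sketch, with synthetic data and a scikit-learn model standing in for a real pipeline:

```python
# Illustrative profiling sketch: measure the pandas transform step against
# model inference to see which dominates in a given pipeline. The data and
# model are synthetic stand-ins; real numbers will vary by workload.
import time

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200_000, 20)),
                  columns=[f"f{i}" for i in range(20)])

# Train a small model on a subset so the example is self-contained.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(df.head(5_000), rng.integers(0, 2, size=5_000))

t0 = time.perf_counter()
# The intermediate transformations a scientist might run before inference.
feats = (df - df.mean()) / df.std()
t1 = time.perf_counter()
model.predict(feats)
t2 = time.perf_counter()

print(f"transform: {t1 - t0:.3f}s  inference: {t2 - t1:.3f}s")
```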