Show HN: 1-Bit Bonsai, the First Commercially Viable 1-Bit LLMs

PrismML 182 points 80 comments March 31, 2026
prismml.com

Discussion Highlights (20 comments)

yodon

Is Bonsai 1-bit or 1.58-bit?

stogot

What is the value of a 1-bit model, for those of us that do not know?

syntaxing

Super interesting. Building their llama.cpp fork on my Jetson Orin Nano to test this out.

alyxya

I expect large machine learning models to trend toward bits rather than floats. Floats carry a lot of inefficiency: weights are typically something like normally distributed, with most values clustered in a small range, which makes storing and computing them at full float precision wasteful. Neural networks may be rooted in real-valued functions, which we simulate with floats, but float operations are just bitwise operations underneath. The main obstacles are that GPUs are built to operate on floats and standard ML theory works over the reals.
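A quick aside to make the clustering point concrete (a standard calculation, not anything specific to Bonsai): if weights are roughly Gaussian with standard deviation $\sigma$, the best sign-only quantizer with one shared scale has a closed form:

$$
\alpha^{*} = \arg\min_{\alpha} \mathbb{E}\big[(w - \alpha \operatorname{sign}(w))^{2}\big] = \mathbb{E}[|w|] = \sigma\sqrt{2/\pi} \approx 0.80\,\sigma,
$$

with residual error $\sigma^{2}(1 - 2/\pi) \approx 0.36\,\sigma^{2}$. One sign bit plus a shared scale already captures about two thirds of the variance of a tightly clustered distribution, which is why extreme quantization is less lossy than intuition suggests.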

OutOfHere

How do I run this on Android?

Archit3ch

Doesn't Jevons paradox dictate larger 1-bit models?

_fw

What’s the trade-off? If it’s smaller, faster, and more efficient, is the performance worse? A layman here, curious to know.

jjcm

1 bit with an FP16 scale factor every 128 bits. Fascinating that this works so well.

I tried a few things with it. Got it driving Cursor, which in itself was impressive: it handled some tool usage. Via Cursor I had it generate a few web page tests. On a Monte Carlo simulation of pi, it got the logic correct but failed to build an interface to start the test. Requesting changes mostly worked, but it left behind some stray symbols that caused things to fail and required a bit of manual editing. Tried a Simon Willison pelican as well; very abstract, not recognizable at all as a bird or a bicycle. Pictures of the results here: https://x.com/pwnies/status/2039122871604441213

There doesn't seem to be a demo link on their webpage, so here's llama.cpp running it on my local desktop if people want to try it out. I'll keep this running for a couple hours past this post: https://unfarmable-overaffirmatively-euclid.ngrok-free.dev
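For anyone wondering what that format implies mechanically, here is a minimal sketch of group-wise 1-bit quantization along the lines jjcm describes: one sign bit per weight plus one shared scale per 128-weight group. The mean-absolute-value scale and the exact layout are assumptions on my part, not the fork's actual Q1_0_g128 format.

```
#include <cmath>
#include <cstdint>

// Hypothetical sketch of group-wise 1-bit quantization: each weight keeps
// only its sign, and each group of 128 weights shares one scale. The scale
// choice (mean absolute value) and the layout are assumptions, not the
// real Q1_0_g128 format.
struct Q1Group {
    float scale;            // shared scale (FP16 in the described format)
    uint8_t bits[128 / 8];  // 1 sign bit per weight, 8 weights per byte
};

Q1Group quantize_group(const float* w) {
    Q1Group g{0.0f, {}};
    for (int i = 0; i < 128; ++i) g.scale += std::fabs(w[i]);
    g.scale /= 128.0f;  // mean |w| minimizes the group's squared error
    for (int i = 0; i < 128; ++i)
        if (w[i] >= 0.0f) g.bits[i / 8] |= uint8_t(1u << (i % 8));
    return g;
}

void dequantize_group(const Q1Group& g, float* out) {
    for (int i = 0; i < 128; ++i) {
        bool positive = g.bits[i / 8] & (1u << (i % 8));
        out[i] = positive ? g.scale : -g.scale;  // reconstruct as ±scale
    }
}
```

At this layout the cost is 128 sign bits plus one 16-bit scale per group, i.e. 144/128 = 1.125 bits per weight, which is in the right ballpark for the 1.15 GB reported for the 8B model.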

hatthew

I feel like it's a little disingenuous to compare against full-precision models. Anyone concerned about model size and memory usage is surely already using at least an 8 bit quantization. Their main contribution seems to be hyperparameter tuning, and they don't compare against other quantization techniques of any sort.

keyle

Extremely cool! Can't wait to give it a spin with ollama. If ollama listed it as a model, that would be helpful.

ariwilson

Very cool and works pretty well!

marak830

It's been a hell of a morning for llama heads: first this, then the Claude drop and turboquant. I'm currently setting this one up; if it works well with a custom LoRA on top, I'll be able to run two at once for my custom memory management system :D

bilsbie

I can’t see how this is possible. You’re losing so much information.

zephyrwhimsy

Cursor and similar AI-native IDEs are interesting not because of the AI itself, but because they demonstrate that the IDE paradigm is not settled. There is room for fundamental rethinking of how developers interact with codebases.

wild_egg

Don't have a GPU, so I tried the CPU option and got 0.6 t/s on my old 2018 laptop using their llama.cpp fork. Then I found out they hadn't implemented AVX2 for their Q1_0_g128 CPU kernel. Added that and I'm getting ~12 t/s, which isn't shabby for this old machine. Cool model.
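For the curious, here is a rough sketch of the shape such an AVX2 path could take: a dot product of float activations against packed sign bits with a per-group scale. This is a guess at the general structure under the same hypothetical layout as the sketch above, not wild_egg's actual patch or the fork's real kernel.

```
#include <immintrin.h>
#include <cstdint>

// Hypothetical AVX2 dot product for 1-bit weights: 8 weights per byte
// (bit set = +1, bit clear = -1), one float scale per 128 weights.
// Assumes n is a multiple of 128; compile with -mavx2 -mfma.
float dot_q1_avx2(const float* x, const uint8_t* wbits,
                  const float* scales, int n) {
    const __m256i lane_bit = _mm256_setr_epi32(1, 2, 4, 8, 16, 32, 64, 128);
    const __m256 sign_flip = _mm256_set1_ps(-0.0f);
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8) {
        // Broadcast this byte's 8 sign bits and test one bit per lane.
        __m256i b = _mm256_set1_epi32(wbits[i / 8]);
        __m256i on =
            _mm256_cmpeq_epi32(_mm256_and_si256(b, lane_bit), lane_bit);
        __m256 xv = _mm256_loadu_ps(x + i);
        __m256 neg = _mm256_xor_ps(xv, sign_flip);  // -x
        // Keep x where the bit is set (+1 weight), -x where it is clear.
        __m256 sx = _mm256_blendv_ps(neg, xv, _mm256_castsi256_ps(on));
        acc = _mm256_fmadd_ps(_mm256_set1_ps(scales[i / 128]), sx, acc);
    }
    // Horizontal sum of the 8 accumulator lanes.
    __m128 lo = _mm256_castps256_ps128(acc);
    __m128 hi = _mm256_extractf128_ps(acc, 1);
    __m128 s = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}
```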

andai

Does anyone know how to run this on CPU? Do I need to build their llama.cpp fork from source? They only seem to offer CUDA builds on the release page, which I think might support a CPU mode but refuse to even run without CUDA installed. Seems a bit odd to me; I thought the whole point was supporting low-end devices!

Edit: 30 minutes of C++ compile time later, I got it running, although it uses 7 GB of RAM and then hangs at "Loading model". I thought this thing was less memory hungry than 4-bit quants?

Edit 2: Got the 4B version running, but at 0.1 tok/s, and the output seemed to be nonsensical. For comparison, on the same machine I can run the Qwen 3.5 4B model (at a 4-bit quant) correctly and about 50x faster.

plombe

Interesting post. Curious how they arrived at intelligence density = negative log of the model's error rate, divided by the model size.
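One plausible reading of that metric, assuming a natural log, an error rate $\varepsilon$ measured as a benchmark failure fraction, and model size $S$ in GB (PrismML may define all three differently):

$$
D = \frac{-\log \varepsilon}{S}
$$

Under that reading, a model with a 20% error rate in 1.15 GB scores $D = -\ln(0.2)/1.15 \approx 1.4$ per GB, and halving the error rate at a fixed size adds a constant $\ln 2 / S$, so the metric rewards accuracy gains and size reductions on the same axis.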

kent8192

Oh boy, this otherwise good tool hates my LM Studio... The following message appears when I run Bonsai in LM Studio. I think something is wrong in my settings.

```
Failed to load the model
Error loading model. (Exit code: null). Please check the settings and try loading the model again.
```

drob518

I’m really curious how this scales up. Bonsai delivers an 8B model in 1.15 GB. How large would a 27B or 35B model be? Would they still retain the accuracy of those larger models? If the scaling holds, we could see 100+B models in 64 GB of RAM.
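Back-of-the-envelope, assuming weight memory scales linearly at the same effective bits per parameter (ignoring KV cache, activations, and any unquantized layers):

$$
\frac{1.15\ \text{GB} \times 8\ \text{bits/byte}}{8 \times 10^{9}\ \text{params}} \approx 1.15\ \text{bits/param},
$$

which would put a 27B model near 3.9 GB, a 35B near 5.0 GB, and a 100B near 14.4 GB of weights, comfortably inside 64 GB of RAM.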

andai

The site says 14x less memory usage. I'm a bit confused about that claim: the model file is indeed very small, but on my machine it used roughly the same RAM as 4-bit quants (on CPU). Though I couldn't get actual English output from it, so maybe something went wrong while running it.
