We sped up bun by 100x

sdan · 53 points · 57 comments · April 02, 2026
vers.sh · View on Hacker News

Discussion Highlights (19 comments)

homarp

To discuss the git implementation with a different git client: https://news.ycombinator.com/item?id=47618895

Night_Thastus

Aside: That font is really hard on my eyes. Anyone else?

hvenev

This blog post calls libgit2 "git's C library" as if it is in any way related to git. I don't think it is.

mfitton

The "What it cost" section doesn't actually say what it cost. I wonder what models they used. Napkin math of Opus for everything (probably not true) with no caching suggests $67,000. Cool article though!
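
The commenter's ballpark can be reproduced with purely hypothetical numbers. A minimal sketch, assuming Claude Opus list pricing of roughly $15 per million input tokens and $75 per million output tokens; the token volumes are invented here just to land near the quoted $67,000:

```python
# Hypothetical napkin math; token volumes are invented, pricing is an assumption.
INPUT_PRICE_PER_M = 15.0    # assumed $ per million input tokens (Opus list rate)
OUTPUT_PRICE_PER_M = 75.0   # assumed $ per million output tokens

input_tokens_m = 4_000      # 4B input tokens -- pure assumption
output_tokens_m = 90        # 90M output tokens -- pure assumption

cost = input_tokens_m * INPUT_PRICE_PER_M + output_tokens_m * OUTPUT_PRICE_PER_M
print(f"estimated cost: ${cost:,.0f}")  # lands near the commenter's $67k ballpark
```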

sc68cal

So, they implemented a git client in Zig that had some significant speedups for their use case. However:

> The git CLI test suite consists of 21,329 individual assertions for various git subcommands (that way we can be certain ziggit does suffice as a drop-in replacement for git).

<snip>

> While we only got through part of the overall test suite, that's still the equivalent of a month's worth of straight developer work (again, without sleep or eating factored in).
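
The quoted month-of-work figure roughly checks out under a hedged assumption of about two developer-minutes per assertion (a number not given in the article):

```python
# Back-of-envelope check of "a month's worth of straight developer work".
# The 2-minutes-per-assertion figure is an assumption, not from the article.
assertions = 21_329
minutes_each = 2
hours = assertions * minutes_each / 60
days_nonstop = hours / 24
print(f"~{hours:.0f} hours ≈ {days_nonstop:.1f} days, with no sleep or eating")
```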

redoh

100x is a bold claim, but the Zig approach to optimizing hot paths in Bun makes a lot of sense. There is so much low-hanging fruit when you actually dig into how package managers interact with git under the hood. Nice writeup; the before/after benchmarks are convincing.

nightpool

Seems like they actually sped bun up ~1x:

> When evaluating the complete bun install improvements, it came out speed-wise to about the same as the existing git usage (due to networking being the big bottleneck time-wise, despite more cases being slightly faster with ziggit over multiple benchmarks). Except, it's done in 100% zig and those internal improvements pile up as projects consist of more git dependencies.

All in all, it seems like a sensible upstream contribution. So you have to maintain a completely separate git implementation and keep it up to date with upstream git, all for the benefit of being indistinguishable on benchmarks. Oh well!
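
The "sped bun up ~1x" observation is just Amdahl's law: when networking dominates, even a huge local speedup barely moves the end-to-end time. A sketch with an illustrative (not measured) 90/10 network/git split:

```python
# Amdahl's law: overall speedup when only a fraction of the work gets faster.
# The 10% git-time share is illustrative, not a figure from the article.
def overall_speedup(accelerated_fraction, local_speedup):
    remaining = (1 - accelerated_fraction) + accelerated_fraction / local_speedup
    return 1 / remaining

s = overall_speedup(0.10, 100)   # 10% of install time in git ops, made 100x faster
print(f"{s:.2f}x end-to-end")    # ~1.11x overall, despite the 100x local win
```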

joaohaas

With the recent barrage of AI-slop 'speedup' posts, the first thing I always do to see if a post is worth a read is Ctrl+F "benchmark" and check whether the benchmark makes any fucking sense. 99% of the time (such as in this article), it doesn't.

What do you mean 'cloneBare + findCommit + checkout: ~10x win'? Does that mean running those commands back to back results in a 10x win over the original? Does that mean there's a specific function that calls these 3 operations, and that's the improvement of the overall function? What's the baseline we're talking about, and is it relevant at all?

Those questions are partially answered on the much better benchmark page[1], but for some reason they're using the CLI instead of the git library for comparisons.

[1] https://github.com/hdresearch/ziggit/blob/5d3deb361f03d4aefe...
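
For contrast, an unambiguous benchmark names its baseline and times each operation separately. A minimal harness sketch; the two workloads are stand-in functions, not git or ziggit:

```python
import time

def bench(label, fn, repeats=5):
    """Best wall-clock time over several runs, printed with its label."""
    best = min(_timed(fn) for _ in range(repeats))
    print(f"{label}: {best * 1e3:.2f} ms")
    return best

def _timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

# Stand-in workloads -- purely illustrative, not the article's operations.
baseline = lambda: sum(i * i for i in range(200_000))
candidate = lambda: sum(i * i for i in range(20_000))

b = bench("baseline (stand-in 'git CLI')", baseline)
c = bench("candidate (stand-in 'library call')", candidate)
print(f"candidate vs named baseline: {b / c:.1f}x")
```

Reporting the baseline's own number alongside the ratio is what makes a claim like "~10x win" checkable.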

butz

How does bun compare with upcoming Vite+?

jedisct1

Zig is a well-kept secret for writing highly efficient WebAssembly modules.

moralestapia

> it becomes possible to see upward of 100x speedups for some git operations.

They really stretch the limits of an honest title there.

flykespice

AI slop with the usual hallucinated, unrealistic speedup claims. Yawn; immediate skip.

cwillu

I think we might be getting to the point where submissions for projects that are primarily written by AI and/or AI agents need to be tagged with [agent] in the title.

TimTheTinker

These "AI rewrite" projects are beginning to grate on me. Sure, if you have a complete test suite for a library or CLI tool, it is possible to prompt Claude Opus 4.6 such that it creates a 100% passing, "more performant", drop-in replacement. However, if the original package is in its training data, it's very likely to plagiarize the original source.

Also, who actually wants to use or maintain a large project that no one understands and that doesn't have a documented history of thoughtful architectural decisions and the context behind them? No matter how tightly you structure AI work, probabilistic LLM logorrhea cannot reliably adopt or make high-level decisions/principles, apply them, or update them as new data arrives. If you think otherwise, you're believing an illusion - truly.

A large software project's source code and documentation are the empirical ground-truth encoding of a ton of decisions made by many individuals and teams -- decisions that need to be remembered, understood, and reconsidered in light of new information. AI has no ability to consider these types of decisions and their accompanying context, whether they are past, present, or future -- and is not really able to coherently communicate them in a way that can be trusted to be accurate.

That's why I can't and won't trust fully AI-written software beyond small one-off-type tools until AI gains two fundamentally new capabilities: (1) logical reasoning that can weigh tradeoffs and make accountable decisions in terms of ground-truth principles accurately applied to present circumstances, and (2) the ability to update those ground-truth principles coherently and accurately based on new, experiential information -- this is real "learning".

carterschonwald

I'm pretty stoked about the LLM harness they're using, because I wrote all the code that's not monopi code in that fork! Despite its paucity of features, the changes I landed in it from my design notes have actually been so smooth in terms of comparative UX / LLM behavior that it's been my daily driver since I stood it up. Previously, since early December, I've had to run a patch script on every update of Claude Code to make it stop undermining me. I didn't need a hilarious code leak to find the problematic strings in the minified JS ;) I regard punkin-pi as a first stab at translating ideas I've had over the past 6 months for reliable LLM harnesses. I hit some walls in the mono pi architecture in terms of doing much more improvement with mono pi, so I'm working on the next gen of agent harnesses! Stay tuned!

CodeCompost

Is this more vibe-coded garbage?

mpalmer

The title is obviously dishonest. I do not hesitate to call it a lie. The post is also not about the speed increase; it's about how proud this team is of their agent orchestration scheme.

As I understand it, there is really no speed difference at all between Zig and C, just some cognitive overhead associated with doing things "right" in C. It's all machine code at bottom. So why is this rewrite faster? Why did the authors choose Zig? How has the logic or memory management changed? The authors give us absolutely no insight whatsoever into the Zig code. No indication that they know anything about Zig, or systems programming, at all. I wish this was an exaggeration.

And really, with all this agentic power at your fingertips, why wouldn't they just contribute these improvements to git itself? I can think of at least one reason: they don't want their changes to be rejected as unhelpful or low-quality.

4b11b4

So uh.. about that sentence where it says you only pass part of the test suite...

queenkjuul

Long-ass LLM write-up that repeats itself several times; I couldn't finish reading. Big if true, etc.
