An ode to bzip

signa11 112 points 67 comments March 14, 2026
purplesyringa.moe · View on Hacker News

Discussion Highlights (13 comments)

elophanto_agent

bzip2 is the compression algorithm equivalent of that one coworker who does incredible work but nobody ever talks about. meanwhile gzip gets all the credit because it's "good enough"

joecool1029

Just use zstd unless you absolutely need to save a tiny bit more space. bzip2 and xz are extremely slow to compress.

saghm

Early on, the article mentions that xz and zstd have gotten more popular than bzip, and my admittedly naive understanding is that they're considered to have better tradeoffs in terms of compression time and overall space saved. The performance section heavily discusses the encoding performance of gzip and bzip, but unless I'm missing something, the only references to xz or zstd in that section briefly handwave about the decoding times probably being similar. My impression is that this article has a lot of technical insight into how bzip compares to gzip, but it fails to actually account for the real cause of bzip's diminished popularity relative to the non-gzip alternatives that it admits have been the more popular choices in recent years.

hexxagone

Notice that bzip3 has close to nothing to do with bzip2. It is a different BWT implementation with a different entropy codec, from a different author (as noted in the GitHub description "better and stronger spiritual successor to BZip2").

fl0ki

This seems as good a thread as any to mention that the gzhttp package in klauspost/compress for Go now supports zstd on both server handlers and client transports. Strangely this was added in a patch version instead of a minor version despite both expanding the API surface and changing default behavior. https://github.com/klauspost/compress/releases/tag/v1.18.4

pella

imho: the future is a specialized compressor optimized for your specific format. ( https://openzl.org/ , ... )

Grom_PE

PPMd (of 7-Zip) would beat BZip2 for compressing plain text data.

idoubtit

My experience does not match theirs when compressing text and code:

> bzip might be suboptimal as a general-purpose compression format, but it’s great for text and code. One might even say the b in bzip stands for “best”.

I've just checked again with a 1GB SQL file. `bzip2 -9` shrinks it to 83MB; `zstd -19 --long` to 52MB. Others have compressed the Linux kernel and found that bzip2's output is about 15% larger than zstd's.

cobbzilla

My first exposure to bzip: The first Linux kernels I ever compiled & built myself (iirc ~v2.0.x), I packed as .tar.bz2 images. Ah the memories. Yes, there are better compression options today.

vintermann

If you're implementing it for Computercraft anyway, there's no reason to stick to the standard. It's well known that bzip2 has a couple of extra steps which don't improve compression ratio at all. I suggest implementing Scott's Bijective Burrows-Wheeler variant on bits rather than bytes, and do bijective run-length encoding of the resulting string. It's not exactly on the "pareto frontier", but it's fun!

thesz

BWT is prediction by partial match (PPM) in disguise. Consider the sorted rotations of "bananarama":

    abananaram
    amabananar
    ananaramab
    anaramaban
    aramabanan
    bananarama
    mabananara
    nanaramaba
    naramabana
    ramabanana

The last symbol on each line gets its context from the first symbols of the same line; that's what the rotation gives you. But because of the sorting, contexts are not contiguous for the (last) predicted character, and long-range dependencies are broken. That breaking of long dependencies is why MTF, which implicitly transforms raw symbol statistics into something like Zipfian statistics [1], encodes BWT's output well. [1] https://en.wikipedia.org/wiki/Zipf%27s_law Given that, the author may find PPM*-based compressors to perform better compression-wise. The Large Text Compression Benchmark [2] tells us exactly that: some "durilka-bububu" compressor that uses PPM fares better than BWT, by almost a third.
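The rotation-table construction above is easy to demonstrate. Here is a toy sketch of BWT-via-sorted-rotations plus move-to-front in Python; real bzip2 works differently (suffix sorting, an end-of-block marker, RLE, and Huffman coding after MTF), so this only illustrates the structure of the transform, not the production algorithm.

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler transform via the naive sorted-rotations table.

    Returns the last column of the sorted rotation matrix. (A real
    implementation would also record the original row index, or use a
    sentinel symbol, so the transform can be inverted.)
    """
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)


def mtf(s: str) -> list[int]:
    """Move-to-front coding: runs of equal symbols become runs of zeros."""
    alphabet = sorted(set(s))
    out = []
    for c in s:
        i = alphabet.index(c)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))  # move the symbol to the front
    return out


transformed = bwt("bananarama")
print(transformed)        # mrbnnaaaaa
print(mtf(transformed))   # [2, 4, 3, 4, 0, 4, 0, 0, 0, 0]
```

Note how the "aaaaa" tail of the transform turns into a run of zeros after MTF, which is exactly the skew that a Zipfian-style entropy coder exploits.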

eichin

Interesting detail on the algorithm, but it seems to completely miss that if you care about non-streaming performance, there are parallel versions of xz and gzip (pixz encodes compatible metadata about the break-up points, so that while stock xz can still decompress the output, pixz can use as many cores as you let it have). Great for disk-image OS installers (the reason I was benchmarking it in the first place), though this was about 5 years back, and I don't know whether those improvements have been upstreamed.

ryan14975

One underappreciated reason DEFLATE/gzip persists: it's streaming-friendly. You can emit compressed bytes incrementally without buffering the entire input. BWT-based algorithms (bzip2) need the whole block in memory before the transform can run. This matters a lot for formats like ZIP, where the archive is constructed on-the-fly. You can pipe data through DEFLATE and compute CRC32 as you go, writing headers retroactively via Data Descriptors. Try that with bzip2 and you're stuck buffering 900KB blocks before you can output anything. zstd threading and framing are a step forward here, but DEFLATE's simplicity and tiny memory footprint (~32KB window) make it hard to kill in constrained or streaming contexts.
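The incremental-emission property described above is easy to see with Python's standard `zlib` module; this is an illustrative sketch, not ZIP-writing code, and the chunk contents are made up. `wbits=31` selects gzip framing, and the CRC32 is folded in chunk by chunk, just as a streaming ZIP writer would do before emitting a data descriptor.

```python
import zlib

# Stream-compress chunks without ever buffering the whole input.
comp = zlib.compressobj(level=6, wbits=31)  # wbits=31 -> gzip header/trailer
crc = 0
out = []
for chunk in [b"hello ", b"world ", b"streaming"]:
    crc = zlib.crc32(chunk, crc)        # running CRC32, updated per chunk
    out.append(comp.compress(chunk))    # may emit compressed bytes right away
out.append(comp.flush())                # emit whatever DEFLATE still holds
data = b"".join(out)

# The stream round-trips, and the running CRC matches a one-shot CRC.
assert zlib.decompress(data, wbits=31) == b"hello world streaming"
assert crc == zlib.crc32(b"hello world streaming")
```

A bzip2 equivalent can't emit anything for a block until the whole block (up to 900KB) has been read, because the BWT needs the complete block before it can sort.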
