Jennifer Aniston and Friends Cost Us 377GB and Broke Ext4 Hardlinks

speckx 45 points 28 comments April 10, 2026
blog.discourse.org

Discussion Highlights (8 comments)

replooda

In short: Deduplication efforts frustrated by hardlink limits per inode — and a solution compatible with different file systems.
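For context, a minimal sketch of the pattern the summary describes, not Discourse's actual code: replace identical files with hard links to one canonical copy, and fall back to a fresh canonical file whenever the filesystem's per-inode link limit is hit (ext4 caps it at 65,000 links per inode; names and paths below are illustrative).

```python
import errno
import os

def dedupe(duplicates: list[str], canonical: str) -> None:
    """Point every path in `duplicates` at `canonical` via hard links."""
    for path in duplicates:
        tmp = path + ".tmp"
        try:
            os.link(canonical, tmp)   # raises EMLINK once the link count maxes out
        except OSError as e:
            if e.errno != errno.EMLINK:
                raise
            # Link budget exhausted: this duplicate already holds identical
            # content, so leave it in place and make it the new canonical
            # file for the remaining paths.
            canonical = path
            continue
        os.replace(tmp, path)         # atomically swap the duplicate for the link
```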

dj_rock

We were on a break...of your filesystem!

bravetraveler

As is always the case, short vs. long term... but I think I'd put effort into migrating to a filesystem that is aware of duplication instead of trying to recreate one with links [while retaining duplicates, just fewer]. Effectiveness is debatable; this approach still has duplication. An insignificant amount, I'll admit. The filesystem handling this at the block level is probably less problematic, less prone to rework, and more efficient. edit: Eh, ignore me. I see this is preparing for [whatever filesystem hosts chose], thanks to 'ameliaquining' below. I originally thought this was all Discourse-proper, processing data they had.
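A rough illustration of the block-level route this comment points at, assuming a filesystem with reflink support (btrfs, or XFS created with reflink=1): a reflink copy gets its own inode but shares data extents with the source, so duplicates cost almost no extra blocks and there is no link-count ceiling to hit. Paths are hypothetical.

```python
import subprocess

def reflink_copy(src: str, dst: str) -> None:
    # GNU cp asks the kernel to clone extents; with --reflink=always it fails
    # outright on filesystems that can't share blocks, rather than silently
    # falling back to a full copy.
    subprocess.run(["cp", "--reflink=always", src, dst], check=True)

reflink_copy("/uploads/aniston.jpg", "/uploads/copy-000001.jpg")
```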

uticus

And I thought this was a reference to a Win95 problem https://www.slashgear.com/1414245/jennifer-aniston-matthew-p...

UltraSane

This makes them look rather incompetent. Storing the exact same file 246,173 times is just stupid. Dedupe at the filesystem level and make your life easier.

trixn86

The Problem. The fix. The Limit. Is it just me, or is everyone else just as fed up with the same AI tropes? I've reached the point where I close the tab the moment I read a headline like "The problem". At least use tropes.fyi, please.

otterley

Another reason to use XFS -- it doesn't have per-inode hard link limits. (Some say ZFS as well, but it's not nearly as easy to use, and its license is still not GPL-friendly.)
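To see the ceiling this comment refers to, here is a small hedged probe (the scratch path is hypothetical): keep hard-linking one file until the kernel returns EMLINK. On ext4 this stops at 65,000 links per inode; XFS allows on the order of 2^31 - 1, so in practice you never reach it.

```python
import errno
import os
import tempfile

def max_links(directory: str, ceiling: int = 70_000) -> int:
    """Hard-link one file repeatedly until the filesystem refuses (EMLINK)."""
    fd, target = tempfile.mkstemp(dir=directory)
    os.close(fd)
    count = 1  # the original name is the first link
    try:
        for i in range(ceiling):
            try:
                os.link(target, os.path.join(directory, f"probe-{i}"))
                count += 1
            except OSError as e:
                if e.errno == errno.EMLINK:
                    break
                raise
    finally:
        for i in range(count - 1):
            os.unlink(os.path.join(directory, f"probe-{i}"))
        os.unlink(target)
    return count

print(max_links("/mnt/scratch"))  # prints 65000 on ext4; hits the ceiling on XFS
```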

niobe

Completely Claude-written, FWIW. I recognise the style.
