The human.json Protocol

todsacerdoti · 25 points · 14 comments · March 08, 2026
codeberg.org

Discussion Highlights (10 comments)

GaryBluto

> human.json is a lightweight protocol for humans to assert authorship of their site content and vouch for the humanity of others. It uses URL ownership as identity, and trust propagates through a crawlable web of vouches between sites.

This will not (and shouldn't) be used by more than a handful of people who were likely already friends anyway. I can't see it being helpful for anybody (unless accidentally visiting LLM blogspam melts your face à la Raiders of the Lost Ark), unless its true intention is signalling to other people who don't like LLMs that you don't like LLMs.
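The spec itself isn't quoted in the thread, but going by the description above, a site's human.json might look roughly like this. This is a guess at the shape, not the actual schema; every field name here is an assumption:

```json
{
  "name": "Alice Example",
  "url": "https://alice.example",
  "statement": "Everything on this site was written by a human.",
  "vouches": [
    "https://bob.example",
    "https://carol.example"
  ]
}
```

The "vouches" array is what would make the trust graph crawlable: each entry points at another site whose own human.json can in turn be fetched and followed.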

philippz

Reminds me a bit of FOAF https://en.wikipedia.org/wiki/FOAF

orsorna

Too bad they didn't choose a more human interchange format...

ai-psychopath

50 commits in 24 hours. It's hilarious that the human.json protocol to fight AI slop is itself AI slop.

deafpolygon

Virtue signaling at best; noise at worst… It’s trivial for an AI to add, and it will be added by anyone hoping to get a piece of that attention economy…

semyonsh

Something tells me GPG would be great for this concept, but it's probably not as accessible as just getting people to paste a JSON somewhere.
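A minimal sketch of the pairing semyonsh hints at, assuming the author already has a GPG key published somewhere people trust; none of this is part of the human.json spec:

```python
# Hypothetical: sign the posted human.json with an existing GPG key so
# the authorship claim is cryptographically verifiable, not just stated.
import subprocess

# Writes an armored detached signature to human.json.asc.
subprocess.run(["gpg", "--armor", "--detach-sign", "human.json"], check=True)

# Anyone with the author's public key can then verify the file.
subprocess.run(["gpg", "--verify", "human.json.asc", "human.json"], check=True)
```

A detached signature leaves the JSON itself untouched and copy-pasteable, which is about the only concession available to the accessibility problem semyonsh raises.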

castral

I think I saw Gaius Baltar implement this on Battlestar Galactica. It went well. /s

Honestly, it seems more like a protocol for encoding a popularity contest, which is already what social media signalling does. How do you defend against self-reinforcing botnets, or against bad actors "cancelling" other people? I can dilute your human signal by creating massive amounts of LLM-generated noise.

alsetmusic

If nothing else, this at least inspired me to put a disclaimer on my own site declaring my AI policy. It's nothing fancy, but I think it's a good deal more credible than any formal protocol.

evolve2k

I’m a bit concerned that the content of human.json will itself get mopped up by AI crawlers.

petterroea

If you have to perform a breadth-first search from your "seed" to verify a website, wouldn't every lookup become expensive relatively quickly, unless max hops is set really low? I'd assume you really need mass adoption for five degrees of separation to kick in, and that's still a lot of sites to crawl!
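To make the cost concrete, here is a minimal sketch of the lookup petterroea describes, with the vouch graph stubbed as an in-memory dict. In the real protocol each expansion would mean fetching that site's human.json over HTTP; the field names and the traversal itself are assumptions, not the published spec.

```python
from collections import deque

# Toy vouch graph: site -> sites its human.json vouches for.
# In practice, every expansion below would be an HTTP fetch.
VOUCHES = {
    "https://alice.example": ["https://bob.example", "https://carol.example"],
    "https://bob.example": ["https://dave.example"],
    "https://carol.example": [],
    "https://dave.example": [],
}

def is_vouched(seed: str, target: str, max_hops: int = 3) -> bool:
    """Breadth-first search from a trusted seed toward a target site.

    Each level multiplies the frontier by the average vouch count, so
    the number of fetches grows roughly as branching ** max_hops.
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    fetches = 0
    while frontier:
        site, depth = frontier.popleft()
        if site == target:
            print(f"verified in {depth} hops after {fetches} fetches")
            return True
        if depth == max_hops:
            continue  # hop limit reached; stop expanding this branch
        fetches += 1  # one human.json download per expanded site
        for neighbour in VOUCHES.get(site, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return False

print(is_vouched("https://alice.example", "https://dave.example"))
```

With branching factor b and hop limit h, a cold lookup touches on the order of b ** h sites, which is why a low max-hops setting (or caching of already-fetched files) would matter long before five degrees of separation does.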
