LLM research on Hacker News is drying up
dcastm
30 points
11 comments
April 24, 2026
Related Discussions
Found 5 related stories in 71.7ms across 5,498 title embeddings via pgvector HNSW
- Enabling Codex to Analyze Two Decades of Hacker News Data ronfriedhaber · 77 pts · April 02, 2026 · 57% similar
- Show HN: Hackerbrief – Top posts on Hacker News summarized daily p0u4a · 66 pts · March 16, 2026 · 55% similar
- Profiling Hacker News users based on their comments simonw · 60 pts · March 22, 2026 · 54% similar
- The Hacker News Tarpit latexr · 44 pts · April 07, 2026 · 53% similar
- LLMs can unmask pseudonymous users at scale with surprising accuracy Gagarin1917 · 42 pts · March 04, 2026 · 53% similar
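The "% similar" scores above are consistent with cosine similarity over title embeddings served by a pgvector HNSW index. A minimal sketch of how such a lookup might be expressed, assuming an illustrative `stories` table with a `title_embedding` vector column (names are assumptions, not from this page):

```python
import math

# Hypothetical pgvector query. '<=>' is pgvector's cosine-distance
# operator, which an HNSW index on the embedding column can answer
# approximately. Table and column names here are illustrative only.
RELATED_QUERY = """
SELECT title, author, points,
       1 - (title_embedding <=> %(query_embedding)s::vector) AS similarity
FROM stories
ORDER BY title_embedding <=> %(query_embedding)s::vector
LIMIT 5;
"""

def cosine_similarity(a, b):
    """Plain-Python equivalent of 1 - cosine distance."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

A similarity of 0.57 computed this way would then be displayed as "57% similar".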
Discussion Highlights (3 comments)
simonw
Hacker News isn't a great place to discuss papers generally. Having a productive discussion around a paper requires at least reading and understanding the abstract, and the most successful content on HN (sadly) is content where people can jump in with an opinion purely from reading the headline. Anyone know of any forums that are good for discussing papers?
gessha
With big commercial labs clamming up about training details, hardware requirements going up, and general fatigue around AI, that's not really surprising. ML research makes the front page only when it has splashy claims or results. Mundane cockroach papers that advance the field one nudge at a time aren't that interesting to the average reader. Cool to see the sentiment visualized, though.
latexr
> So I asked Claude (…)
> I asked Claude (…)
> So I asked him (…)
> (…) so I asked Claude (…)
> (…) so I asked Claude (…)
> Thanks, Claude.

There doesn't seem to be one bit of original research in the post, and no explanation of the data or conclusions. For example, what exactly does it mean that the papers "held up", and how exactly did Claude reach that conclusion? If you don't know, we can't trust the data. If you do know, it should be in the post. As it is, the post is almost devoid of information. Everything was "I asked Claude". There's no value here (aside from some saved tokens) above just crafting a prompt and saying "here, ask your favourite model this question".