A lawyer won Anthropic's hackathon – what everyone missed
idrdex
12 points
5 comments
March 25, 2026
Related Discussions
Found 5 related stories in 53.8ms across 3,471 title embeddings via pgvector HNSW
- Anthropic takes legal action against OpenCode _squared_ · 409 pts · March 19, 2026 · 57% similar
- Anthropic wins preliminary injunction in DoD fight on 1A m-hodges · 13 pts · March 26, 2026 · 56% similar
- Anthropic sues US defense department over blacklisting sideway · 14 pts · March 09, 2026 · 53% similar
- Anthropic has strong case against Pentagon blacklisting, legal experts say tartoran · 41 pts · March 11, 2026 · 53% similar
- Inside Anthropic's Killer-Robot Dispute with The Pentagon eoskx · 13 pts · March 01, 2026 · 52% similar
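The similarity percentages above read like cosine similarities between title embeddings; the header says the site uses a pgvector HNSW index, which approximates exactly this nearest-neighbor search. A minimal sketch of the underlying ranking, using toy 3-dimensional vectors in place of real embedding-model output (all values hypothetical):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_related(query_vec, stories, top_k=5):
    # Brute-force nearest-neighbor ranking. An HNSW index (as in pgvector)
    # returns an approximate version of this ordering without scanning
    # every row, which is how a search over thousands of embeddings can
    # finish in tens of milliseconds.
    scored = [(title, cosine_similarity(query_vec, vec)) for title, vec in stories]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Toy embeddings standing in for real model output.
stories = [
    ("Anthropic takes legal action against OpenCode", [0.9, 0.1, 0.2]),
    ("Unrelated kernel release notes",                [0.0, 1.0, 0.0]),
]
query = [1.0, 0.0, 0.3]
ranked = rank_related(query, stories, top_k=1)
```

In production this comparison would run inside Postgres (pgvector's cosine-distance operator against an HNSW index) rather than in application code; the sketch only shows what the similarity scores measure.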
Discussion Highlights (5 comments)
idrdex
Author here. The blog argues that the real story from Anthropic's hackathon isn't that domain experts can build AI (they can) but that hackathon demos and production systems require fundamentally different things. A permit app that works on demo day and a permit system that survives when California revises the code, when the builder leaves, when a municipality asks for an audit trail — those are different problems. We're building a governance framework (CANONIC — CANONIC.org) where every AI capability is declared in a versioned contract. Curious what HN thinks about the gap between "domain expert can build" and "institution can trust what they built."
becomevocal
Simplifying here… this sounds essentially like the split between (great) product managers and engineers. We need both! Hurrah
roxolotl
> The hackathon winners understood something that most developers do not: the hard part of building useful AI is not the code, it is knowing what the system should do in the first place.

This has always been true of all systems. Not that it isn’t an insight, though, since I don’t think enough people seem to get it. To build a system, with an LLM or without, you must know what the system needs to do. Whether you define it in C or in a markdown file, it must still be defined. The advantage with LLMs is that they bridge the gap between defining a system and being able to simulate that system on a processor. The definition of the system is still required, and it still must be precise. Even with “AGI” that’s still going to be true, just as it’s true today with the humans who do the translation between those who deeply understand a system and the software.
toraway
This appears to be an advertisement for a (somewhat inscrutable) AI product they're selling called CANONIC, that also has a cryptocoin bolted on to it somehow.
g8oz
The AI editing of the article makes it a painful read. A shame because the point they were making is a good one, regarding AI coding apps empowering domain experts.