Anthropic's Project Glasswing sounds necessary to me
simonw
42 points
7 comments
April 07, 2026
Related Discussions
Found 5 related stories in 58.0ms across 3,871 title embeddings via pgvector HNSW
- Project Glasswing: Securing critical software for the AI era Ryan5453 · 1107 pts · April 07, 2026 · 70% similar
- Anthropic Subprocessor Changes tencentshill · 56 pts · March 26, 2026 · 53% similar
- The Anthropic Institute paulpauper · 11 pts · March 14, 2026 · 52% similar
- Anthropic, please make a new Slack georgewfraser · 227 pts · March 06, 2026 · 50% similar
- Anthropic's AutoDream Is Flawed k1musab1 · 12 pts · April 02, 2026 · 50% similar
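The related-stories list above is produced by a nearest-neighbor search over title embeddings using pgvector's HNSW index, which approximates the exact similarity ranking. A minimal sketch of that underlying idea (exact cosine-similarity ranking, with hypothetical low-dimensional embeddings; real title embeddings would have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical title embeddings (illustrative only).
stories = {
    "Project Glasswing: Securing critical software": [0.9, 0.1, 0.3],
    "Anthropic Subprocessor Changes": [0.5, 0.6, 0.2],
    "Unrelated story": [0.0, 0.9, 0.8],
}

# Embedding of the current story's title (also hypothetical).
query = [0.8, 0.2, 0.3]

# Rank all stories by similarity to the query; HNSW avoids this
# full scan by walking a small-world graph of neighbors instead.
ranked = sorted(
    stories,
    key=lambda title: cosine_similarity(query, stories[title]),
    reverse=True,
)
```

In pgvector this ranking would be a single `ORDER BY embedding <=> query LIMIT 5` query; the HNSW index makes it fast at the cost of occasionally missing a true neighbor.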
Discussion Highlights (4 comments)
ChrisArchitect
Discussion: https://news.ycombinator.com/item?id=47679121
Related:
System Card: Claude Mythos Preview [pdf] https://news.ycombinator.com/item?id=47679258
Assessing Claude Mythos Preview's cybersecurity capabilities https://news.ycombinator.com/item?id=47679155
orenlindsey
I think AI bug scanning is a good thing: it should catch almost all high-severity vulnerabilities before they reach prod. There are certainly downsides, but I'm personally all for it.
verdverm
Strong agreement. I'd include https://roost.tools in this category of necessary efforts. A strong privacy law would also be great, though that's a more political matter; still, there is much we can do as technologists.
ghm2199
So consider my home router and all the IoT devices attached to it, from printers to projectors, not to mention custom stacks like Lutron, BLE-based locks, and car key fobs. All of these could technically have zero-day vulnerabilities, and the people/companies who made them don't have the resources to spend $20,000 of tokens debugging them... Maybe they don't care, but if they do, what if they can't afford such models or can't get access in time? I'd like to know: how can someone like me defend against this?