Pentagon formally labels Anthropic supply-chain risk
klausa
394 points
255 comments
March 05, 2026
Related Discussions
Found 5 related stories in 53.3ms across 3,471 title embeddings via pgvector HNSW
- The Pentagon Officially Notifies Anthropic That It Is a 'Supply Chain Risk' intunderflow · 13 pts · March 05, 2026 · 85% similar
- Anthropic Sues Pentagon over 'Supply Chain Risk' Label budoso · 17 pts · March 09, 2026 · 78% similar
- Judge blocks Pentagon effort to 'punish' Anthropic with supply chain risk label prawn · 346 pts · March 26, 2026 · 74% similar
- Anthropic sues Trump admin. seeking to undo "supply chain risk" designation djoldman · 11 pts · March 09, 2026 · 69% similar
- Anthropic has strong case against Pentagon blacklisting, legal experts say tartoran · 41 pts · March 11, 2026 · 63% similar
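The "% similar" scores above read like cosine similarities, the metric behind pgvector's `<=>` cosine-distance operator (similarity = 1 − distance). A minimal sketch of that scoring in plain Python, with made-up toy vectors and story names (a real setup would instead query an HNSW-indexed `vector` column in Postgres):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors.
    pgvector's `<=>` operator returns cosine *distance* = 1 - this value."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy title embeddings (real ones would come from an embedding model).
query = [0.1, 0.8, 0.3]
candidates = {
    "story-a": [0.1, 0.8, 0.3],  # same direction as the query
    "story-b": [0.9, 0.1, 0.0],
}

# Rank candidates by similarity, as an HNSW index would (approximately) do.
ranked = sorted(candidates.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
```

An HNSW index trades exactness for speed: it walks a layered proximity graph instead of scanning all 3,471 embeddings, which is how sub-100ms lookups like the one above are possible.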
Discussion Highlights (19 comments)
m_ke
We can all thank the VCs and CEOs who fully embraced and enabled this administration
jawns
The consequence is that any company that does business with the U.S. military, and potentially any company that does business with the government in general, must stop using Anthropic's products for that work. Anthropic has vowed to fight this designation in court. Without weighing in on the constitutionality or legality of the move, I think it's obvious that no private business in a contractual dispute has retaliation power to match this. If a private business doesn't like Anthropic's terms, it can walk away from the deal, but it can't coordinate retaliation with other companies without straying into antitrust territory and potentially violating the Sherman Act. Now for my editorializing: the fact that Pete Hegseth is willing to apply this type of designation against a U.S. company simply because he doesn't like its terms is pretty chilling. It's all the more scary once you consider which terms he objects to.
scuff3d
Huh, and I thought conservatives were all about government staying out of the way of the private sector. Go figure...
nickysielicki
Does anyone know which law firm is representing Anthropic?
seydor
A reminder to Anthropic: European residence visas start at $250K
mrtksn
Isn't it actually quite fair that if you don't comply with whatever the government wants you to do, you will be a supply chain risk? For example, from history we know that Schindler of Schindler's List was indeed a supply chain risk: he harbored persecuted people, and he took and sabotaged government contracts. He did the moral but anti-government and illegal thing; from the government's perspective he was a corrupt traitor. The current US government is already labeled fascist by many, and the man who designated Anthropic a supply chain risk is allegedly a war criminal. I don't see why anyone not into these things would not be a supply chain risk. I know it's very unpopular or divisive to say this, but Anthropic can be a hero only after all this is over. At this time the people in charge double-tap survivors and take pride in not having a conscience; they give speeches about these things.
mentalgear
I've said it before and I'll say it again: if openly bribing a crony government to cancel your competitor is now the de facto standard of doing business in the US, I don't see how any rational investor could still see US companies as a secure investment. When the rule of law degrades into pay-to-play politics, the inevitable result is a mass exodus of both capital and top-tier talent. And to add to this, quoting another commentator on the issue: First the Meritocracy goes, then the Freedom goes.
oompydoompy74
Exported all my chats and deleted my ChatGPT account yesterday. The current administration not liking you is the strongest signal I could possibly have to go all in on a particular company.
jmspring
Next up, after some sort of bribe, the administration opens up Qwen models to be used by the Pentagon.
baxtr
I would love to understand in more detail what kind of use cases we’re talking about. Is this about locating the right target for a sortie for example?
wrs
Once again our leadership is "playing government" like a bunch of 12-year-olds, lashing out impulsively without thinking of the consequences. And no doubt once again it'll take a year for this to wind its way through the legal system and be reversed long after the damage is done, as is finally happening with the tariff fiasco.
pinkmuffinere
https://archive.ph/IVDtq
eth0up
First, I personally predict Anthropic will bend soon and this will be history. The last time I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying, but it doesn't deter either my strange research or my concerns, in this case regarding the direction LLMs are heading. Of the 4 frontier models, one is not yet connected to the DOD(or w). While such connections are not immediate evidence of anything, I think it's rational to consider the possible consequences of this arrangement. Nominally there's a gap, real or perceived, between the plebeian and the military versions. But the relationship could involve mission creep or additional strings as things progress. We already have a strong trend of these models replacing conventional Internet searches. Though not yet consummate, there is a centralizing force occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output; bearing in mind the many things that could be labeled as, or masquerade as, safety rails, these could amount to formidable biases. I frequently observe corporate-friendly results in my model interactions, where clearly honesty and integrity are secondary to agenda. As I often say, this is not emergent, nor need it be. Meanwhile we see LLMs being integrated into nearly everything, from browsers to social profiling companies (LexisNexis, Palantir, etc.) to email to local shopping centers and the legal system. 'Open' models cannot compete with the budgets of the big four, though thank god they exist. But I expect serious regulation attempts soon. My concerns with AI are manifold, and here on HN some affiliate them with paranoia or worse. It seems to me that many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant inflate them to presently unrealistic degrees. But every which way I perceive this technology, I see epic, paradigm-smashing, severe implications in every direction.
One thing of many that gets little attention is documentation vs. reality regarding multiple aspects of AI, e.g. where the training vs. privacy boundaries really are, if anywhere. As these systems integrate more and more tightly with common everyday activities, they will learn more and more. A random concern of mine is illustrated by Xfinity's wifi-sensing technology, which uses a router to visualize or process biological activity from its interaction with other wifi signals. Standalone, it's sensitive enough to distinguish animals from adult humans. Or take the Range-R, a handheld device sensitive enough to detect breathing through several walls. Mix this with AI and we get interesting times. I could go on, or post essays, but such is not well received in this savage land. As for military involvement with AI: aside from being arguably necessary or inevitable in some ways (ways I am not comfortable with), I find it foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than to call me a schizophrenic and criticize my writing. *See comment history
Herring
Since the end of WW2, and especially since the end of the Cold War, Democratic administrations have presided over significantly higher job growth than Republican administrations. https://arc-anglerfish-washpost-prod-washpost.s3.amazonaws.c...
germandiago
This is awful. That a disagreement involving politics can ruin a company is really awful. Civil society should be quite concerned about this kind of attack.
martinwright
Part of me wonders if this was a plan to drive a wedge between Anthropic and big government contracts
neves
Is this the reason Claude models disappeared from AWS cloud in Brazil?
creddit
Naturally OpenAI also releases their new model on the same day. Makes sense, obviously, but yeesh.
wg0
Has this happened before?