Palantir and Anthropic AI helped the US hit 1k Iran targets in 24 hours

rainhacker 105 points 87 comments March 07, 2026
www.moneycontrol.com · View on Hacker News

Discussion Highlights (20 comments)

zppln

> According to the report, the AI tools are also used to evaluate the outcomes of strikes after they are initiated. Can I get off this train, please?

empath75

1k targets and a few hundred school girls.

qoez

"You're totally right, that was a hospital not a terrorist hideaway! My mistake!"

bigcloud1299

You are absolutely correct I shouldn’t have done that.

ModernMech

How are they ensuring 0 hallucinations?

bschwindHN

There are not enough ethics courses being taught in schools and universities. https://calebhearth.com/dont-get-distracted

everdrive

I'm really curious to understand why this was done and why it was necessary. I cannot imagine the AI was used to identify targets without any base information; i.e., I imagine the military already had a list of targets and locations. How would the AI know from satellite imagery that something was a military target? If that's the case, why did they need help selecting targets? I can only imagine that the military bases and targets are well known and well studied. What would they have actually needed AI assistance for?

onlyrealcuzzo

Is that what targeted the school? Or was that intentional?

pmarreck

The actual article says it helped with planning.

password54321

If it has good enough capability for the military, it is very possible we are receiving dumbed down / nerfed versions of Claude.

pmarreck

I’m guessing there won’t be a lot of Palmer Luckey fans among the commenters here

Robdel12

So will there be droves of people canceling their Claude subscription too? None of these companies are clean, and I think it's hilarious HN and the rest of SV has been duped by Dario. He's playing the game better than Sam is, imo. Nothing Dario has said has indicated he is regretful about their partnership with Palantir or any of the stuff they've done with the DoD in the past 2.5 years.

Edit: this Washington Post article seems to be the original source: https://www.washingtonpost.com/technology/2026/03/04/anthrop...

Trasmatta

This is the AI we're using to kill people now, surely it won't make any mistakes or confidently target civilians by accident: https://youtube.com/shorts/WxbHtYzBnvo?si=xh4kda_DuNvHFx0L

AndrewKemendo

Can we get a better source? That website is broken on mobile and I can't even scroll to see the source. I can't be the only one who couldn't see it on FF/Safari.

MagicMoonlight

But how? The models are thick as shit. They can regurgitate existing knowledge, but it’s not like Iran and its military installations are publicly documented. And you can’t trust any decision it makes if you feed it the information, because it just makes up answers. It has no actual intelligence.

xyzzy9563

People in the HN comments would rather wait for Iran to get nuclear weapons and detonate them on their neighbors than use AI to do surgical strikes on Iran to take out these programs.

bhouston

Whoa, near the top of the front page, and then immediately pushed to the second page of hacker news. Yikes.

GolfPopper

Note that the headline isn't about how effective hitting those targets was, or how successful at achieving its aims the bombing campaign was. They hit 1000 targets in 24 hours. And yet, a week later, the Iranian regime is intact, American allies are still under constant bombardment, interceptor stocks are running low, and half of America's long-range, high-altitude transportable radars have been destroyed. This looks like shooting the broad side of a barn, and then painting bullseyes around every bullet hole.

parsimo2010

This article is a total overstatement designed to boost stock prices, and none of the actual users can counter the claim because it would require revealing classified information. This is the same kind of claim you've all seen before about AI systems doing something amazing, when it's really just a bunch of people sitting in a call center in a third world country controlling the system remotely. Only in this case it's a bunch of senior airmen and staff sergeants sitting in an intel shop doing all the work.

Sure, Palantir made a UI, but it just plain sucks. And Claude probably fixed some typos in the targeting packages. But let's not believe that either system was influential in target selection. CENTCOM created a similar number of targets at the beginning of the Syrian civil war, before any of these LLMs existed, and it took a similar amount of time. We ended up not striking them, but the plans were made after Assad used chemical weapons. All the fixed locations in Iran had packages written and sitting on the shelf before Trump was even elected the first time. The AI in this war added basically no value.

Any claim that Palantir did something useful for the government should immediately be viewed as suspect. I've used their software, and it sucks. I cannot understand how they got such big contracts to make a shitty UI that poorly integrates other systems' data.

SirensOfTitan

Even if you don't care about the needless human suffering the US has caused from this operation, this conflict threatens global stability because of oil supply disruptions, and if the US keeps this up it could get quite bad very quickly.

I worked briefly in defense-tech, and there is a huge blind spot in this field. While I worked with a ton of thoughtful, ethical, and talented people from the military, there is a real blind spot when it comes to support of the "warfighter." It is certainly noble and worthwhile work to protect soldiers from harm through technology, but I got the sense that some people (especially the tech people who were never in the military) didn't think enough about the ethical concerns when dealing with people attached to the US's "enemies." And further, what about when the US itself is the aggressor? While active warfighters have to follow the chain of command, companies can and should apply ethical constraints--but they often don't, because DoD contracts are lucrative and (especially if you're not a prime) hard won.

I've had a lot of fun playing with Claude 4.6, but it is entirely unacceptable that this technology is being used in this conflict with Iran. I will cancel my account once this month's subscription is up in 2 weeks. The US is the aggressor here, that is certain. Support of this conflict by a private company that is supposedly oriented toward ethics is extremely illuminating.

With that said, I have thought a tremendous amount about whether someone like Dario could even steer the ship away from support of a conflict like this at this point. We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to maintain themselves and grow, given the cost of training. There is certainly an argument to be made that if he did so, he might lose the confidence of investors and lose control entirely. This raises the question: is shareholder/capital optimization the best way to organize our society?
