AI got the blame for the Iran school bombing. The truth is more worrying
- beloch - 10208 seconds ago
"Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system recommends how to strike each target – which aircraft, drone or missile to use, which weapon to pair with it – what the military calls a “course of action”. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution."
----------------
Maven is a tool for use in the middle of a war. When both sides are firing, minutes saved can mean lives saved for your side. Those lives, at least partly, balance the risks of hitting a bad target.
This was not a strike made in the middle of a war. If Maven was used in the strike that took out a school, it was being used as part of a sneak attack. Nobody was shooting back while this was being planned. Minutes saved were not lives saved. There should have been a priority placed on getting the targets right. Humans should have been double- and triple-checking every target by other means. This clearly didn't happen. The target was obviously a school; it even had its own website. Humans would have spotted this if they had done more than make their three clicks and move on to the next target.
Whoever made the choice to use Maven to plan a sneak attack without careful checking made an unforced error when they had all the time in the world to prevent it. Whether it was overconfidence in their tools or a complete disregard for the lives of civilians that caused this lapse, they are directly responsible for the deaths of those little girls. I sincerely hope there are (although I doubt there will be) consequences for this person beyond taking that guilt to their grave.
- ZeroGravitas - 5030 seconds ago
The House of Saud put out an interesting think piece suggesting the whole war might be a result of AI psychosis.
https://news.ycombinator.com/item?id=47540422
The submission here is flagged dead though.
- Lerc - 10985 seconds ago
"the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target."
This article is the first I have seen mention of Claude in relation to this specific incident. There's been plenty of talk about AI use in warfare in general but in the case of this school most of the coverage I have seen suggested outdated information and procedures not properly followed.
- phillipcarter - 11185 seconds ago
Worth mentioning that the author wrote about this first on his substack: https://artificialbureaucracy.substack.com/p/kill-chain
- tunesmith - 11635 seconds ago
Really fascinating article. Bits of bias here and there, like "The US military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed" -- you can respond to seeing and understanding something without destroying it -- but it underscores, to me at least, how much denser the "fog of war" has become. The fog of media reporting in general. Those first few paragraphs felt like a breath of fresh air.
- machinecontrol - 11173 seconds ago
Interesting article. Seems like AI-washing isn't just for layoffs anymore.
- Betelbuddy - 7250 seconds ago
It's not a war crime if the AI does it?
- burnte - 9795 seconds ago
When AI gets something wrong, it's the operator's fault, IMO.
- keiferski - 8935 seconds ago
Before it was the gods, then God, then Nature, and now AI. Human beings really have a fundamental issue with accepting responsibility for their actions.
From a certain angle, the entire industrial and computer age looks like a massive effort to remove all responsibility for our actions, permanently.
- shykes - 10398 seconds ago
You can't have a serious discussion of this bombing without addressing the information warfare component. To this day we don't know what actually happened. Between the general public and the facts, there are many middlemen, all with their own distorting factor: the IRGC; the US government; western press outlets such as the Guardian; and the people quoted by the press.
IRGC is making claims that no other party can verify first-hand. Everything from the number of explosions, the extent of the physical damage, the number of wounded and dead, the number of civilians wounded and dead - these are all unverified claims and should be treated as such. Not only is the IRGC obviously biased and incentivized to maximize media pressure on the US and Israel: they are known for information warfare of exactly this nature. To take their statements at face value, and present them as established facts in the opening paragraph, as this article does, is journalistic malpractice.
Again, the basic facts on the ground are not known, yet all parties are projecting narratives with a certainty that we should all be suspicious of.
Without this stable foundation of knowing what actually happened, and why, the very premise of this article collapses on itself.
EDIT: the flurry of responses to this post illustrates the problem. It's difficult to even have a respectful, fact-driven discussion on this topic, because everyone is tempted (and encouraged) to rush to their political battle stations. Nobody wants to discuss information warfare, because they're too busy engaging in it. I think that's worrying and problematic. No matter which "side" you're on, it should be possible to distinguish what is known from what is not, and to practice basic information hygiene. Or do you think you are uniquely immune to disinformation?
- sessionfs - 6377 seconds ago
AI makes mistakes, we all know that.
- albatross79 - 7030 seconds ago
Has Iran done anything to the US in the last 50 years as heinous as this one "mistake"? Imagine if some country did this to us and just brushed it off as a mistake.
- EtienneDeLyon - 6784 seconds ago
Isn't it a more reasonable explanation that the IDF deliberately had this school bombed because those schoolgirls were the children of Islamic Revolutionary Guard Corps officers?
The intentional murder of enemy children is a tactic of the IDF. They've done it for decades.
- ck2 - 10821 seconds ago
You know how that was done with a Tomahawk
They've now burnt through almost ONE THOUSAND of those
They cost $4 million each, so that's another $4 BILLION that has to be replaced too
Imagine several more months of that or even through 2029
- jameskilton - 12158 seconds ago
Something that a lot of tech people, especially in Silicon Valley, seem to want to forget is that at every level you still have people making decisions. AI is suggesting, but someone, somewhere, still has to make the decision to act on that suggestion.
It's still people doing people things.
- throwaway613746 - 9728 seconds ago
AI isn't an excuse for war crimes. Remember this at, and after, election time.
- csmpltn - 9357 seconds ago
[flagged]
- gowld - 10126 seconds ago
[flagged]
- nahuel0x - 10324 seconds ago
Israel and the US are bombing lots of schools and hospitals and civilian infrastructure; this is not the only case. This is intentional genocide, not a software/organizational/human error.
- amarant - 10274 seconds ago
>The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.
Would it be poor taste to make a joke about Gradle being superior here? The dad in me really wants to make that joke...
- ognav - 11921 seconds ago
The Guardian is carrying water for the AI industry. The distinction between Maven and Claude is futile. We get that Maven is Palantir, but it integrates Claude:
https://www.reuters.com/technology/palantir-faces-challenge-...
Going into a generic rant about anti-AI people, after missing sources and taking the Department of War at its word, is just extremely poor journalism from the newspaper that destroyed evidence on orders from GCHQ.
I hope this is a single "journalist" and that the Guardian has not been bought.
- rnab147 - 10885 seconds ago
WaPo writes that Claude selected targets:
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
This unknown Guardian contributor writes a missive against "Luddites" while using the typical AI-booster arguments that simply invert anti-AI arguments.
Just like two five-year-olds: "You have a big nose." "No, you have a big nose."
We learn from this clown that anti-AI people suffer from AI psychosis because they are reading WaPo and Reuters.
- sva_ - 7578 seconds ago
Turning a military building into a girls' school, and then having this school right next to other military buildings - is this something that happens often? Or were there ulterior motives behind it?