The First AI War: How Algorithms Are Fighting in Iran

The US-Israel strikes on Iran aren't just a geopolitical escalation; they're also the live debut of AI-driven warfare at scale. Here's what's actually happening.

Muunsparks

2026-03-11

10 min read

In the first 24 hours of Operation Epic Fury on February 28, 2026, the United States and Israel reportedly struck over 1,000 targets inside Iran — an operational tempo that would have taken days or weeks in any prior conflict. That speed wasn't just about bombers and missiles. It was about machines processing intelligence faster than human analysts ever could.

A War That Started Before the First Bomb Dropped

The 2026 Iran conflict didn't begin in a vacuum. It came after two years of escalating tension: the Twelve-Day War in June 2025, a joint US airstrike on Iran's nuclear facilities that same year, mass protests in Iran in early 2026, and the collapse of nuclear negotiations in February. When US and Israeli forces launched simultaneous strikes on Tehran, Isfahan, Qom, Karaj, and Kermanshah in the early hours of February 28, they killed Supreme Leader Ali Khamenei and triggered one of the most significant regional conflicts since the 2003 Iraq invasion.

Iran responded with hundreds of ballistic missiles and nearly 2,000 drones, hitting US military installations across the Gulf, civilian infrastructure in Azerbaijan, Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE, and forcing a partial closure of the Strait of Hormuz. Brent crude surged past $119 per barrel. Dubai International Airport was damaged and temporarily closed. More than 1,700 people have been killed across the region as of day 11.

The humanitarian and geopolitical dimensions of this conflict are staggering and still unfolding. But for anyone tracking where military technology is heading, something else demands attention: this is the first conflict where AI has been demonstrably central to how strikes are planned, sequenced, and executed — not as a future capability being tested, but as live operational infrastructure.

How AI Is Actually Being Used

The phrase "AI in warfare" gets thrown around loosely. Here's what it concretely means in this conflict.

Intelligence Fusion at Machine Speed

AI systems in this conflict are connected to drone feeds, satellite imagery, signals intelligence, radar data, and telecommunications intercepts — processing terabytes of data in real time at speeds no human team can match. The US military's Maven Smart System (MSS), built by Palantir, identifies and prioritizes potential targets from that data stream. The Washington Post has reported that Anthropic's Claude has been integrated with Maven to boost detection and simulation capabilities, though neither Palantir nor Anthropic confirmed this to AFP.

The practical result: a "target factory" rather than a target notebook. What was once a slow process of human analysts correlating sources has become a machine-speed pipeline. According to Ynet News, the IDF's new AI division — called "Bina," established only months ago and headed by a brigadier general — is operating for the first time in this conflict. Its stated goal is to "turn one tank into 100 tanks and one soldier into 100 soldiers."

Compressing the Kill Chain

"Kill chain" is military shorthand for the sequence from target identification to weapons release. AI is compressing that sequence dramatically.

Amir Husain, co-author of Hyperwar: Conflict and Competition in the AI Century, told Fortune that AI is already playing a significant role in the OODA loop — Observe, Orient, Decide, Act — particularly in observation, tactical decision-making, and the act phase via autonomous drones. Craig Jones, author of The War Lawyers, put it more starkly: "The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought." He suggested the scale and tempo of the February 28 strikes would have been "impossible, or almost impossible" without AI.
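
To get a feel for what that compression means arithmetically, here is a minimal Python sketch. The per-stage timings are invented assumptions, not reported figures from this conflict; the point is simply that shaving minutes off each OODA stage multiplies how many engagement cycles fit into a day.

# Back-of-the-envelope sketch of kill-chain compression.
# Stage timings below are illustrative assumptions, not reported figures.

OODA_STAGES = ["observe", "orient", "decide", "act"]

# Hypothetical per-stage latency (minutes) for one engagement cycle
human_minutes = {"observe": 30, "orient": 45, "decide": 20, "act": 15}
ai_assisted_minutes = {"observe": 2, "orient": 3, "decide": 5, "act": 10}

def cycles_per_day(stage_minutes):
    # Time through one full loop determines how many loops fit in 24 hours
    loop_minutes = sum(stage_minutes[s] for s in OODA_STAGES)
    return (24 * 60) / loop_minutes

print(f"Human-only pipeline:  {cycles_per_day(human_minutes):.1f} cycles/day")
print(f"AI-assisted pipeline: {cycles_per_day(ai_assisted_minutes):.1f} cycles/day")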

Air Defense That Learned From 60,000 Missiles

On the Israeli side, AI is also playing a significant defensive role. Israeli civilians have reportedly noticed that missile alerts arrive earlier and impact locations are predicted more precisely than during the June 2025 conflict. According to Ynet News, this improvement comes from an AI system trained on approximately 60,000 flight paths of missiles and drones launched at Israel since October 7, 2023 — every launch location, timestamp, altitude, speed, and impact point. The system processes incoming threats in real time and can "replay events backward," allowing analysts to trace where a threat originated and what preceded it.
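
Conceptually, that is a supervised-learning problem: map the observable launch features of a threat to a predicted impact point. The sketch below trains scikit-learn's k-nearest-neighbors regressor on synthetic flight paths; the feature set, model choice, and all numbers are assumptions for illustration, not a description of the Israeli system.

# Conceptual sketch: predicting an impact point from launch-time features
# using a nearest-neighbor regressor on synthetic data. Illustrative only.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic "historical launches": [launch_lat, launch_lon, heading_deg, speed_kmh, altitude_m]
n = 60_000
launches = np.column_stack([
    rng.uniform(30, 36, n),      # launch latitude
    rng.uniform(46, 54, n),      # launch longitude
    rng.uniform(240, 300, n),    # heading in degrees
    rng.uniform(700, 6000, n),   # speed in km/h
    rng.uniform(500, 40000, n),  # altitude in meters
])

# Synthetic impact points loosely correlated with launch position and heading
impacts = np.column_stack([
    launches[:, 0] + 0.02 * (launches[:, 2] - 270) + rng.normal(0, 0.1, n),
    launches[:, 1] - 8 + rng.normal(0, 0.1, n),
])

model = KNeighborsRegressor(n_neighbors=25).fit(launches, impacts)

# A new incoming track: similar historical launches drive the prediction
new_track = np.array([[33.3, 48.5, 265.0, 3200.0, 22000.0]])
predicted_impact = model.predict(new_track)[0]
print(f"Predicted impact near lat {predicted_impact[0]:.2f}, lon {predicted_impact[1]:.2f}")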

Autonomous Drone Swarms

The US has deployed LUCAS unmanned aerial vehicles — approximately $35,000 per unit, produced by Arizona-based SpektreWorks — in what US Central Command describes as the platform's first operational deployment. These drones operate fully autonomously, communicate with each other in flight, divide targets among themselves, and conduct suicide attacks on Iranian radar systems without ground guidance. Iran, meanwhile, has launched its Shahed-series drones — the same platform it supplied to Russia for use in Ukraine — with the US reportedly deploying copycat versions of that same design.
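
The "divide targets among themselves" behavior is, at its core, an assignment problem. Below is a minimal centralized sketch using SciPy's Hungarian-algorithm solver; the drone and radar positions are invented, and a real swarm would negotiate this allocation in a distributed way, in flight.

# Conceptual sketch: how a swarm might divide targets among its members.
# Positions are invented; real systems solve this in a distributed fashion.

import numpy as np
from scipy.optimize import linear_sum_assignment

drones = np.array([[0.0, 0.0], [10.0, 2.0], [5.0, 8.0], [12.0, 12.0]])   # x, y in km
radars = np.array([[9.0, 1.0], [1.0, 7.0], [11.0, 11.0], [4.0, 3.0]])    # target positions

# Cost matrix: distance from each drone to each target
cost = np.linalg.norm(drones[:, None, :] - radars[None, :, :], axis=-1)

# Hungarian algorithm: one target per drone, minimizing total flight distance
drone_idx, target_idx = linear_sum_assignment(cost)

for d, t in zip(drone_idx, target_idx):
    print(f"Drone {d} -> target {t} ({cost[d, t]:.1f} km)")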

The Anthropic Dimension

Here's where it gets complicated for anyone following the AI industry specifically.

One day before the strikes began, the US government sidelined Anthropic as a supplier, reportedly designating it a supply-chain risk. The Pentagon subsequently signed a contract with OpenAI. Then, as of March 5, Dario Amodei is reportedly back in talks with the Department of Defense.

Why the falling out? Anthropic has drawn two hard lines: no mass civilian surveillance, and no fully autonomous weapons. Amodei has been explicit about this: "Frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk."

The irony is sharp. Claude — integrated with Palantir's Maven — was reportedly used to assess intelligence and generate targeting recommendations in the early strikes. Anthropic's position is that this is different from autonomous weapons: AI as decision support, with human operators in the loop, is a line they're willing to hold. Autonomous weapons — where the AI pulls the trigger without human clearance — are the line they won't cross.

Whether that distinction holds up under the operational pressures of a fast-moving war is a question military ethicists are actively debating right now in Geneva, where legal experts are meeting this week specifically to discuss lethal autonomous weapons systems.

# Simplified illustration of how AI-assisted targeting might work conceptually
# (Not representative of any classified system — for educational framing only)

from dataclasses import dataclass

THRESHOLD = 0.8  # minimum confidence before a recommendation reaches a human operator

@dataclass
class CandidateTarget:
    target_id: str
    confidence: float  # fraction of intelligence sources that agree, 0.0 to 1.0

def process_intelligence_stream(signals, imagery, humint):
    """
    Real systems ingest terabytes of multi-source intelligence.
    AI correlates pattern-of-life data across source types to
    surface high-confidence targeting recommendations.
    """
    fused_data = correlate_sources(signals, imagery, humint)
    candidate_targets = rank_by_confidence(fused_data)

    # Human operators receive ranked recommendations — not autonomous action
    return [t for t in candidate_targets if t.confidence > THRESHOLD]

def correlate_sources(signals, imagery, humint):
    # Cross-reference electronic signatures, visual ID, and human reporting.
    # This is where LLMs like Claude reportedly assist in analysis.
    # Toy stand-in: confidence grows with the number of source types naming a target.
    confidence = {}
    for source in (signals, imagery, humint):
        for target_id in source:
            confidence[target_id] = confidence.get(target_id, 0.0) + 1 / 3
    return [CandidateTarget(tid, conf) for tid, conf in confidence.items()]

def rank_by_confidence(candidates):
    # Highest-confidence candidates appear first in the operator's review queue
    return sorted(candidates, key=lambda t: t.confidence, reverse=True)

Why This Matters — and What It Gets Wrong

The "AI reduces civilian casualties through precision" argument is a standard talking point from defense contractors and military officials alike. It's worth examining carefully.

Israel's "Lavender" system, used in Gaza, was reported to be wrong at least 10% of the time — a rate that translated into thousands of civilian casualties. The early days of the Iran conflict have already seen strikes hit schools, hospitals, and the Grand Bazaar in Tehran. The Red Crescent reported over 600 civilians killed in the opening days. UN human rights experts have characterized some strikes as potential war crimes under the Rome Statute.

"There is no evidence that AI lowers civilian deaths or wrongful targeting decisions and it may be that the opposite is true," Craig Jones told Nature magazine.

The deeper structural problem is accountability. Existing laws of armed conflict require that a human be responsible for targeting decisions. When AI shortens the kill chain to the point where human "oversight" is rubber-stamping machine recommendations in seconds under combat pressure, the legal and moral framework starts to break down. As Jones puts it: the key question when a school is struck near a military installation isn't whether it was human or machine — it's "how old was the data" and who is accountable for the decision.
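
To see what closing that gap would even require, consider a hypothetical provenance record attached to each recommendation: one that logs how old the underlying data was, who approved the strike, and how long that person actually had to review it. The structure below is purely illustrative, not any fielded system.

# Hypothetical provenance record for a single AI-generated strike recommendation.
# Not a real system; it illustrates what "how old was the data" would require logging.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class StrikeRecommendationAudit:
    target_id: str
    source_timestamps: list[datetime]   # when each underlying intelligence item was collected
    recommended_at: datetime            # when the model surfaced the recommendation
    approved_by: str                    # human operator accountable for the decision
    approved_at: datetime

    def oldest_data_age_hours(self) -> float:
        oldest = min(self.source_timestamps)
        return (self.recommended_at - oldest).total_seconds() / 3600

    def review_seconds(self) -> float:
        # How long the human actually held the recommendation before approving it
        return (self.approved_at - self.recommended_at).total_seconds()

# Invented example: a recommendation built on two intercepts, approved four seconds later
audit = StrikeRecommendationAudit(
    target_id="T-0417",
    source_timestamps=[datetime(2026, 3, 1, 2, 10, tzinfo=timezone.utc),
                       datetime(2026, 2, 27, 18, 40, tzinfo=timezone.utc)],
    recommended_at=datetime(2026, 3, 1, 3, 0, tzinfo=timezone.utc),
    approved_by="operator-7",
    approved_at=datetime(2026, 3, 1, 3, 0, 4, tzinfo=timezone.utc),
)
print(f"Oldest source data: {audit.oldest_data_age_hours():.1f} hours old")
print(f"Human review time:  {audit.review_seconds():.0f} seconds")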

Military officials consistently insist that AI is a decision-support tool, not a decision-maker, and that human judgment remains essential. That may be true in doctrine. Whether it's operationally true at the tempo this conflict is running is a separate question.

The Competitive Dynamics That Override Ethics

The companies with the most aggressive AI-for-defense positioning — Palantir and Anduril, primarily — make a specific argument: if the US constrains itself while China develops autonomous military systems at speed, Washington loses the technological arms race. "If we are not there with lethal AI, our enemies will be."

This argument has real force and real danger. It's also driving acquisition decisions in real time. China is actively prototyping AI that can pilot unmanned combat vehicles, detect and respond to cyberattacks, and identify and strike targets across domains, according to Georgetown's Center for Security and Emerging Technology.

The result is a race with no meaningful international governance. The Geneva discussions happening this week are years behind the technology. The US "Political Declaration on Responsible Military Use of AI" is a declaration, not binding law. And as Michael Horowitz of the University of Pennsylvania notes, "the current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent."

The Takeaway

  • This is the live debut of AI-driven warfare at scale. The speed and volume of the February 28 strikes — 900+ in the first 12 hours — were enabled by AI-assisted targeting pipelines, not just aircraft and missiles. Whether you call it the "first AI war" is semantic; what's not semantic is that something structurally different happened.
  • "Human in the loop" is being stress-tested in real time. The doctrine says humans make final decisions. The operational tempo of this conflict is compressing those decisions to the point where the distinction may be more legal cover than operational reality.
  • The Anthropic situation reveals an uncomfortable truth. The company drew lines around autonomous weapons and mass surveillance. Those lines may be principled — but they're also being challenged by a government that signed a deal with a competitor the moment Anthropic said no, and is reportedly in new talks now. The question of who sets the ethics of AI in warfare is genuinely unresolved.
  • Iran's drone strategy is also AI-adjacent. Thousands of Shahed drones represent a low-cost, high-volume approach to attrition warfare. The next generation of these — AI-enhanced, capable of autonomous navigation and distributed targeting — will be accessible to non-state actors as well as nation-states.
  • The accountability gap is the real unsolved problem. When an AI recommendation leads to a strike on a school, who is responsible? The officer who approved it in three seconds? The company that built the model? The military that deployed it? Nobody has a good answer, and the conflict is moving faster than the legal frameworks designed to constrain it.

Tags: AI, warfare, autonomous-weapons, drones, Anthropic, military-AI, LLM