US Bans Anthropic Over Refusal to Enable AI for Military Use

Anthropic’s removal from classified military networks contrasts with OpenAI’s new agreement with the U.S. Army to deploy AI models under stated safety principles.

Summary

On February 27, 2026, the United States government banned Anthropic from classified military networks after the company refused to relax its restrictions on military uses of its AI, including mass surveillance and autonomous weapons. Hours later, on February 28, OpenAI announced an agreement with the U.S. Army to deploy AI models on those networks under defined safety principles. The ban on Anthropic included a “supply chain risk” designation, jeopardizing the company’s planned IPO and ending its role in shaping military AI standards, while OpenAI’s move reflects closer alignment with national security directives.

Terms & Concepts
  • Artificial Intelligence (AI): Computer systems designed to perform tasks that typically require human intelligence, such as language understanding or decision-making.
  • Mass Surveillance: Large-scale monitoring of individuals or groups, often by governments, using technology to collect data.
  • Autonomous Weapons: Weapons systems capable of targeting and engaging without human intervention.