Anthropic Rejects $200M Pentagon Deal Over AI Control, DoD Pivots to OpenAI
TechCrunch AI March 6, 2026
Anthropic declined a $200 million Pentagon contract over concerns that the military would use its AI models for autonomous weapons and surveillance, prompting the DoD to label the company a supply-chain risk and turn to OpenAI instead. The standoff highlights an emerging challenge for executives: weighing significant government contracts against ethical AI deployment, and deciding where their red lines sit. It also signals growing tension between AI developers' principles and national security demands, with the potential to reshape the competitive landscape for defense-related AI partnerships.
Key Intelligence
• **Anthropic** walked away from a $200 million U.S. Pentagon contract rather than grant the military full control over its AI models.
• **The core dispute** centered on the potential use of Anthropic's AI in autonomous weapons and mass domestic surveillance, which the company considered ethically unacceptable.
• **The Pentagon responded** by designating Anthropic a "supply-chain risk," underscoring the widening rift between AI developers and government agencies.
• **The DoD then awarded** the contract to OpenAI, signaling a new strategic alignment for military AI partnerships.
• **The episode illustrates** the ethical tightrope AI firms must walk between commercial opportunity and responsible technology deployment.
• **Separately, reports suggest** ChatGPT uninstalls surged 295%, possibly reflecting broader public apprehension about AI applications and data control amid such news.