Federal Judge Blocks Pentagon's 'Supply Chain Risk' Label on Anthropic — Calls Designation 'Orwellian'
A California federal judge temporarily blocks the Pentagon's 'supply chain risk' designation on Anthropic, calling the measure 'arbitrary and Orwellian.'
On March 26, 2026, Judge Rita Lin of the U.S. District Court for the Northern District of California issued a temporary restraining order blocking the Pentagon's 'supply chain risk' designation against AI developer Anthropic, as reported by AP News, BBC, and CNN.
The dispute originated from Anthropic's decision to implement ethical guardrails preventing its AI model Claude from being used in autonomous weapons systems or mass surveillance. When Defense Secretary Pete Hegseth and President Trump demanded the removal of these restrictions and Anthropic refused, the Pentagon designated the company as a 'supply chain risk' on March 5, 2026 — the first such designation ever applied to a U.S. company.
Judge Lin sharply criticized the measure as 'arbitrary and capricious,' calling it 'Orwellian' to treat a U.S. company as a 'potential adversary' simply for expressing disagreement with the government. Microsoft and former senior military officials filed amicus briefs supporting Anthropic's position.
Notably, just days before the designation, reports emerged that the Pentagon had been using Claude for target identification in 'Operation Epic Fury,' an air strike campaign against Iran. This revelation highlighted the contradiction between the military's reliance on Anthropic's technology and its punitive action against the company.
The ruling represents a landmark case at the intersection of military AI use and ethical principles, setting a notable precedent that AI companies can resist government pressure in order to preserve their ethical commitments. The case is expected to intensify the broader societal debate about the appropriate role of AI in military operations.