Trump’s Directive: A Major Shift in AI Usage for Federal Agencies
President Donald Trump ordered all federal agencies on February 27, 2026, to halt the use of Anthropic’s Claude AI, following the company’s refusal to provide access for military applications, such as surveillance and autonomous weapons.
This directive marks an unprecedented federal intervention in the role of artificial intelligence in national security. The administration announced a six-month phase-out period and severe penalties for non-compliance. The action reflects the administration’s strategy of tightening control over AI technologies deemed sensitive for military use.
The Dispute Behind the Decision
The conflict arose when the Pentagon pressured Anthropic to remove safeguards that prevent the misuse of Claude for mass surveillance and other military applications. Anthropic’s CEO Dario Amodei stood firm against these demands, citing the potential for harm to both warfighters and civilians. “We are open to collaborating on research and development but will not compromise on oversight,” Amodei stated prior to the directive.
Trump’s order, issued on the social platform Truth Social, warned that failure to comply would result in “major civil and criminal consequences.” The administration also directed the General Services Administration to immediately revoke Anthropic’s OneGov agreement, further isolating the company from federal contracting opportunities.
Moreover, Defense Secretary Pete Hegseth designated Anthropic a “supply-chain risk to national security,” leading to a directive barring military contractors from engaging with the firm. This designation follows a period of collaboration during which Anthropic’s AI systems, initially used in various federal contexts, had seen significant adoption across several government branches.
Impacts on Federal Agencies and Operations
The decision disrupts a range of existing and planned applications of Claude across numerous federal departments, including the Department of Health and Human Services, the Office of Personnel Management, NASA, and the Department of Energy. Each of these agencies had integrated Claude into operations such as personnel management and advanced research projects.
Notably, the Pentagon itself had utilized Claude for intelligence, planning, and cyber operations, among other applications. The abrupt withdrawal could lead to a substantial setback in federal AI integration efforts, forcing agencies to pivot quickly to alternate solutions.
Industry observers have warned that the move might discourage technology companies from developing AI solutions for government use, given the stringent controls imposed. Smaller firms may struggle to compete in a landscape marked by rigorous compliance expectations, which could consolidate power among established players.
What’s Next for Anthropic and the Government?
In the wake of the directive, Anthropic expressed its intention to continue engaging with government agencies to explore constructive paths forward. The company’s openness to negotiation, even in a challenging regulatory climate, leaves room for discussions that could pave the way for a revised agreement balancing security concerns with innovative AI deployment in government operations.
As the six-month timeline unfolds, the tech industry and policymakers will closely monitor the situation. With Congress urging dialogue on the national-security implications, the fallout could reshape the future landscape of AI governance in the U.S., setting precedents for public-private partnerships in technology innovation and military applications.