Claude AI Sees Record Sign-Ups After US Labels It Supply-Chain Risk

Anthropic's Claude AI chatbot has seen an unprecedented surge in adoption, with the company reporting more than one million new sign-ups per day. The growth has propelled the Claude app to the top of both Apple's App Store and Google Play download charts, overtaking its main rival, OpenAI's ChatGPT.

Pentagon Dispute Sparks User Exodus

The dramatic increase in Claude's popularity follows a highly publicized dispute between Anthropic and the United States Department of War. The conflict centers on Anthropic's firm refusal to allow its artificial intelligence technology to be utilized for autonomous weapons systems or domestic surveillance operations. The company has implemented what it describes as essential safety guardrails within Claude's architecture to prevent such military applications.

Supply-Chain Risk Designation

In a significant escalation on Wednesday, the Department of War officially informed Anthropic that its products are now classified as a supply-chain risk, effective immediately. This designation marks the first time such a label has been applied to a domestic American company, as the term has historically been reserved exclusively for foreign firms with connections to adversarial nations.


Secretary of War Pete Hegseth characterized Anthropic's safety restrictions as "ideological whims," while President Donald Trump publicly criticized the company, claiming it was operated by "Leftwing nut jobs" who were actively threatening United States national security interests.

Military Statement and Legal Challenge

The Pentagon released an official statement, first obtained by Politico, explaining their position: "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of critical capability and put our warfighters at risk."

The designation effectively bars all federal agencies and government contractors from using the Claude chatbot in work for the United States military. Anthropic has declared the move unlawful and announced that it will challenge the supply-chain risk label in court.

Anthropic's Defense and Market Impact

Anthropic CEO Dario Amodei issued a strong rebuttal to the Department of War's actions: "We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making – that is the role of the military. We are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modelling and simulation, operational planning, cyber operations, and more."

Mike Krieger, Anthropic's chief product officer, confirmed the extraordinary user growth statistics, revealing that the daily sign-up rate has reached historic levels since the controversy began. Meanwhile, OpenAI's ChatGPT has faced noticeable user backlash following CEO Sam Altman's decision to reach an agreement with the United States government in the aftermath of the Anthropic-Pentagon fallout.

The standoff marks a significant moment in the evolving relationship between artificial intelligence developers and military agencies. It highlights the ethical dilemmas, and the commercial consequences, that arise when private technology companies set boundaries on how their products can be used in national defense.
