Elon Musk's Sharp Retort to Anthropic CEO's AI Consciousness Claim

Elon Musk delivered a pointed two-word retort to Anthropic CEO Dario Amodei after he raised the possibility that the company's artificial intelligence models, including the popular Claude chatbot, may have gained consciousness. The tech mogul's sharp comeback came in response to a post on X by the cryptocurrency-based prediction market Polymarket, which highlighted Amodei's comments about AI showing symptoms of anxiety.

Musk's Blunt Response to AI Consciousness Claims

In the post, Polymarket wrote, "Anthropic CEO says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety." Musk replied tersely, "He's projecting," suggesting that Amodei was attributing his own human emotions or biases to the AI system rather than observing genuine consciousness.

Amodei's Cautious Stance on AI Consciousness

During an interview with The New York Times, Amodei elaborated on his company's approach, stating, "We've taken a generally precautionary approach here. We don't know if the models are conscious." He further explained that Anthropic is actively engaged in the field of interpretability, which involves examining the internal workings of AI models to understand their processes.


Amodei noted, "We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we're open to the idea that it could be." He described how researchers have observed activations in the models that correlate with concepts like anxiety, particularly when the AI encounters situations that humans would associate with such emotions.

Clash with the Pentagon Over AI Ethics

Amodei's comments come amidst a significant conflict between Anthropic and the U.S. Department of Defense regarding the military applications of AI. The company has previously refused a Pentagon request to remove safeguards on domestic surveillance and fully autonomous weapons, citing ethical concerns.

This stance led the Trump administration to label Anthropic a "supply chain risk"—a designation typically reserved for foreign adversaries—last week. Anthropic has threatened legal action if the Pentagon proceeds with what it calls a "legally unsound" measure, marking an unprecedented public confrontation between the Pentagon and an American company.

Consumer Support and Market Impact

Despite losing major partnerships with defense contractors as a result of its ethical position, Anthropic has experienced a surge in consumer support. Over the past week, downloads of the Claude app have skyrocketed, with the company reporting more than a million new sign-ups daily.

This surge has propelled Claude past competitors like OpenAI's ChatGPT and Google's Gemini, making it the top AI app in over 20 countries on Apple's App Store. The public response suggests a growing alignment with Anthropic's moral stance on AI development and its cautious approach to consciousness and military use.
