An artificial intelligence safety specialist has resigned from one of the globe's foremost AI companies, issuing a stark warning that the "world is in peril." Mrinank Sharma, formerly employed at Claude creator Anthropic, expressed profound concerns about imminent dangers stemming from "a whole series of interconnected crises unfolding in this very moment."
Departure Amid Growing AI Safety Concerns
This resignation occurs against a backdrop of escalating apprehension regarding the safety of artificial intelligence systems and the conduct of the corporations developing them. Sharma's announcement came at roughly the same time as an OpenAI researcher's departure over that firm's decision to integrate advertisements into ChatGPT, highlighting broader industry tensions.
Sharma's Personal Reckoning and Future Plans
In his statement, Sharma articulated a clear personal resolution: "It is clear to me that the time has come to move on. I continuously find myself reckoning with our situation." Rather than persisting in efforts to secure AI systems, he intends to dedicate himself to poetry, relocate to the United Kingdom, and "become invisible."
He elaborated on the peril facing humanity, noting it extends beyond artificial intelligence or bioweapons to encompass a complex web of simultaneous emergencies. "We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences," Sharma cautioned.
Challenges in Upholding Values
Throughout his tenure, Sharma observed significant difficulties in aligning actions with core values. "Moreover, throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions," he wrote. According to Sharma, this struggle played out at every level: within himself, within the organization as it faced pressure to compromise on essential principles, and across wider society.
Notable Contributions to AI Safety Research
Sharma reflected with particular pride on his investigative work into "AI sycophancy and its causes," alongside research into how artificial intelligence could potentially facilitate bioterrorism. These contributions underscore his deep engagement with critical safety issues prior to his departure.
The expert's exit from Anthropic marks a poignant moment in the ongoing discourse about artificial intelligence's trajectory, underscoring the urgent need for ethical governance and wisdom in technological advancement.



