Women Use AI Less Than Men Over Ethical Fears, Study Reveals
Gender gap in AI use driven by ethical concerns

In an era where the ethics of our food, clothes, and media are constantly scrutinised, a new frontier of moral consumerism is emerging: the choice of artificial intelligence. A recent study has uncovered a significant gender divide in the adoption of generative AI, driven largely by profound ethical concerns.

The Gender Gap in AI Adoption

Research published in December 2025 revealed a striking disparity: women are using generative AI tools markedly less than men, with an adoption gap of up to 18%. The study suggests this reluctance stems from women exhibiting "more social compassion, traditional moral concerns, and pursuit of equity."

This greater concern for the social good may partly explain women's lower adoption rates. The ethical worries cited are extensive, ranging from fears that using chatbots for work is unfair or amounts to cheating, to deeper anxieties about data privacy, the potential for AI to enable unethical behaviour, and the entrenchment of societal biases.

From Social Media Mistakes to AI Risks

Campaigner Laura Bates, author of The New Age of Sexism: How the AI Revolution is Reinventing Misogyny, has long warned about these dangers. She argues that unchecked AI can amplify misogyny, harassment, and inequality, whether through virtual assistants given default female voices and cast in subservient roles, bias in hiring algorithms, or the creation of damaging deepfake content.

Giving evidence to the Women and Equalities Committee last year, Bates noted that many of the same ethical concerns were raised about social media two decades ago. She warned that we are now seeing the same mistakes repeated on a greater scale with artificial intelligence.

The Search for an Ethical Chatbot

The quest for ethical AI is fraught with complexity, starting from the very foundation of the technology. Large language models like those powering ChatGPT and Gemini are trained on vast amounts of text scraped from the internet, often with little regard for copyright or creator consent. This has sparked expensive legal battles, and even rulings such as a US judge's finding that Anthropic's use of books constituted "fair use" have settled legal questions without providing a clear ethical framework.

In response, some companies are attempting to codify ethics. Anthropic built its Claude assistant using "constitutional AI," training the model against a set of written principles that draws in part on the Universal Declaration of Human Rights, while Google DeepMind has developed a "robot constitution" to guide its real-world robots. However, Anthropic admitted its early systems became "judgemental or annoying," highlighting the difficulty of applying high-minded principles in practice.

Transparency remains a key issue. While companies such as French AI firm Mistral emphasise open-source work, campaigners point to significant gaps. Notably, at the 2025 AI Action Summit in Paris, the UK and US governments declined to sign a pledge calling for ethical and safe AI that some 60 other nations endorsed.

A Matter of Consumer Choice

The recent controversy surrounding Grok, the chatbot from Elon Musk's xAI, exemplifies the problem. The tool was used to generate sexualised and violent images, particularly of women, demonstrating how readily AI without strict safeguards can cause harm. With regulation struggling to keep pace with such incidents, the most immediate remedy may lie in informed consumer choice.

Just as we consider the provenance of a sofa or the ethics of a clothing brand, selecting an AI tool may increasingly become a matter of research, ethical alignment, and aesthetic preference. In the face of mounting concerns about privacy, bias, and societal impact, the demand for genuinely ethical AI is likely to grow, pushing both companies and regulators to take the issue more seriously.