
AI Chatbots Frequently Recommend Unproven Cancer Therapies, Raising Alarm Among Health Experts

A recent study has found that artificial intelligence chatbots frequently suggest alternative treatments for cancer, a trend causing significant concern among medical professionals and health officials. Researchers at the Lundquist Institute for Biomedical Innovation evaluated several popular AI models, including Grok, ChatGPT, and Gemini, using health-related queries prone to misinformation. The findings indicate that these chatbots often provide responses that could jeopardize patient safety by steering individuals away from established, approved therapies such as chemotherapy.

Study Highlights Widespread Problematic Responses from AI Systems

In the study, experts assessed the chatbots' answers to cancer treatment questions and rated nearly half of the responses as "problematic." Alarmingly, about 20% were deemed "highly problematic" because they contained significant inaccuracies or subjective claims without scientific backing. The researchers noted that the AI systems frequently present a "false balance": even after issuing disclaimers or warnings, they list unproven alternative therapies, such as acupuncture and herbal medicine, alongside evidence-based options. Giving equal weight to scientific and non-scientific sources in this way can mislead users into considering ineffective or harmful treatments.

Rising Use of AI for Healthcare Advice Amplifies Misinformation Risks

Health experts caution that this practice risks amplifying health misinformation, particularly as reliance on AI for medical guidance grows. Roughly one in four adults in the United States now uses AI tools for healthcare advice, even though only about a third of them say they trust the information provided. This disconnect highlights a dangerous gap: patients may act on unreliable recommendations and delay or avoid proven treatments such as chemotherapy, potentially worsening their outcomes. The study underscores the urgent need for improved oversight and accuracy in AI-generated health content to protect vulnerable individuals seeking reliable medical information.
