Runners are being cautioned that they could be endangering their health by following 'dangerous' training plans created by AI chatbots. Personal trainers have strongly criticised fitness applications that rely on ChatGPT or similar artificial intelligence models, warning that these systems can deliver inaccurate advice that leads to harm.
Concerns Over Cookie-Cutter Plans
Apps such as Runna, which employ AI to generate workout routines, have faced backlash on social media platforms. Users have described these offerings as 'cookie-cutter training plans' that fail to account for individual needs. According to experts, this one-size-fits-all approach can be particularly risky for runners.
Lack of Personalisation and Human Connection
Chris Beavers, a personal trainer, explained to a national newspaper that AI-generated plans are typically based on what the large language model perceives as an optimal routine, but are often not accurately tailored to the individual runner. For instance, an AI might recommend a high-intensity workout without accounting for necessary rest periods, which could cause injuries and set back a runner's goals.
Mr Beavers emphasised that relying exclusively on AI eliminates the human connection that provides essential support and accountability, aspects that technology cannot fully replicate. This sentiment was echoed by fellow personal trainer Nick Berners-Price, who pointed out that amateur runners are especially vulnerable. Without proper analysis from a sports lab, their biomechanics may not be correctly addressed, increasing the risk of harm.
Nutritional Risks and Incomplete Information
Nutritionist Ella Rauen-Prestes Butler added to the warnings, noting that ChatGPT can also supply incorrect diets and meal plans for runners, potentially harming their performance. A common problem she cited is runners being advised to 'carb up' excessively, which can disrupt blood sugar levels. She remarked, 'AI can be very dangerous … people used to refer to Dr Google but now it's Dr ChatGPT.'
Counterarguments and Industry Response
Despite these concerns, some argue that AI models are not inherently 'dangerous' but rather 'incomplete.' Runna has previously stated that their plans are developed by experienced coaches using proven training principles. An algorithm then customises and adjusts these coach-designed plans based on each runner's progress, feedback, and real-world performance.
The company acknowledged that running, particularly long-distance running, is a high-impact sport where injury risk can never be entirely eliminated. They noted that factors such as sleep, nutrition, stress, prior injuries, and training outside a plan all influence this risk.
The developer of ChatGPT has been contacted for comment regarding these allegations, but no response has been provided at this time. As the debate continues, runners are urged to approach AI-generated fitness advice with caution and to consider consulting human professionals for tailored guidance.