UK consumers and the financial system are being left exposed to "serious harm" due to a failure by the government and regulators to properly address the risks posed by artificial intelligence, a powerful group of MPs has warned.
A 'Wait-and-See' Approach Criticised
In a damning new report, the cross-party Treasury committee criticised ministers, the Bank of England, and the Financial Conduct Authority (FCA) for adopting a passive "wait-and-see" stance towards the rapid adoption of AI across the financial sector. This comes despite widespread use, with more than 75% of City of London firms now deploying the technology.
Insurers and international banks are leading the charge, using AI to automate administrative work and even for core operations such as processing insurance claims and assessing customers' creditworthiness. However, the UK has not developed any specific laws to govern this use, relying instead on existing general rules, which regulators claim are sufficient.
Clear Risks to Consumers and Stability
The MPs highlighted a host of dangers stemming from the current regulatory vacuum. A key concern is the lack of transparency over how AI influences financial decisions, which could unfairly disadvantage vulnerable people seeking loans or insurance. The report also found it unclear who would be held accountable if an AI system causes harm: data providers, tech developers, or the banks themselves.
Beyond individual consumers, the committee warned of systemic threats to the UK's financial stability. The increasing reliance on AI amplifies cybersecurity risks and creates a dangerous dependence on a handful of large US tech firms for essential services. Perhaps most alarming is the risk of "herd behaviour," where AI-driven firms could make identical, catastrophic financial decisions during an economic shock, potentially triggering a new financial crisis.
Calls for Immediate Regulatory Action
Meg Hillier MP, chair of the Treasury committee, stated plainly: "Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying."
The report urges regulators to move swiftly from observation to action. Its recommendations include:
- Developing new stress tests to assess the City's resilience to AI-driven market shocks.
- Compelling the FCA to publish "practical guidance" by year's end on how consumer protection rules apply to AI.
- Clarifying legal accountability for AI-related failures.
In response, an FCA spokesperson said the regulator had done "extensive work" on safe AI use and would review the report carefully. The Treasury stated it aimed to "strike the right balance" between risk and opportunity, while the Bank of England said it had already taken steps to assess AI risks and would consider the recommendations.
Nevertheless, the committee's conclusion was stark: by continuing to wait and see, the authorities are actively exposing the public and the economy to potentially severe damage.