Home Office AI in Asylum Cases Faces Legal Scrutiny Over Unlawful Use Claims
Legal experts have issued a stark warning that the Home Office's deployment of Artificial Intelligence in processing asylum claims could be unlawful, potentially opening the door to court action against the government. The controversy centres on caseworkers' use of AI tools to summarise transcripts of interviews with asylum seekers and to search policy guidance, for example when determining whether a country is safe for return.
Transparency Failures and Flawed Summaries
Asylum seekers are not informed when AI is applied to their interview testimony, leaving them in the dark about the technology's impact on their claims. A government evaluation of the AI tool used for summarising interviews revealed that nine per cent of summaries were so defective they had to be discarded. Additionally, five per cent of caseworkers using AI to summarise policy documents expressed a lack of confidence in the tool's accuracy.
Despite these issues, the technology has been promoted for its efficiency. A 2024 pilot evaluation suggested AI interview summaries could save 23 minutes per case, with an additional 37 minutes saved when officials used AI to search for information about a migrant's country of origin.
Legal Opinion Highlights Potential Unlawfulness
A legal opinion prepared by lawyers at Cloisters Chambers and Doughty Street Chambers for the Open Rights Group argues that the Home Office's AI use is "likely to be unlawful." It contends that the government has failed to meet legal obligations and standards outlined in its own AI playbook, including transparency with the public and consideration of alternatives before deploying such tools.
Robin Allen KC and Dee Masters of Cloisters Chambers, who contributed to the opinion, stated: "Where AI tools are used without adequate safeguards, there is a real risk that unlawful or unfair decisions may result." They emphasised the need for "full transparency" on AI usage.
Background and Broader Concerns
AI tools were first trialled by Home Office caseworkers in a 2024 pilot scheme before a wider rollout in 2025. In April 2025, then-home secretary Yvette Cooper asserted that AI would help officials make swift decisions, preventing asylum seekers from being "stuck in limbo at the taxpayers' expense." However, no public data exists on how many asylum claims are decided with AI assistance.
While the backlog for initial asylum decisions has decreased under Labour, the number of people awaiting asylum appeals has surged. Recent tribunal statistics show over 100,000 individuals were waiting for an appeal at the end of December 2025. Analysis by the Refugee Council indicates that 36 per cent of determined appeals are successful, rising to 66 per cent when Home Office reconsiderations are included.
Imran Hussain, director of external affairs at the Refugee Council, commented that these figures highlight "poor quality decision-making by the Home Office."
Ethical and Safeguard Issues
The legal opinion further criticises the government for not implementing safeguards to ensure "meaningful human control" over AI tools and for inadequate consideration of how AI influences decisions. It warns that AI adoption risks decision-makers relying on inaccurate information or overlooking relevant facts in asylum claims.
Lawyers also argue that the government has failed to adhere to AI ethical principles, including fair treatment of people with protected characteristics such as sex, race, or disability, and commitments to transparency. This follows previous warnings about the Home Office's plans to use AI facial-recognition technology to assess the age of unaccompanied asylum-seeking children.
Calls for Action and Government Response
Sara Alsherif, migrants' rights programme manager at Open Rights Group, called for an "immediate ban on the use of these tools," stating, "these tools are not the answer." She added: "Determining whether someone can or cannot seek refuge in the UK is one of the most serious and life-changing decisions the government can make. There must be the utmost transparency, fairness, and accuracy. But asylum applicants are not even being informed that opaque AI tools are being used in the assessment of their case, nor being given the opportunity to correct errors that might be made."
The group believes the legal opinion could pave the way for legal challenges from asylum seekers affected by AI use. In response, a Home Office spokesperson said: "AI will not decide asylum claims. It will strengthen the support we give to caseworkers, ensuring faster, high‑quality decisions made by trained officials."
