Court orders HMRC to reveal AI use in tax credit decisions


The UK’s tax authority, HM Revenue & Customs (HMRC), has been ordered by a court to disclose whether it has deployed artificial intelligence (AI) in making decisions on tax credit applications, marking a significant victory for transparency in the government’s use of advanced technology. The landmark ruling stems from a persistent challenge by a tax adviser who suspected that automated systems were behind the rejection of Research and Development (R&D) tax credit claims, prompting questions about fairness and accountability in a crucial area of business support.

This legal directive comes amidst growing scrutiny of AI’s role in public sector decision-making, particularly where such decisions can have profound financial implications for individuals and businesses. The tax adviser’s initial transparency request, seeking details on HMRC’s use of large language models and generative AI within its R&D Tax Credits Compliance Team, had previously been rejected by the Revenue on grounds of potentially prejudicing the assessment or collection of tax. However, the court’s decision underscores a rising judicial expectation for government bodies to be open about their algorithmic processes.

While the specific details of the court case remain under wraps, the ruling enters a landscape in which HMRC already leverages AI extensively for data analysis, identifying discrepancies in taxpayer information, and cross-referencing databases to detect undeclared income or other anomalies. Yet the application of AI in complex areas like R&D tax credits, a scheme designed to incentivise innovation, introduces a different layer of concern.

R&D tax claims often involve intricate technical assessments, and recent First-tier Tribunal (FTT) cases have highlighted HMRC’s difficulties in evaluating these accurately. A July 2024 ruling involving Get Onbord Ltd (GOL), for instance, saw the tribunal side with a software company developing an AI system for client verification, overturning HMRC’s rejection of its R&D tax credit claim. This case, among others, suggested a “lack of understanding” by HMRC officials of the technical nuances of advanced technologies such as AI, and notably shifted the burden of proof onto HMRC to refute a claim once the claimant had provided sufficient evidence. Similarly, decisions in late 2024 and early 2025 concerning “subsidised” or “contracted out” R&D claims, in which HMRC lost against SMEs such as Collins Construction and Stage One Creative Services, have led the Revenue to reconsider its guidance and decline to appeal the rulings, signalling a broader pattern of judicial pushback against HMRC’s interpretations.

The imperative for transparency in AI-driven public services is not merely a legal nicety; it is a cornerstone of public trust. Public law principles require government bodies to give reasons for their decisions, and a “duty of candour” demands full disclosure of relevant information, especially where AI is suspected of producing flawed or biased outcomes. The Ministry of Justice’s own “AI Action Plan for Justice,” published in July 2025, champions transparency by committing to publish AI use cases through the Algorithmic Transparency Recording Standard (ATRS) Hub, with the aim of enabling public scrutiny and accountability. That commitment contrasts with the wider UK government’s decision in June 2025 not to compel private tech firms to disclose how they train their AI models, though the current HMRC case concerns internal government use rather than private sector development.

However, the rapid adoption of AI also carries inherent risks, as evidenced by instances of AI “hallucinations” – where systems generate inaccurate or fabricated information – affecting even HMRC’s internal enquiry teams. The courts, meanwhile, have issued stern warnings to legal professionals against citing fictitious, AI-generated legal precedents, underscoring the critical need for human oversight and verification in any AI-assisted process. This latest ruling against HMRC serves as a powerful reminder that while AI promises efficiency, its deployment in sensitive governmental functions demands rigorous oversight, clear accountability, and, above all, unwavering transparency.
