AI Regulation in the UK

In this article, our Regulatory Risk and Compliance team discusses the UK’s proposed regulatory framework for AI. It does so by first outlining the UK’s current approach, before exploring the “principles-based” approach set out in the recent government whitepaper and then discussing examples of how regulators have begun to regulate AI.

5 July 2023

Introduction

2023 has been an eventful year for artificial intelligence (AI). It has seen ChatGPT become the fastest-growing consumer application in history, Universal Music Group write to Apple and Spotify requesting that they block AI-generated music from their streaming platforms, and an open letter calling for a pause in AI development receive more than 1,100 signatures, including those of Elon Musk and Steve Wozniak.

This has predictably focussed regulators’ attention on AI, with governments across the world reviewing their regulatory regimes. To this end, on 29 March 2023, the UK government published its whitepaper titled “A pro-innovation approach to AI regulation” (the Whitepaper), which sets out the UK’s proposed regulatory framework for AI.

What is AI?

Translating a definition of AI into legal language has proven difficult. The Whitepaper states that the UK’s regime will define AI by reference to the presence of two characteristics: “adaptivity” and “autonomy”. This differs from other regimes, such as the EU’s draft AI Regulation, which defines AI as:

“software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

It remains to be seen how the Whitepaper’s proposed definition will translate into legal language, and there will be scope for argument as to what qualifies as AI. However, defining AI by reference to its key characteristics provides a degree of “futureproofing” that is not present in the EU regime.

How is AI currently regulated in the UK?

The UK does not currently have a tailored regulatory framework for AI. Obligations instead arise under a patchwork of existing legislation, such as the Equality Act 2010, and under the rules of sector-specific regulators such as the Financial Conduct Authority (the FCA), the Prudential Regulation Authority (the PRA) and the Bank of England.

This approach has been criticised for creating uncertainty and inconsistency as to what compliance means in practice.

How AI is currently used – AI and Financial Advice

One of the key areas where AI is already impacting consumers is financial services, in particular, the development of “Robo Advice”. These tools allow banks, financial advisers and other firms to provide AI-driven advice to consumers on pensions, investments and savings, without the need for direct input from a human.

The advantages of this model – in terms of cost and accessibility – are clear, and the FCA has given Robo Advice a cautious welcome, emphasising the key role it might play in opening up financial advice to consumers who might not otherwise have the time or resources to obtain it.

Take-up of Robo Advice has nevertheless been slow in practice, with studies suggesting that consumers still prefer direct human contact and input when receiving advice.

Moreover, given that the FCA is clear that Robo Advice is subject to the same regulatory requirements as ‘traditional’ human advice, the legal risks are significant. Unlike a human financial adviser, who may make a mistake in one piece of advice and then realise and rectify it, mistakes in Robo Advice may only be discovered after they have been repeated across many hundreds of clients.

To date, Robo Advice has focussed on traditional ‘branching model’ decision-making processes (illustrated in the sketch below) – it remains to be seen how developments in large language models like ChatGPT may be incorporated into Robo Advice in the future.
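For illustration only, the following minimal Python sketch shows what a ‘branching model’ adviser looks like in principle. Every field, threshold and output here is invented, and real Robo Advice tools apply far richer suitability checks under FCA rules. The point of the sketch is that every path through the tree is explicit and auditable, unlike the output of a large language model.

```python
def robo_advice(client: dict) -> str:
    """Toy 'branching model' adviser: a fixed, fully auditable decision tree.

    All fields, thresholds and outputs are invented for illustration;
    real Robo Advice tools apply far richer FCA suitability checks.
    """
    if client["attitude_to_risk"] == "low":
        return "cash savings or short-dated gilts"
    if client["horizon_years"] >= 10:
        return "diversified equity fund"
    return "balanced multi-asset fund"

# Every input follows an explicit, inspectable path through the tree.
print(robo_advice({"attitude_to_risk": "medium", "horizon_years": 15}))
# -> diversified equity fund
```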

How will the UK regulate AI in the future?

The Whitepaper sets out the government’s ambition for a co-ordinated, coherent framework to regulate AI which will: (a) make responsible innovation easier while reducing regulatory uncertainty, (b) increase public trust in AI and (c) strengthen the UK’s position as a global leader in AI. The framework will be “iterative”, consisting of five “cross-cutting principles” set by the government and implemented by existing regulators.

The principles are as follows:

Safety, security and robustness. AI should function in a robust, secure and safe way throughout its lifecycle and risks should be continually identified, assessed and managed.

Appropriate transparency and explainability. AI should be appropriately transparent and explainable, i.e., it should provide information on how, when and for what purposes AI is being used, while also making it possible for parties to access, interpret and understand its decision-making processes.

Fairness. AI should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes.

Accountability and governance. Governance measures should be in place to ensure effective oversight of the supply and use of AI, with clear lines of accountability established across the AI life cycle.

Contestability and redress. Impacted third parties or users of AI should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

(Together, the Principles.)

The first iteration of the framework

Initially, the Principles will be issued on a non-statutory basis, with existing regulators being responsible for their implementation. This will be done by, for example, issuing guidance on best practice or on how the Principles interact with existing legislation.

Regulators’ approaches will be required to be proportionate to the specific uses of AI. For example, AI used to identify scratches on machinery should be treated differently from AI that assesses the structural integrity of infrastructure.

The Whitepaper acknowledges that some potential risks may not be suitably covered by the framework. Here, regulators will be expected to collaborate with the government and other regulators to identify potential actions, such as changes to regulators’ remits or the issuance of joint guidance.

The second iteration of the framework

The government will collect and assess data regarding the efficacy of the framework throughout its first iteration. Then, if deemed necessary, it will introduce a new duty for regulators to have due regard to the Principles. This would, according to the Whitepaper, allow regulators to:

“exercise their expert judgement and determine that their sector or domain does not require action to be taken. The introduction of the duty will, however, give regulators a clear mandate and incentive to apply the principles where relevant to their sectors or domains.”

Central functions

The government will also carry out central functions relating to coordinating, monitoring and adapting the framework. Examples of these include:

Monitoring, assessment and feedback. The government will develop and maintain a monitoring and evaluation framework to assess cross-economy and sector-specific impacts of the framework.

Support coherent implementation of the Principles. The government will identify barriers preventing regulators from implementing the Principles. It will also identify any conflicts or inconsistencies in how the Principles are interpreted by different regulators, working with them to resolve discrepancies that significantly impact innovation.

Cross-sectoral risk assessment. The government will maintain an “AI risk register” and support coordination between regulators on risks that cut across remits, while facilitating the issuing of joint guidance. Further, it will identify where risks are not adequately addressed by the framework.

Possible issues with the framework

It remains to be seen how regulators will implement the Principles. However, some potential issues are already being raised, such as:

Diverging requirements. Many AI products will operate in multiple markets and, therefore, risk facing diverging interpretations of the Principles. However, this risk may be mitigated by the fact that the regulatory burden will fall on users in specific sectors rather than on developers, minimising potential “regulatory crossover”. Although the government’s central functions are aimed at addressing divergences, it is unlikely to become too involved and “centralise” too much of the framework, as this would go against its spirit.

Regulator expertise. With regulators expected to implement the Principles, each will have to build a deep technical knowledge base in an environment where resources are constrained. A central regulatory body, such as that proposed in the EU, would arguably avoid this “duplication” of work.

Focus on the use of AI. As stated above, the framework focuses on regulating the use of AI rather than the technology itself. While certain regulators’ remits will catch AI’s development, e.g., that of the Information Commissioner’s Office (ICO), there will be no bespoke regime for the development of AI. This appears intentional: AI developers will not generally be regulated entities, meaning the framework will have limited impact on them.

Liability. Given that the Principles will not initially have statutory footing, there is limited scope for enforcement, either privately or by regulators. Indeed, the Whitepaper notes that some regulators have already voiced concerns that they do not have the statutory power to effectively implement the framework. Although contestability forms part of the Principles and the government has voiced its willingness to adapt remits, more detail is required as to how enforcement rights will be implemented.

Criticism of the Whitepaper

A number of activist groups published a joint ‘Alternative AI White Paper’ in June 2023, criticising the approach taken by the government in the Whitepaper and pointing to the possibility that AI could increase the risk of unfairness and discrimination faced by individuals. The Alternative AI White Paper calls for transparency, human accountability, effective mechanisms of redress and prohibitions against AI that could threaten fundamental rights, such as privacy.

How have regulators responded?

ICO

Given the large amount of data used and processed by AI, ICO will be an influential regulator under the framework. ICO has published various papers on AI, including guidance on explaining AI-assisted decision-making and guidance on best practice for data protection-compliant AI (the ICO Guidance), and has responded to the Whitepaper.

In line with the Whitepaper, the ICO Guidance suggests that users/developers of AI should take a risk-based approach to compliance with data protection laws, but notes that “in the vast majority of cases, the use of AI will involve a type of processing likely to result in a high risk to individuals’ rights and freedoms and will therefore trigger the legal requirement for you to undertake a [data protection impact assessment].” (Emphasis added.)

Given ICO’s view that much of AI’s usage will be high risk from a data protection perspective, its guidance will be highly relevant to most users/developers of AI. Although ICO has not yet published its suggested approach to the framework, the ICO Guidance provides insight into what its take on the Principles might be.

Transparency

Data protection law requires the “fair and transparent processing” of data, which is similar to the Principle of “appropriate transparency and explainability”. To comply with this, the ICO Guidance recommends that companies using AI be open about how AI makes decisions and how personal data is used to train and test it.

Fairness

Data protection law also provides a right to fair treatment and non-discrimination, which is similar to the Principle of “fairness”.

The ICO Guidance notes that the usage of AI places the user/developer at high risk of falling foul of the requirement for non-discrimination and, therefore, the fairness Principle. To illustrate this, the ICO Guidance highlights the risk arising from imbalanced training data in which certain classes of persons are underrepresented. This could lead an AI model to treat such groups as statistically “less important” and, therefore, pay less attention to patterns found within them, as the sketch below illustrates.
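As a purely illustrative example of the kind of check this implies, the following Python sketch audits a training set for underrepresented groups. The field names, data and 10% threshold are all invented for illustration; the ICO Guidance does not prescribe any particular method or cut-off.

```python
from collections import Counter

def underrepresented_groups(records, group_key, threshold=0.10):
    """Flag groups holding less than `threshold` of the training data.

    records   - iterable of dicts, one per training example
    group_key - the protected attribute to audit, e.g. "age_band"
    The 10% default cut-off is arbitrary and purely illustrative.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < threshold}

# Hypothetical training set in which one age band is badly underrepresented.
training_data = (
    [{"age_band": "18-30"}] * 450
    + [{"age_band": "31-60"}] * 520
    + [{"age_band": "60+"}] * 30
)
print(underrepresented_groups(training_data, "age_band"))
# {'60+': 0.03} -> a group a model might treat as statistically "less important"
```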

To mitigate this risk, the ICO Guidance recommends that users of AI should have anti-discrimination measures and policies in place, while also monitoring their use of AI on an ongoing basis. Policies should clearly set out both the process and the person responsible for the final validation of an AI decision.

The ICO Guidance also recommends that users replacing a traditional decision-making system with AI should consider running the two in parallel for some time. This would flag discrepancies between the two processes and allow for the investigation of any significant divergence.
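By way of a hedged illustration, the sketch below runs a legacy rule-based process and an AI replacement side by side in “shadow mode”, acting only on the legacy decision and logging divergences for human review. The credit-style decisions, thresholds and field names are invented; this is one possible way to operationalise the parallel running the ICO Guidance describes, not a prescribed implementation.

```python
def shadow_compare(cases, legacy_decide, ai_decide):
    """Run a legacy process and its AI replacement side by side.

    Only the legacy decision is acted on; divergent outcomes are
    collected for human investigation before any cut-over to the AI.
    """
    divergences = []
    for case in cases:
        old, new = legacy_decide(case), ai_decide(case)
        if old != new:
            divergences.append({"case": case, "legacy": old, "ai": new})
    return divergences

# Invented rule-based and model-based credit decisions for illustration.
legacy_decide = lambda c: "approve" if c["income"] >= 30_000 else "refer"
ai_decide = lambda c: "approve" if c["income"] >= 28_000 else "refer"

cases = [{"id": 1, "income": 29_000}, {"id": 2, "income": 45_000}]
for divergence in shadow_compare(cases, legacy_decide, ai_decide):
    print(divergence)  # case 1 diverges and would be escalated for review
```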

To be clear, the ICO Guidance relates to compliance with data protection law and does not necessarily provide guidance as to how ICO sees compliance with the Principles. However, it does provide a useful insight into how a regulator may view a risk-based, proportionate regime for AI regulation.

Financial services

Banks, insurers and other financial services firms hold large amounts of data and operate in an environment where there is huge demand for innovation. Many in the financial sector have been quick to recognise the potential benefits of machine learning and other AI techniques.

In October 2022, the Bank of England and the FCA published the results of a survey into the state of machine learning in UK financial services, showing a trend towards an increasing use of machine learning applications. Among the firms that responded to the survey, 72% reported using or developing such applications.

However, almost half of the firms that responded to the survey said that financial services regulation constrains the deployment of machine learning applications, and a quarter (25%) attributed this to a lack of clarity within existing regulation.

The use of AI in financial services can bring important benefits, but can also create new risks (or amplify existing ones). Financial sector regulators, therefore, have an obvious role to play in the safe and responsible adoption of AI by financial services firms.

The regulators who supervise financial services – principally, the FCA, the PRA and the Bank of England – generally adopt a “technology neutral” approach when imposing regulations. Core principles, rules and regulations for financial services firms do not usually mandate or prohibit specific technologies.

Within the context of the framework for AI, the FCA, the PRA and the Bank of England launched a consultation on AI in late 2022, which closed in February 2023. The results of the consultation are currently awaited.

The consultation sought input on AI’s potential benefits and risks, how the current regulatory framework applies to AI, whether additional clarification of existing regulation may be helpful and how policies adopted by supervisory authorities can best support further safe and responsible adoption of AI.

It identified various “cross-cutting” legal requirements and guidance for financial services firms affecting three key stages of the AI lifecycle:

Data. There are existing requirements and guidance targeted at data quality, data privacy, data infrastructure and data governance.

Model. Existing regulations for managing capital risks may provide safeguards surrounding the model development, validation and review processes.

Governance. A third set of “cross-cutting” requirements and guidance is focused on proper procedures, clear accountability and effective risk management at various levels of operations, e.g., the Senior Managers & Certification Regime (SMCR).

As much of the existing regulation for financial services firms is framed in “technology neutral” terms, determining how it applies in the context of novel technologies such as AI can be a real challenge. It is, therefore, to be welcomed that supervisory authorities such as the FCA, the PRA and the Bank of England are actively seeking to gain and share feedback on the interface between financial services regulation and AI.

How we can help

Our Regulatory Risk and Compliance team has acted on some of the largest and most innovative regulatory projects across the UK, Europe and Asia, representing both regulators and other interested parties. We welcome feedback on these themes and would be happy to have a dialogue with interested parties. Please feel free to get in touch with one of our named contacts or via your usual Shepherd and Wedderburn relationship contact.