AI Regulation: Compliance

1st December 2025

The EU AI Act is highly prescriptive, and to a certain extent this makes compliance more straightforward, even if it means your planned uses of AI need to be rethought and there is the burden of some extra paperwork. In the UK, by contrast, we are relying on individual regulators to provide guidance based originally on five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The principles have since been expanded in the “AI Playbook” (published in February 2025), and it is up to each regulator to set out how it will regulate the use of AI in the context of existing regulatory frameworks and the Playbook.

This article examines three significant case studies that demonstrate how emerging technologies intersect with key areas of law and regulation. These include:

  • the use of AI-based chatbots for customer service in the context of consumer rights;
  • generative AI in relation to the Online Safety Act;
  • the application of generative AI in creating advertisements under the CAP Code.

Each case provides practical insights into the legal implications and compliance considerations involved. For a deeper analysis and further details on these examples, please continue reading below.

Customer service and chatbots

This is a scenario that will be familiar to anyone who has tried to contact (for example) their streaming provider in the last few months. You are offered a chat facility as the first way to resolve a query, let’s say a request to change tariff. The chat is in fact a “bot” driven by generative AI, designed to establish a natural-language dialogue between the consumer and the service provider. The benefit to the service provider is obvious: the vast majority of consumer queries fall into a narrow range of regularly repeated questions, and the chatbot can process these quickly, factually and at low cost. The consumer can also benefit from an efficient and rational interaction. But how does this work in the context of complying with consumer law?

Under the latest consumer protection regulations arising from the Digital Markets, Competition and Consumers Act, an unfair practice is one that is “likely to cause the average consumer to take a transactional decision they would not have otherwise taken.” If an AI chatbot is configured to provide sales support with the aim of selling more or improving returns for the business, the business needs to be certain that the responses remain fair to consumers, providing accurate and reasoned replies that are not misleading, unbalanced or overly pressuring. Unlike humans, who can be trained to take the subtleties of this obligation into account, an AI bot that has been designed to “sell” may “reason” that offering low-cost alternatives, or admitting negatives or drawbacks of a product or service, is not compatible with its objectives, and start to use language and tactics that would not be compliant. To be clear, generative AI does not actually reason; it merely matches patterns of language that meet its criteria; it is not making value judgments.
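To make this concrete, one kind of safeguard a business might build is sketched below: a post-processing check on a sales chatbot’s draft reply before it is sent to the consumer. Everything here is a hypothetical illustration for the sake of the point, not a rule drawn from the regulations or CMA guidance: the phrase lists, the disclosure rule and the `check_reply` helper are all invented, and a real deployment would need far richer checks than keyword matching.

```python
# Illustrative only: a simple post-processing guardrail for a sales chatbot.
# The phrase lists and disclosure rules below are hypothetical examples,
# not drawn from the DMCC regulations or CMA guidance.

PRESSURING_PHRASES = [
    "act now", "last chance", "you must upgrade", "only today",
]

# If the reply mentions the trigger word, it should also include the
# balancing disclosure (again, purely illustrative).
REQUIRED_DISCLOSURES = {
    "upgrade": "cheaper tariffs are also available",
}

def check_reply(draft: str) -> tuple[bool, list[str]]:
    """Return (ok, issues) for a draft chatbot reply.

    ok is False if the draft uses pressuring language or omits a
    required balancing disclosure."""
    issues: list[str] = []
    lowered = draft.lower()
    for phrase in PRESSURING_PHRASES:
        if phrase in lowered:
            issues.append(f"pressuring language: '{phrase}'")
    for trigger, disclosure in REQUIRED_DISCLOSURES.items():
        if trigger in lowered and disclosure not in lowered:
            issues.append(f"mentions '{trigger}' without noting that {disclosure}")
    return (not issues, issues)

# A draft that a "sell"-optimised bot might produce fails the check.
ok, issues = check_reply("Act now! You must upgrade before midnight.")
```

The design point is that the check sits outside the generative model: the bot can be optimised for helpfulness, while a deterministic layer the business controls (and can evidence to a regulator) vets what actually reaches the consumer.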

At present, the Competition and Markets Authority notes in its guidance that it will be looking for businesses to show “fair dealing” when using AI, and is specifically cognisant that “AI based technologies enable bad actors to create false or misleading information more easily, at lower cost and greater scale.” The CMA goes on to note that, “if necessary, [we shall] tackle firms that do not play by the rules…through enforcement action.”

Generative AI in relation to the Online Safety Act

In this scenario, the relevant regulator is Ofcom. In an open letter of November 2024, published in response to a rising trend for AI models capable of creating personas based on deceased people, Ofcom noted that “a site or app [that] includes a Generative AI chatbot that enables users to share text, images or videos generated by the chatbot with other users will be a user-to-user service…and…Generative AI tools that enable the search of more than one website and/or database are ‘search services’ within the meaning of the Act.” Both user-to-user services and search services are captured by the Act, which imposes obligations to prevent illegal content, and content harmful to children, from being published.

Ofcom goes on to remind such providers “…to prepare risk assessments relating to the chance of users encountering illegal or harmful content…and to implement measures to mitigate and manage those risks.”

The requirements of the Online Safety Act are most stringent in relation to children, and an obvious problematic scenario arises where a child prompts the system about a harmful subject (for example, eating disorders or suicidal ideation). The service provider will be in breach of the Act if the content created and served by the Generative AI is harmful. Once again, it is important to remind ourselves that the Generative AI system is not malevolent; it is merely providing the best response to the prompt it has been given, within the rules it has been refined against. As such, providers will need to take detailed steps to understand the age of their users and configure their systems to respond to children’s questions in a way that is not harmful for the age of the user and does not enable children to reach further harmful content.
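The shape of that configuration can be sketched in a few lines: route the request differently depending on the user’s age band and the sensitivity of the topic, serving a safe signposting response rather than generated content where both flags are raised. This is a minimal sketch under stated assumptions, not a compliant implementation: the topic list, the keyword matching, the `respond` helper and the age threshold are all hypothetical, and a real service would rely on proper classifiers and age-assurance signals, neither of which is shown here.

```python
# Illustrative sketch: age-aware routing of generative responses.
# Topic detection is a crude keyword stub and the topic list, threshold
# and signpost text are hypothetical; a real service would use trained
# classifiers and age-assurance signals.

SENSITIVE_TOPICS = {"eating disorder", "self-harm", "suicide"}

SIGNPOST = (
    "This is a topic that needs careful support. Please talk to a "
    "trusted adult, or contact a support service such as your GP."
)

def respond(prompt: str, user_age: int, generate) -> str:
    """Serve a generated reply normally, but a safe signposting reply
    when a child asks about a sensitive topic."""
    lowered = prompt.lower()
    is_sensitive = any(topic in lowered for topic in SENSITIVE_TOPICS)
    if user_age < 18 and is_sensitive:
        # Do not pass the prompt to the model at all for this path,
        # so harmful generated content cannot reach the child.
        return SIGNPOST
    return generate(prompt)

# A child asking about a sensitive topic receives the signpost,
# not model output (the lambda stands in for the model).
reply = respond("tell me about eating disorders", 14, lambda p: "generated text")
```

Keeping the routing decision outside the model means the provider can evidence, per the risk-assessment duty, exactly which class of request can ever reach the generator.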

There is a further complexity for Generative AI systems. The Online Safety Act also includes so-called “cross-cutting” duties on captured services to protect privacy and freedom of expression while adhering to the legal obligations relating to illegal and harmful content. At times, this will amount to a judgment call: for example, whether a given prompt and response amounts to hate speech or to political opinion. Such judgments may be challenged, and the Generative AI provider will need to show how the balancing test has been carried out and defend its position in relation to the obligations, which means both (i) justifying and explaining its general, systematic methodology and (ii) keeping specific records on a case-by-case basis.
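The record-keeping limb of that obligation is, in engineering terms, an audit log. Below is a minimal sketch of what a per-case record might capture so the balancing test can later be evidenced; the `ModerationRecord` structure, the field names and the `record_decision` helper are all hypothetical, not taken from the Act or Ofcom guidance.

```python
# Illustrative sketch of case-by-case record keeping for moderation
# decisions, so a balancing-test judgment can later be evidenced.
# All field names and helpers are hypothetical.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModerationRecord:
    prompt_id: str
    decision: str          # e.g. "allowed" or "removed"
    rationale: str         # the balancing-test reasoning applied
    policy_version: str    # which version of the methodology was in force
    timestamp: str         # when the decision was taken (UTC)

def record_decision(prompt_id: str, decision: str,
                    rationale: str, policy_version: str) -> str:
    """Serialise one moderation decision as a JSON audit entry."""
    rec = ModerationRecord(
        prompt_id=prompt_id,
        decision=decision,
        rationale=rationale,
        policy_version=policy_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice this would be written to an append-only store,
    # not just returned as a string.
    return json.dumps(asdict(rec))

entry = record_decision(
    "p-123", "allowed",
    "assessed as political opinion, not hate speech",
    "policy-2025-03",
)
```

Recording the policy version alongside each decision ties limb (ii), the specific record, back to limb (i), the general methodology that was in force at the time.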

Generative AI use in the creation of advertisements

The regulation of advertising in the UK is carried out by the Advertising Standards Authority (“ASA”) and is codified in the “CAP Code.”

Consistent with the UK Government’s general approach to AI regulation, the CAP Code does not have anything specific to say about the dos and don’ts of AI, but a circular from May 2025 notes that all the “normal” rules apply whether or not AI has been used. Key among the rules is the requirement not to mislead. The ASA underlines this point by asking “what’s the mischief?” that the use of AI may be causing that would warrant disclosing that AI has been used in creating the advert. For example, if AI had been used to depict cosmetic effects of a beauty product that are not attainable in real life, it would be necessary to disclose the use of AI. The circular also points out that there could be occasions where disclosure of the use of AI is key to “explaining” the advert, for example where it lampoons a celebrity or well-known creative. The industry body, the Incorporated Society of British Advertisers, goes further in its guidance to members, setting out 12 principles regarding the use of AI. Of particular interest are principles 2 and 11, which state, respectively, that any use of AI should not undermine trust and should be transparent.