EU’s Artificial Intelligence Act – first measures to take effect from 2nd February 2025

4th February 2025

The EU is taking a hard line in its approach to the regulation of artificial intelligence (AI) compared to international peers, including the UK. The EU's first wave of bans on certain AI practices came into force on 2nd February 2025.

The current legislative mood in the UK is bullish about AI as an engine for growth, with no new AI-specific rules and regulations currently before Parliament. Regulation of the space is effectively devolved to existing structures (such as the Information Commissioner’s Office and the Financial Conduct Authority) under a limited principles-based approach, using existing rules (such as the UK GDPR) to regulate the technology like any other.

The EU, on the other hand, clearly takes a very different view and has given significant fanfare to its passing of detailed legislation that is rigorous and requires all parts of an AI value chain to actively demonstrate compliance. It is highly likely that organisations based outside the EU but seeking revenues from EU customers/users will need to comply with at least some parts of those regulations.

By way of background, the Artificial Intelligence Act (the “Act”) became law in the EU in August 2024, but its first real impact arose on 2nd February 2025, when the first tranche of its rules came into force, effectively banning certain uses of AI in the EU.

The Act has extra-territorial effect (akin to the GDPR), which means that if an activity has an effect within the EU (e.g. an AI system being sold/licensed to EU-based customers), that activity is captured by the Act, and businesses active in the EU are in scope even where they are located in the UK.

Broad Principles of the AI Act

The Act relates to any party that provides, imports, distributes or deploys AI. This means that developers, resellers and users of AI are all in scope. The definition of AI is derived from the OECD’s definition but excludes AI systems used for national security, military, defence and scientific research purposes.
The Act requires all captured systems to be risk assessed and classified as prohibited, high risk, limited risk or minimal risk. Prohibited systems are banned from 2nd February 2025.

What is a prohibited system?

The prohibited systems are:
• Biometric categorisation based on sensitive characteristics (because these could lead to discrimination);
• Facial recognition from untargeted data collection (such as web-trawling/scraping);
• Facial recognition for mass surveillance (subject to limited public security exceptions);
• Social scoring based on behaviours;
• Risk scoring for potential to commit criminal activity;
• Emotion recognition in workplaces and educational settings;
• Systems that exploit vulnerabilities, such as those of minors or persons with health conditions; and
• Systems that use subliminal techniques.

What is a high-risk system?

High-risk systems, which are identified and categorised in the Act, will be strictly regulated. It is possible to show via risk assessment that a system is not high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making.

Each high-risk system needs to:
• Undergo a conformity assessment to show that the system has been tested, that bias has been mitigated and that the system is designed to protect fundamental rights;
• Be transparent about the fact that it is an AI system, including where audio or video content is created;
• Allow human oversight of its processing;
• Publish technical documentation relating to the system; and
• Maintain detailed logs of prompts, responses and “decisions”.
The burden of these obligations is asymmetric between providers and deployers, with providers expected to assist deployers with the heavy lifting of compliance and risk assessments.

Other considerations

Limited-risk systems must make clear to users that they are interacting with an AI system and must specifically highlight any AI-generated or AI-manipulated audio and video, no later than a user’s first interaction with the system.

Minimal-risk systems, such as spam filters, will not face regulation beyond the initial risk assessment setting out why they are minimal-risk.

Providers of general-purpose AI systems are required to register with the EU AI Office, comply with codes of conduct and put in place compliant policies relating to copyrighted or other protected materials. Where a general-purpose AI system poses a systemic risk, those risks must be modelled, mitigated and reported on, and additional precautions must be taken in relation to cybersecurity.

Penalties and Enforcement

The Act will be enforced by a new EU AI Office, which has been given fining powers of up to 7% of worldwide annual turnover for breaches of the prohibitions and up to 3% for failure to comply with the requirements for high-risk AI systems.

A new AI Liability Directive, which would additionally give persons damaged by a breach of the Act a right to damages, is being debated by the EU’s legislature.

Steps for UK Businesses

UK businesses that provide or deploy AI systems in the EU should have risk assessments in place setting out the risk classification of each AI system. Systems that are prohibited should be withdrawn from use in the EU immediately. High-risk systems will require compliance steps to be completed by August 2026 or August 2027, depending on the purpose of the system.