
EU's AI Act: One step closer, but will it stand the test of time?

As the EU’s Artificial Intelligence (AI) Act edges closer to reality, lawyers, economists, and technologists working on AI R&D, competition and regulation should pay close attention. The Act, which advanced via a dramatic 36-hour ‘marathon’ negotiation last Friday, may well significantly shape the global AI landscape, though how far remains to be seen.

A key sticking point, which we explore below, was how best to regulate Generative AI, and in particular foundation models – large language models such as OpenAI’s GPT-4 and Google’s newly released Gemini. These models power AI products like ChatGPT and Bard – collectively referred to in the Act as ‘general purpose AI’. Some fear regulation could stifle the growth of European AI developers like France’s Mistral AI, whereas others think it doesn’t go far enough.

It’s always going to be a delicate balance to regulate a sector as complex, fast-moving and nascent as AI – putting sufficient guardrails in place without inadvertently suffocating new entrants or slowing the pace of innovation. But from what we’ve seen to date, the Act manages to tread this fine line, and it represents a strong democratic statement about the need to hold these new technologies accountable. That said, as with most regulation, the real test will come when the Act moves from the theoretical into real-world implementation.

Here, the Commission has a real challenge on its hands: regulating complex sectors like AI requires a deep understanding of the technical details and their implications. For example, the Act includes a two-tier approach to foundation models, based – at least initially – on the amount of computational resources required to train the model, though potentially on other factors too. But how will this work in practice? And how will the details behind the minimum transparency and information-sharing obligations play out? What’s more, the sector is evolving rapidly, and some parts of the Act may already be outdated by the time the regulations come into force.

We briefly summarise the key elements and provide our initial thoughts. While the text isn’t yet finalised, there’s a lot already for businesses developing AI applications and using foundation models, as well as competition specialists, to start grappling with now.

The Act in a snapshot

The EU’s AI Act is a landmark achievement: it is the first fully fledged legislative AI regulation to hit the books in the Western world. First proposed in 2021, it has been a while in the making. The overall purpose of the Act is to ensure AI in Europe is developed and deployed safely, i.e., protecting democracy, the rule of law and sustainability while fostering growth and innovation. The rules establish a set of tiered obligations for AI based on risk and impact, with specific rules for ‘general purpose AI’. All of this will be overseen by the Commission’s new EU AI Office.

The figure below provides a summary of the tiered risk approach to AI. While the final text is not yet available, based on last week’s trilogue discussions the key rules will include:

  • Outright bans for ‘unacceptable risks’: a relatively self-contained group of systems specified upfront in the Act will be prohibited (such as biometric categorisation using sensitive characteristics, social scoring or untargeted scraping of facial images). These bans only apply to systems used within the EU.
  • Obligations for ‘high-risk’ AI systems: systems that could negatively affect safety or fundamental rights will have to comply with strict requirements, such as risk-mitigation systems, high-quality data sets, detailed technical documentation for self-certification, clear and transparent information for users, human oversight, and a high level of robustness, accuracy and cybersecurity. This only applies to a small subset of AI systems, covering, for example, key critical infrastructure or systems involving biometric identification.
  • ‘Transparency risk’ requirements: there will be basic transparency requirements to make sure, for example, that users are aware when they are interacting with AI (e.g., via chatbots) and that AI-generated content, including deep fakes and other synthetic content, is labelled as such.
  • Minimal risk systems: most AI systems (such as AI-enabled recommender systems or spam filters) will not be subject to any obligations but can sign up to a voluntary code of conduct.
  • Exemptions for law enforcement agencies: for example, the use of real-time remote biometric identification systems where strictly necessary, now subject to additional safeguards.

Summary of the tiered risk approach to AI

Source: AI-Presentation-CEPS-Webinar-L.-Sioli-23.4.21.pdf

While not included in the original draft of the Act, there will now also be dedicated obligations specifically for general purpose AI, i.e., generative AI including foundation models. These include:

  • A two-tier approach with automatic categorisation of ‘systemic’ models, e.g., those trained using computing power above 10²⁵ FLOPs, which mirrors the US Executive Order (illustrated in the sketch after this list). There is also reportedly an annexe that sets out qualitative criteria, including the number of business users and the model’s parameters, and other criteria may also be considered. While it’s not yet clear which current models meet the threshold, GPT-4 most likely does. How much flexibility and discretion there will be on designation is unclear, and it looks as if this could be a moving target.
  • ‘Systemic risk’ models with high impact will face stricter rules relating to managing risks, monitoring serious incidents, performing model evaluation and adversarial testing.
  • Exemptions for models that are released under free and open-source licences, such as those where model weights are made publicly available – unless, for example, they are ‘systemic risk’ models; the copyright and transparency obligations below still apply.
  • All models must comply with copyright legislation and publish summaries of copyrighted data used for training, but ‘without prejudice to trade secrets.’ What this means in practice – in terms of which data can be used to train models and how granular the summaries must be – is unclear. If the summaries are only a couple of pages long, they are unlikely to be very effective for concerned publishers and content owners.
  • All developers of models have responsibilities along the Generative AI supply chain: they must provide downstream developers of high-risk AI applications with all the information necessary to enable their compliance.
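
To make the two-tier idea concrete, here is a minimal sketch of how the reported compute-based categorisation could be expressed. This is purely illustrative: the 10²⁵ FLOPs threshold comes from reports of the deal, but the function name, tier labels and the designation flag are hypothetical placeholders, not anything taken from the Act’s text.

```python
# Illustrative sketch only - not the Act's actual classification logic.
SYSTEMIC_COMPUTE_THRESHOLD_FLOPS = 1e25  # reported threshold for 'systemic' models

def classify_gpai_model(training_compute_flops: float,
                        designated_by_commission: bool = False) -> str:
    """Return a hypothetical tier label for a general purpose AI model.

    The qualitative criteria reportedly set out in an annexe (business users,
    parameter count, etc.) are reduced here to a single designation flag.
    """
    if training_compute_flops > SYSTEMIC_COMPUTE_THRESHOLD_FLOPS or designated_by_commission:
        return "systemic risk"  # stricter duties: evaluations, adversarial testing, incident reporting
    return "baseline"           # transparency and copyright obligations still apply

# Example: a frontier model trained with ~2e25 FLOPs would land in the systemic tier.
print(classify_gpai_model(2e25))  # -> systemic risk
print(classify_gpai_model(5e24))  # -> baseline
```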

The Act seeks to strike a balance between the regulatory compliance responsibilities imposed on the creators of foundation models and those imposed on developers of downstream applications. Getting this balance right is particularly crucial for foundation models, where countless applications can be built on top with relatively minimal investment, and should help ensure that regulatory compliance is distributed effectively across the AI value chain.

Will it bite?

Time will tell. The regulations are extensive and, at least on paper, hefty for high-risk or systemic-risk AI systems. The last-minute changes resulted in some watering down (e.g., on biometrics and foundation models) that may affect how strongly the Act bites, but the rules nevertheless remain wide-ranging.

The Act won’t just apply to firms with foundation models or new AI products, but also to existing businesses that use the technology – particularly important in high-risk sectors like medicine or law enforcement.

Consumers will have the right to lodge complaints, and sizeable fines can also be imposed – from €7.5m or 1.5% of global turnover up to €35m or 7%, depending on the breach and the size of the firm.
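
As a rough sketch of how those caps might combine, the snippet below assumes the usual EU mechanic of applying whichever is higher of the fixed amount or the turnover percentage, with proportionality expected for smaller firms. The breach labels are illustrative, and the exact per-breach rules will depend on the final text.

```python
# Illustrative sketch of the reported fine caps - assumptions, not the final text.
FINE_CAPS = {
    # breach type: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_ai_practice": (35_000_000, 0.07),   # upper end of the reported range
    "information_breach":     (7_500_000, 0.015),   # lower end of the reported range
}

def max_fine(breach: str, global_turnover_eur: float) -> float:
    """Maximum possible fine, assuming the higher of the fixed cap or turnover share applies."""
    fixed_cap, turnover_share = FINE_CAPS[breach]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# Example: a firm with EUR 2bn global turnover facing the most serious breach
# could face up to EUR 140m (7% of turnover), well above the EUR 35m fixed cap.
print(f"{max_fine('prohibited_ai_practice', 2_000_000_000):,.0f}")  # 140,000,000
```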

Will other countries follow suit?

Some other countries have already made moves towards similar regulations and many others may look to the EU’s AI Act as a template (the ‘Brussels effect’). For example:

  • US – in October, President Biden issued an Executive Order on AI safety standards, which is similar in some respects, though the EU AI Act goes further on transparency.
  • China has already imposed specific legislation for generative AI.
  • India’s Competition Commission has opened a market study and its Chair recently sounded the alarm over anticompetitive concerns, echoing similar concerns by the head of Germany’s competition authority, the Bundeskartellamt.
  • Other countries looking at competition in AI include Portugal and South Africa.

Similar to the GDPR, we expect the EU AI Act could serve as a model for other governments, and businesses may find it more efficient to adopt EU standards globally rather than tailor each individual AI system to meet different regulatory standards.

Will the UK again be left behind, as with the Digital Markets Act?

The UK Government so far seems keener on ‘hands off’, industry-led regulation, as reflected in its AI White Paper outlining key principles, echoed in the CMA’s preliminary work (see our blog for more on that) and the AI Safety Summit. But we suspect that won’t be the last word. The UK’s CMA has an ongoing programme of work and plans a further update in March next year, which will include ‘reflections on further developments’ and look at the role of AI chips (a market currently dominated by Nvidia, with others seeking to play catch-up). The CMA is also currently looking into the partnership between Microsoft and OpenAI using its merger control powers. There is also ongoing work by other regulators, such as the UK’s Information Commissioner’s Office, which last week issued a warning against ‘2024 becoming the year people lose trust in AI’.

More battles ahead?

Though Friday’s decision marks the political agreement between the European Parliament and the Council – the last big hurdle – it isn’t quite the final word. The Act is now subject to formal approval by both bodies and by member states. Once adopted, there will be a transitional period of two years before it comes into force, which means the obligations will only kick in around 2025–2026. Legal fights are also anticipated, as with the DMA and DSA, which are currently facing designation appeals on multiple fronts.

The real challenge for the AI Act may be the pace of change itself. In a few years, the nature of the models and how they are deployed may look very different, possibly rendering the Act’s current thresholds and requirements obsolete or incomplete. The Commission also faces a daunting task in ensuring it is tooled up to engage at the right level of detail and to implement the Act effectively. On the other hand, as many commentators have argued, this is a better outcome than simply relying on self-regulation, precisely because the issues are moving so fast.