As the EU’s Artificial Intelligence (AI) Act edges closer to reality, lawyers, economists, and technologists working on AI R&D, competition and regulation should pay close attention. The Act, which advanced through a dramatic 36-hour ‘marathon’ negotiation concluding last Friday, may well have a significant impact on the global AI landscape, though how far remains to be seen.
A key sticking point, which we explore below, was how best to regulate generative AI, and in particular foundation models such as large language models, for example OpenAI’s GPT-4 and Google’s newly released Gemini. These models power AI products like ChatGPT and Bard – collectively referred to in the Act as ‘general purpose AI’. Some fear regulation could stifle the growth of European players such as France’s Mistral AI, whereas others think the Act doesn’t go far enough.
Regulating a sector as complex, fast-moving and nascent as AI is always going to be a delicate balance: providing sufficient guardrails without inadvertently suffocating new entrants or slowing the pace of innovation. But from what we’ve seen to date, the Act manages to tread this fine line, and it represents a strong democratic statement about the need for these new technologies to be held accountable. That said, as with most regulation, the real test will come when the Act moves from the theoretical to real-world implementation.
Here, the Commission has a real challenge on its hands: regulating complex sectors like AI requires a deep understanding of the technical details and their implications. For example, the Act includes a two-tier approach to foundation models, based at least initially on the amount of computational resources used to train the model, though potentially on other factors too. But how will this work in practice? And how will the details behind the minimum transparency and information-sharing obligations play out? What’s more, the sector is evolving rapidly, and some parts of the Act may already be outdated by the time the rules come into force.
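To make the two-tier idea concrete, here is a minimal sketch of how a compute-based classification might work, assuming the widely reported threshold of 10^25 floating-point operations (FLOPs) for the ‘systemic risk’ tier; that figure, and any additional criteria, remain to be confirmed in the final text.

```python
# Minimal sketch of a compute-based, two-tier classification for foundation
# models. The 1e25 FLOP threshold reflects figures widely reported from the
# provisional agreement; the final text may use different figures or criteria.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # reported threshold; subject to change

def classify_foundation_model(training_flops: float) -> str:
    """Classify a foundation model by the compute used to train it."""
    if training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD:
        # Tier 2: presumed systemic risk, attracting stricter obligations
        return "general purpose AI with systemic risk"
    # Tier 1: baseline transparency and documentation obligations
    return "general purpose AI"

# Hypothetical training-compute figures, for illustration only
print(classify_foundation_model(2e25))  # general purpose AI with systemic risk
print(classify_foundation_model(1e23))  # general purpose AI
```

A bright-line compute test like this is easy to administer, but it also illustrates the Commission’s challenge: training efficiency improves quickly, so any fixed threshold risks catching too many models, or too few, within a few years.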
We briefly summarise the key elements and provide our initial thoughts. While the text isn’t yet finalised, there’s a lot already for businesses developing AI applications and using foundation models, as well as competition specialists, to start grappling with now.
The EU’s AI Act is a landmark achievement: it is the first fully fledged legislative AI regulation to hit the books in the Western world. First proposed in 2021, it has been a while in the making. The overall purpose of the Act is to ensure AI in Europe is developed and deployed safely, i.e., protecting democracy, the rule of law and sustainability while fostering growth and innovation. The rules establish a set of tiered obligations for AI based on risk and impact, with specific rules for ‘general purpose AI’. All this will be overseen by the Commission’s new EU AI Office.
The figure below summarises the tiered risk approach to AI and the key rules expected to apply at each tier following last week’s trilogue discussions (the final text is not yet available).
[Figure: Summary of the tiered risk approach to AI]
While not included in the original draft of the Act, there will now also be dedicated obligations specifically for general purpose AI, i.e., generative AI including foundation models. Based on what has been reported from the trilogue, these include minimum transparency requirements, such as technical documentation and summaries of the content used for training, with stricter obligations (including model evaluations and incident reporting) for models deemed to pose systemic risk.
The Act seeks to balance regulatory compliance responsibilities between the creators of foundation models and those developing downstream applications. Getting this balance right is particularly crucial for foundation models, where countless applications can be developed with relatively minimal investment, and should help ensure that regulatory compliance is distributed effectively across the AI value chain.
Time will tell. The regulations are extensive, and weighty for higher-risk or systemic-risk AI systems. The last-minute changes resulted in some watering down (e.g., on biometrics and foundation models) that may blunt the Act’s impact on the sector, but the rules nevertheless remain wide-ranging.
The Act won’t just apply to firms developing foundation models or new AI products, but also to existing businesses that use the technology – particularly important in high-risk sectors like medicine or law enforcement.
Consumers will have the right to lodge complaints, and sizeable fines can be imposed: from €7.5m or 1.5% of global turnover up to €35m or 7%, depending on the breach and the size of the firm.
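For a sense of the sums involved, here is an illustrative sketch of how such a cap might be computed, assuming the ‘higher of a fixed amount or a share of worldwide turnover’ structure familiar from the GDPR; the exact brackets per breach type, and the example figures, are assumptions pending the final text.

```python
# Illustrative sketch of the Act's reported fine structure. The 'higher of a
# fixed amount or a share of worldwide turnover' design mirrors the GDPR;
# the brackets per breach type are assumptions pending the final text.

def max_fine(fixed_cap_eur: float, turnover_share: float,
             global_turnover_eur: float) -> float:
    """Maximum fine: the higher of the fixed cap or the turnover share."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Hypothetical example: the most serious breaches reportedly carry a cap of
# EUR 35m or 7% of global turnover. For a firm with EUR 2bn in turnover:
print(f"{max_fine(35_000_000, 0.07, 2_000_000_000):,.0f}")  # 140,000,000
```

As the example shows, for large firms the turnover-based limb will typically bite, which is what gives the regime its deterrent force.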
Some other countries have already made moves towards similar regulations, and many more may look to the EU’s AI Act as a template (the ‘Brussels effect’).
As with the GDPR, we expect the EU AI Act could serve as a model for other governments, and businesses may find it more efficient to adopt EU standards globally rather than tailor each individual AI system to different regulatory regimes.
The UK Government so far seems keener on ‘hands-off’, industry-led regulation, as reflected in its AI White Paper outlining key principles, echoed in the CMA’s preliminary work (see our blog for more on that) and the AI Safety Summit. But we suspect that won’t be the last word. The CMA has an ongoing programme of work and plans a further update in March next year that will include ‘reflections on further developments’ and look at the role of AI chips (a market currently dominated by Nvidia, with others seeking to play catch-up). The CMA is also looking into the partnership between Microsoft and OpenAI using its merger control powers. There is also ongoing work by other regulators, such as the UK’s Information Commissioner’s Office, which last week issued a warning against ‘2024 becoming the year people lose trust in AI’.
Though Friday’s decision marks the political agreement between the European Parliament and the Council, and is the last big hurdle, it isn’t quite the final word. The Act is now subject to formal approval by both bodies and by member states. Once adopted, there will be a transitional period of two years before it comes into force, meaning obligations will only kick in around 2025-2026. Legal fights are also anticipated, as with the DMA and DSA, which are currently facing designation appeals on multiple fronts.
The real challenge for the AI Act may be the pace of change itself. In a few years, the nature of the models and their deployment may look very different, possibly rendering the Act’s current thresholds and requirements obsolete or incomplete. The Commission also faces a daunting task in ensuring it is tooled up to engage at the right level of detail and to implement the Act effectively. On the other hand, as many commentators have argued, given the fast-moving nature of the issues this is a better outcome than simply relying on self-regulation.