Leaked EU AI Act Drafts: Key Takeaways


February 14, 2024

The recent leaks of consolidated texts relating to the European Union’s Artificial Intelligence Act (AI Act) have attracted significant attention, offering a closer look at what may become the blueprint for AI regulation in Europe and, potentially, a model for the rest of the world. The leaks, as reported by Euractiv Technology Editor Luca Bertuzzi and European Parliament Senior Advisor Laura Caroli, provide a rare glimpse into the ongoing negotiations and the possible future of AI governance.

The AI Act is groundbreaking in its attempt to regulate a technology that is rapidly evolving and increasingly influential in many aspects of daily life. The legislation’s progress is crucial, given the complexity and potential impact of AI systems on society.

The leaks indicate that the legislation is moving forward, albeit amid challenges, including the tight timeline to finalize the text before the European Parliament elections this June. This urgency is compounded by concerns among major EU member states, such as France, over specific provisions on regulating foundation models, which are central to the development of AI technologies. The possibility of France forming a blocking minority highlights the contentious nature of the negotiations and the difficulty of reaching consensus on how to regulate such a complex and far-reaching technology.

The leaked texts reveal several key points and timelines that are critical for understanding the proposed regulations. Notably, the phased approach to enforcement, with different provisions coming into effect at different times, suggests a pragmatic acknowledgment of the need for gradual implementation. This approach gives stakeholders time to adapt to the new regulatory environment, enabling compliance without stifling innovation.

The focus on governance, knowledge, and training is particularly noteworthy. The emphasis on “AI literacy” and the requirement for competent oversight of high-risk AI systems underscore the importance of human oversight and the development of expertise in managing AI’s societal impacts. These provisions suggest a balanced approach to AI regulation, aiming to harness its benefits while mitigating risks.

However, the extended timeline for obligations related to high-risk AI systems, which would not apply until 36 months after entry into force, may raise concerns about whether regulatory measures can keep pace with rapid advancements in AI technology. This delay could leave a gap in oversight and risk mitigation during a critical period of AI development and deployment.

The leaks also highlight the ambitious scope of the AI Act, with provisions covering everything from general-purpose AI systems to specific obligations for providers and deployers of high-risk systems. The establishment of AI regulatory sandboxes at the national level is a forward-thinking inclusion, promoting innovation while ensuring that new AI applications can be tested in a controlled environment to assess their compliance and impact.

In conclusion, the leaked texts of the AI Act offer valuable insight into the European Union’s ambitious efforts to regulate artificial intelligence. While the act represents a significant step forward in addressing the complex challenges posed by AI, the negotiations and finalization process reflect the delicate balance between promoting technological innovation and ensuring safety, transparency, and accountability. As the legislative process unfolds, it will be crucial to continue monitoring developments and to engage in informed discussion on the future of AI regulation.