The full and final text of the European Union's AI Act, a risk-based regulation for applications of artificial intelligence, has been published in the bloc's Official Journal.
The new law enters into force 20 days later, on August 1, and its provisions will be fully applicable to AI developers 24 months after that, by mid-2026. But the law takes a phased approach to implementing the EU's AI rulebook, with various deadlines falling between entry into force and full applicability, and some even later, as different legal provisions start to apply.
EU lawmakers reached a political agreement in December 2023 on the bloc's first comprehensive rulebook for AI.
The framework imposes different obligations on AI developers depending on the use case and perceived risk: the majority of AI uses are deemed low risk and fall outside the regulation's scope, while a small number of potential use cases are banned outright.
So-called “high risk” use cases, such as biometric applications of AI and AI used in law enforcement, employment, education and critical infrastructure, are permitted by law, but developers of such apps will have obligations in areas such as data quality and anti-bias.
A third risk tier applies lighter transparency requirements to makers of tools such as AI chatbots.
There are also some transparency requirements for makers of general-purpose AI (GPAI) models, such as OpenAI's GPT, the technology underlying ChatGPT. The most powerful GPAI models, a classification typically based on a compute threshold, may also be required to carry out systemic risk assessments.
Some in the AI industry, backed by a handful of member states' governments, have lobbied hard to ease GPAI obligations, fearing the law could stifle Europe's ability to produce homegrown AI giants that can compete with U.S. and Chinese rivals.
Phased implementation
First, the list of prohibited uses of AI will take effect six months after the law comes into force, i.e. early 2025.
Examples of prohibited (or “unacceptable risk”) uses of AI that will soon be illegal include Chinese-style social credit scoring; compiling facial recognition databases by indiscriminately scraping the internet or CCTV footage; and the use of real-time remote biometric identification by law enforcement in public places, unless one of several exceptions applies, such as searching for missing or abducted people.
Codes of practice will then apply to developers of in-scope AI apps nine months after entry into force, around April 2025.
The EU's AI Office, an ecosystem-building and oversight body established by the law, is responsible for providing these codes, but who will actually write the guidelines remains an open question.
A Euractiv report earlier this month said the EU was seeking consultancy firms to draft the codes, stoking concerns from civil society that AI industry players would be able to influence the shape of the rules that will apply to them. More recently, MLex reported that the AI Office will launch a call for expressions of interest to select stakeholders to draft the codes of practice for general-purpose AI models, following pressure from MEPs to make the process inclusive.
Another key deadline falls 12 months after entry into force, on August 1, 2025, when the law's rules for GPAIs will start to apply, requiring them to meet transparency obligations.
Finally, a subset of high-risk AI systems gets the most generous compliance deadline of all: 36 months after the law enters into force, until 2027, to meet their obligations. Other high-risk systems must comply sooner, after 24 months.