It's official: the European Union's risk-based regulation of artificial intelligence applications comes into force on Thursday, August 1, 2024.
This kicks off a rolling series of compliance deadlines that will apply to different types of AI developers and applications, with most provisions fully applicable by mid-2026. The first deadline, however, lands in just six months, when a ban on a small number of prohibited uses of AI in specific contexts, such as law enforcement's use of remote biometric identification in public places, takes effect.
Under the European Union's approach, most applications of AI are deemed to pose low or no risk and therefore will not be subject to regulation.
Some potential uses of AI are categorized as high risk, such as biometrics and facial recognition, and AI used in sensitive sectors such as education and employment. Systems in these categories will need to be registered in an EU database, and their developers will need to comply with risk- and quality-management obligations.
A third, “limited risk” tier applies to AI technologies such as chatbots and tools that can be used to create deepfakes; these must meet certain transparency requirements to ensure users are not deceived.
Another key element of the law applies to developers of so-called general purpose AI (GPAI). Here, too, the EU has adopted a risk-based approach, with most GPAI developers facing only light transparency requirements. Just a subset of the most powerful models will be required to carry out risk assessments and adopt mitigation measures.
Exactly what GPAI developers need to do to comply with the AI Act is still under discussion, as the relevant codes of practice have yet to be drawn up. Earlier this week, the AI Office, a strategic oversight and AI-ecosystem-building body, launched a consultation and call for participation in this rulemaking process, saying it expects to finalize the codes in April 2025.
OpenAI, developer of the GPT large language model that underpins ChatGPT, wrote in its own primer on the AI Act late last month that it plans to “work closely with the EU AI Office and other relevant authorities as the new law comes into force in the coming months,” including by putting together technical documentation and other guidance for downstream providers and deployers of GPAI models.
“If your organization is trying to determine how to comply with the AI Act, you should first classify any in-scope AI systems. Identify the GPAI and other AI systems you use, determine how they are classified, and consider the obligations arising from your use cases,” OpenAI adds, offering its own compliance guidance for AI developers. “You should also determine whether you are a provider or deployer with respect to your in-scope AI systems. These issues can be complex, so consult legal counsel if you have any questions.”