The first draft of the Code of Practice that will apply to providers of general-purpose AI models under the European Union's AI Act has been published, along with a call for feedback that is open until November 28, as the drafting process continues into next year. Formal compliance deadlines will kick in over the coming years.
The pan-EU legislation, which came into force this summer, regulates applications of artificial intelligence under a risk-based framework. But it also includes some measures aimed at more powerful foundational, or general-purpose, AI models (GPAIs). This is where the Code of Practice comes in.
Among the companies likely to fall into this bracket are OpenAI, maker of the GPT models (which power the AI chatbot ChatGPT); Google with its Gemini GPAIs; Meta with Llama; Anthropic with Claude; and others such as France's Mistral. They will be expected to adhere to the general-purpose AI Code of Practice, assuming they want to ensure compliance with the AI Act and avoid the risk of enforcement for non-compliance.
To be clear, the Code is intended to provide guidance for meeting the EU AI Act's obligations. GPAI providers may choose to deviate from its best-practice recommendations if they believe they can demonstrate compliance through other means.
The first draft of the code runs to 36 pages, but its drafters warn that it is light on detail because it is a “high-level drafting plan” outlining the code's guiding principles and objectives; the final version is likely to be considerably longer.
The draft is littered with box-outs asking “open questions” that have not yet been answered by the working group tasked with developing the code. It is clear that the feedback being sought from industry and civil society will play an important role in shaping the content of specific sub-measures and key performance indicators (KPIs) that have not yet been included.
Still, the document gives a sense of what is coming down the pipe (in terms of expectations) for GPAI makers once the relevant compliance deadlines apply.
Transparency requirements for GPAI makers are set to take effect on August 1, 2025.
But for the most powerful GPAIs, which the law defines as carrying “systemic risk,” the expectation is that risk assessment and mitigation requirements will have to be met 36 months after entry into force (by August 1, 2027).
It is also worth noting that the draft code was devised on the assumption that there will only be a “small number” of GPAI makers and GPAIs with systemic risk. Should that assumption prove incorrect, the drafters warn, significant changes to the draft may be necessary, such as introducing a more detailed tiered system of measures intended to focus primarily on the models posing the greatest systemic risk.
Copyright
On the transparency front, the Code sets out how GPAI makers must comply with the law's information provisions, including in the area of copyrighted material.
One example here is “Sub-Measure 5.2,” which would currently commit signatories to providing the name of every web crawler used to develop the GPAI, along with details of the relevant robots.txt features, “including at the time of crawling.”
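For readers unfamiliar with what that robots.txt disclosure refers to, here is a minimal sketch of how a crawler can check a site's robots.txt before fetching a page, using Python's standard urllib.robotparser module; the crawler name and URLs are illustrative placeholders, not anything named in the Code.

```python
# Minimal sketch: how a web crawler can check a site's robots.txt before fetching a page.
# "ExampleGPAIBot" and the URLs below are illustrative placeholders, not real crawlers.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt rules

user_agent = "ExampleGPAIBot"
page = "https://example.com/articles/some-page.html"

if rp.can_fetch(user_agent, page):
    print(f"{user_agent} may crawl {page}")
else:
    print(f"{user_agent} is disallowed from crawling {page}")
```

Publishing the crawler's name in this way lets site owners add a matching rule to their robots.txt if they want to opt out of data collection.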
Makers of GPAI models continue to face questions over how they obtained the data used to train their models, and have been hit with multiple lawsuits from rights holders alleging that the AI companies processed copyrighted information unlawfully.
Another commitment set out in the draft code would have GPAI providers offer a single point of contact and a complaint-handling process, making it easier for rights holders to air grievances “directly and quickly.”
Other proposed copyright-related measures cover the documentation GPAI providers are expected to supply about the data sources used for “training, testing, and validation,” and about the authorizations to access and use protected content for the development of a general-purpose AI.
Systemic risk
The most powerful GPAIs are also subject to rules in the EU AI Act intended to mitigate so-called “systemic risk.” These AI systems are currently defined as models trained using more than 10^25 FLOPs of total compute.
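For a rough sense of what that threshold means, a common back-of-the-envelope estimate puts training compute at around six times the number of model parameters multiplied by the number of training tokens. The sketch below applies that heuristic to an illustrative model; the parameter and token counts are assumptions for demonstration, not figures from the Act or the Code.

```python
# Rough sketch: gauging training compute against the AI Act's 10^25 FLOP threshold
# using the common ~6 * parameters * tokens approximation. The model size and token
# count below are illustrative assumptions, not figures from the Act or the Code.

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * training tokens."""
    return 6 * params * tokens

# Hypothetical example: a 500-billion-parameter model trained on 5 trillion tokens.
flops = estimated_training_flops(params=5e11, tokens=5e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above the systemic-risk threshold" if flops > THRESHOLD_FLOPS else "Below the threshold")
```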
The Code includes a list of risk types that signatories are expected to treat as systemic risks. They include:
Offensive cybersecurity risks (such as vulnerability discovery); chemical, biological, radiological and nuclear risks; “loss of control” (meaning the inability to control a powerful autonomous general-purpose AI) and automated use of models for AI research and development; persuasion and manipulation, including large-scale disinformation/misinformation that could pose risks to democratic processes or lead to a loss of trust in the media; and large-scale discrimination.
This version of the code also suggests GPAI makers could identify other types of systemic risk that are not explicitly listed, such as “large-scale” privacy infringements and surveillance, or uses that could pose risks to public health. One of the open questions the document raises here is which risks should be prioritized for addition to the main taxonomy. Another is how the taxonomy of systemic risks should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate imagery).
The Code also addresses “dangerous model capabilities” (e.g., cyber-attacks or “weapons acquisition or proliferation capabilities”) and “dangerous model propensities” (e.g., misalignment with human intentions or values, lack of reliability and security, and resistance to goal modification).
While many of the details are still to be worked out as the drafting process continues, the code's authors write that its measures, sub-measures and KPIs should be “proportionate,” with a particular focus on tailoring them to the size and capacity of a given provider, notably small and medium-sized enterprises and startups with fewer financial resources than those at the cutting edge of AI development. They add that attention should also be paid to different distribution strategies (e.g., open-sourcing), where appropriate, reflecting the principle of proportionality and taking into account both benefits and risks.
Many of the open questions raised by this draft concern how specific measures should be applied to open source models.
Safety and security framework
Another section of the code deals with the “Safety and Security Framework” (SSF). GPAI makers will be expected to detail their risk management policies and to identify, “continuously and thoroughly,” the systemic risks that could arise from their models.
There is an interesting sub-measure here on forecasting risks. It would commit signatories to including in their SSF a “best effort estimate” of timelines for when they expect to develop a model that triggers systemic risk indicators, such as the dangerous model capabilities and propensities mentioned above. It could mean that, from 2027 onwards, cutting-edge AI developers will be setting out timeframes for when they expect model development to cross certain risk thresholds.
Elsewhere, the draft code puts the emphasis on GPAIs with systemic risk being assessed via a “best-in-class assessment” of model capabilities and limitations, with a “suitable set of methodologies” applied to the task. Examples listed include Q&A sets, benchmarks, red-teaming and other adversarial testing methods, human uplift studies, model organisms, simulations, and proxy evaluations for classified materials.
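To illustrate what the simplest of those methodologies, a Q&A-set evaluation, can look like in practice, here is a minimal sketch; the questions, the exact-match scoring rule, and the query_model stub are illustrative assumptions, not anything prescribed by the Code or drawn from a real benchmark.

```python
# Minimal sketch of a Q&A-set evaluation of the kind listed among the draft code's
# assessment methodologies. The questions, the exact-match scoring rule, and the
# query_model stub are illustrative placeholders, not part of the Code or any benchmark.
from typing import Callable

QA_SET = [
    {"question": "In what year did the EU AI Act enter into force?", "answer": "2024"},
    {"question": "What compute threshold currently defines systemic-risk GPAIs?", "answer": "10^25 FLOPs"},
]

def query_model(question: str) -> str:
    """Stand-in for a call to the model under evaluation."""
    return "2024"  # placeholder response

def evaluate(qa_set: list, model: Callable[[str], str]) -> float:
    """Return the fraction of questions answered with an exact, case-insensitive match."""
    correct = sum(
        model(item["question"]).strip().lower() == item["answer"].lower()
        for item in qa_set
    )
    return correct / len(qa_set)

print(f"Accuracy on the toy Q&A set: {evaluate(QA_SET, query_model):.0%}")
```

Real evaluations of the kind the draft envisages would of course be far broader, combining such automated scoring with adversarial and human-led testing.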
Another sub-measure, on “Notification of Significant Systemic Risks,” would oblige signatories to notify the AI Office, the oversight and governance body established under the Act, “if there are strong reasons to believe that a significant systemic risk is likely to materialize.”
The Code also sets out measures on the “reporting of serious incidents.”
Signatories would commit to identifying and keeping track of serious incidents arising from their general-purpose AI models with systemic risk, and to documenting and reporting relevant information and possible corrective measures to the AI Office, and as appropriate to national competent authorities, without undue delay. But a related open question asks for views on “what constitutes a serious incident,” so it seems more work needs to be done here to nail down the definition.
The draft code also includes further questions about the “possible corrective measures” that could be taken in response to serious incidents. Among other formulations put out for feedback, it asks what serious incident response processes are appropriate for open-weight or open-source providers.
“The first draft of the code was developed based on a preliminary review of existing best practices by four expert working groups, stakeholder consultation input from approximately 430 submissions, responses from provider workshops, and international approaches ( the G7 Code of Conduct, the Frontier AI Safety Initiative, the Bletchley Declaration, outcomes from relevant governments and standard-setting bodies) and, most importantly, the AI Act itself,” the authors continued in their conclusion.
“We emphasize that this is only a first draft and therefore the draft code proposals are provisional and subject to change,” they added. “We therefore look forward to your constructive feedback as we further develop and update the content of the Code and work toward a more detailed final form by May 1, 2025.”