The European Union has launched a consultation on rules that would apply to general-purpose AI model (GPAI) providers such as Anthropic, Google, Microsoft, and OpenAI under the EU AI Act, a risk-based framework for regulating the application of artificial intelligence. Lawmakers hope that a code of conduct will help ensure "trustworthy" GPAI by giving developers guidance on complying with their legal obligations.
The EU AI Act was adopted earlier this year and comes into force on August 1. Compliance deadlines will be phased in, however, with the Code of Conduct due to apply nine months later, in April 2025, giving the EU time to develop the guidance.
The Commission is seeking responses to the consultation from GPAI providers operating in the EU, as well as from businesses, civil society representatives, rights holders and academic experts.
“This consultation is an opportunity for all stakeholders to provide their views on the topics covered in the initial Code of Conduct detailing rules for general-purpose AI model providers,” the Commission wrote. “The consultation will also inform related work by the AI Office, in particular with regard to a template and accompanying guidance for an outline of content used to train general-purpose AI models.”
The consultation takes the form of a questionnaire divided into three sections: the first covers transparency and copyright-related provisions applying to GPAI providers; the second concerns rules on the classification, assessment and mitigation of risks from GPAI models with so-called systemic risk (defined in the AI Act as models trained above a certain computational threshold); and the third concerns the review and monitoring of the code of conduct for GPAI providers.
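For context, the computational threshold the AI Act uses is concrete: a GPAI model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations. The sketch below illustrates that presumption rule only; the function name and example figures are illustrative and not drawn from the consultation.

```python
# Minimal sketch of the AI Act's systemic-risk presumption for GPAI models:
# a model is presumed to pose systemic risk when its cumulative training
# compute exceeds 10^25 floating-point operations. Names and example
# figures below are illustrative, not from the consultation documents.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute exceeds the AI Act's
    presumption threshold for systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical examples: a model trained with 3.8e25 FLOPs would fall in
# scope, while one trained with 5e24 FLOPs would not.
print(presumed_systemic_risk(3.8e25))  # True
print(presumed_systemic_risk(5e24))    # False
```

Note that this is only the presumption trigger; under the Act the Commission can also designate models as posing systemic risk on other grounds, and providers can contest the classification.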
The Commission said the first draft code would be prepared “based on the comments submitted and responses to targeted questions.”
Those responding to the consultation will have the opportunity to shape the template the AI Office supplies to GPAI providers so they can meet the legal requirement to provide an outline of their model training content, and it will be interesting to see how detailed that template ultimately becomes.
More information about the consultation can be found here. The deadline for submissions is 10 September 2024 at 6pm CET.
The EU has also invited participation in the drafting of the Code through virtual meetings in four working groups. An iterative drafting process will be used to produce the guidance.
The AI Office is inviting “qualified general-purpose AI model providers, downstream providers, other industry associations, other stakeholder groups such as civil society organizations and rights holder associations, academia, and other independent experts to express their willingness to participate in the development of the code of conduct.”
The deadline for submitting expressions of interest to take part in the drafting process is 25 August 2024 at 6pm CET.
GPAI providers will also have the opportunity to participate in workshops with the Chair and Vice-Chair of the General Assembly. According to the AI Office, these workshops are intended to “contribute to inform each round of iterative drafting, in addition to participating in the General Assembly.”
“The AI Office will ensure the transparency of these discussions, including by preparing minutes and making them available to all Assembly participants,” the AI Office noted.
The AI Office itself will appoint the General Assembly’s chair and vice-chair, and is accepting applications for these key steering roles from “interested independent experts.”
The call for participation and the consultation on the code come amid concerns that private entities may be excluded from the drafting process. Earlier this month, Euractiv reported that the European Commission plans to turn to consultancy firms to draft the code, raising concerns that the process could be biased in favor of AI giants.
The Commission seems keen to allay such concerns. “We encourage the participation of all interested parties,” the Commission wrote on Tuesday. “The AI Office invites proposals from a wide range of stakeholders, including academia, independent experts, industry representatives such as general-purpose AI model providers and downstream providers integrating models into AI systems, civil society organizations, rights holders, and public authorities.”