The European Union has revealed the structure of its new AI Office, an ecosystem-building and oversight body being established under the bloc's AI law. The risk-based regulatory framework for artificial intelligence is expected to enter into force by the end of July, following the regulation's final approval by EU lawmakers last week. The AI Office itself will start operating on June 16.
The AI Office reflects the European Union's ambitions for AI. It will play a key role in shaping the European AI ecosystem in the coming years, with a dual remit of regulating AI risks and promoting AI adoption and innovation. But the EU also expects the AI Office to exert broader influence on the global stage, as many countries and regions work out how to approach AI governance. The office will be made up of five units in total.
Here is a breakdown of the focus of each of the EU AI Office’s five units:
One unit will address “regulation and compliance,” which includes working with EU member states to help harmonize and enforce the AI law. “The unit will contribute to the investigation and enforcement of possible violations and sanctions,” the Commission said, indicating that the office is intended to support the national governance bodies that will be set up under the law to enforce its broad application.
Another unit will deal with “AI safety.” The Commission said this unit will focus on the “identification of systemic risks, possible mitigations, assessment and testing approaches” for highly capable general-purpose AI models. General-purpose AI models (GPAIs) refer to the recent wave of generative AI technologies, such as the foundational models that underpin tools like ChatGPT. But the EU said the unit will be most concerned with GPAIs that carry so-called “systemic risk,” which the law defines as models trained above a certain compute threshold.
As the AI Office will be responsible for directly enforcing the AI Act's rules on GPAIs, the unit is expected to carry out testing and evaluation of these models, as well as use its powers to request information from AI giants to enable that oversight.
The work of the AI Office’s regulation and compliance unit will also include developing templates GPAI makers will be expected to use, such as one summarizing the copyrighted material used to train their models.
While the law’s provisions on GPAIs arguably demand a dedicated AI safety unit to implement them fully, the unit also appears intended as a response to international developments in AI governance since the EU law was drafted, such as the UK and US announcing their own AI Safety Institutes last fall. The big difference, however, is that the EU’s AI safety unit will have legal powers behind it.
A third unit of the AI Office will be dedicated to what the European Commission has dubbed “excellence in AI and robotics,” supporting and funding AI research and development. The Commission said the unit will align with the previously announced “GenAI4EU” initiative, which aims to promote the development and uptake of generative AI models, including by upgrading Europe's supercomputer network to support model training.
The fourth unit will focus on “AI for Societal Good.” The Commission said it will “design and implement” the Commission's international engagement on large-scale projects where AI could have a positive social impact, in areas such as weather modeling, cancer diagnosis, and digital twins for artistic reconstruction.
In April, the EU announced that its planned cooperation with the US on AI safety and risk research would also cover collaboration on uses of AI in the public interest, so this component of the AI Office had already been signaled.
Finally, the fifth unit will address “AI Innovation and Policy Coordination.” The Commission said its role will be to ensure the implementation of the European Union's AI strategy, including “monitoring trends and investments, stimulating the adoption of AI through the establishment of a network of European Digital Innovation Hubs and AI Factories, and promoting an innovative ecosystem by supporting regulatory sandboxes and real-world testing.”
Three of the five units are thus broadly focused on AI adoption, investment, and ecosystem building, with only two dedicated to regulatory compliance and safety. That balance is likely intended to offer further reassurance to industry that the EU's rapid development of an AI rulebook is not anti-innovation, as some homegrown AI developers have complained. The EU also argues that trust will drive AI adoption.
The European Commission has already appointed heads for most of the AI Office's units, as well as the overall head of the office, but the head of the AI safety unit has yet to be named. The position of Chief Scientific Advisor is also vacant. Confirmed appointments include: Head of the AI Office, Lucilla Sioli; Head of Regulation and Compliance, Kilian Gross; Head of Excellence in AI and Robotics, Cécile Huet; Head of AI for Societal Good, Martin Bailey; and Head of AI Innovation and Policy Coordination, Malgorzata Nikowska.
The AI Office was established by a European Commission decision in January, and preparations, including deciding on its organizational structure, began in late February. The Office is housed within the EU's digital department, DG Connect, which is currently overseen by Thierry Breton, Commissioner for the Internal Market.
The AI Office will eventually have a staff of more than 140 people, including technical staff, lawyers, political scientists, and economists. About 60 staff members are in place so far, the EU said Wednesday, with plans to add more roles over the next couple of years as the law comes into force and becomes fully operational. The AI law phases in its rules: some provisions take effect six months after the law enters into force, while others have longer lead times of a year or more.
One of the AI Office's key roles going forward will be drawing up codes of practice and best practices for AI developers, which the EU wants to serve as an interim stop-gap until the legal rulebook is fully phased in.
A European Commission official said these codes are expected shortly after the AI law enters into force later this summer.
Other work for the AI Office will include liaising with the various forums and expert bodies the AI Act will establish to knit together the EU's governance and ecosystem-building approach, including the European Artificial Intelligence Board, made up of representatives from member states; a scientific panel of independent experts; and a broader advisory forum drawing stakeholders from industry, startups, SMEs, academia, think tanks, and civil society.
“The first meeting of the AI Board is expected to take place before the end of June,” the Commission said in a press release, adding: “The AI Office is preparing guidelines on the definition of AI systems and on the prohibitions, both of which are due within six months of the AI Act entering into force. The Office is also preparing to coordinate the drawing up of a code of practice on the obligations for general-purpose AI models within nine months of entry into force.”
This report was updated with the names of the confirmed appointments after the Commission provided the information.