While details of how exactly AI regulation will be implemented and enforced are still emerging, today a number of jurisdictions, including the United States, the United Kingdom and the European Union, signed a treaty on AI safety drawn up by the Council of Europe (COE), an international standards and human rights body.
The Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (its full title) is described by the organization as “the first legally binding international treaty aimed at ensuring that the use of AI systems is fully compatible with human rights, democracy and the rule of law.”
The treaty was formally opened for signature at a conference held in Vilnius, Lithuania today, with signatories including those three major markets, as well as Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and Israel.
That list means the COE framework now covers many of the countries where the world's largest AI companies are headquartered or have significant operations. But perhaps just as notable are the countries missing from it, such as those in Asia and the Middle East, and Russia.
The high-level treaty focuses on how AI intersects with three key areas: human rights, which includes protection against data misuse and discrimination as well as ensuring privacy; democracy; and the “rule of law.” In essence, the third of these commits signatories to setting up regulators to protect against “AI risks.” (It doesn't specify what those risks are; that is a circular requirement that refers back to the other two key areas.)
The treaty's stated aims are as lofty as the field it hopes to address. “The treaty provides a legal framework covering the entire lifecycle of AI systems,” the COE notes. “The treaty promotes AI advancement and innovation, and manages the risks it poses to human rights, democracy, and the rule of law. To stand the test of time, the treaty is technology-neutral.”
(Background: The COE is not a legislative body. It was established after World War II to defend human rights, democracy and the rule of law in Europe, and it drafts treaties that are legally binding on the countries that ratify them. It is the organization behind the European Court of Human Rights, for example.)
The regulation of artificial intelligence is a contentious issue in the technology world, with a complex matrix of stakeholders.
Various antitrust, data protection, financial and telecommunications watchdogs are making early moves to explore how they might better govern AI, perhaps mindful of having failed to anticipate earlier waves of technological change and the problems they brought.
The thinking seems to be that if AI is going to bring big changes to how the world works, it's important to be proactive, because not all of those changes will work out for the best without close scrutiny. But there's also a clear concern among regulators about overreaching and being accused of stifling innovation by acting too soon or too broadly.
AI companies have also been early entrants to the debate, arguing that they, too, have an interest in what has come to be called AI safety. Skeptics describe this private-sector involvement as regulatory capture, while optimists believe companies need a seat at the regulatory table so they can better communicate what they are doing, and what comes next, to inform appropriate policy and rulemaking.
Then there are the politicians, who sometimes back the regulators and sometimes take a more pro-business stance, prioritizing corporate interests in the name of economic growth for their own countries (the previous UK government, for instance, positioned itself as a supporter of AI).
This combination has produced a wide variety of frameworks and declarations, including those emerging from events such as the AI Safety Summit held in the UK in 2023, the G7-led Hiroshima AI Process, and the resolution adopted by the United Nations earlier this year. We have also seen the creation of national AI safety institutes, as well as jurisdiction-specific rules such as California's SB 1047 bill and the European Union's AI Act.
The COE treaty aims to provide a way to harmonize all of these efforts.
“The treaty will ensure that countries monitor technological developments and that any technology is controlled within strict standards,” the UK Ministry of Justice said in a statement about the treaty signing. “Once ratified and brought into force in the UK, the treaty will strengthen existing laws and measures.”
“We must ensure that the rise of AI upholds, rather than undermines, our standards,” COE Secretary General Marija Pejčinović Burić said in a statement. “The Framework Convention is designed to ensure exactly that. It is a strong, balanced text, the result of an open and inclusive approach to drafting that ensured it benefits from multiple expert perspectives.”
“The Framework Convention is an open treaty with potentially global application. We hope that these signatures will be the first of many, with ratifications to follow soon thereafter, so that the Convention can enter into force as soon as possible,” she added.
The Framework Convention itself was adopted by the Committee of Ministers of the Council of Europe in May 2024. It will formally enter into force “on the first day of the month following the expiration of three months from the date of ratification by five signatory States, including at least three Council of Europe Member States”.
That means the countries that signed today will each have to ratify it individually, after which a further three months must pass before its provisions come into force.
It is unclear how long that process will take. The UK, for example, has stated its intention to work on AI legislation but has set no clear timeline for presenting a draft bill, saying only that it will provide an update on implementing the COE framework “in due course.”