The final report of the UN's High Level Advisory Body on Artificial Intelligence makes for surreal reading at times. Titled “AI Governance for Humanity”, the document highlights the paradoxical challenges of anchoring any control over a technology that is developing rapidly, being heavily invested in and being heavily promoted.
On the one hand, the report points out a “lack of global governance on AI,” which is quite correct. On the other hand, the UN advisory body says it has “considered hundreds of” AI guides, frameworks, and principles already adopted by governments, companies, consortia, and regional and international organizations. The report adds yet another set of recommendations to the AI governance pile.
The overarching problem the report highlights is that there is no unified view on what to do about this powerful yet stupid technology, and different approaches to governing AI are piling up.
AI automation is certainly powerful: with the push of a button, output can be scaled as needed. But AI can also be stupid: despite the name, AI is not intelligent. Its output is a reflection of its input, and inappropriate input can lead to very inappropriate (and unintelligent) results.
As the report highlights, AI could indeed cause very big problems if its stupidity is combined with scale: For example, AI could amplify discrimination or spread disinformation, both of which are already happening at troubling scales in all sectors, causing very real harm.
But those with a commercial stake in the generative AI fire that has been raging for the past few years are fascinated by the technology’s potential for scale, and are doing everything they can to downplay the risks of AI stupidity.
In recent years, part of this has taken the form of aggressive lobbying around the idea that we need rules to protect the world from so-called AGI (artificial general intelligence): AI that can think for itself and outperform humans. But AGI is a fancy fiction, designed to grab policymakers' attention, direct it toward nonexistent AI problems, and normalize the harmful stupidity of the current generation of AI tools. (The real PR game being played here is to capture the concept of “AI safety” and redefine it as “worrying about science fiction.”)
Defining AI safety so narrowly distracts from the enormous environmental harm of pouring ever more computing power, energy, and water into building data centers big enough to feed this voracious new beast of scale. There is no high-level discussion about whether we can afford to keep scaling AI this way, but perhaps there should be.
The AGI framing also lets the conversation skip over the myriad legal and ethical issues that cascade from the development and use of automated tools trained on other people’s information without their permission. Jobs and livelihoods are at risk. Entire industries are at risk. And so are individual rights and freedoms.
Words like “copyright” and “privacy” scare AI developers far more than the supposed existential risks of AGI, because AI developers are smart people who have not lost touch with reality.
But those with a stake in AI's expansion choose to highlight only the innovation's potential benefits, and to minimize any “guardrails” (the minimalist metaphor deployed when technologists are finally forced to impose limits on their technology) that might stand in the way of achieving the purported greater good.
Add in geopolitical conflicts and a bleak outlook for the global economy, and national governments are more likely to join the AI hype and fray, pushing for less governance in the hope that it might help expand their own nation’s AI champions.
Given this skewed backdrop, it’s no wonder that AI governance remains so confused and tangled. Even in the European Union, where lawmakers adopted a risk-based framework for regulating a small number of AI applications earlier this year, the loudest voices debating this groundbreaking initiative still decry its existence, arguing that the law spells ruin for the EU’s chances of homegrown innovation. And they keep doing so even after earlier tech industry lobbying (led by France, which tied its interests to Mistral’s hopes of becoming a national GenAI champion) watered the law down.
New moves to ease EU privacy laws
The vested interests don't stop there. Meta, the owner of Facebook and Instagram and now a major AI developer, is openly lobbying to deregulate European privacy law and remove restrictions on using people's information to train AI. Who is going to stop Meta from dismantling hard-won data protections and strip-mining Europeans' information for advertising revenue?
Its latest open letter against the EU's General Data Protection Regulation (GDPR), reported by The Wall Street Journal, was co-signed by a host of other major companies seeking deregulation for profit, including Ericsson, Spotify, and SAP.
“Europe is less competitive and innovative than other regions and risks falling further behind in the AI era due to inconsistent regulatory decision-making,” the letter reportedly suggests.
Meta has a long history of violating EU privacy law: it accounts for most of the ten largest GDPR fines to date, totaling billions of dollars. That record should disqualify it from setting legislative priorities, but when it comes to AI, here we are. After breaking so much EU law, should we really listen to Meta's proposal to remove the inconvenience of having to break the law in the first place? This is AI-induced magical thinking.
But the real fear is that lawmakers will swallow this propaganda and hand power over to those who want to automate everything, putting their blind faith in a headless god, big or small, in the hope that AI will automatically bring economic prosperity to all.
This strategy completely ignores the fact that the (very lightly regulated) digital developments of the past few decades produced exactly the opposite result: an astonishing concentration of wealth and power siphoned off by a handful of giant platforms, known as Big Tech.
Clearly, the platform giants want to repeat the same thing with Big AI, but policymakers risk unwittingly following a self-serving path encouraged by an army of highly paid policy lobbyists. This is far from a fair fight — if it is even a fight at all.
There is no doubt that economic pressures are now prompting much soul-searching in Europe. A long-awaited report published earlier this month by Italian economist Mario Draghi on the future of European competitiveness lamented self-imposed “regulatory burdens”, which he described as “self-defeating for those in the digital sector”.
Given the timing of Meta's open letter, it seems likely the company reached the same conclusion. That's hardly surprising: Meta and several other signatories of the letter calling for deregulation of EU privacy law appear on the long list of companies Draghi consulted directly for his report. (Meanwhile, as others have pointed out, the report's contributor disclosure list includes no digital or human rights groups, with the exception of the consumer organization BEUC.)
Recommendations from the UN AI Advisory Group
The asymmetric interests driving AI adoption while simultaneously downgrading and weakening governance efforts make a truly global agreement on how to rein in AI's scale and stupidity unlikely. But the UN's AI Advisory Group has some ideas that look promising, if anyone is willing to listen.
The report's recommendations include establishing an independent international scientific panel to explore AI's capabilities, opportunities, risks, and uncertainties, and to identify areas where further public-interest research is needed (though you'd be hard pressed to find an academic who isn't already on the payroll of a major AI company). Another recommendation is an intergovernmental AI dialogue, held twice a year on the margins of existing UN meetings, to share best practices, exchange information, and increase international interoperability on governance. The report also proposes an AI standards exchange that would maintain a register of definitions and promote the international harmonization of standards.
The report also proposes creating what it calls an “AI Capacity Building Network” to pool expertise and resources for developing AI governance within governments and for the public good, and establishing a global fund for AI to address the digital divide, which the unequal distribution of automation technologies threatens to widen significantly.
On data, the report suggests establishing what it calls a “Global AI Data Framework” to set definitions and principles for managing training data, including ensuring cultural and linguistic diversity. The effort should establish common standards for data provenance and its use, and ensure “transparency and rights-based accountability across jurisdictions.”
The report also recommends establishing data trusts and other mechanisms that it suggests could help foster AI's growth without undermining people's control over their information, such as a “well-regulated global market for the exchange of anonymous data for training AI models” and “model agreements” to enable cross-border data access.
The final recommendation is that the UN establish an AI office within the Secretariat to act as a coordinating body: supporting and advising the Secretary-General and engaging in outreach. One thing is clear: governing AI will require a huge amount of effort, organization, and sweat equity if vested interests are not to set the agenda.