AI governance cannot be left to vested interests

By TechBrunch | September 19, 2024 | 8 min read

The final report of the UN's High-Level Advisory Body on Artificial Intelligence makes for surreal reading at times. Titled "Governing AI for Humanity," the document highlights the paradoxical challenge of anchoring any kind of control on a technology that is developing rapidly, attracting heavy investment, and being heavily promoted.

On the one hand, the report points to a "lack of global governance on AI," which is quite correct. On the other, the advisory body says it "considered hundreds of [AI] guides, frameworks and principles" already adopted by governments, companies, consortia, and regional and international organizations. The report now adds another set of recommendations to the AI governance pile.

The overarching problem the report highlights is that there is no unified view on what to do about this powerful yet stupid technology, and different approaches to governing AI are piling up.

AI automation is certainly powerful: at the push of a button, output can be produced and adjusted as needed. But AI can also be stupid: despite the name, it is not intelligent; its output is a reflection of its input, and inappropriate input can lead to very inappropriate (and unintelligent) results.

As the report highlights, AI could cause very big problems indeed when its stupidity is combined with scale: it could amplify discrimination or spread disinformation, for example, both of which are already happening at troubling scale across sectors and causing very real harm.

But those with a commercial stake in the generative AI fire that has been raging for the past few years are fascinated by the technology's potential for scale, and are doing everything they can to downplay the risks of its stupidity.

In recent years, part of this has taken the form of aggressive lobbying around the idea that we need rules to protect the world from so-called AGI (artificial general intelligence), AI that can think for itself and outperform humans. But this is a convenient fiction designed to grab policymakers' attention, redirect it toward nonexistent AI problems, and normalize the harmful stupidity of the current generation of AI tools. (The real PR game being played is to claim the concept of "AI safety" and then redefine it as worrying about science fiction.)

Defining AI safety this narrowly also distracts from the enormous environmental harm of pouring ever more computing power, energy, and water into building data centers big enough to feed this voracious new beast of scale. There is little high-level discussion about whether we can afford to keep scaling AI in this way, but perhaps there should be.

The AGI framing also lets the conversation skip over the myriad legal and ethical issues that cascade from developing and deploying automated tools trained on other people's information without their permission. Jobs and livelihoods are at risk. Entire industries are at risk. And so are individual rights and freedoms.

Words like “copyright” and “privacy” scare AI developers far more than the supposed existential risks of AGI, because AI developers are smart people who have not lost touch with reality.

But those with a stake in the expansion of AI choose to highlight only the innovation's potential benefits, the better to minimize any "guardrails" (the minimalist metaphor deployed when technologists are finally forced to impose limits on their technology) that might stand in the way of the promised greater good.

Add in geopolitical conflict and a bleak outlook for the global economy, and national governments are all the more likely to join the AI hype and the fray, pushing for less governance in the hope that a lighter touch will help their own national AI champions expand.

Given this skewed backdrop, it is no wonder that AI governance remains so confused and tangled. Even in the European Union, where lawmakers did adopt a risk-based framework for regulating a small number of AI applications earlier this year, the loudest voices in the debate still decry the groundbreaking law's very existence, arguing that it spells ruin for the EU's chances of homegrown innovation. And they keep doing so even after earlier tech industry lobbying (led by France, which tied its own interests to Mistral's hopes of becoming a national GenAI champion) watered the law down.

New moves to ease EU privacy laws

The vested interests don't stop there. Meta, the owner of Facebook and Instagram and now a major AI developer, is openly lobbying to deregulate European privacy law and remove restrictions on using people's information to train AI. Who is going to stop Meta from dismantling this hard-won data protection regime and strip-mining Europeans' information for advertising revenue?

Its latest salvo is an open letter against the EU's General Data Protection Regulation (GDPR), reported by The Wall Street Journal and co-signed by a host of other major companies seeking deregulation in pursuit of profit, including Ericsson, Spotify and SAP.

"Europe is less competitive and innovative than other regions and risks falling further behind in the AI era due to inconsistent regulatory decision-making," the letter reportedly suggests.

Meta has a long history of violating EU privacy law: it accounts for most of the ten largest GDPR fines to date, totaling billions of dollars, so it is hardly an obvious candidate to set legislative priorities. Yet when it comes to AI, here we are. After breaking so much EU law, should we really listen to Meta's suggestion that the fix is to remove the inconvenience of having laws to break in the first place? This is AI-induced magical thinking.

But the real fear is the danger that lawmakers will swallow this propaganda and hand power over to those who want to automate everything — that is, put their blind faith in a headless god, big or small, in the hope that AI will automatically bring economic prosperity to all.

It is a strategy that completely ignores the fact that the (extremely lightly regulated) digital developments of the past few decades produced exactly the opposite result: an astonishing concentration of wealth and power siphoned off by a handful of giant platforms, collectively known as Big Tech.

Clearly, the platform giants want to repeat the same thing with Big AI, but policymakers risk unwittingly following a self-serving path encouraged by an army of highly paid policy lobbyists. This is far from a fair fight — if it is even a fight at all.

There is no doubt that economic pressures are now prompting a great deal of soul-searching in Europe. A long-awaited report on the future of European competitiveness, published earlier this month by the Italian economist Mario Draghi, lamented self-imposed "regulatory burdens," which he described as "self-defeating for those in the digital sector."

Given the timing of Meta's open letter, it seems the company reached the same conclusion. That is hardly surprising: Meta and several of the other companies signed up to the push to deregulate EU privacy law appear on the long list of companies Draghi consulted directly for his report. (Meanwhile, as others have pointed out, the report's list of consulted contributors includes no digital or human rights groups, with the sole exception of the consumer group BEUC.)

Recommendations from the UN AI Advisory Group

The asymmetric interests driving AI adoption while simultaneously downplaying and weakening governance efforts make a truly global agreement on how to rein in AI's scale and stupidity unlikely. But the UN's AI advisory body has some ideas that look promising, if anyone is willing to listen.

The report's recommendations include establishing an independent international scientific panel to explore AI's capabilities, opportunities, risks, and uncertainties, and to identify areas where further research focused on the public interest is needed (though you would be hard pressed to find an academic who is not already on the payroll of a major AI company). Another recommendation is a twice-yearly intergovernmental dialogue on AI, held alongside existing UN meetings, to share best practices, exchange information, and increase international interoperability on governance. The report also proposes an AI standards exchange that would maintain a register of definitions and promote the international harmonization of standards.

The advisory body also proposes creating what it calls an "AI Capacity Building Network" to pool expertise and resources to help develop AI governance within governments and for the public good, and establishing a global fund for AI to address the digital divide, which the unequal distribution of automation technologies threatens to widen significantly.

On data, the report suggests establishing what it calls a “Global AI Data Framework” to set definitions and principles for managing training data, including ensuring cultural and linguistic diversity. The effort should establish common standards for data provenance and its use, and ensure “transparency and rights-based accountability across jurisdictions.”

The advisory body further recommends establishing data trusts and other mechanisms that it suggests could help foster the growth of AI without undermining control over information, for example through a "well-regulated global market for the exchange of anonymous data for training AI models" and "model agreements" to enable cross-border data access.

The final recommendation is that the UN establish an AI office within the Secretariat to act as a coordinating body, reporting to and supporting the Secretary-General and engaging in outreach. And one thing is clear: keeping vested interests from setting the AI governance agenda will require a huge amount of effort, organization, and sweat equity.


