EU AI Act: Draft guidelines for general-purpose AI mark a first step for Big AI to comply

By TechBrunch | November 14, 2024 | 8 Mins Read

The first draft of the Code of Practice that will apply to providers of general-purpose AI models under the European Union's AI Act has been published, along with a call for feedback (open until November 28) as the drafting process continues into next year. Formal compliance deadlines will begin to apply over the coming years.

The pan-EU legislation, which came into force this summer, regulates applications of artificial intelligence under a risk-based framework. But it also applies some measures to more powerful foundational (or general-purpose) AI models, known as GPAIs. That is where this Code of Practice comes in.

Companies likely to fall into this bracket include OpenAI, maker of the GPT models (which power the AI chatbot ChatGPT); Google, with its Gemini GPAIs; Meta, with Llama; Anthropic, with Claude; and others such as France's Mistral. They will be expected to follow the general-purpose AI Code of Practice if they want to ensure compliance with the AI Act and avoid the risk of enforcement for non-compliance.

To be clear, the Code is intended to provide guidance on meeting the EU AI Act's obligations. GPAI providers may choose to deviate from its best-practice recommendations if they believe they can demonstrate compliance through other means.

The first draft of the Code runs to 36 pages, but the drafters warn it is light on detail because it is a “high-level drafting plan outlining the Code's guiding principles and objectives”, and the finished version is likely to be considerably longer.

The draft is littered with box-outs posing “open questions” that the working groups tasked with producing the Code have yet to resolve. The feedback being sought from industry and civil society will clearly play an important role in shaping the substance of the specific sub-measures and key performance indicators (KPIs) that are yet to be included.

Even so, the document gives a sense of what lies ahead (in terms of expectations) for GPAI makers once the relevant compliance deadlines apply.

Transparency requirements for makers of GPAIs are set to take effect on August 1, 2025.

But the most powerful GPAIs, which the law defines as carrying “systemic risk”, are expected to comply with risk assessment and mitigation requirements 36 months after the law's entry into force, i.e. by August 1, 2027.

It should also be noted that the draft Code was devised with only a small number of GPAI makers and GPAIs with systemic risk in mind. Should that assumption prove incorrect, the drafters warn, “significant changes to the draft may be necessary”, such as introducing a more detailed tiered system of measures focused primarily on the models posing the greatest systemic risk.

Copyright

In terms of transparency, the Code sets out how GPAIs must comply with its information provisions, including in the area of copyrighted material.

One example is “Sub-Measure 5.2”, which currently commits signatories to disclosing the name of every web crawler used to develop the GPAI and its relevant robots.txt features, “including at the time of crawling”.
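
To picture how such a disclosure plays out on the publisher side: once a training crawler's name is known, a rights holder can address it directly in robots.txt to opt out of AI-training crawls. Below is a minimal sketch using Python's standard-library urllib.robotparser; the crawler name “ExampleAIBot” and the robots.txt contents are hypothetical, not taken from the Code.

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt a rights holder might publish: block the
    # named AI-training crawler site-wide, allow all other user agents.
    robots_lines = [
        "User-agent: ExampleAIBot",
        "Disallow: /",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    rp = RobotFileParser()
    rp.parse(robots_lines)

    # The named AI crawler is refused; other crawlers may still fetch.
    print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
    print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True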

Makers of GPAI models continue to face questions over how they obtained the data used to train their models, and have been hit with multiple lawsuits from rights holders alleging that the AI companies unlawfully processed copyrighted information.

Another commitment set out in the draft Code requires GPAI providers to have a single point of contact and complaint handling, making it easier for rights holders to communicate grievances “directly and rapidly”.

Other proposed copyright-related measures cover the documentation GPAIs will be expected to provide about the data sources used for “training, testing, and validation”, and about the authorizations they hold to access and use protected content for the development of a general-purpose AI.

Systemic risk

The most powerful GPAIs are also subject to rules in the EU AI Act intended to mitigate so-called “systemic risk”. Such systems are currently defined as models trained using more than 10^25 FLOPs of total compute.
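
For a sense of scale, the threshold can be sanity-checked with the widely used back-of-the-envelope approximation that training a dense transformer costs about 6 × N × D FLOPs, where N is the parameter count and D the number of training tokens. The sketch below applies that rule of thumb; the model sizes and token counts are illustrative assumptions, not disclosed figures for any real model.

    THRESHOLD_FLOPS = 1e25  # the AI Act's current systemic-risk cutoff

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Approximate training compute via the ~6 * N * D rule of thumb."""
        return 6 * n_params * n_tokens

    # Hypothetical frontier-scale run: 500B parameters on 10T tokens.
    print(training_flops(500e9, 10e12) > THRESHOLD_FLOPS)  # True  (3.0e25)

    # Hypothetical mid-size run: 7B parameters on 2T tokens.
    print(training_flops(7e9, 2e12) > THRESHOLD_FLOPS)     # False (8.4e22)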

The Code includes a list of risk types that signatories are expected to treat as systemic risks. They include:

  • Offensive cybersecurity risks (such as vulnerability discovery)
  • Chemical, biological, radiological, and nuclear risks
  • “Loss of control” (here meaning an inability to control a powerful autonomous general-purpose AI) and automated use of models for AI research and development
  • Persuasion and manipulation, including large-scale disinformation/misinformation that could pose risks to democratic processes or lead to a loss of trust in media
  • Large-scale discrimination

This version of the Code also suggests GPAI makers could identify other types of systemic risk that are not explicitly listed, such as “large-scale” privacy infringements and surveillance, or uses that might pose risks to public health. One of the open questions the document poses here is which risks should be prioritized for addition to the main taxonomy. Another is how the taxonomy of systemic risks should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate imagery).

The Code also addresses “dangerous model capabilities” (e.g. cyber offense or “weapons acquisition or proliferation capabilities”) and “dangerous model propensities” (e.g. misalignment with human intentions or values; a lack of reliability and security; and resistance to goal modification).

While many details are still to be worked out as the drafting process continues, the Code's authors write that its measures, sub-measures, and KPIs should be “proportionate”, with a particular focus on tailoring them to “the size and capacity” of a given provider, especially small and medium-sized enterprises and startups with fewer financial resources than those at the frontier of AI development. Attention should also be paid, they add, to “various distribution strategies (e.g. open source)”, where appropriate reflecting the principle of proportionality and taking into account both benefits and risks.

Many of the open questions raised by this draft concern how specific measures should be applied to open source models.

Safety and security in the frame

Another measure in the Code concerns a “Safety and Security Framework” (SSF). Makers of GPAIs will be required to detail their risk management policies and to identify, “continuously and thoroughly”, systemic risks that may arise from their GPAI.

There is an interesting sub-measure here on anticipating risks. It would require signatories to include in their SSF a “best effort estimate” of timelines for when they expect to develop a model that triggers systemic risk indicators, such as the dangerous model capabilities and propensities mentioned above. It could mean that, starting in 2027, leading AI developers will be setting out timeframes within which they expect model development to cross certain risk thresholds.

Elsewhere, the draft Code proposes that makers of GPAIs with systemic risk carry out “best-in-class evaluations” of their models' capabilities and limitations, applying “a range of suitable methodologies” to do so. Listed examples include Q&A sets, benchmarks, red teaming and other methods of adversarial testing, human uplift studies, model organisms, simulations, and proxy evaluations for classified materials.
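
As a flavor of the simplest item on that list, a Q&A-set evaluation boils down to scoring a model's answers against a reference set. The sketch below is a deliberately minimal illustration; the questions, the exact-match scoring rule, and the stand-in model are all hypothetical, and real capability evaluations are far more elaborate.

    from typing import Callable

    # Hypothetical reference set; real evaluations use large curated sets.
    QA_SET = [
        ("What is the capital of France?", "paris"),
        ("How many bits are in a byte?", "8"),
    ]

    def evaluate(model: Callable[[str], str]) -> float:
        """Fraction of questions answered with an exact, case-insensitive match."""
        correct = sum(
            1 for question, answer in QA_SET
            if model(question).strip().lower() == answer
        )
        return correct / len(QA_SET)

    # Trivial stand-in "model", for demonstration only.
    def dummy_model(question: str) -> str:
        return "Paris" if "France" in question else "8"

    print(f"accuracy: {evaluate(dummy_model):.0%}")  # accuracy: 100%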

Another sub-measure covers “notification of significant systemic risks”, obliging signatories to notify the AI Office, the oversight and governance body established under the Act, “if there are strong reasons to believe that a significant systemic risk is likely to materialize”.

The Code also sets out measures on “serious incident reporting”.

“Signatories commit to identifying and keeping track of serious incidents, insofar as they originate from their general-purpose AI models with systemic risk, and to documenting and reporting, without undue delay, any relevant information and possible corrective measures to the AI Office and, as appropriate, to national competent authorities,” the draft states. But a related open question asks for input on “what constitutes a serious incident”, so it seems more work remains to be done here to nail down the definition.

The draft Code includes further questions about “possible corrective measures” that could be taken in response to serious incidents. Among other formulations put out for feedback, it asks: “What serious incident response processes are appropriate for open weight or open-source providers?”

“The first draft of the Code is the result of a preliminary review of existing best practices by the four specialized working groups, stakeholder consultation input from around 430 submissions, responses from the provider workshop, international approaches (including the G7 Code of Conduct, the Frontier AI Safety Commitments, the Bletchley Declaration, and outputs from relevant government and standard-setting bodies), and, most importantly, the AI Act itself,” the authors go on to say in their conclusion.

“We emphasize that this is only a first draft and therefore the draft code proposals are provisional and subject to change,” they added. “We therefore look forward to your constructive feedback as we further develop and update the content of the Code and work toward a more detailed final form by May 1, 2025.”
