EU AI Act: Draft guidelines for general-purpose AI mark a first step toward compliance for Big AI

By TechBrunch · November 14, 2024 · 8 min read

The first draft of a Code of Practice that will apply to providers of general-purpose AI models under the European Union's AI Act has been published, along with a call for feedback (open until November 28) as the drafting process continues into next year, ahead of formal compliance deadlines kicking in over the coming years.

The pan-EU law, which came into force this summer, regulates applications of artificial intelligence under a risk-based framework. But it also targets more powerful foundational (or general-purpose) AI models, known as GPAIs, and that is where this Code of Practice comes in.

Companies likely to fall into this bracket include OpenAI, maker of the GPT models that power the AI chatbot ChatGPT; Google, with its Gemini GPAIs; Meta, with Llama; Anthropic, with Claude; and others such as France's Mistral. They will be expected to follow the general-purpose AI Code of Practice if they want to ensure compliance with the AI Act and avoid the risk of enforcement for non-compliance.

To be clear, the Code is intended to provide guidance for meeting the EU AI Act's obligations. GPAI providers may choose to deviate from its best-practice recommendations if they believe they can demonstrate compliance through other means.

The first draft of the Code runs to 36 pages, but the drafters warn that it is light on detail because it is a “high-level drafting plan outlining the code's guidelines and goals”; the final version is likely to be considerably longer.

The draft is littered with box-outs posing “open questions” that the working groups tasked with producing the Code have yet to resolve. The feedback being sought from industry and civil society will clearly play an important role in shaping the specific sub-measures and key performance indicators (KPIs) that are yet to be included.

Still, the document gives a sense of what is coming down the pipe (in terms of expectations) for GPAI makers once the relevant compliance deadlines apply.

Transparency requirements for GPAI makers are set to take effect on August 1, 2025.

But the most powerful GPAIs, which the law defines as posing “systemic risk,” are expected to comply with risk-assessment and mitigation requirements 36 months after the law's entry into force, i.e. by August 1, 2027.

It is also worth noting that the draft Code was devised on the assumption that only a small number of GPAI makers, and of GPAIs with systemic risk, will be in scope. “Should that assumption prove incorrect, significant changes to the draft may be necessary, such as introducing a more detailed tiered system of measures focused primarily on the models posing the greatest systemic risk,” the drafters warn.

Copyright

On transparency, the Code sets out how GPAIs must comply with information provisions, including in the area of copyrighted material.

One example here is “Sub-Measure 5.2,” which currently commits signatories to disclose the name of every web crawler used to develop the GPAI and its relevant robots.txt features, including at the time of crawling.
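For context, robots.txt is the plain-text convention site owners use to tell crawlers which paths they may fetch. Below is a minimal sketch of how a crawler could check it before collecting training data, using Python's standard urllib.robotparser; the crawler name ExampleGPAIBot is a hypothetical stand-in, not a real user agent.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical user agent; under a commitment like Sub-Measure 5.2, a GPAI
# provider would disclose the real name of each crawler it operates.
USER_AGENT = "ExampleGPAIBot"

def may_fetch(url: str, robots_url: str) -> bool:
    """Return True if the robots.txt at robots_url allows USER_AGENT to fetch url."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses robots.txt
    return parser.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    ok = may_fetch(
        "https://example.com/articles/some-page",
        "https://example.com/robots.txt",
    )
    print(f"{USER_AGENT} may crawl: {ok}")
```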

GPAI model makers continue to face questions over how they obtained the data used to train their models, and have been hit with multiple lawsuits from rights holders alleging that the AI companies processed copyrighted information unlawfully.

Another commitment set out in the draft Code calls for GPAI providers to have a single point of contact and complaint handling, making it easier for rights holders to communicate grievances “directly and quickly.”

Other proposed copyright measures cover the documentation GPAIs will be expected to provide about the data sources used for “training, testing, and validation,” and about authorizations to access and use protected content for the development of general-purpose AI.

Systemic risk

The most powerful GPAIs are also subject to rules in the EU AI Act aimed at mitigating so-called “systemic risk.” These AI systems are currently defined as models trained using a total computing power of more than 10^25 FLOPs.
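To make that threshold concrete, a common back-of-the-envelope heuristic puts training compute at roughly six FLOPs per parameter per training token. Here is a minimal sketch under that assumption, with illustrative model sizes that are not figures from the Act or from any disclosed model.

```python
THRESHOLD_FLOPS = 1e25  # the AI Act's current systemic-risk compute threshold

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Illustrative (assumed) configurations, not disclosed figures:
for params, tokens in [(7e9, 2e12), (70e9, 15e12), (400e9, 15e12)]:
    flops = training_flops(params, tokens)
    side = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{params:.0e} params x {tokens:.0e} tokens ~ {flops:.1e} FLOPs ({side} 1e25)")
```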

The Code includes a list of risk types that signatories are expected to treat as systemic risks. They include:

  • Offensive cybersecurity risks (such as vulnerability discovery).
  • Chemical, biological, radiological and nuclear risks.
  • “Loss of control” (here meaning an inability to control a powerful autonomous general-purpose AI) and automated use of models for AI research and development.
  • Persuasion and manipulation, including large-scale disinformation/misinformation that could pose risks to democratic processes or lead to a loss of trust in media.
  • Large-scale discrimination.

This version of the Code also suggests that GPAI makers may identify other types of systemic risk that are not explicitly listed, such as “large-scale” privacy infringements and surveillance, or uses that might pose risks to public health. One of the open questions the document asks here is which risks should be prioritized for addition to the main taxonomy. Another is how the systemic-risk taxonomy should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate imagery).

The Code also seeks to provide guidance on identifying key attributes that could lead models to create systemic risks, such as “dangerous model capabilities” (e.g., cyber-attacks or “weapons acquisition or proliferation capabilities”) and “dangerous model propensities” (e.g., misalignment with human intentions and/or values, lack of reliability and security, and resistance to goal modification).

While many details are still to be filled in as the drafting process continues, the Code's authors write that its measures, sub-measures and KPIs should be “proportionate,” with a particular focus on tailoring them to “the size and capacity of the organization,” especially small and medium-sized enterprises and startups with fewer financial resources than the providers at the frontier of AI development. Attention should also be paid, they add, to “various distribution strategies (e.g. open source) that, where appropriate, reflect proportionality principles and consider both benefits and risks.”

Many of the open questions raised by this draft concern how specific measures should be applied to open source models.

Framing safety and security

Another measure in the Code concerns the “Safety and Security Framework” (SSF). GPAI makers will be required to detail their risk-management policies and to identify, “continuously and thoroughly,” systemic risks that may arise from their GPAIs.

There is an interesting sub-measure here on “forecasting of risks.” It would require signatories to include in their SSF a “best-effort estimate” of timelines for when they expect to develop a model that triggers systemic-risk indicators, such as the dangerous model capabilities and propensities mentioned above. It could mean that, starting in 2027, cutting-edge AI developers will be setting out timeframes within which they expect model development to cross certain risk thresholds.
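As a toy illustration of what such a “best-effort estimate” could look like along the compute dimension, the sketch below fits exponential growth to an assumed history of training-compute figures and extrapolates the date at which the 10^25 FLOPs indicator would be crossed. The history values are invented for illustration; a real SSF forecast would rest on internal roadmaps, not a public curve fit.

```python
import math
from datetime import date, timedelta

THRESHOLD = 1e25  # systemic-risk compute indicator from the AI Act

# Invented history of a lab's frontier-model training compute (FLOPs).
history = [(date(2022, 6, 1), 2e23), (date(2024, 6, 1), 3e24)]

(t0, c0), (t1, c1) = history[0], history[-1]
years = (t1 - t0).days / 365.25
ratio = (c1 / c0) ** (1 / years)  # annual compute growth factor
years_left = math.log(THRESHOLD / c1) / math.log(ratio)
crossing = t1 + timedelta(days=years_left * 365.25)
print(f"~{ratio:.1f}x per year; threshold crossed around {crossing}")
```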

Elsewhere, the draft Code looks at GPAIs with systemic risk undergoing “best-in-class evaluations” of their models' capabilities and limitations, with a “suitable set of methodologies” applied to the task. Examples listed include: Q&A sets, benchmarks, red-teaming and other adversarial testing methods, human uplift studies, model organisms, simulations, and proxy evaluations for classified materials.
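To illustrate the simplest item on that list, here is a sketch of a Q&A-set evaluation that scores a model's answers by exact match. Everything in it (the sample items, the stand-in model) is an assumption for illustration; it is not a methodology from the Code.

```python
from typing import Callable

# Tiny illustrative Q&A set; a real evaluation suite would be far larger and
# would target specific risk capabilities rather than general knowledge.
QA_SET = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "What is 2 + 2?", "answer": "4"},
]

def evaluate(model: Callable[[str], str]) -> float:
    """Return the exact-match accuracy of `model` over QA_SET."""
    correct = sum(
        model(item["question"]).strip().lower() == item["answer"].lower()
        for item in QA_SET
    )
    return correct / len(QA_SET)

if __name__ == "__main__":
    # Stand-in "model" for demonstration; a real harness would query an LLM.
    canned = {"What is the capital of France?": "Paris", "What is 2 + 2?": "4"}
    print(f"accuracy: {evaluate(lambda q: canned.get(q, '')):.0%}")
```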

Another sub-measure, on “notification of serious systemic risks,” would oblige signatories to notify the AI Office, the oversight and governance body established under the Act, “if they have strong reasons to believe that a serious systemic risk is likely to materialize.”

The Code also sets out measures for “serious incident reporting.”

“Signatories commit to identifying and keeping track of serious incidents arising from their general-purpose AI models with systemic risk, and to documenting and reporting relevant information and possible corrective measures to the AI Office and, as appropriate, to national competent authorities without undue delay,” it states, though a related open question asks for input on “what constitutes a serious incident.” So more work looks to be needed here to nail down the definition.

The draft also poses further open questions about the “possible corrective measures” that could be taken in response to serious incidents, asking, among other things, “what serious incident response processes are appropriate for open-weight or open-source providers?”
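To sketch what “documenting relevant information” about an incident might involve, here is one hypothetical record structure. The field set is an assumption, since the draft explicitly leaves what constitutes a serious incident (and therefore what must be recorded) as an open question.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    """Hypothetical record a signatory might keep for incident reporting.

    Illustrative only: the draft Code has not yet defined what a serious
    incident is, so the required details remain an open question.
    """
    model_id: str
    description: str
    systemic_risk_category: str  # e.g., "offensive cyber", "CBRN"
    detected_at: datetime
    corrective_measures: list[str] = field(default_factory=list)
    reported_to_ai_office: bool = False

report = SeriousIncidentReport(
    model_id="example-gpai-v1",  # assumed identifier
    description="Model produced step-by-step exploit instructions.",
    systemic_risk_category="offensive cyber",
    detected_at=datetime.now(timezone.utc),
    corrective_measures=["tightened refusal policy", "patched safety filter"],
)
print(report)
```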

“The first draft of the Code was developed based on a preliminary review of existing best practices by four expert working groups, stakeholder consultation input from approximately 430 submissions, responses from provider workshops, international approaches (including the G7 Code of Conduct, the Frontier AI Safety Initiative, the Bletchley Declaration, and outputs from relevant governments and standard-setting bodies) and, most importantly, the AI Act itself,” the drafters write in their conclusion.

“We emphasize that this is only a first draft and therefore the draft code proposals are provisional and subject to change,” they added. “We therefore look forward to your constructive feedback as we further develop and update the content of the Code and work toward a more detailed final form by May 1, 2025.”
