EU AI Act: everything you need to know

By TechBrunch · November 16, 2024 · 10 Mins Read


The European Union's risk-based rulebook for artificial intelligence, better known as the EU AI Act, has been years in the making. But expect to hear a lot more about the regulation in the coming months (and years) as key compliance deadlines kick in. In the meantime, read on for an overview of the law and its aims.

So what is the EU trying to achieve? Wind the clock back to April 2021, when the European Commission published its original proposal. At the time, lawmakers framed the legislation as a way to strengthen the bloc's ability to innovate in AI by fostering public trust. The framework would ensure AI technologies remained "human-centric," the EU proposed, while also giving businesses clear rules to work their machine learning magic.

Increasing adoption of automation across industry and society certainly has the potential to boost productivity in all sorts of areas. But there is also a risk of fast-scaling harms if the technology is poorly implemented and/or where AI intersects with individual rights and fails to respect them.

The bloc's goal for its AI legislation is therefore to foster adoption of AI and grow a regional AI ecosystem by setting conditions intended to reduce the risk of things going horribly wrong. Lawmakers believe that having guardrails in place will boost public trust in, and uptake of, AI.

This idea of fostering an ecosystem through trust was fairly uncontroversial at the start of the decade, when the law was being debated and drafted. However, some critics argued that it was simply too early to regulate AI, and that doing so could harm European innovation and competitiveness.

Of course, few would say it's too early now, given how the technology has exploded into mainstream consciousness thanks to the boom in generative AI tools. But there remain objectors who argue the law will hamstring the prospects of homegrown AI entrepreneurs, despite the inclusion of support measures such as regulatory sandboxes.

Still, how to regulate AI is now a live debate for lawmakers around the world, and the EU has set the tone with its AI Act. The next few years will be all about the bloc executing its plan.

What does the AI Act require?

Most uses of AI are not regulated at all under the AI Act, as they fall outside the scope of its risk-based rules. (It is also worth noting that military uses of AI are entirely out of scope, since national security is a legal competence of member states rather than the EU.)

For in-scope uses of AI, the Act's risk-based approach deems a small number of potential use cases (e.g. "harmful subliminal, manipulative and deceptive techniques" or "unacceptable social scoring") to pose an "unacceptable risk," and therefore bans them. However, the list of banned uses is riddled with exceptions, meaning even the law's small number of prohibitions carries plenty of caveats.

For example, the ban on law enforcement using real-time remote biometric identification in publicly accessible spaces is not the blanket ban some lawmakers and many civil society groups pushed for: its use is permitted as an exception in connection with certain specific crimes.

The next tier down from unacceptable risk/banned uses is "high risk" use cases: AI apps used in areas such as critical infrastructure, law enforcement, education and vocational training, and healthcare, among others. Makers of these apps must conduct conformity assessments before market deployment and on an ongoing basis (for example, when they make substantial updates to a model).

This means developers must be able to demonstrate that they meet the law's requirements in areas such as data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness. They must also have quality and risk management systems in place so they can demonstrate compliance if the authorities come knocking to carry out an audit.

High-risk systems implemented by public authorities must also be registered in the EU's public database.

There is also a third, "medium risk" category, which applies transparency obligations to AI systems such as chatbots and other tools that can be used to create synthetic media. The concern here is that such systems could be used to manipulate people, so this type of technology requires that users are informed they are interacting with, or viewing, AI-generated content.

All other uses of AI are automatically considered low/minimal risk and will not be regulated. This means, for example, that activities such as using AI to categorize and recommend social media content or targeted advertising are not obligated under these rules. However, the block encourages all AI developers to voluntarily follow best practices to increase user trust.
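
To make that tiering concrete, here is a minimal Python sketch of the four-tier structure described above. The tier descriptions, example use cases, and the `obligations` helper are illustrative assumptions for this article, not a taxonomy defined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (with exceptions)"
    HIGH = "conformity assessments + quality/risk management"
    MEDIUM = "transparency obligations"
    MINIMAL = "unregulated (voluntary best practice)"

# Hypothetical mapping of example use cases to tiers -- illustrative only.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "exam_proctoring": RiskTier.HIGH,     # education is a high-risk area
    "customer_chatbot": RiskTier.MEDIUM,  # chatbot transparency duties
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the headline obligation for an example use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations(case))
```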

This tiered system of risk-based rules makes up the bulk of the AI Act. But there are also some dedicated requirements for the multipurpose models that underpin generative AI technologies, which the Act refers to as "general purpose AI" models (or GPAIs).

This subset of AI technologies, which the industry sometimes calls "foundation models," typically sits upstream of many apps that implement artificial intelligence. Developers access GPAIs via APIs to deploy the models' capabilities in their own software, often fine-tuning them for specific use cases to add value. All of which means GPAIs can quickly gain a powerful position in the market, with the potential to shape AI outcomes at scale.

GenAI has joined the chat…

The rise of GenAI reshaped the debate around the EU's AI law in more ways than one. The bloc's lengthy legislative process coincided with the hype around GenAI tools like ChatGPT, and Members of the European Parliament seized the opportunity to respond, leading to changes to the rulebook itself.

MEPs proposed adding extra rules for GPAIs, the foundation models underlying GenAI tools. That sharpened the tech industry's attention to what the EU was doing with the law, leading to intense lobbying for a GPAI carve-out.

French AI company Mistral was one of the loudest voices, arguing that rules on model makers would hold back Europe's ability to compete with AI giants in the US and China. OpenAI's Sam Altman also weighed in, hinting in an aside to journalists that the company might pull its technology out of Europe if the law proved too onerous, before hastily falling back on traditional lobbying of regional power brokers after the EU called him out on the clumsy threat.

Altman's crash course in European diplomacy remains one of the more notable side effects of the AI Act.

The result of all this noise was an uphill battle to complete the legislative process. It took many months and a marathon final negotiating session between the European Parliament, the Council and the European Commission to push the proposal over the line. The political agreement was clinched in December 2023, paving the way for adoption of the final text in May 2024.

The EU touts its AI Act as a "world first." But being first with such cutting-edge technology means a great deal is still to be worked out: the specific standards the law will apply, detailed compliance guidance (codes of practice), and the oversight and ecosystem-building regime the law is designed to operate all still need to be fleshed out.

So, as far as judging its success goes, the law remains a work in progress, and will be for a long time to come.

For GPAIs, the AI Act continues the risk-based approach, with (only) lighter requirements applying to most of these models.

For commercial GPAIs, that means transparency rules, including technical documentation requirements and disclosures about copyrighted material used to train the models. These provisions are intended to help downstream developers meet their own AI Act obligations.

There is also a second tier for the most powerful (and potentially riskiest) GPAIs: the law dials up obligations for model makers whose models are deemed to pose "systemic risk," requiring proactive risk assessment and mitigation up front.

Here the EU is concerned about, for example, very powerful AI models that could pose risks to human life, or even the risk that the makers of self-improving AIs lose control over their technology's continued development.

Lawmakers chose to rely on a compute threshold for model training as the classifier for this systemic-risk tier: a GPAI falls into the bracket when the cumulative amount of compute used to train it, measured in floating point operations (FLOPs), exceeds 10^25.
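
For a sense of where that bar sits, here is a quick, hedged Python sketch. The "6 × parameters × tokens" estimate for dense-transformer training compute is a common industry rule of thumb, not something the Act defines, and the example model is hypothetical; the Act itself only sets the cumulative 10^25 FLOP threshold.

```python
# Back-of-the-envelope check against the Act's systemic-risk compute
# threshold (10^25 FLOPs). The "6 * N * D" estimate (N = parameters,
# D = training tokens) is an industry rule of thumb, not the law's test.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Systemic-risk tier?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
# ~6.3e24 FLOPs -- below the 1e25 bar under this estimate.
```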

No models are publicly believed to meet that bar as of yet, but of course that could change as GenAI continues to develop.

AI safety experts involved in overseeing the Act also have scope to flag concerns about systemic risks that may arise elsewhere. (For more on the governance structure the bloc has devised for the AI Act, including the various roles of the AI Office, see our previous report.)

Thanks to lobbying by Mistral and others, the GPAI rules were watered down, with lighter requirements for open source providers, for example (lucky Mistral!). R&D also got a carve-out, meaning GPAIs that have not yet been commercialized fall entirely outside the law's scope, without even transparency requirements applying.

The long march towards compliance

The AI Act officially entered into force across the EU on August 1, 2024. That date effectively fired the starting gun, with compliance deadlines for different components of the law set to hit at staggered intervals from early 2025 until around mid-2027.

Some of the key compliance deadlines: the rules on prohibited use cases apply six months after entry into force; codes of practice at nine months; transparency and governance requirements at 12 months; other AI requirements, including obligations for some high-risk systems, at 24 months; and obligations for other high-risk systems at 36 months.
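
As a rough guide to where those staggered deadlines land on the calendar, here is a small Python sketch computing each milestone as a whole-month offset from the August 1, 2024 entry into force. The milestone labels are shorthand from the list above, and the Act's own day-counting rules may differ slightly.

```python
# Sketch: the AI Act's staggered deadlines as simple whole-month
# offsets from entry into force (August 1, 2024).
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES_MONTHS = {
    "Prohibited use cases": 6,
    "Codes of practice": 9,
    "Transparency and governance": 12,
    "Other requirements (incl. some high-risk systems)": 24,
    "Remaining high-risk systems": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

for milestone, offset in MILESTONES_MONTHS.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset)}: {milestone}")
```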

Part of the reason for this phased approach is to give companies enough time to get their operations in order. But beyond that, regulators clearly need time to work out what compliance looks like in such a cutting-edge context.

At the time of writing, the bloc is busy developing guidance on various aspects of the law ahead of these deadlines, including a code of practice for makers of GPAIs. The EU is also working on clarifying the law's definition of "AI systems" (i.e. which software will or will not be in scope) and on guidance regarding prohibited uses of AI.

The full picture of what the AI Act will mean for in-scope companies is therefore still taking shape. But key details are expected to be nailed down in the coming months and into the first half of next year.

One more consideration: given the pace of development in the AI field, what is required to comply with the law is likely to keep shifting as these technologies (and their associated risks) continue to evolve. So this is one rulebook that may well need to remain a living document.

AI rule enforcement

Oversight of GPAIs is centralized at the EU level, with the AI Office playing a key role. The penalties the European Commission can apply to enforce these rules can reach up to 3% of a model maker's global turnover.

Elsewhere, enforcement of the law's rules for AI systems is decentralized: member state-level authorities (plural, as there may be more than one designated supervisory body per country) will be responsible for assessing and investigating compliance issues for most AI apps. How workable this structure will prove remains to be seen.

In theory, fines for violations of the prohibited uses can reach up to 7% of global turnover (or 35 million euros, whichever is greater). Violations of other AI obligations can draw fines of up to 3% of global turnover, dropping to up to 1.5% for supplying false information to regulators. Sanctions available to enforcement authorities therefore operate on a sliding scale.
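
To illustrate that sliding scale, here is a minimal Python sketch of the penalty caps as described above. The tier percentages and the "whichever is greater" rule for prohibited uses follow the article; the turnover figure is hypothetical, and real fines would be set case by case by enforcement authorities.

```python
# Sketch of the sliding penalty caps described above -- upper bounds
# only, not a prediction of actual fines.

PENALTY_TIERS = {
    "prohibited_use": 0.07,      # up to 7% of global turnover
    "other_obligation": 0.03,    # up to 3%
    "false_information": 0.015,  # up to 1.5%
}

def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine for a given violation tier."""
    cap = PENALTY_TIERS[violation] * global_turnover_eur
    if violation == "prohibited_use":
        cap = max(cap, 35_000_000)  # EUR 35M, whichever is greater
    return cap

# Hypothetical company with EUR 2B in global turnover:
for tier in PENALTY_TIERS:
    print(f"{tier}: up to EUR {max_fine_eur(tier, 2e9):,.0f}")
```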



