EU privacy agency considers some difficult questions regarding GenAI legality

By TechBrunch · December 18, 2024 · 10 Mins Read


The European Data Protection Board (EDPB) on Wednesday published a written opinion outlining how AI developers can use personal data to develop and deploy AI models, such as large language models (LLMs), without violating the bloc's privacy laws. The opinion matters because the Board plays a key steering role in the application of these laws and issues guidance that supports regulatory enforcement.

Areas covered by the EDPB opinion include whether AI models can be considered anonymous (which would mean privacy laws do not apply); whether the legal basis of “legitimate interest” can be used to lawfully process personal data for the development and deployment of AI models (which would mean individual consent need not be sought); and whether AI models developed using unlawfully processed data can subsequently be deployed lawfully.

In particular, the question of which legal basis is appropriate to ensure AI models comply with the General Data Protection Regulation (GDPR) remains contested and open. OpenAI's ChatGPT has already drawn significant scrutiny here. Failure to comply with privacy rules can result in fines of up to 4% of global annual revenue and orders to change the way AI tools operate.

Almost a year ago, Italy's data protection authority announced preliminary findings that OpenAI's chatbot violated the GDPR. Complaints have since been filed against the technology elsewhere, including in Poland and Austria, targeting aspects such as its legal basis for processing people's data, its tendency to fabricate information, and its inability to correct false statements about individuals.

The GDPR contains both rules on how personal data may be lawfully processed and a set of data access rights for individuals, including the right to request a copy of the data held about them, to have that data deleted, and to have incorrect information corrected. But for AI chatbots that fabricate information (or “hallucinate,” as the industry calls it), honoring these rights is no simple matter.

But while generative AI tools quickly drew multiple GDPR complaints, enforcement so far has been far scarcer. EU data protection authorities are clearly struggling with how to apply long-established data protection rules to technologies that require vast amounts of data for training. The EDPB opinion is intended to assist supervisory authorities in their decision-making.

In response, Ireland's Data Protection Commission (DPC), the regulator that requested the Board's views on the areas covered by the opinion, and the supervisory authority set to lead oversight of OpenAI's GDPR compliance following the company's legal restructuring late last year, said the EDPB opinion “enables active, effective and consistent regulation” of AI models across the region.

“This will also support the DPC’s engagement with companies developing new AI models before they are brought to the EU market, and the handling of a number of AI-related complaints lodged with the DPC,” added Commissioner Dale Sunderland.

The opinion not only gives regulators guidance on how to approach generative AI; it also gives developers some direction on how privacy regulators will address key issues such as lawfulness. But the main message they should take away is that there is no one-size-fits-all solution to the legal uncertainty they face.

Model anonymity

On the issue of model anonymity, for example, the Board defines an anonymous AI model as one that is “very unlikely” to “directly or indirectly identify the individuals whose data were used to create the model,” and very unlikely to allow users to extract such data from the model through queries. The opinion emphasizes that this must be assessed “on a case-by-case basis.”

The document includes a “non-prescriptive and non-exhaustive list” of ways model developers can demonstrate anonymity. These include: selecting sources for training data with steps that avoid or limit the collection of personal data (such as excluding “inappropriate” sources); data minimization and filtering steps during data preparation, before training; making robust “methodological choices” that may “significantly reduce or eliminate” the risk of identifiability, such as choosing “regularization methods” that improve model generalization and reduce overfitting, and applying privacy-preserving techniques such as differential privacy; and adding measures to the model that reduce the risk of users retrieving personal data from the training data via queries.
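To make one of those privacy-preserving techniques concrete, here is a minimal sketch of the Laplace mechanism from differential privacy, which adds calibrated random noise to a query result so that no single individual's record meaningfully changes the output. The function and parameter names are illustrative, not drawn from the opinion, and production systems would use a vetted library rather than this toy.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of true_value.

    sensitivity: the maximum change one individual's record can cause
                 in the query result (1 for a simple count).
    epsilon:     the privacy budget; smaller values add more noise
                 and give stronger privacy.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise via inverse transform sampling.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privately release a count of 1,000 matching records.
private_count = laplace_mechanism(1000.0, sensitivity=1.0, epsilon=0.5)
```

The same intuition carries over to model training: DP-SGD-style methods add noise to clipped gradients during training rather than to a final query result, bounding what the finished model can reveal about any one training record.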

This shows that every design and development choice an AI developer makes can affect the regulatory assessment of how far the GDPR applies to a particular model. Only truly anonymous data carrying no risk of re-identification falls outside the regulation; in the context of AI models, the bar is set at a “very low” probability that an individual or their data can be identified.

Prior to the EDPB opinion, there had been some debate among data protection authorities about the anonymity of AI models, including suggestions that the models themselves could not constitute personal data. The Board makes clear that the anonymity of an AI model cannot be taken for granted: a case-by-case evaluation is required.

Legitimate interest

The opinion also considers whether the legal basis of legitimate interest can be used for the development and deployment of AI. This is important because, as OpenAI has already discovered through the Italian DPA's enforcement, only a handful of legal bases are available under the GDPR, and most are ill-suited to AI.

Legitimate interest is likely to be the basis AI developers reach for when building models, since it does not require obtaining consent from every individual whose data is processed. (Given the amount of data used to train LLMs, a consent-based legal basis is clearly neither commercially attractive nor scalable.)

Again, the Board's view is that DPAs must carry out an assessment to determine whether legitimate interest is an appropriate legal basis for processing personal data for the development and deployment of AI models. This refers to the standard three-step test supervisory authorities must apply: identifying a legitimate purpose; verifying the necessity of the processing (i.e., whether the processing is specific to that purpose and whether less intrusive alternatives could achieve the intended result); and running a balancing test weighing the impact of the processing on individuals' rights.

In the EDPB's view, this leaves the door open to AI models meeting all the criteria for relying on the legal basis of legitimate interest. As examples, the opinion suggests that developing an AI model to power a conversational agent that assists users, or to improve threat detection in an information system, could satisfy the first test (legitimate purpose).

To assess the second test (necessity), regulators must consider whether the processing actually achieves the legitimate purpose and whether a less intrusive way of achieving it exists, paying particular attention to whether the amount of personal data processed is proportionate to the goal, in line with the GDPR's data-minimization principle.

The third test (balancing individual rights) requires “taking into account the specific circumstances of each case,” according to the opinion. Risks to individuals' fundamental rights that may arise during development and deployment demand special attention.

As part of the balancing test, regulators must also examine the “reasonable expectations” of data subjects, i.e., whether individuals whose data was processed for AI could have expected their information to be used in that way. Relevant considerations include whether the data was publicly available, the source of the data and the context of its collection, the relationship between the individual and the processor, and the potential further uses of the model.

If the balancing test fails because individuals' interests outweigh those of the processor, the Board says mitigating measures to limit the impact of the processing on individuals may be considered. These should be tailored to the circumstances of the case, such as the characteristics of the AI model and its intended use.

Examples of mitigations cited in the opinion include technical measures (such as those listed above in the section on model anonymity); pseudonymization measures (e.g., checks to prevent the combination of personal data based on individual identifiers); steps to mask personal data in the training set or replace it with fake personal data; measures enabling individuals to exercise their rights (such as opting out); and transparency measures.
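As a toy illustration of the masking step, the sketch below replaces email addresses and phone-number-like strings in training text with placeholder tokens before the text reaches a training pipeline. The regex patterns and token names are illustrative assumptions on my part; real pipelines use far more robust PII detection than two regular expressions.

```python
import re

# Illustrative patterns only; they will miss many real-world PII formats.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def mask_pii(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
masked = mask_pii(sample)
# masked == "Contact Jane at [EMAIL] or [PHONE]."
```

Masking of this kind reduces, but does not eliminate, the risk that personal data survives into training; that is why the opinion treats it as one mitigation among several rather than a complete answer.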

The opinion also discusses measures to reduce the risks associated with web scraping, which the Board says poses “particular risks.”

Illegally trained models

The opinion also addresses the thorny question of how regulators should approach AI models trained on data that was not processed lawfully under the GDPR.

Again, the Board recommends that regulators consider the “circumstances of each individual case.” So the answer to how EU privacy watchdogs will respond to AI developers whose models fall into this law-breaking category is: it depends.

However, the opinion appears to offer a kind of escape clause for AI models that may have been built on shaky legal foundations: developers could avoid the consequences of having scraped and ingested data from wherever it was available, provided they take steps to ensure personal data is anonymized before the model enters the deployment phase.

In such cases, the Board states that as long as the developer can demonstrate that the subsequent operation of the model does not involve the processing of personal data, the GDPR would not apply to that subsequent operation of the model.

Lukasz Olejnik, an independent consultant and affiliate of the KCL Institute for Artificial Intelligence, whose GDPR complaint against ChatGPT has been under consideration by the Polish DPA for more than a year, flagged the importance of this element of the opinion. However, he warned that it threatens “to allow organized exploitation schemes.”

“This is an interesting potential divergence from previous interpretations of data protection law,” he told TechCrunch. “By focusing only on the end state (anonymization), the EDPB could unintentionally or potentially legitimize web data scraping without an adequate legal basis. This could undermine the core principle of the GDPR that personal data must be processed lawfully at every stage from collection to disposal.”

Asked what impact he thought the EDPB's overall opinion would have on his complaint against ChatGPT, Olejnik added: “This opinion does not tie the hands of national DPAs. Having said that, I am confident that PUODO [Poland’s DPA]…” He also emphasized that his case against OpenAI's chatbot “goes beyond training and includes accountability and privacy by design.”



