The European Data Protection Board (EDPB) on Wednesday published a written opinion outlining how AI developers can use personal data to develop and deploy AI models, such as large language models (LLMs), without violating the bloc's privacy laws. The Board's views matter because it plays a key steering role in how these laws are applied and issues guidance that supports regulatory enforcement.
Areas covered by the EDPB opinion include whether AI models can be considered anonymous (which would mean privacy laws do not apply); whether the legal basis of "legitimate interest" can be used to lawfully process personal data for the development and deployment of AI models (which would mean individual consent does not need to be sought); and whether AI models developed with unlawfully processed data can subsequently be deployed lawfully.
In particular, the question of which legal basis is appropriate to ensure AI models comply with the General Data Protection Regulation (GDPR) remains contested and unresolved. OpenAI's ChatGPT has already run into trouble here. Failure to comply with the privacy rules can result in fines of up to 4% of global annual revenue and orders to change how AI tools operate.
Almost a year ago, Italy's data protection authority announced preliminary findings that OpenAI's chatbot violated the GDPR. Complaints have since been filed against the technology elsewhere, including in Poland and Austria, targeting aspects such as its legal basis for processing people's data, its tendency to fabricate information, and its inability to correct false statements about individuals.
The GDPR contains both rules about how personal data can be lawfully processed and a set of data access rights for individuals, including the right to request a copy of the data held about them, to have data about them deleted, and to have incorrect information about them corrected. But for an AI chatbot that fabricates information (or "hallucinates," as the industry calls it), none of this is a simple matter.
But while generative AI tools quickly faced multiple GDPR complaints, there has so far been far less enforcement. EU data protection authorities are clearly grappling with how to apply long-established data protection rules to technologies that require vast amounts of data for training. The EDPB opinion is intended to assist supervisory authorities in their decision-making.
In response, Ireland's Data Protection Commission (DPC), the regulator that requested the Board's views on the areas covered by this opinion and the authority that took over lead GDPR oversight of OpenAI following a legal switch late last year, suggested that the EDPB opinion "enables active, effective and consistent regulation" of AI models across the region.
"This will also support the DPC's engagement with companies developing new AI models before they are brought to the EU market, and the handling of a number of AI-related complaints lodged with the DPC," added Commissioner Dale Sunderland.
The opinion not only provides guidance to regulators on how to approach generative AI, it also gives developers some sense of how privacy regulators will address key issues such as lawfulness. But the main message they should take away is that there is no one-size-fits-all answer to the legal uncertainty they face.
Model anonymity
For example, on the issue of model anonymity, the Board defines an anonymous model as one that is "very unlikely to directly or indirectly identify the individuals whose data were used to create the model" and very unlikely to allow users to extract such data from the model through queries. The opinion emphasizes that this must be evaluated "on a case-by-case basis."
The document includes a "non-normative and non-exhaustive list" of ways model developers can demonstrate anonymity. These include selecting sources for the training data in a way that avoids or limits the collection of personal data (such as excluding "inappropriate" sources); applying data minimization and filtering steps during the data preparation phase before training; making robust "methodological choices" that may "significantly reduce or eliminate the risk of identifiability", such as choosing "regularization methods" aimed at improving model generalization and reducing overfitting, and applying privacy-preserving techniques such as differential privacy; as well as adding measures to the model that may reduce the risk of users retrieving personal data from the training data via queries. A rough sketch of what a data-preparation filtering step might look like follows below.
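To make the data-minimization and filtering idea concrete, here is a minimal, hypothetical Python sketch that strips obvious personal identifiers (email addresses and phone numbers) from raw text before it reaches a training pipeline. The regular expressions, function names and placeholders are illustrative assumptions, not anything specified by the EDPB; real pipelines typically combine far broader named-entity detection, source exclusion lists and deduplication.

```python
import re

# Illustrative patterns only: production pipelines use much more thorough
# PII detection (named-entity recognition, address parsing, etc.).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_personal_data(text: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def filter_training_corpus(documents):
    """Yield scrubbed documents, dropping any that end up empty."""
    for doc in documents:
        cleaned = scrub_personal_data(doc)
        if cleaned.strip():
            yield cleaned

if __name__ == "__main__":
    sample = ["Contact Jane at jane.doe@example.com or +1 (555) 123-4567."]
    print(list(filter_training_corpus(sample)))
```

This kind of pre-training scrubbing addresses only one item on the Board's list; on its own it would not make a model anonymous, which is why the opinion pairs it with training-time and deployment-time measures.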
This shows that every design and development choice an AI developer makes can affect the regulatory assessment of how far the GDPR applies to a particular model. Only truly anonymous data, with no risk of re-identification, falls outside the regulation entirely, but in the context of AI models the opinion sets the bar at a "very low" probability that an individual or their data can be identified.
Prior to the EDPB opinion, there had been some debate among data protection authorities about the anonymity of AI models, including the suggestion that models themselves cannot constitute personal data. The Board makes clear that the anonymity of AI models cannot be taken for granted: a case-by-case evaluation is required.
Legitimate interest
The opinion also considers whether the legal basis of legitimate interest can be used for the development and deployment of AI models. This is important because only a handful of legal bases are available under the GDPR, and most are inappropriate for AI, as OpenAI has already discovered through the Italian DPA's enforcement.
Legitimate interest is likely to be the basis of choice for AI developers building models, since it does not require obtaining consent from every individual whose data is processed to build the technology. (Given the amount of data used to train LLMs, a consent-based legal basis is clearly neither commercially attractive nor scalable.)
Again, the Board takes the view that DPAs must carry out an assessment to determine whether legitimate interest is an appropriate legal basis for processing personal data for the development and deployment of AI models. This refers to the standard three-stage test that supervisory authorities must apply: verifying the purpose and the necessity of the processing (i.e. whether the processing is lawful and specific, and whether there was a less intrusive way of achieving the intended result), and running a balancing test to assess the impact of the processing on the rights of individuals.
The EDPB's opinion leaves the door open to the possibility that AI models could meet all the criteria for relying on the legal basis of legitimate interest. It suggests, for example, that developing an AI model to power a conversational agent service that assists users, or deploying improved threat detection in an information system, could satisfy the first test (legitimate purpose).
To assess the second test (necessity), regulators need to consider whether the processing actually achieves the legitimate purpose and whether there is a less intrusive way of achieving it, paying particular attention to whether the amount of personal data processed is proportionate to the goal, bearing in mind the GDPR's data minimization principle.
The third test (balancing individual rights) requires "taking into account the specific circumstances of each case," according to the opinion, with special attention paid to risks to individuals' fundamental rights that may arise during development and deployment.
As part of the balancing test, regulators must also consider the "reasonable expectations" of data subjects, i.e. whether individuals whose data was processed for the AI could have expected their information to be used in such a way. Relevant considerations here include whether the data was publicly available, the source of the data and the context of its collection, the relationship between the individual and the processor, and the potential further uses of the model.
If the balancing test fails because the interests of the individual outweigh those of the processor, the Board says mitigating measures to limit the impact of the processing on the individual may be considered. These should be tailored to the circumstances of the case, such as the characteristics of the AI model and how it will be used.
Examples of mitigations cited in the opinion include technical measures (such as those listed above in the section on model anonymity); pseudonymization measures (such as checks to prevent the combination of personal data based on individual identifiers); measures to mask personal data in the training set or replace it with fake personal data; measures aimed at allowing individuals to exercise their rights (such as opting out); and transparency measures. A rough sketch of one pseudonymization approach follows below.
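As an illustration of what a pseudonymization step could look like, here is a minimal Python sketch that replaces a direct identifier with a keyed HMAC-derived pseudonym, so records about the same person can still be linked for training purposes without the raw identifier appearing in the dataset. The key handling, field names and record format are hypothetical assumptions for illustration, not anything the EDPB prescribes.

```python
import hmac
import hashlib
import os

# Secret pseudonymization key; in practice it would be stored and rotated
# under strict access controls, separately from the training data.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "demo-key-only").encode()

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier using a keyed HMAC."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def pseudonymize_record(record: dict) -> dict:
    """Replace the (hypothetical) 'email' field with a pseudonym before training."""
    cleaned = dict(record)
    if "email" in cleaned:
        cleaned["user_id"] = pseudonymize(cleaned.pop("email"))
    return cleaned

if __name__ == "__main__":
    print(pseudonymize_record({"email": "jane.doe@example.com", "text": "Order arrived late."}))
```

Note that pseudonymized data is still personal data under the GDPR, since the key holder can re-link records; the opinion treats such measures as risk mitigations, not as a route to anonymity.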
The opinion also discusses measures to reduce the risks associated with web scraping, which the Board says poses “particular risks.”
Illegally trained models
The opinion also addresses the thorny issue of how regulators should approach AI models trained on data that has not been legally processed, as required by the GDPR.
Again, the Board recommends that regulators consider the "circumstances of each individual case." So the answer to how the EU's privacy watchdogs will respond to AI developers who fall into this law-breaking category is: it depends.
However, the opinion does appear to offer a kind of get-out clause for AI models that may have been built on shaky (legal) foundations, for example by scraping and ingesting data from wherever it was available without regard for the consequences, provided steps are taken to ensure that personal data is anonymized before the model enters the deployment phase.
In such cases, as long as the developer can demonstrate that the subsequent operation of the model does not involve the processing of personal data, the Board states that the GDPR would not apply to that subsequent operation of the model.
Lukasz Olejnik, an independent consultant and affiliate of the KCL Institute for Artificial Intelligence, whose GDPR complaint against ChatGPT has been under consideration by the Polish DPA for more than a year, highlighted the importance of this element of the opinion, but warned that it risks allowing organized exploitation schemes.
“This is an interesting potential divergence from previous interpretations of data protection law,” he told TechCrunch. “By focusing only on the end state (anonymization), the EDPB could unintentionally or potentially legitimize web data scraping without an adequate legal basis. This could undermine the core principle of the GDPR that personal data must be processed lawfully at every stage from collection to disposal.”
Asked what impact he thought the EDPB's overall opinion would have on his complaint against ChatGPT, Olejnik added: "This opinion does not tie the hands of national DPAs," expressing confidence in PUODO [Poland's DPA]. He also emphasized that his case against OpenAI's chatbot "goes beyond training and includes accountability and privacy by design."