On Tuesday, the BBC reported that Pa Edrissa Manjang, an Uber Eats courier who is Black, received a payout from Uber after “racist” facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up jobs delivering food on Uber's platform.
The news raises questions about how fit UK law is for dealing with the growing use of AI systems. In particular, the lack of transparency around automated systems rushed to market with the promise of boosting user safety and service efficiency risks rapidly scaling up harm to individuals, even as achieving redress for those affected by AI-driven bias can take years.
The lawsuit followed a number of complaints about failed facial recognition checks since Uber implemented its Real-Time ID Check system in the UK in April 2020. Uber's facial recognition system, based on Microsoft's facial recognition technology, requires the account holder to submit a live selfie, which is checked against a photo of them held on file to verify their identity.
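The reporting doesn't reveal Uber's exact integration, only that the check is “based on Microsoft's facial recognition technology.” As a rough, hedged sketch of how this category of check generally works, the snippet below uses Microsoft's publicly documented Face REST API: each image is first run through a detect call that returns a face ID, and two face IDs are then compared with a verify call. The endpoint, subscription key, threshold behavior, and file names are placeholders for illustration, not details of Uber's actual system.

```python
# Illustrative sketch only: a generic selfie-vs-reference face verification
# flow using Microsoft's documented Face REST API (detect + verify).
# The endpoint and key are placeholders; Uber's integration is not public.
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
KEY = "<subscription-key>"  # placeholder


def detect_face_id(image_path: str) -> str:
    """Upload an image and return the ID of the first detected face."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{ENDPOINT}/face/v1.0/detect",
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    faces = resp.json()
    if not faces:
        raise ValueError(f"No face found in {image_path}")
    return faces[0]["faceId"]


def selfie_matches_file_photo(selfie_path: str, file_photo_path: str) -> bool:
    """Compare a live selfie against the photo held on file."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "faceId1": detect_face_id(selfie_path),
            "faceId2": detect_face_id(file_photo_path),
        },
    )
    resp.raise_for_status()
    result = resp.json()  # e.g. {"isIdentical": true, "confidence": 0.84}
    return result["isIdentical"]
```

Note that the verify call returns a binary isIdentical flag alongside a confidence score; a pipeline that acts on that flag alone, without effective human review, is exactly the failure mode this case puts in question.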
ID check failure
Manjang's complaint alleges that, following a failed ID check and a subsequent automated process, his account was suspended and then terminated, citing “continued mismatches” in the photos of his face he had taken to access the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers and Couriers Union (ADCU).
Years of litigation followed, during which Uber failed to get Manjang's claims struck out or to have him ordered to pay a deposit to continue pursuing the case. The tactic appears to have contributed to drawing out the proceedings, with the EHRC describing the case as still in its “preliminary stages” as of autumn 2023 and noting that it shows “the complexity of a claim dealing with AI technology”. A final hearing had been scheduled for 17 days in November 2024.
That hearing will now not take place, as Uber offered a settlement payment and Manjang accepted, meaning fuller details of exactly what went wrong, and why, will not be made public. Terms of the financial settlement have not been disclosed either. Uber did not provide details, nor did it comment on exactly what went wrong, when we asked.
We also contacted Microsoft for a response to the case's outcome, but the company declined to comment.
Despite settling with Manjang, Uber has not publicly accepted that its systems or processes were at fault. Its statement about the settlement denies that courier accounts can be terminated as a result of AI assessments alone, claiming that its facial recognition checks are backstopped with “robust human review”.
“Our Real-Time ID check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we're not making decisions about someone's livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang's temporary loss of access to his courier account.”
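Taken at face value, Uber's statement describes a human-in-the-loop design in which an automated mismatch can never, by itself, end an account. Purely as a hypothetical sketch of what that property implies (none of the names, types, or thresholds below come from Uber), the routing logic would send every failed or low-confidence match to a human reviewer rather than to suspension:

```python
# Hypothetical sketch of the "robust human review" backstop Uber describes:
# an automated mismatch escalates to a person, never directly to suspension
# or termination. All names and values here are illustrative, not Uber's.
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    VERIFIED = auto()
    NEEDS_HUMAN_REVIEW = auto()


@dataclass
class IDCheckResult:
    account_id: str
    is_match: bool
    confidence: float


def route_id_check(result: IDCheckResult) -> Decision:
    """A failed or low-confidence automated match must not end an account
    on its own; it is queued for a human reviewer instead."""
    if result.is_match and result.confidence >= 0.9:  # illustrative threshold
        return Decision.VERIFIED
    # The key property: there is no automated path from mismatch to termination.
    return Decision.NEEDS_HUMAN_REVIEW
```

The design point is that the automated check's output is advisory: any terminal action on a worker's account would require a person.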
Clearly, though, something went very wrong with Uber's ID checks in Manjang's case.
Worker Info Exchange (WIE), a digital rights advocacy organization for platform workers, also supported Manjang's case. It obtained all of his selfies from Uber via a subject access request under UK data protection law and was able to show that every photo he had submitted to the facial recognition check was indeed a photo of himself.
“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time he was told that, due to ‘continued mismatches’, the company had ‘taken the final decision on ending our partnership with you’,” WIE recounted in a discussion of his case in a wider report examining “data-driven exploitation in the gig economy”.
Based on the details of Manjang's complaint that have been made public, it is clear that both Uber's facial recognition checks and the human review system the company says it has in place as a safety net for automated decisions failed in this case.
Equality law and data protection
The case raises questions about how fit for purpose UK law is when it comes to governing the use of AI.
Manjang was ultimately able to obtain a settlement from Uber via a legal process based on equality law, specifically a discrimination claim under the UK's Equality Act 2010, which lists race as a protected characteristic.
Baroness Kishwer Falkner, chairwoman of the EHRC, said in a statement that she was critical of the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work”.
“AI is complex and presents unique challenges for employers, lawyers and regulators. It is important to understand that, as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided with any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”
The UK Data Protection Act is another piece of legislation that is relevant here. In theory, it should provide strong protection against opaque AI processes.
The selfie data relevant to Manjang's claim was obtained using the data access rights contained in the UK GDPR. Had he been unable to obtain such clear evidence that Uber's ID checks had failed, the company might not have chosen to settle at all. Proving a proprietary system is flawed without letting individuals access their relevant personal data would stack the odds even further in favor of the much richer-resourced platforms.
Enforcement gap
Beyond data access rights, other powers in the UK GDPR are supposed to provide individuals with additional safeguards against automated decisions that have a legal or similarly significant effect. The law also demands a lawful basis for processing personal data and encourages system deployers to be proactive in assessing potential harms by conducting a data protection impact assessment. That should force further checks against harmful AI systems.
However, these protections need to be enforced in order to have any effect, including a deterrent effect against the rollout of biased AI.
In the UK's case, the relevant enforcer, the Information Commissioner's Office (ICO), failed to step in and investigate complaints against Uber, despite its misfiring ID checks being flagged as far back as 2021.
Jon Baines, a senior data protection specialist at the law firm Mishcon de Reya, suggested that “a lack of proper enforcement” by the ICO has undermined legal protections for individuals.
“We shouldn't assume that existing legal and regulatory frameworks are incapable of dealing with some of the potential harms from AI systems,” he told TechCrunch. “In this example, it strikes me that the Information Commissioner would certainly have jurisdiction to consider, both in the individual case and more broadly, whether the processing being undertaken was lawful under the UK GDPR.
“Things like: Was the processing fair? Was there a lawful basis? Was there an Article 9 condition (given that special categories of personal data were being processed)? But also, and crucially, was there a solid data protection impact assessment prior to the implementation of the verification app?”
“So, yes, the ICO should absolutely be more proactive,” he added, querying the regulator's lack of intervention.
We contacted the ICO about Manjang's case, asking it to confirm whether or not it is looking into Uber's use of AI for ID checks in light of the complaints. A spokesperson for the watchdog did not directly respond to our questions but sent a general statement emphasizing the need for organizations to “know how to use biometric technology in a way that doesn't interfere with people's rights”.
“Our latest biometric guidance is clear that organisations must mitigate risks that come with using biometric data, such as errors identifying people accurately and bias within the system,” the statement said, adding that anyone with concerns about how their data has been handled can report them to the ICO.
Meanwhile, the government is moving forward with relaxing data protection laws through its post-Brexit data reform bill.
Additionally, the government confirmed earlier this year that it will not introduce dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak's eye-catching claims that AI safety is a priority area for his government.
Instead, it affirmed the proposals set out in its March 2023 white paper on AI, which rely on existing laws and regulatory bodies extending their oversight activity to cover AI risks that might arise on their patch. One tweak to the approach, announced in February, was a tiny amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and develop tools to help them examine AI systems.
No timeline has been given for disbursing this small pot of extra funds. Multiple regulators are in the frame here: if the cash were split equally between bodies such as the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency, just three of the 13 regulators and departments the UK secretary of state wrote to last month asking for an update on their “strategic approach to AI”, each could receive less than £1 million (£10 million split 13 ways works out to under £770,000 apiece) to top up budgets for tackling fast-scaling AI risks.
Frankly, if AI safety really is a government priority, this looks like an incredibly low level of additional resource for already overstretched regulators. It also means, as critics of the government's approach have pointed out before, that there is zero extra funding and no active oversight for AI harms that fall between the cracks of the UK's existing regulatory patchwork.
A new AI safety law might send a stronger signal of priority, akin to the EU's risk-based AI harms framework that is speeding toward adoption as hard law in the bloc. But there would also need to be the will to actually enforce it. And that signal must come from the top.