The feature Google demoed at yesterday's I/O developer conference uses generative AI to scan voice calls in real time, looking for conversation patterns associated with financial fraud and flagging warning signs. It sent a collective shiver down the spines of privacy and security experts, who warn that the feature represents the thin end of the wedge: once client-side scanning is built into mobile infrastructure, they caution, it could usher in an era of centralized censorship.
Google's demo of the call scam-detection feature, which the company said will be built into a future version of its Android OS (estimated to run on about three-quarters of the world's smartphones), is powered by Gemini Nano, the smallest of its current generation of AI models, designed to run entirely on the device.
This is essentially client-side scanning: a nascent technology that has sparked significant controversy in recent years in relation to efforts to detect child sexual abuse material (CSAM) and grooming activity on messaging platforms.
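To make the architecture concrete, here is a minimal sketch of what on-device call scanning can look like in principle. It is purely illustrative: the function names, the keyword heuristic, and the threshold are invented for this example, and Google's actual implementation, which relies on Gemini Nano rather than hand-written rules, has not been published.

```python
# Conceptual sketch of client-side (on-device) call scanning.
# All names and the scoring heuristic are hypothetical placeholders;
# Google's real feature uses an on-device AI model (Gemini Nano).

from dataclasses import dataclass

# Phrases loosely associated with phone-scam scripts (illustrative only).
SCAM_CUES = (
    "gift card",
    "wire the money",
    "your account is compromised",
    "act immediately",
)

@dataclass
class ScanResult:
    score: float   # 0.0-1.0 likelihood the call matches scam patterns
    flagged: bool  # whether to show an on-device warning to the user

def scan_transcript_window(window_text: str, threshold: float = 0.5) -> ScanResult:
    """Score a rolling window of the locally transcribed call.

    In a production system this would be an on-device model; here a
    crude keyword heuristic stands in so the sketch stays runnable.
    """
    text = window_text.lower()
    hits = sum(cue in text for cue in SCAM_CUES)
    score = min(1.0, hits / len(SCAM_CUES))
    # Nothing is transmitted: the result only drives a local UI warning.
    return ScanResult(score=score, flagged=score >= threshold)

if __name__ == "__main__":
    demo = "Your account is compromised, act immediately and buy a gift card."
    print(scan_transcript_window(demo))
```

Even in this toy form, the critics' point is visible: the scanning hook itself lives on the device, and nothing but policy stops a future version from matching different patterns or reporting results somewhere other than the local screen.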
Apple abandoned a plan to introduce client-side scanning for CSAM in 2021 following a significant privacy backlash. But policymakers have continued to pile pressure on the tech industry to find ways to detect illegal activity taking place on its platforms. Any industry move to build on-device scanning infrastructure could therefore pave the way for all kinds of content scanning by default, whether government-led or tied to a particular commercial agenda.
Responding to Google's call-scanning demo in a post on X, Meredith Whittaker, president of the US-based encrypted messaging app Signal, warned that the feature builds a path for centralized, device-level client-side scanning.
“It's a short step from detecting ‘fraud’ to detecting patterns ‘commonly associated [with] seeking reproductive health care,’ ‘generally related [with] LGBTQ resource provision,’ or ‘commonly associated with tech worker whistleblowing,’” she added.
Matthew Green, a cryptography expert and professor at Johns Hopkins University, also took to X to sound the alarm. “In the future, AI models will run inference on your text messages and voice calls to detect and report illicit behavior,” he warned. “For your data to pass to the service provider, it will need to be accompanied by a zero-knowledge proof that the scan was conducted. This will block open clients.”
Green suggested that this dystopian future, in which censorship is the default, is still several years away from being technologically possible. “It will take some time for this technology to become efficient enough, but only a few more years. A decade at most,” he suggested.
European privacy and security experts also quickly objected.
Reacting to Google's demo in a post on X, Lukasz Olejnik, a Poland-based independent researcher and consultant on privacy and security issues, welcomed the company's anti-fraud feature but warned the infrastructure could be repurposed for social surveillance. “[T]his also means that technical capabilities have already been developed, or are being developed, to monitor calls and the writing of texts and documents, for example to search for content that is unlawful, harmful, hateful, or otherwise undesirable with respect to someone's standards,” he wrote.
“What's more, such a model could, for example, display a warning or block your ability to continue,” Olejnik continued with emphasis. “Or report it somewhere, a technological adjustment of social behavior. This is not only a huge threat to privacy, but also to a variety of fundamental values and freedoms. The capability already exists.”
Olejnik expanded on his concerns, telling TechCrunch: “I haven't seen the technical details, but Google assures that detection happens on the device. This is great for user privacy. But there's much more at stake than privacy. This highlights how AI/LLMs embedded in software and operating systems can be leveraged to detect or control various forms of human activity.
“Fortunately, things are going in a good direction so far. But what comes next, if the technical capability exists and is built in? It points to potential future risks tied to the ability to use AI to control the behavior of societies at scale, or selectively. That is perhaps the most dangerous information technology capability ever developed. And we're getting to that point. How do we govern this? Are we going too far?”
Michael Veale, an associate professor in technology law at UCL, also raised the chilling specter of function creep flowing from Google's conversation-scanning AI, warning in a post on X that it sets up “infrastructure for on-device client-side scanning for other purposes, which regulators and legislators will want to exploit.”
European privacy experts have particular reason for concern: the European Union has had a controversial message-scanning proposal on the table since 2022, which critics, including the bloc's own Data Protection Supervisor, warn represents a tipping point for democratic rights in the region, because it would force platforms to scan private messages by default.
Although the current bill claims to be technology agnostic, it is widely expected that such legislation would lead platforms to deploy client-side scanning in order to respond to so-called “detection orders” requiring them to spot both known and unknown CSAM, as well as pick up grooming activity in real time.
Earlier this month, hundreds of privacy and security experts wrote an open letter warning that the plan could lead to millions of false positives per day, and that the scanning technology it relies on is seriously flawed and vulnerable to attack.
Google was contacted for a response to concerns that its conversation-scanning AI could erode people's privacy, but had not replied by press time.