Whether it's drafting emails, creating concept art, or conning vulnerable people into thinking you're a friend or relative in need, AI is versatile. But since we'd all rather not get scammed, let's take a moment to talk about what to watch out for.
Over the past few years, not only has the quality of generated media, from text to audio to images to video, improved dramatically, but it has also become cheaper and easier to create. The same kinds of tools that help concept artists dream up fantasy monsters and spaceships, or help non-native speakers improve their business English, can also be misused.
Don't expect the Terminator to come knocking on your door selling you a Ponzi scheme: these are the same old scams we've been facing for years, given a generative AI twist that makes them easier, cheaper, and more convincing.
This is not an exhaustive list, just some of the most obvious tricks that AI can enhance. We'll be sure to add new ones as they emerge, along with any additional steps you can take to protect yourself.
Voice cloning of family and friends
Synthetic voices have been around for decades, but only in the past year or two have advances in technology made it possible to generate new voices from just a few seconds of audio. This means that anyone whose voice has ever been made public in a news report, YouTube video, or on social media is at risk of having their voice replicated.
Scammers can and do use this technique to create convincing fakes of loved ones and friends. Of course, they can get the fakes to say anything, but in their scams they're most likely to create audio clips pleading for help.
For example, a parent might receive a voicemail from an unknown number claiming to be their son, saying his luggage was stolen while traveling, that someone lent him a phone, and asking Mom or Dad to send money to this address, Venmo recipient, business, etc. It's easy to imagine variations on this, like car trouble (“They won't release my car until someone pays them”) or health issues (“This treatment isn't covered by insurance”).
This kind of scam has already been perpetrated using President Biden's voice. The perpetrator was caught, but scammers will likely be more careful in the future.
How can we combat voice cloning?
First of all, don't bother trying to spot a fake voice: the technology is improving constantly, and there are plenty of ways to mask quality issues. Even experts get fooled!
Anything coming from a number, email address, or account you don't recognize should automatically be considered suspect. If someone claims to be a friend or loved one, contact the person the way you normally would; they'll probably tell you they're fine and that it's (as you guessed) a scam.
Scammers tend not to follow up if ignored, while family members probably will, so it's fine to leave a suspicious message on read while you think it over.
Personalized phishing and spam emails and messages
Everyone receives spam emails from time to time, but text-generating AI makes it possible to send mass emails tailored to each individual. And with data breaches happening regularly, a lot of your personal data is already out there.
We're all used to the generic scam email with a sketchy attachment and a line like “click here to see your invoice.” But add a little context, like recent locations, purchases, or habits, so it seems to come from a real person or involve a real problem, and it suddenly becomes a lot more believable. Armed with a few personal details, a language model can customize the boilerplate of these emails for thousands of recipients in a matter of seconds.
So what used to be “Dear Customer, please find your invoice attached” becomes something like “Hi Doris! This is the Etsy Promotions team. You're currently getting 50% off an item you recently viewed! Use this link to claim your discount, and shipping to your Bellingham address is free.” A simple example, but still. With a real name, shopping habits (easy to find out), and general location (likewise), the message suddenly becomes much more convincing.
At the end of the day, it's still just spam. But this kind of customized spam once had to be done by low-paid workers at overseas content farms. Now it can be done at scale by an LLM with better prose skills than many professional writers.
How can we combat email spam?
As with traditional spam, vigilance is your best weapon. But don't expect to be able to distinguish generated text from human-written text: very few humans can, and (despite what some companies and services claim) neither can any other AI model.
However much the text has improved, this type of scam still faces the same fundamental challenge: getting you to open a suspicious attachment or link. As always, unless you're 100% sure of the sender's authenticity and identity, don't click or open anything. If you're even a little unsure (and that's a good instinct to cultivate), don't click. And if you know someone knowledgeable, forward the message to them for a second pair of eyes.
“Fake you” identity fraud
Given the number of data breaches that have occurred over the past few years (thanks Equifax!), it's safe to say that nearly everyone has a significant amount of personal data floating around on the dark web. If you follow good online security practices, changing your passwords and enabling multi-factor authentication will mitigate a lot of the danger. However, generative AI could pose a serious new threat in this space.
With a wealth of data about individuals available online, and often just one or two audio clips of a person, it is becoming increasingly easy to create an AI persona that sounds like the target and has access to many of the facts used to verify their identity.
Think about it this way: What do you do if you have trouble logging in, can't set up your authenticator app properly, or lose your phone? You probably call customer service, who will then “verify” your identity using trivial details like your date of birth, phone number, or Social Security number. Even more advanced methods like a selfie check are getting easier to game.
The customer service agent (quite possibly an AI itself!) may well oblige this fake you, granting it all the privileges you would have if you had actually called in. There's a lot it can do from that position, and none of it is good.
As with the other attacks on this list, the danger is not how realistic the impersonation is, but how cheaply, widely, and repeatedly a fraudster can pull it off. Until recently, this kind of impersonation was expensive and time-consuming, and consequently limited to high-value targets like wealthy individuals and CEOs. Today, a workflow can spin up thousands of impersonation agents with minimal oversight, and those agents can autonomously call the customer service numbers of a person's known accounts or even create new accounts. Only a handful need to succeed to justify the cost of the attack.
How can we combat identity fraud?
Just as before AI arrived to supercharge fraudsters' operations, “Cybersecurity 101” is your best bet. Your data is already out there, and you can't put the toothpaste back in the tube. But you can make sure your accounts are adequately protected against the most obvious attacks.
Multi-factor authentication is the single most important step anyone can take here. With it, any critical account activity goes straight to your phone, and suspicious logins or attempts to change your password will show up in your email. Don't ignore these warnings or mark them as spam, even (especially!) if you're getting a lot of them.
AI-generated deepfakes and blackmail
Perhaps the most frightening form of AI scam is the possibility of blackmail using deepfake images of you or a loved one. You can thank the fast-moving world of open image models for this futuristic and terrifying prospect. People interested in certain aspects of cutting-edge image generation have created workflows not only for rendering naked bodies, but for attaching them to any face they can get a photo of. There's no need to elaborate on how this is already being used.
One unintended consequence is an extension of the scam commonly known as “revenge porn,” more accurately described as the non-consensual distribution of intimate images (though, like “deepfake,” the original term may prove hard to displace). When someone's private images are exposed, whether through a hack or a vengeful ex, a third party can use them for blackmail, threatening to publish them widely unless they're paid.
AI supercharges this scam by removing the need for any real intimate images to exist in the first place: anyone's face can be attached to an AI-generated body. The results aren't always convincing, but even pixelated, low-resolution, or partially obscured images can be enough to fool you or others. And that can be enough to scare someone into paying to keep them secret. As with most blackmail scams, though, the first payment is unlikely to be the last.
How can we combat AI-generated deepfakes?
Unfortunately, we're headed toward a world where fake nude images of pretty much anyone are available on demand. It's scary, it's weird, it's gross, but unfortunately, the cat is out of the bag.
Nobody is happy about this situation except the bad guys. But there are a few things going for us potential victims. It may be cold comfort, but these images aren't really of you, and it doesn't take actual nude photos to prove that. Image models may render bodies that are realistic in some sense, but like any generative AI, they only know what they've been trained on. So the fakes may lack identifying marks, for instance, or be obviously wrong in other ways.
The threat is unlikely to ever go away entirely, but there are now more ways for victims to legally compel image hosts to remove images and ban scammers from hosting sites, and as the problem grows, so too will the legal and private tools available to combat it.
TechCrunch is not a lawyer, but if you are a victim, tell the police. This isn't just a scam; it's harassment. You can't expect the police to do the kind of deep internet sleuthing needed to track someone down, but these cases do sometimes get resolved, or the scammers are spooked by requests sent to their ISP or forum host.