Whether it’s drafting emails, creating concept art, or conning vulnerable people into thinking you’re a friend or relative in need, AI is versatile. But we all want to avoid getting scammed, so let’s take a moment to talk about what you should be careful of.
Over the past few years, not only has the quality of generated media, from text to audio to images to video, improved dramatically, but it has also become cheaper and easier to create. The same kinds of tools that help concept artists dream up fantasy monsters and spaceships, or help non-native speakers improve their business English, can also be misused.
Don’t expect the Terminator to come knocking on your door selling you a Ponzi scheme: these are the same old scams we’ve been facing for years, but with a generative AI twist that makes them easier, cheaper, and more convincing.
This is not an exhaustive list – just some of the most obvious tricks that AI can enhance. We’ll add more as new ones emerge, along with any additional steps you can take to protect yourself.
Voice cloning of family and friends
Synthetic voices have been around for decades, but only in the past year or two have advances in technology made it possible to generate new voices from just a few seconds of audio. This means that anyone whose voice has been made public in news reports, YouTube videos, or on social media is at risk of having their voice replicated.
Scammers can and do use this technique to create convincing fakes of loved ones and friends. Of course, they can get the fakes to say anything, but when used to scam people, they’re most likely to create audio clips pleading for help.
For example, a parent might receive a voicemail from an unknown number claiming to be their son, saying their luggage was stolen while they were traveling, that they’ve borrowed someone’s phone, and asking Mom or Dad to send money to this address, Venmo account, business, etc. It’s easy to imagine variations, like car trouble (“they won’t release my car until someone pays”) or health issues (“this treatment isn’t covered by insurance”).
This kind of scam has already been perpetrated using President Biden’s voice. The perpetrator was caught, but scammers will likely be more careful in the future.
How can we combat voice cloning?
First of all, don’t bother trying to spot a fake voice by ear: the technology is improving constantly, there are many ways to mask quality issues, and even experts are fooled!
Anything coming from a number, email address, or account you don’t recognize should automatically be considered suspicious. If someone claims to be a friend or loved one, contact them through the channel you’d normally use – they’ll probably tell you they’re fine and that the message was (as you guessed) a scam.
Scammers tend not to follow up if ignored, while family members probably will, so it’s fine to leave a suspicious message on read while you consider it.
Personalized phishing and spam emails and messages
Everyone receives spam emails from time to time, but text generation AI makes it possible to send mass emails that are tailored to each individual. Data breaches are becoming more common, exposing a lot of personal data.
It’s common to receive a bare-bones scam email with a plainly suspicious attachment that says “click here to see your invoice.” But add a bit of context – recent locations, purchases, habits, and so on – to make it seem like a real person or a real problem, and it suddenly becomes far more believable. Armed with a few personal details, language models can customize the boilerplate of these emails for thousands of recipients in a matter of seconds.
So what used to be “Dear Customer, invoice attached” becomes something like “Hi Doris! This is the Etsy Promotions team. You’re currently getting 50% off an item you recently viewed! Use this link to claim your discount, and shipping to your Bellingham address is free.” A simple example, but still: with a real name, shopping habits (easy to find out), and general location (likewise), the message suddenly becomes much more convincing.
That’s still just spam, after all. But this kind of customized spam once had to be written by low-paid workers at overseas content farms. Now it can be done at scale by an LLM with better prose skills than many professional writers.
How can you combat email spam?
As with traditional spam, vigilance is your best weapon, but don’t expect to be able to distinguish generated text from human-written text: very few humans can, and (despite what some companies and services claim) neither can any AI model.
The text may have improved, but this type of scam is fundamentally unchanged: it’s about getting you to open a sketchy attachment or link. As always, don’t click or open anything unless you’re 100% sure of the sender’s authenticity and identity. If you have any doubt at all (and that’s a good instinct to cultivate), don’t click. And if you know someone knowledgeable, forward the message to them for a second pair of eyes.
“Fake you” identity fraud
Given the number of data breaches that have occurred over the past few years (thanks Equifax!), it’s safe to say that nearly everyone has a significant amount of personal data floating around on the dark web. If you follow good online security practices, changing your passwords and enabling multi-factor authentication will mitigate a lot of the danger. However, generative AI could pose a serious new threat in this space.
With a wealth of data about individuals available online, and often just one or two audio clips of that person, it is becoming increasingly easy to create an AI persona that sounds similar to the person in question and has access to many of the facts used to verify their identity.
Think about it this way: What do you do if you have trouble logging in, can’t configure your authenticator app, or lose your phone? You probably call customer service, which “verifies” your identity using trivial details like your date of birth, phone number, or Social Security number. Even more advanced methods like a “selfie” are becoming easier to game.
A customer service agent (quite possibly an AI itself!) may well oblige this fake you, granting it all the access you’d have if you had called in yourself. There’s a lot it can do from that position, and none of it is good.
As with other attacks on this list, the danger of this impersonation attack is not in how realistic the impersonation is, but in the ease with which a fraudster can carry out this type of attack widely and repeatedly. Until recently, this type of impersonation attack was expensive, time-consuming, and consequently limited to high-value targets such as wealthy individuals or CEOs. Today, workflows can be built to create thousands of impersonation agents with minimal oversight, and these agents can automatically call customer service numbers for an individual’s known accounts or even create new accounts. Only a handful of agents need to be successful to justify the cost of the attack.
How can we combat identity fraud?
As with the other scams on this list that AI has supercharged, your best defense is basic cybersecurity hygiene. Your data is already out there, and you can’t put the toothpaste back in the tube. But you can make sure your accounts are properly protected against the most obvious attacks.