AI DEEPFAKES AID AND ABET PHONE SCAMMERS

Most of us have seen emails purporting to be from a relative or close friend: “I was traveling in a foreign country and I got arrested and I need $3,000 bail to get out of jail! Please, grandma!”

Families who fall for the scam lose an average of $11,000 each time, according to the FBI. These “imposter scams” fleece Americans of an estimated $11.6 billion a year.

More families may be conned now, thanks to the ability of artificial intelligence (AI) to clone voices after hearing just a small sample of a person’s speech and create “deepfakes” of people in distress.

CNN recently told the story of a woman who received a phone call. When she answered, she heard her 15-year-old daughter—who was away training for a ski race—sobbing, saying, “Mom, I messed up!”  

A male voice then said, “I have your daughter. You call the police, you call anybody, I’m gonna pop her so full of drugs. I’m gonna have my way with her then drop her off in Mexico, and you’re never going to see her again.” The threat was followed by a million-dollar ransom demand.

Fortunately, a call to the daughter confirmed that she was fine and unharmed, and that she didn’t understand why her mother was so upset.

But the woman was certain it was her daughter’s voice. “A mother knows her child,” she told CNN. “You can hear your child cry across the building and you know it’s yours.” The daughter’s “voice was so real, and her crying and everything was so real.”

“A reasonably good [voice] clone can be created with under a minute of audio and some are claiming that even a few seconds may be enough,” according to computer scientist Hany Farid, part of the University of California, Berkeley’s AI research group. “The trend over the past few years has been that less and less data is needed to make a compelling fake.”

Many voice samples are captured from people’s social media posts, and AI software can clone a voice for as little as $5, even for someone with minimal skills, he added.

TRENDPOST: AI deepfakes make it increasingly difficult, if not impossible, to tell the real from the false.

Computer scientists and AI developers are trying to find ways to thwart these kinds of deepfakes, whether they appear in fake kidnapping scams, political ads, or other contexts. However, it is unlikely that any reliable “tell” can be built in to distinguish a cloned voice from the real thing.