For years, software designers and tech-savvy pundits have warned that computer technology and artificial intelligence (AI) are approaching the point at which anyone could fabricate videos, audio recordings, and photos so realistic they would fool literally anyone.
So-called “deepfakes” have been around for a while, but most have had a “tell”—something hinting they were not real, whether an inflection in a voice, a slight blur in an image, or some other clumsy detail that couldn’t be ironed out.
They also were complicated and time-consuming to produce and required expertise to get right.
But, thanks to the new generation of AI, deepfakes can now be made not only quickly but also at such high quality that they are routinely indistinguishable from authentic events unless you can dig into the software code that created them.
We’ve already seen a faked video of Joe Biden denouncing transgender people in a speech and another fake purporting to show children learning satanism in libraries.
Thanks to AI, we also were able to see Trump’s mug shot when he was booked in New York on a charge of falsifying business records. Actually, Trump took no mug shot—but that didn’t stop someone using AI from whipping one up on the cheap in a few seconds.
Want to make a video showing Joe Biden groping Melania Trump and her moaning with pleasure? The pope giving the Nazi salute and crying “Sieg Heil”? No problem. Neither Donald Trump nor the College of Cardinals could find a reason to deny their authenticity.
Actually, though, there are plenty of problems.
With a brief sample of a voice or a bit of video, today’s AI can turn out perfect images and audio files in a few seconds, no special skills required.
Loaded onto social media platforms and beamed to a gullible public, “AI can not only rapidly produce targeted campaign emails, texts, or videos; it also could be used to mislead voters, impersonate candidates, and undermine elections on a scale and at a speed not yet seen,” the Associated Press reported in a 14 May investigation.
“We’re not prepared for this,” A.J. Nash, vice president of cybersecurity firm ZeroFox, told the AP. “The big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale and distribute it on social platforms, it’s going to have a major impact.”
Among the kinds of dangers experts have publicly warned about: videos, with audio, of a political candidate speaking fondly of child pornography or expressing extreme views the candidate has never espoused. You might see your local television news anchor reporting that Candidate X has had a stroke or ended the campaign, and yet the entire video could be a work of fiction.
An AI could make robocalls in the voice of Elon Musk or Tucker Carlson, address you by name, chat briefly, then make an impassioned pitch for you to vote for a particular candidate.
“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” Oren Etzioni, former CEO of the Allen Institute for AI, asked the AP. “A lot of people would listen. But it’s not him.”
Donald Trump’s presidential campaign has already availed itself of the newer, better deepfake tech. The group doctored a video and used a voice-cloning program to invent a video of CNN anchor Anderson Cooper having a reaction he never really had to Trump’s performance at a CNN town hall earlier this month.
Trump then posted the fake video on his Truth Social platform and blasted it out to his followers.
The Republican National Committee (RNC) also has gotten into the game with an online ad asking, “What if the weakest president we’ve ever had was re-elected?”
There followed a series of nightmare images, created by AI, of Taiwan being attacked, abandoned storefronts in a collapsed American economy, and military troops patrolling city streets as criminals and immigrants run rampant.
The RNC said the ad is “an AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.”
At least the committee acknowledged the images were artificially generated. Not everyone will, warned Petko Stoyanov, chief technology officer at the Forcepoint cybersecurity company.
AI gives foreign governments and espionage agencies a whole new arsenal of tools with which to sow chaos in American election campaigns, he said.
“What happens if an international entity—a cybercriminal or a nation-state—impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said in an AP interview. “We’re going to see a lot more misinformation from international sources.”
In reality, AI already has been at work for years in political operations—not making deepfakes but mining data and handling the tedium of selecting specific voters to target with a social media message or finding the current whereabouts of past donors.
The staff of Authentic, a digital ad agency working with progressive candidates and causes, uses ChatGPT “every single day,” CEO Mike Nellis told the AP. However, any content created by the AI gets a human review before it’s released.
Most tech executives and observers, as well as politicians, agree that human review alone is far too little.
Sam Altman, CEO of OpenAI, the company that created ChatGPT, urged Congress on 9 May to regulate the technology.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.
Legislation already proposed in the U.S. House would require candidates to label campaign ads made using AI. The same bill would require synthetic images to carry a watermark identifying them as such.
“It’s important that we keep up with the technology,” Rep. Yvette Clarke (D-NY), who introduced the bill, told the AP.
“We’ve got to set up some guardrails,” she insisted. “People can be deceived, and it only takes a split second. People are busy and don’t have the time to check every piece of information. AI being weaponized, in a political season, could be extremely disruptive.”
Earlier this month, a trade organization for political consultants decried the use of deepfakes in political ads, saying they are “a deception” with “no place in legitimate, ethical campaigns.”
TRENDPOST: Our 23 April 2023 update on deepfakes was titled “It’s Official: Social Media is Now Worthless.” The newest generation of AI underscores our point: any photo, audio, or video—or claim from friends or influencers—on social media is as likely to be a lie as to be true.
Whether people will remember that in the excitement of the moment is not something to bet on. The old saying is even truer in the Internet Age: a lie can run around the world before the truth can tie its shoes.
To be informed of facts, there is no alternative but to find news sources that report facts to the best of their ability—yes, there are some—and to know the difference between news stories meant to inform you and those inventing “facts” or events to manipulate your mind.