Software designed to flag phony images generated by artificial intelligence let a photo of Elon Musk kissing a humanoid robot pass as genuine; it wasn’t.
Neither was the antique-looking photo of two adults standing beside a yeti and barely reaching its waist. The software judged that one to be genuine as well.
The detectors, also called “discriminators,” look for technical indicators, such as unusual pixel patterns or implausible shifts in contrast between parts of an image.
However, they can’t judge the larger context. No technical check, for example, told the software that a picture of Elon Musk planting a wet one on a sexy-looking humanoid robot was unlikely to be real.
Also, as AI image generators grow more sophisticated, those technical red flags are likely to become ever subtler.
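For technically minded readers, the sketch below illustrates, in deliberately simplified form, one kind of pixel-level statistic a detector might examine: the share of an image’s energy at high spatial frequencies, where some generators leave periodic upsampling artifacts. The heuristic, the cutoff radius, and the file name are illustrative assumptions, not any vendor’s actual method.

```python
# A toy spectral check: some AI upsamplers leave periodic "checkerboard"
# energy in an image's frequency spectrum. Purely illustrative; the band
# cutoff and the whole heuristic are assumptions, not a real detector.
import numpy as np
from PIL import Image

def spectral_artifact_score(path: str) -> float:
    """Return the fraction of spectral energy far from the spectrum's center."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)       # distance from DC component
    high = spectrum[radius > min(h, w) / 4].sum()   # outer (high-frequency) band
    return high / spectrum.sum()                    # some generated images score higher

print(f"high-frequency energy fraction: {spectral_artifact_score('photo.jpg'):.3f}")
```

Real detectors combine many such signals inside trained classifiers; no single statistic like this one is decisive on its own.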
“In general, I don’t think [the detectors] are great and I’m not optimistic they will be,” Chenhao Tan, a computer scientist and AI specialist at the University of Chicago, told The Wall Street Journal.
“In the short term, they may be able to perform with some accuracy,” he acknowledged, “but in the long run, anything that humans do with images, AI will be able to recreate as well and it will be very difficult to distinguish the difference.”
The reason: image generators are always being upgraded to foil the latest generation of detectors.
“Every time someone builds a better generator, someone builds a better discriminator and then people use the better discriminator to build a better generator,” computer engineering professor Cynthia Rudin at Duke University explained to the WSJ. “The generators are designed [specifically] to fool a detector.”
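The arms race Rudin describes is essentially the training loop of a generative adversarial network, in which the two models sharpen each other in alternation. The toy PyTorch sketch below shows that alternation; the data, network sizes, and hyperparameters are placeholders for illustration, not any real system.

```python
# A skeletal version of the adversarial loop: the discriminator learns to
# tell real from generated samples, and the generator is then trained
# against the discriminator's current judgments. Toy 1-D data, tiny nets.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))   # generator
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + torch.tensor([4.0, 4.0])  # stand-in "real" data
    fake = G(torch.randn(64, 8))

    # 1) Build a better discriminator: score real high, generated low.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Build a better generator: produce samples the improved discriminator accepts.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each pass through step 2 trains the generator specifically to fool the discriminator from step 1, which is exactly why today’s detection wins tend to be temporary.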
Detectors also struggle with images that are low-quality or have been manipulated in some way. Introducing a bit of graininess, even at a level humans can barely perceive, can flummox a detector.
“If you distort it, resize it, lower the resolution, by definition you’re altering those pixels and that additional digital signal [of genuineness] is going away,” Kevin Guo, founder of Hive, a maker of AI image-detection software, said in a WSJ interview.
When photos are posted online and then copied from site to site, they may be cropped, resized, or otherwise altered, confusing detectors. Adobe’s new AI-enabled version of Photoshop often has the same effect, the WSJ noted.
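Guo’s point is easy to reproduce. Each of the everyday transformations in this minimal sketch rewrites the very pixels a detector inspects; it assumes Pillow and NumPy, and the file names and parameters are illustrative.

```python
# Three routine transformations that each alter pixel-level signals:
# downscaling, lossy re-saving, and faint added grain.
import numpy as np
from PIL import Image

img = Image.open("original.png").convert("RGB")

# 1) Downscale: resampling recomputes every pixel.
img.resize((img.width // 2, img.height // 2)).save("resized.png")

# 2) Lossy re-save: JPEG compression discards fine detail.
img.save("recompressed.jpg", quality=60)

# 3) Faint grain: noise most viewers would never notice.
arr = np.asarray(img, dtype=np.float64)
noisy = arr + np.random.normal(0.0, 2.0, arr.shape)
Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save("grainy.png")
```

None of these steps requires any special skill, which is why an image that has merely circulated online for a while can defeat a detector that would have caught the original.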
A company named Illuminarty created an AI detector to sort real artwork from fakes. The software correctly identified authentic photos of artwork 100 percent of the time, but it also let about half of AI-generated images pass as genuine.
The company said it prefers to err on the side of caution so artists are not falsely accused of duplicity.
AI experts are urging artists and others using AI to create images to embed unique watermarks into their creations as a clear signal that they’ve been digitally generated.
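The simplest way to see what “embedding a watermark” means is a toy least-significant-bit scheme like the one sketched below. It is our own illustration, not any production system; real provenance watermarks are far more robust, and as the comments note, this fragile version would not survive the resizing and recompression described above.

```python
# Toy watermark: hide a marker string in the lowest bit of the first pixels.
# Illustrative only; it breaks under any lossy edit, unlike robust schemes.
import numpy as np
from PIL import Image

MARK = "AI-GENERATED"

def embed(path_in: str, path_out: str) -> None:
    arr = np.asarray(Image.open(path_in).convert("RGB")).copy()
    bits = np.array([int(b) for ch in MARK.encode() for b in f"{ch:08b}"],
                    dtype=np.uint8)
    flat = arr.reshape(-1)                                # view into arr
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits # overwrite lowest bit
    Image.fromarray(arr).save(path_out, format="PNG")     # lossless, keeps the bits

def extract(path: str, n_chars: int = len(MARK)) -> str:
    flat = np.asarray(Image.open(path).convert("RGB")).reshape(-1)
    bits = flat[: n_chars * 8] & 1
    return "".join(chr(int("".join(str(b) for b in bits[i:i + 8]), 2))
                   for i in range(0, bits.size, 8))

embed("generated.png", "marked.png")
print(extract("marked.png"))  # -> "AI-GENERATED"
```

A watermark like this only works if generators embed it voluntarily and downstream platforms preserve it, which is precisely what the experts are urging.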
TRENDPOST: AI detection by computer programs will never be reliable.
It will remain up to people looking at an image to determine for themselves the likelihood that it’s real. However, as we’ve learned too well, gullible people are eager to retweet or otherwise pass along stories and pictures that are exciting or inflammatory without bothering to assess their credibility.
As we’ve said in past stories such as “It’s Official: Social Media is Now Worthless” (4 Apr 2023), AI’s ability to fake pictures and stories has rendered much of social media useless as a place to find accurate information.