SCIENTISTS FOOL DEEPFAKE DETECTORS

Deepfakes – doctored videos that show people saying things they never actually said and, often, never would say – have become a growing scourge of social media. Computer engineers quickly put together “deepfake detector” software to warn viewers when a video is phony.
Now researchers at the University of California at San Diego have shown how easy it can be to fake out a deepfake detector.
The detectors were tripped up when the researchers inserted something called an “adversarial example” into each frame of the video. An adversarial example is an input – here, a subtly altered image – engineered to make a machine-learning system reach the wrong conclusion.
With an adversarial example embedded in every frame, the software designed to detect a deepfake misjudged each frame, consistently rating the doctored video as authentic.
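The researchers describe gradient-based attacks on detectors in both white-box and black-box settings; their exact method is not reproduced here. Purely to illustrate the general idea, the rough sketch below (in Python, using PyTorch) applies a simple gradient-sign perturbation to a single frame against a hypothetical binary detector. The detector model, the frame format, and the epsilon value are all assumptions for illustration, not details taken from the study.

# Rough sketch of a gradient-sign adversarial perturbation on one frame.
# Assumption: `detector` is a hypothetical PyTorch model that returns a
# single logit per frame, with a positive logit meaning "fake."
# The epsilon value is illustrative only.
import torch

def perturb_frame(detector, frame, epsilon=0.01):
    # frame: float tensor of shape (1, 3, H, W) with values in [0, 1]
    frame = frame.clone().detach().requires_grad_(True)
    logit = detector(frame)
    # Loss measures distance from the "real" label (0), so minimizing it
    # pushes the detector's prediction toward "real."
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        logit, torch.zeros_like(logit))
    loss.backward()
    # Step against the gradient, keeping pixel values in the valid range.
    adv = (frame - epsilon * frame.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()

In an attack of the kind described, a perturbation like this would be applied to every frame before the video is re-encoded.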
For example, deepfakes have a particularly hard time replicating natural human eye movements, so deepfake detectors often focus on eye-blinking to judge whether a video is genuine.
To defeat such a detector, saboteurs could add perturbations that disrupt the detector’s reading of eye movements.
In tests, the technique defeated the detector 99 percent of the time.
Compressing and then re-expanding a video often erases adversarial examples. But the San Diego group’s perturbations survived 85 percent of the time even after the videos were compressed and uncompressed.
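One simple way to probe whether a perturbation survives compression (not the authors' own evaluation, which used video compression) is to round-trip a perturbed frame through a lossy codec and run the detector again. The sketch below uses JPEG via the Pillow library purely as a stand-in for that kind of test; the frame format and quality setting are assumptions.

# Illustrative round-trip through lossy compression to test whether a
# perturbed frame still fools a detector. JPEG on a single frame is a
# stand-in here; the study itself evaluated video compression.
import io
import numpy as np
from PIL import Image

def roundtrip_jpeg(frame_array, quality=75):
    # frame_array: uint8 RGB image of shape (H, W, 3)
    buf = io.BytesIO()
    Image.fromarray(frame_array).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))

Feeding the round-tripped frame back to the detector shows whether the perturbation still changes its verdict after compression.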
TRENDPOST: There is less and less reason to trust the authenticity of anything posted on social media. The future will bring an ever-expanding war between hackers who falsify and manipulate data of all kinds and those working to expose them, with malignant actors always leading by a length.
