SCIENTISTS FOOL DEEPFAKE DETECTORS

Deepfakes – doctored videos that show people saying things they never actually said and, often, never would say – have become a growing scourge of social media. Computer engineers quickly put together "deepfake detector" software to alert viewers when a video is phony.
Now researchers at the University of California, San Diego have shown how easy it can be to fool a deepfake detector.
The detectors were tripped up when the researchers inserted something called an "adversarial example" into each frame of the video. An adversarial example is a subtly altered input, in this case an image, crafted to make a machine-learning model misclassify it.
With an adversarial example embedded in every frame, the detection software erred on each frame and consistently judged the video to be authentic.
For example, natural human eye movements are particularly difficult to replicate in a deepfake, so detectors often focus on eye-blinking to judge whether a video is genuine. To defeat such a detector, saboteurs could insert perturbations that disrupt its reading of eye movements.
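To make the idea concrete, here is a minimal sketch, in PyTorch, of the kind of per-frame perturbation involved. It uses the standard one-step gradient-sign method, which is not necessarily the exact technique the San Diego team used; the detector network, the frames tensor, and the class labels are assumptions for illustration.

# Conceptual sketch (not the researchers' code): nudge every frame of a video
# so that a hypothetical deepfake detector classifies it as "real".
import torch
import torch.nn.functional as F

def perturb_frames(detector: torch.nn.Module,
                   frames: torch.Tensor,        # (num_frames, 3, H, W), pixel values in [0, 1]
                   epsilon: float = 2.0 / 255) -> torch.Tensor:
    """Return frames shifted toward the 'real' class by one gradient-sign step."""
    real_label = torch.zeros(frames.size(0), dtype=torch.long)  # assume class 0 means "real"
    frames = frames.clone().requires_grad_(True)

    # Loss measured against the target ("real") label for every frame.
    loss = F.cross_entropy(detector(frames), real_label)
    loss.backward()

    # Step against the gradient to push each frame toward "real", keeping the
    # change per pixel imperceptibly small (bounded by epsilon).
    adversarial = frames - epsilon * frames.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

Because each pixel moves by at most epsilon, the altered frames look unchanged to a human viewer even though the detector's verdict flips.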
In tests, the technique defeated the detector 99 percent of the time.
Compressing and then re-expanding a video can often erase adversarial examples, but the San Diego group's method left the perturbations intact 85 percent of the time even after the videos were compressed and decompressed.
TRENDPOST: There is less and less reason to trust the authenticity of anything posted on social media. The future will bring an escalation of the constant war between hackers who falsify and manipulate data of all kinds and their opponents, with malicious actors always leading by a length.
