Sure, GANs are getting better all the time. But for now it’s still easier to detect a fake than it is to produce a convincing one. Today’s GANs are good at faces, Riedl explained. But they get sloppy around complex, moving materials. Look closely at the subject’s hair in a deepfake video. You should be able to spot telltale distortions.
It’s possible to automate this scrutiny. Social-media companies and users could deploy discriminators to sift through media on a network, looking for the pixelation and other digital fingerprints of GAN-produced fakes. In September, Google released a trove of 3,000 old deepfakes, like targets at a shooting range, to boost efforts to identify newer fakes.
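To make the idea concrete, here is a toy sketch of that kind of automated screening. The scoring function below is a crude hypothetical stand-in for a trained GAN discriminator: it just measures how "blocky" a pixel sequence is, whereas real detectors learn such fingerprints from data. All names (`artifact_score`, `screen_feed`) and the sample data are invented for illustration.

```python
def artifact_score(pixels):
    """Hypothetical stand-in for a GAN discriminator: counts how often
    adjacent pixel values repeat exactly, a crude proxy for the blocky
    artifacts some generators leave behind."""
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:])]
    return sum(1 for d in diffs if d == 0) / max(len(diffs), 1)

def screen_feed(items, threshold=0.5):
    """Flag media items whose artifact score exceeds a threshold."""
    return [name for name, pixels in items if artifact_score(pixels) > threshold]

# Invented example feed: one smooth gradient, one blocky upscale.
feed = [
    ("smooth_gradient.jpg", [0, 1, 2, 3, 4, 5, 6, 7]),
    ("blocky_upscale.jpg",  [0, 0, 0, 0, 9, 9, 9, 9]),
]
print(screen_feed(feed))  # → ['blocky_upscale.jpg']
```

A production system would run a neural-network discriminator over frames at scale; the structure, score each item and flag outliers, is the same.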
Plus, there are methods of countering deepfakes that don’t rely solely on code. Purveyors of deepfakes need social-media accounts and unscrupulous media outlets to help spread the malicious content, Chau explained.
And that exposes the purveyor of the deepfake to social-media analysis, Riedl said. “You can analyze that and find out where these things originate from, look at the network around this person and find out if they’re an artificial entity or linked to known groups.”
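The analysis Riedl describes can be sketched in a few lines: trace which account shared a clip first, then walk the network around that account to see whether it connects to known bad actors. Everything here, the account names, the share log, the follow graph, is invented for illustration, and real platforms would use far richer signals.

```python
from collections import deque

# Hypothetical data: who shared one deepfake clip, and when.
shares = [("@origin_bot", 1), ("@amplifier_1", 2),
          ("@amplifier_2", 2), ("@casual_user", 5)]

# Hypothetical follow graph (account -> accounts it follows).
follows = {
    "@amplifier_1": ["@origin_bot"],
    "@amplifier_2": ["@origin_bot"],
    "@origin_bot":  ["@known_troll_farm"],
    "@casual_user": ["@amplifier_1"],
}
known_groups = {"@known_troll_farm"}

def earliest_sharer(shares):
    """Find where the content originated: the first account to post it."""
    return min(shares, key=lambda s: s[1])[0]

def linked_to_known_group(account, follows, known, max_hops=2):
    """Breadth-first search outward from an account to see whether it
    reaches a flagged group within a few hops."""
    seen, frontier = {account}, deque([(account, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node in known:
            return True
        if hops < max_hops:
            for nxt in follows.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, hops + 1))
    return False

origin = earliest_sharer(shares)
print(origin, linked_to_known_group(origin, follows, known_groups))
# → @origin_bot True
```

The point is not the toy graph but the shape of the technique: provenance (who posted first) plus network context (who they are connected to) can expose a coordinated campaign even when the video itself is hard to debunk.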