To cleanse the palate, the latest in a continuing series on how “photographic evidence” increasingly isn’t evidence of anything. I recommend The Verge’s piece on this, mainly because the first image at the link illustrates vividly how far this technique has progressed in just four years. The 2014 version looks like a crude proof of concept that AI can be taught to “blend” two human faces to create something new. The 2018 version is completely indistinguishable from reality. And I do mean completely.
Big Think explains the technology, which relies on what AI researchers call “generative adversarial networks.” They take one computer and teach it to blend images into a chimera; they take another and teach it to distinguish chimeras from real images. The first computer learns by trial and error in attempting to fool the second. The Pentagon is hard at work on ways to tell real images from fakes, since the ability to expose disinformation of this caliber is obviously a potential national-security priority. But that raises a question, as Big Think notes: If the AI gets better at producing fakes whenever its fakes are sniffed out, won’t it eventually use the Pentagon’s defensive tools to produce a perfect, undetectable fake? What happens when you have supercomputers not only playing chess with each other but improving their skills each time they play?
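If you want a feel for that adversarial trial-and-error loop, here is a minimal sketch in plain NumPy. Everything in it is an illustrative assumption, not Nvidia’s actual training code: instead of images, the “generator” just shifts random noise by a learned offset, and the “discriminator” is a one-variable logistic classifier trying to tell real samples (drawn near 4.0) from fakes. Each side takes gradient steps against the other, exactly the cat-and-mouse dynamic described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = 0.0          # generator parameter: fake sample = noise + theta
w, b = 0.1, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0)    # one "real" data point
    noise = rng.normal(0.0, 1.0)
    fake = noise + theta           # one generated data point

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: nudge theta so the discriminator scores fakes as real.
    d_fake = sigmoid(w * (noise + theta) + b)
    theta += lr * (1 - d_fake) * w

# theta should have drifted up from 0 toward the real mean of 4.0
print(round(theta, 2))
```

The same loop with convolutional networks in place of these two scalar models is, in spirit, what produces the faces in the article: the generator improves precisely because the discriminator keeps catching it.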
As I understand the methodology from this piece, the AI takes two photos of real people and scrutinizes their faces for three tiers of variables: coarse features (e.g., facial shape), middle ones (the shape of facial components), and fine ones (skin tone, hair). Once it has the data, those variables can be adjusted however you like to produce fake but completely plausible-seeming chimeras. You’ll see what I mean in the clip, but the third image at The Verge makes it clear at a glance. You can make a matrix of faces, with one set of source faces across the top row and another down the side column, and the AI will plug in a hybrid face for every combination. It’s the technologically ultra-advanced version of the game “if A and B had a baby, it would look like this.”
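The “matrix of faces” idea can be sketched in a few lines. This is a toy analogy, not the real system: each face here is reduced to three labeled traits standing in for the coarse, middle, and fine variables, and every cell of the grid combines the row face’s coarse and middle traits with the column face’s fine traits. The actual technique mixes learned latent vectors, not strings.

```python
from itertools import product

# Made-up trait values standing in for learned coarse/middle/fine variables.
faces = {
    "A": {"coarse": "oval shape", "middle": "wide-set eyes", "fine": "fair skin"},
    "B": {"coarse": "square shape", "middle": "narrow nose", "fine": "dark hair"},
    "C": {"coarse": "round shape", "middle": "full lips", "fine": "freckles"},
}

def hybrid(row_face, col_face):
    """Take coarse and middle variables from one parent, fine from the other."""
    return {
        "coarse": faces[row_face]["coarse"],
        "middle": faces[row_face]["middle"],
        "fine": faces[col_face]["fine"],
    }

# Build the full grid: one hybrid for every (row, column) combination.
grid = {(r, c): hybrid(r, c) for r, c in product(faces, faces)}

print(grid[("A", "B")])
# {'coarse': 'oval shape', 'middle': 'wide-set eyes', 'fine': 'dark hair'}
```

Three source faces on each axis yield nine hybrids, which is why the grid in the article fills in so quickly: the combinations come for free once the variables are separated.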
God only knows what Russia and China will do with it. In the meantime, per Tom’s Guide, mundane possible uses range from “paradigm-changing synthetic free-to-use image search pages that may be the end of stock photo services to people accurately previewing hair styling changes. And of course, porn.” The “deep fakes” trend arrived at porn sites months ago, in fact. Huxley forgot the part in “Brave New World” where you get to “build” the virtual stars of your fantasies.
The real saving grace of this technology: If catastrophe should befall us and the world’s cats end up going extinct, Nvidia can generate “new” cat photos for the Internet unto eternity.