It’s hard to imagine a technology with more power to disrupt. I’m already in the position (as many of you soon will be as well) where anyone can produce believable audio and perhaps video of me saying absolutely anything they want me to say. How can that possibly be fought? More to the point: how are we going to trust anything electronically mediated in the very near future (say, during the next presidential election)? We’re already concerned, rightly or wrongly, with “fake news” — and that’s only news that has been slanted, arguably, by the bias of the reporter or editor or news organization. What do we do when “fake news” is just as real as “real news”? What do we do when anyone can imitate anyone else, for any reason that suits them?

And what of the legality of this process? It seems to me that active and aware lawmakers would take immediate steps to make the unauthorized production of AI deepfakes a felony offence, at least where the fake is being used to defame, damage, or deceive. And it seems to me that we should perhaps throw caution to the wind and make this an exceptionally wide-ranging law. We need to seriously consider the idea that someone’s voice is an integral part of their identity, of their reality, of their person — and that stealing that voice is a genuinely criminal act, regardless (perhaps) of intent. What’s the alternative? Are we entering a future where the only credible source of information will be direct personal contact? What’s that going to do to mass media, of all types? Why should we not assume that the noise-to-signal ratio will creep so high that all political and economic information disseminated broadly will be rendered completely untrustworthy?