As in the past, the Russian advertisements did not create ethnic strife or political divisions, either in the United States or in Europe. Instead, they used divisive language and emotive messages to exacerbate existing divisions. It is also enormously misleading to name “Russia” as the source of the problem. The old KGB had whole departments devoted to the invention of rumors and the creation of fake extremists; the KGB’s institutional descendants simply realized, sooner than most, that social-media campaigns are a cheap way for an impoverished ex-superpower to meddle in other countries’ politics. But in 2016, they were only one of many groups — among them the Trump campaign and a whole network of conspiracy-minded and alt-right trolls — that built targeted Facebook groups and bought divisive advertisements aimed at carefully sliced and segmented bits of the population.
The real problem is far broader than Russia: Who will use these methods next — and how? If Russians worked out how to create fake “Black Lives Matter” Twitter accounts, why can’t others? I can imagine multiple groups, many of them proudly American, who might well want to manipulate a range of fake accounts during a riot or disaster to increase anxiety or fear. I can imagine a lot of people who might want to take control of Defense Department accounts, as Russian hackers also tried to do, to send false information during a military conflict. There is no big barrier to entry in this game: It doesn’t cost much, it doesn’t take much time, it isn’t particularly high-tech, and it requires no special equipment. Facebook, Google and Twitter, not Russia, have provided the technology to create fake accounts and false advertisements, as well as the technology to direct them at particular parts of the population. Many other countries and political groups — on the left, the right, you name it — will quickly figure out how to use them.