WaPo: You know that big gov't-media freakout over "disinformation"? Yeah ... never mind

Does “disinformation” on the Internet actually do anything — even to persuade? When the freak-out over Russia-generated Facebook and Twitter memes began in late 2016, I repeatedly asked one basic question: where is the evidence that these campaigns changed even a single vote?

The answer: No such evidence exists. The memes of Hillary Clinton fighting Jesus turned out not to be game-changers after all, concludes a new study reported by the Washington Post. Voters made up their own minds independent of such trollery, as anyone with two functioning brain cells should have realized all along:

Russian influence operations on Twitter in the 2016 presidential election reached relatively few users, most of whom were highly partisan Republicans, and the Russian accounts had no measurable impact in changing minds or influencing voter behavior, according to a study out this morning.

The study, which the New York University Center for Social Media and Politics helmed, explores the limits of what Russian disinformation and misinformation was able to achieve on one major social media platform in the 2016 elections.

“My personal sense coming out of this is that this got way overhyped,” Josh Tucker, one of the report’s authors who is also the co-director of the New York University center, told me about the meaningfulness of the Russian tweets.

“Now we’re looking back at data and we can see how concentrated this was in one small portion of the population, and how the fact that people who were being exposed to these were really, really likely to vote for Trump,” Tucker said. “And then we have this data to show we can’t find any relationship between being exposed to these tweets and people’s change in attitudes.”

Ah, the old correlation-is-causation fallacy! We see it happen plenty of times, but perhaps only recently has it become so pronounced and so widespread. The assumption behind the “disinformation” panic is that a meme has a butterfly effect of sorts; an image of Hillary arm-wrestling Jesus will somehow set off a network effect that causes her to lose Wisconsin. Never mind that the fact that Clinton never bothered to campaign personally in Wisconsin explains the loss better and more directly, or that her overall national narrative of gender entitlement likely turned off a large number of persuadable voters. Or, as Hillary called them, deplorable voters.

So why didn’t anyone attempt to answer the question about causation from the beginning? Actually, one prominent figure did at least raise the question — Mark Zuckerberg. The Facebook founder scoffed in 2017 at the idea that a handful of trolls and a few hundred thousand dollars in stupid memes had more impact on an election than the candidates and campaigns that spent a combined $2 billion on wall-to-wall advertising and massive direct voter contact. At the time, Zuckerberg challenged the media and Congress to produce some evidence of causation.

It didn’t take long for Zuckerberg to change his tune. But why? Two years ago to the day, author Joseph Bernstein explained that Zuckerberg had realized the financial incentives of joining the “Big Disinfo” government-media industrial complex. Harper’s published an excerpt from Bernstein’s book, Disinformed: How We Get Fake News Wrong, which explains why the media have become so invested in it:

The Commission on Information Disorder is the latest (and most creepily named) addition to a new field of knowledge production that emerged during the Trump years at the juncture of media, academia, and policy research: Big Disinfo. A kind of EPA for content, it seeks to expose the spread of various sorts of “toxicity” on social-media platforms, the downstream effects of this spread, and the platforms’ clumsy, dishonest, and half-hearted attempts to halt it. As an environmental cleanup project, it presumes a harm model of content consumption. Just as, say, smoking causes cancer, consuming bad information must cause changes in belief or behavior that are bad, by some standard. Otherwise, why care what people read and watch?

Big Disinfo has found energetic support from the highest echelons of the American political center, which has been warning of an existential content crisis more or less constantly since the 2016 election. To take only the most recent example: in May, Hillary Clinton told the former Tory leader Lord Hague that “there must be a reckoning by the tech companies for the role that they play in undermining the information ecosystem that is absolutely essential for the functioning of any democracy.”

Somewhat surprisingly, Big Tech agrees. Compared with other, more literally toxic corporate giants, those in the tech industry have been rather quick to concede the role they played in corrupting the allegedly pure stream of American reality. Only five years ago, Mark Zuckerberg said it was a “pretty crazy idea” that bad content on his website had persuaded enough voters to swing the 2016 election to Donald Trump. “Voters make decisions based on their lived experience,” he said. “There is a profound lack of empathy in asserting that the only reason someone could have voted the way they did is because they saw fake news.” A year later, suddenly chastened, he apologized for being glib and pledged to do his part to thwart those who “spread misinformation.”

Why has social media joined forces with the Big Disinfo industrial complex? It suits their narrative, especially when it comes to advertising:

One needn’t buy into Bratich’s story, however, to understand what tech companies and select media organizations all stand to gain from the Big Disinfo worldview. The content giants—Facebook, Twitter, Google—have tried for years to leverage the credibility and expertise of certain forms of journalism through fact-checking and media-literacy initiatives. In this context, the disinformation project is simply an unofficial partnership between Big Tech, corporate media, elite universities, and cash-rich foundations. Indeed, over the past few years, some journalists have started to grouse that their jobs now consist of fact-checking the very same social platforms that are vaporizing their industry.

Ironically, to the extent that this work creates undue alarm about disinformation, it supports Facebook’s sales pitch. What could be more appealing to an advertiser, after all, than a machine that can persuade anyone of anything? This understanding benefits Facebook, which spreads more bad information, which creates more alarm. Legacy outlets with usefully prestigious brands are taken on board as trusted partners, to determine when the levels of contamination in the information ecosystem (from which they have magically detached themselves) get too high. For the old media institutions, it’s a bid for relevance, a form of self-preservation. For the tech platforms, it’s a superficial strategy to avoid deeper questions. A trusted disinformation field is, in this sense, a very useful thing for Mark Zuckerberg.

That set of incentives helps explain why Zuckerberg went from questioning causation to fully buying into the panic. Add to that the political pressure from Congress, which set itself up as a sort of witch-hunt tribunal in 2017 after the election, and the evolution grows clearer. Don’t blame just one political party for that, either; Democrats may have created those incentives, but some Republicans in Congress applied the same pressure in service to their own interests. It’s not difficult to see how Zuckerberg read the writing on the wall in 2017.

That brings us back to today. The disinformation panic that should never have started in the first place is now a fully industrialized feature of our journalistic platforms as well as our political institutions. How do we draw the venom back out of the system? That’s the real question, and until these incentives change, we can’t expect anything else to change either.
