What do you expect Facebook or WhatsApp to do about fake news in India... or anywhere?

The entire discussion about fake news has seeped into the national consciousness over the past couple of years and it’s rapidly spreading around the globe. In some ways, this was an important and long overdue conversation because of the continually evolving way that everyone consumes news. Some of what you get from the major, established news networks is simply wrong, and corrections can be slow in coming and poorly publicized compared to the original claim. Even the articles which are technically accurate are generally infused with spin, and the stories that don’t receive any coverage often say as much about an outlet as the ones it does decide to publish.

Moving further afield from traditional sources, there’s a lot of “news” out there which is little more than propaganda and flat-out deception. This isn’t unique to Facebook and Twitter, either. The WaPo has a piece out this week describing issues with messaging apps such as WhatsApp and the problems that people have experienced in India. (Though it’s hardly unique to that nation.)

Americans associate misinformation with Facebook and the ways it shaped debate around the 2016 presidential election. But in other countries, falsities are just as likely to spread on private messaging services — sometimes with deadly consequences.

At least two dozen people have been killed in mob lynchings in India since the start of the year, their deaths fueled by rumors that spread on WhatsApp, the Facebook-owned messaging service. In Brazil, messages on WhatsApp falsely claimed a government-mandated yellow-fever vaccine was dangerous, leading people to avoid it. And as Mexico was heading into its presidential election this month, experts there called WhatsApp the ugly underbelly of the country’s news environment, a place where politically misleading stories, memes and messages can spread unchecked.

On WhatsApp, with 1.5 billion users, information can go viral in minutes as individuals forward messages along to their friends or groups, without any way to determine its origin.

I’m certainly not going to deny that this is happening and it can clearly cause very real (and occasionally deadly) problems for people. This is yet another example of why this was an important discussion to have. Consumers of news need to be savvier and, yes, skeptical when they see something shocking showing up in the news feeds on their phones or laptops.

The problem comes when we begin asking what Facebook (which owns WhatsApp) and Twitter and all the rest are going to do about it. In response to the problems in India and similar ones in Mexico during their recent elections, WhatsApp is “taking steps to root out misinformation.” On Facebook, Mark Zuckerberg has repeatedly pointed to all the resources he’s poured into “combating viral fake news.”

And what does that get us? It’s physically impossible to assign enough human beings to monitor even a tiny fraction of a percent of all the updates loaded onto Facebook and WhatsApp at any given moment. That leaves these companies with the option of building their own tools to automatically scan posts and make automated decisions based on keywords and the like. Of course, the people designing the parameters for those algorithms bring their own biases, so one side winds up being “monitored” a bit more closely than the other.

You’re combating a problem for which there is no viable solution, and approaching it from the wrong point of view. I’ve brought this up here before, but it’s worth mentioning again. Facebook (along with similar social networks) is the digital era’s equivalent of a corkboard in a college dorm, except the dormitory holds billions of people. If somebody pins a piece of paper to it claiming there’s poison in the yellow fever vaccine, do you go after the kid who wrote the note or the guy who manufactured the corkboard? The obvious villain is the author, but with billions of authors to monitor, your only remaining option is to take down the corkboard entirely.

The digital genie is out of the bottle and nobody can put it back in. There’s clearly an appetite for these social media platforms, but policing them is simply an impossible job. For the worst offenders who cause actual harm, don’t just suspend their accounts. Take them to court as a lesson to others. But even that is unlikely to slow the flood of fake news. Unless you want to go back to the pre-internet era (which is looking more and more attractive, though impossible), the onus is on the consumer to monitor what they read and pass along. But you should also keep in mind that plenty of people definitely know – or should at least suspect – that they’re promulgating fake news, and will continue to do so anyway if it serves their agenda.