Why Facebook can't fix fake news

“There’s a real risk this is doing great harm to the brand,” a Facebook insider told me. This person has been part of recent conversations at the top of the company but asked not to be identified for fear of alienating it. The source said the election aftermath might be Facebook’s “Tylenol moment,” a reference to the 1982 deaths of seven people who ingested Tylenol capsules laced with cyanide. That crisis nearly crippled the drug’s maker, medical giant Johnson & Johnson.

Think back just a couple of years, before the 2016 election cycle and before Facebook set itself up as the world’s newswire. Facebook grew to a billion users by being a social network. It’s where you found old friends and kept up with family. I just looked back at my 2014 Facebook timeline: almost zero politics. And that’s how most people liked it. Many users back then even beseeched friends to avoid political posts, and muted the violators who persisted. In real life, most of us don’t want to argue politics with our friends and family, so why would we want to do it online?

Then, over the past two years, Facebook aggressively morphed into a media site. It set up deals with publishers to populate all our timelines with stories. It subtly encouraged users to post stories and to “like” and comment on them. Facebook, of course, did this with its own goals in mind. To maximize profit, Facebook needs to keep users engaged and on the site as long as possible, and to get those users to create or interact with all the content in their feeds. That thrum of activity helps Facebook’s algorithms more deftly target ads to more people, which makes Facebook even more attractive to advertisers.