NY Times: Facebook is making up rules about banning speech as it goes

I suspect the only reason we’re hearing about this is that the left has turned on Facebook, in large part over its presumed impact on the 2016 election, and has created a market for badmouthing the company. But whatever the case, it’s interesting to get a glimpse behind the curtain and discover that the company is basically making up new rules to control speech on its platform every few weeks. The rules aren’t written into some sort of algorithm but enforced by an army of 15,000 moderators who do their best to follow the guidelines spelled out in slideshows created by the higher-ups. From the NY Times:

How can Facebook monitor billions of posts per day in over 100 languages, all without disturbing the endless expansion that is core to its business? The company’s solution: a network of workers using a maze of PowerPoint slides spelling out what’s forbidden.

Every other Tuesday morning, several dozen Facebook employees gather over breakfast to come up with the rules, hashing out what the site’s two billion users should be allowed to say. The guidelines that emerge from these meetings are sent out to 7,500-plus moderators around the world. (After publication of this article, Facebook said it had increased that number to around 15,000.)

The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself, The New York Times has found.

The Times was provided with more than 1,400 pages from the rulebooks by an employee who said he feared that the company was exercising too much power, with too little oversight — and making too many mistakes.

An examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.

The rules, which are made up of roughly 200 pages of slides, contain definitions of hate speech broken into tiers. There are also lists of problematic emojis and an entire spreadsheet listing all of the banned hate figures around the world:

Moderators are instructed to remove any post praising, supporting or representing any listed figure.

Anton Shekhovtsov, an expert in far-right groups, said he was “confused about the methodology.” The company bans an impressive array of American and British groups, he said, but relatively few in countries where the far right can be more violent, particularly Russia or Ukraine…

For a tech company to draw these lines is “extremely problematic,” said Jonas Kaiser, a Harvard University expert on online extremism. “It puts social networks in the position to make judgment calls that are traditionally the job of the courts.”

The lengthy article considers various examples of banned material around the world, but relatively little space is given to how the rules play out in America, where a majority of the Times’ readers live. The author does mention that the company has banned the Proud Boys as well as a Trump-team-produced ad about the migrant caravan, but that’s about it. We don’t really get a sense of how the rules are applied to the right and the left in this country. How many “hate figures” listed in the company’s Excel spreadsheet are left-wing? I’m assuming there must be some, but we aren’t told anything about them.

The other problem, apart from the potential for bias, is that all of the decisions about community guidelines are being made by a group of young employees who probably have as much diversity of opinion as the faculty lounge at the local liberal arts college. As a private company, Facebook is free to set its own guidelines, but we needn’t pretend that those guidelines are being created from anything like the midpoint of the American political spectrum, because they almost certainly are not.