Facebook's definition of "hate speech" grows even more bizarre

We’ve touched on this story here already, but it’s since spiraled a bit further outside the boundaries of reality. Facebook recently deleted a post from William Gheen, of the Americans for Legal Immigration Political Action Committee (ALIPAC). The post had to do with the murder of Mollie Tibbetts by an illegal alien (who actually was an illegal alien, despite coverage you may have seen in the MSM to the contrary) and how it related to problems with our current immigration law enforcement efforts. The social network giant eventually apologized for doing so, and for suspending Gheen’s account in the process, but their explanation of how the “mistake” took place actually seemed to make things worse. (Washington Times, emphasis added)

The admission came hours after William Gheen, head of Americans for Legal Immigration Political Action Committee, said he’d had his post removed and been slapped with a 24-hour ban for posting what Facebook said was “hate speech.”

Mr. Gheen was surprised by the move because his post had talked about illegal immigration but he didn’t think it crossed any lines.

Facebook now agrees.

“We mistakenly removed a comment by William Gheen and restored it as soon as we were able to investigate because it did not violate our community standards,” the company said in a statement to The Washington Times. “Our team processes millions of reports each week, and sometimes we get things wrong. We are sorry this happened.”

Keep those last two sentences in mind for a moment. We might write this off as a simple clerical error (or the digital equivalent thereof), but it would be a mistake to look at this incident in a vacuum. Consider also the question of what happened to Salena Zito’s column at the New York Post.

Within two hours I started receiving a handful of notes from people who are friends on my personal Facebook page telling me that their posting of my piece, entitled “Why Trump’s supporters won’t care about Cohen and Manafort,” had been removed.

Sometimes the removal was accompanied by a message from Facebook. “Spam” was the most common reason given, but a couple of people were told Facebook removed the post because “it did not follow our Community Standards.” …

The post was gone.

Posts and links to Zito’s column were also later restored after people complained. But in each case, it was reported that these deletions for “violations of community standards” were not carried out by an algorithm, but rather took place when some other Facebook user flagged the message and an employee of Facebook removed it after determining that it qualified as “hate speech.”

This means that there are two avenues for the removal of “offending” content on Facebook. They’ve already admitted to an algorithm which scans the billions of updates posted every day looking for certain keywords and phrases. But there are also customer service employees manually reviewing items flagged by the community and determining whether or not the content is offensive and worthy of suppression. Neither of these methods is worth a hill of beans and I’ll tell you why.
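
Before getting to that, here’s a minimal sketch of what such a two-channel setup could look like in code. To be clear, this is purely illustrative: Facebook has never published its moderation code, and the phrase list, the flag threshold, and every name below are invented assumptions, not anything the company has confirmed.

    # Hypothetical sketch of the two removal channels described above.
    # Nothing here is Facebook's real code; the phrase list, the flag
    # threshold, and all names are invented for illustration only.

    FLAGGED_PHRASES = {"banned phrase one", "banned phrase two"}  # assumed keyword list

    def automated_scan(post_text: str) -> bool:
        """Channel 1: an automated pass that scans each new post
        for certain keywords and phrases."""
        text = post_text.lower()
        return any(phrase in text for phrase in FLAGGED_PHRASES)

    def human_review(user_flags: int, reviewer_says_remove: bool) -> bool:
        """Channel 2: posts flagged by other users get queued for a
        human reviewer, whose individual judgment decides the outcome."""
        REVIEW_THRESHOLD = 5  # assumed number of user flags before review
        return user_flags >= REVIEW_THRESHOLD and reviewer_says_remove

    def should_remove(post_text: str, user_flags: int, reviewer_says_remove: bool) -> bool:
        # A post comes down if either channel trips, which is why a skewed
        # keyword list or a skewed reviewer produces the same end result.
        return automated_scan(post_text) or human_review(user_flags, reviewer_says_remove)

Note that in a design like this, the post is removed if either channel fires, so a bias in either the keyword list or the human reviewers is enough to produce the lopsided outcomes described below.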

Taylor wrote about this alleged algorithm problem on Friday, but I think he was being a bit too generous. True, some of these issues may have been caused by a new automated function, but the high-profile articles and links that draw a lot of traffic shouldn’t have been caught in that trap. Taylor also mentioned that human monitors would be better than algorithms. True in theory, but they could only (at most) handle entries which are drawing massive numbers of complaints and flags. Keeping up with the number of new updates posted to Facebook each day would require roughly 91% of the planet’s population working for Facebook as full-time content screeners.

I think we’ve been able to glean enough information from Facebook’s frequently cryptic responses to complaints to verify what I was guessing at above. There are two channels to handle “hate speech.” One is automated and one requires human agents responding to complaints to evaluate the content. But there’s zero question that the amount of content being flagged and the number of users being suspended skew wildly toward conservative voices. If you hire coders who build the algorithm to primarily squelch conservatives and you hire liberal “customer service” people to handle complaints, that’s a wholly predictable result.

It’s not that Facebook’s code is flawed. It’s not that their monitors are somehow innocently ignorant. It’s that the company is populated by liberals with an agenda, regardless of what Mark Zuckerberg attempts to claim to the contrary. They’re using content monitoring systems which could, perhaps, work if they could somehow be made ideologically neutral. But under the current leadership, they’re destined to fail from the start. And at the risk of being a bit too harsh, this failure isn’t a bug. It’s a feature.
