The graph above shows what usually happens when Facebook draws a policy line. Posts that clearly fall beyond that line—overt and credible incitements to violence against police or protesters, to choose a hypothetical example—will be taken down. But posts that don’t cross the line will naturally gain more engagement as they tiptoe closer to it. In this default world, a post that uses hateful language targeted at police (police are not a protected class) might stay up and attract more likes and shares than a post criticizing law enforcement on policy grounds. The more inflammatory post, in other words, would likely land on the upward curve of user engagement. (These examples are speculative, because Facebook does not provide specific details about what “approaching the line” means.)
But platforms can train their systems to recognize this “borderline content” and penalize it, making engagement look like the graph below:
In this scenario, the more inflammatory a post is, the less distribution it gets. Posts describing police in hateful terms might stay up but be shown to fewer people. According to Zuckerberg, this strategy of reducing the “distribution and virality” of harmful content is the most effective way of dealing with it.
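To make the mechanism concrete, here is a minimal sketch in Python of the kind of down-ranking described above. It assumes a hypothetical classifier that scores how close a post comes to the policy line; the function names, the scoring, and the specific penalty curve are illustrative assumptions, not Facebook’s actual system, which it does not publish.

```python
# A minimal sketch of distribution penalties for borderline content.
# Everything here is hypothetical: Facebook does not publish its
# classifier, thresholds, or penalty curve.

def distribution_multiplier(borderline_score: float) -> float:
    """Map a classifier's 'closeness to the policy line' score in [0, 1]
    to a multiplier on a post's baseline reach.

    Content the classifier judges clearly safe (score near 0) is
    distributed normally; content approaching the line (score near 1)
    is shown to fewer people, inverting the natural engagement curve.
    """
    score = min(max(borderline_score, 0.0), 1.0)  # clamp to [0, 1]
    return (1.0 - score) ** 2  # penalty grows as the post nears the line


def ranked_reach(baseline_reach: int, borderline_score: float) -> int:
    """Apply the penalty to a post's baseline audience size."""
    return int(baseline_reach * distribution_multiplier(borderline_score))


if __name__ == "__main__":
    # A policy critique (low score) keeps most of its reach;
    # hateful-but-allowed rhetoric (high score) is sharply throttled.
    for label, score in [("policy critique", 0.1), ("hateful rhetoric", 0.9)]:
        print(label, ranked_reach(10_000, score))
```

Under this toy model, the post criticizing law enforcement on policy grounds reaches most of its baseline audience, while the hateful-but-allowed post is shown to only a small fraction of it, which is the inverted curve the second graph depicts.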