Twitter announces more shadow banning for those having a 'negative impact'

Twitter announced today that it will extend the practice of shadow banning people who have a “negative impact” on conversations on the social media site. Users won’t be told their tweets are being hidden from searches and conversations, but they will be restricted until their behavior improves. From Slate:

Twitter is announcing on Tuesday that it will begin hiding tweets from certain accounts in conversations and search results. To see them, you’ll have to scroll to the bottom of the conversation and click “Show more replies,” or go into your search settings and choose “See everything.”…

When Twitter’s software decides that a certain user is “detract[ing] from the conversation,” all of that user’s tweets will be hidden from search results and public conversations until their reputation improves. And they won’t know that they’re being muted in this way; Twitter says it’s still working on ways to notify people and help them get back into its good graces. In the meantime, their tweets will still be visible to their followers as usual and will still be able to be retweeted by others. They just won’t show up in conversational threads or search results by default…

You’ve heard of Twitter jail? Let’s call this Twitter purgatory.

All of this shadow banning will be handled by an algorithm, meaning who gets banned will depend mostly on the reactions of other people.

How will Twitter determine that a user is “detracting from the conversation”? Its software will look at a large number of signals, Harvey said, such as how often an account is the subject of user complaints and how often it’s blocked and muted versus receiving more positive interactions such as favorites and retweets. The company will not be looking at the actual content of tweets for this feature—just the types of interactions that a given account tends to generate. For instance, Harvey said, “If you send the same message to four people, and two of them blocked you, and one reported you, we could assume, without ever seeing what the content of the message was, that was generally a negative interaction.”
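To make the quoted description concrete, here is a minimal sketch of that kind of reaction-based scoring. Every function name, signal weight, and threshold below is my own illustrative assumption; Twitter has not published how its actual system weighs these signals.

```python
# Hypothetical sketch of signal-based account scoring, as Slate describes it:
# the score depends on reactions an account generates, never on tweet content.
# All weights and thresholds here are illustrative assumptions, not Twitter's.

def interaction_score(favorites, retweets, blocks, mutes, reports):
    """Combine assumed positive and negative reaction signals into one score."""
    positive = favorites + 2 * retweets                 # assumed positive signals
    negative = 3 * blocks + 2 * mutes + 4 * reports     # assumed negative signals
    return positive - negative

def should_deprioritize(score, threshold=0):
    # Accounts scoring below the threshold would be hidden behind
    # "Show more replies" and excluded from default search results.
    return score < threshold

# Harvey's example: the same message sent to four people draws two blocks
# and one report, so the system infers a negative interaction without
# reading the message itself.
score = interaction_score(favorites=0, retweets=0, blocks=2, mutes=0, reports=1)
print(score, should_deprioritize(score))  # -10 True
```

Note that under a scheme like this, the blocked user's score drops purely because of how recipients reacted, which is exactly the dynamic discussed below.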

Well, it certainly could mean the content was terrible, but it could also mean the people who received the message were ready to block anyone over any sign of disagreement. I don’t mean to suggest everyone on the left is a special snowflake, but I am familiar with the blocking behavior of people on the left. Over the years I’ve seen dozens if not hundreds of examples of people on the left blocking people on the right simply because they dared to (politely) disagree on some point. Some will engage in a conversation if you stick to the topic and don’t make things personal, but many more block at the drop of a hat.

Shadow banning people on the basis of someone else’s reaction, without looking at the content, seems like a recipe for handing the most sensitive folks control of the conversation. That’s not necessarily bad if it’s used very sparingly, say for people who appear to be creating multiple accounts to harass an individual. Twitter claims this will affect less than 1% of users:

[L]ess than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what’s reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large—and negative—impact on people’s experience on Twitter.

Inevitably, this policy is going to unfairly silence some people who don’t deserve it. And because they won’t even know it’s happening, they won’t be able to do anything about it or protest their innocence. The worst part is that I can’t imagine Twitter offering any genuine transparency about how far this shadow banning goes, who is getting banned, or what mistakes it makes in applying it. A few people inside the organization will know, but everyone else will just have to take their word that the policy is being applied narrowly and fairly.