Vice did a very good job of drilling down into yesterday’s massive hack on Twitter. In fact, Vice apparently thinks it did too good of a job, after conservatives pointed out that its screenshots suggested Twitter might not have been telling the whole truth about its moderation practices. The published images included nomenclature such as “blacklist” that Twitter had never before revealed, and demonstrated a capability Twitter claims never to employ.
Michael Coudrey, among others, pointed it out after Vice published the images from Twitter’s moderation tools:
Oopsie! If you thought that might prove to be an intriguing entrée for some investigative journalism, though, think again. Vice provided a rebuttal to this earlier today with the headline that it’s “another Trump supporter conspiracy,” presumably meaning conspiracy theory. “The bad takes came roaring back” after the blue-check restriction was lifted, and they call this argument “the most tiresome and toxic among them.”
So much for journalistic curiosity. Even though their own screenshots show that Twitter hasn’t exactly described this tool accurately in the past, Vice lectures readers by essentially regurgitating Twitter’s press releases:
Twitter explained this moderation tactic in 2018, after Trump’s tweet. Twitter said it made changes to how it ranks its search results for accounts from what it considers to be “bad-faith actors who intend to manipulate or divide the conversation.” A Twitter spokesperson told Motherboard on the phone that these blacklists are the same ones it explained in 2018 (though it didn’t use the term “blacklist” at the time and has not used that word publicly.)
“We have very clear rules around trends and what we don’t allow to trend,” a Twitter spokesperson told Motherboard when asked about the “Blacklist” tags we see in screenshots of the internal moderation tool. Twitter also directed us to its “Twitter trends FAQs” page, where it makes clear the platform prevents content from trending if it contains profanity or adult/graphic reference, incites hate, or otherwise violates Twitter’s rules.
Reached by phone, a Twitter spokesperson said the blacklist tags are “not new.”
“We do disclose in this FAQ that accounts that violate the rules are prevented from trending,” the spokesperson said. “This isn’t new and it’s not something that has been hidden, but it’s in the help center.”
Two points remain, however. First, as Coudrey says, these screenshots show that Twitter has the capability of shadow-banning on the basis of viewpoint, not that Twitter does so. Conservatives, based on their anecdotal experiences with the platform, have raised questions about the manner in which Twitter conducts its moderation, not about whether it moderates at all. How content moderation operates is certainly a worthy subject of criticism, or at the very least debate, and Vice never explains why it thinks otherwise in its scornful dismissal of those concerns.
Second, we only have Twitter’s word that it doesn’t employ these tools on the basis of viewpoint. And one reason we can’t confirm this is that Twitter doesn’t disclose who’s on their blacklists or fully disclose who operates them and how, as Vice admits:
People who are buying into a Twitter conspiracy are not completely wrong to be worried, however. While Twitter has explained before that these blacklists exist and the types of accounts that might get put on one, it does not alert users if they’ve been put onto a blacklist. It has not used the term “blacklist” publicly in this context, a term more loaded than “Twitter filters search results for quality tweets and results.” (“Twitter may automatically remove accounts engaging in [rule breaking] behaviors from search” is a bit closer.) It has not explained if the blacklists are automated, administered by human moderators, or a mix of both, and it has not been specific about how or why an account is put on a “Trends Blacklist” or “Search Blacklist.”
And not even Vice has figured out how Twitter uses them:
We can’t say with 100 percent certainty how the “Blacklist” tags work because we don’t have full visibility into Twitter’s moderation mechanism. We can see its public facing policy, but not debates inside the company, and more critically, the technical process by which accounts are suspended, banned, or prevented from appearing in search.
This, by the way, is the second-to-last paragraph in a report that spends several leading paragraphs making fun of conservatives who have raised questions about this practice. Rather than poke fun, shouldn’t Vice’s tech team get answers to those questions instead of just blithely accepting Twitter’s “carefully crafted blog posts and announcements”? They do make the point that Twitter’s “opaque” approach allows these allegations to fester, but they somehow end up defending the company’s processes without ever bothering to force the company to disclose them.
Maybe we can call this brand of corporate press-release regurgitation “shadowflacking.” It ain’t journalism, even if their report on the hack itself was. Meanwhile, let’s see if anyone else gets an answer to this question:
— Micah Rate (@Micah_Rate) July 16, 2020
Update: How could I have missed the whole “Republicans pounce!®” aspect of this report? Mea culpa, mea culpa, mea maxima culpa, but fortunately my friend Adam Baldwin backstopped me on this one: