Combating the flood of abuse is “like trying to put out a fire in a house,” a former trust and safety executive for another tech company told me. “Once you do that, four more fires pop up in its place.” Each day, Harvey and her team investigate countless tweets that have been flagged as spam or abuse and quickly decide whether a user should be suspended or banned. The pattern is an old one: in 2013, Twitter added a “report abuse” feature after Caroline Criado-Perez, who had started a campaign to put Jane Austen on British currency, said that she was receiving 50 rape threats an hour. Still, the abuse continued: During the height of the 2014 Gamergate controversy, the video-game critic Anita Sarkeesian posted 157 examples of death and rape threats, insults, and incitements to suicide that she had received over the course of six days…
Part of the problem, insiders agree, is that Twitter has never set clear guidelines for what kind of language or behavior will get somebody banned. In the case of the anti-Muslim videos that Trump retweeted, Twitter offered a series of shifting explanations. First, the company said, the videos were inherently newsworthy; later, it suggested that they had become newsworthy, and thus in the public interest, by virtue of Trump’s involvement. “There may be the rare occasion when we allow controversial content or behavior which may otherwise violate our rules to remain on our service because we believe there is a legitimate public interest in its availability,” a spokesperson said at the time. Nearly three weeks later, Twitter banned the account that Trump had shared, rendering his retweets null and void.