Great news: Our new Twitter "conversation health" overlords are mainly Trump critics

Tired of the shadow-banning of conservatives and the punishment of heterodox opinions on Twitter? The platform has some great news for you! It has brought in outside experts, a six-member panel, to improve “conversational health” on the social media platform by striking “incivility and intolerance” from Twitter and breaking down its “echo chambers.”


And just who are these experts Twitter has hired to restore sanity to its heavy-handed interventions? Fox’s James Rogers didn’t have to do much digging to find out (via Jeff Dunetz):

Twitter’s campaign to foster healthier conversations on its platform with the aid of academics is itself facing an allegation of anti-Trump bias.

In previous tweets from their personal accounts, a number of the academics involved in the high-profile project have repeatedly slammed the Trump administration.

Rogers dug up a number of tweets that Twitter apparently overlooked, but let’s just go for a taste here. Dr. Rebekah Tromble tweeted out a statement of delight about being on hand to “assess the health of conversations on the platform.” And she hopes all of the “white nationalists” who support Trump are just as excited:

https://twitter.com/RebekahKTromble/status/897792260821090304

How about Dr. Patricia Rossini at Syracuse? She seems like an unbiased arbiter:

https://twitter.com/patyrossini/status/826131040867667968

How will her fellow Syracuse researcher Dr. Jenny Stromer-Galley use science? Well, take three guesses:

Speaking of low information, Dr. Stromer-Galley seems a bit gullible when it comes to discovering false news reports:


Another member of the panel, Dr. Nava Tintarev, has locked her Twitter account. So much for unlocking echo chambers, I guess, but Rogers picked up a couple of winners:

Exactly how will assigning a panel of Trump critics result in bridging gaps between communities? Especially when at least one member of the panel won’t even allow anyone to see her tweets without her express permission? Come on, man.

Last week, after Twitter suspended Kat McKinley because of a tweet that was over a year old, I finally decided to throw in the towel and stop engaging on their platform, at least for a while. As Instapundit does, I use it now to tweet out my posts and updates on my Facebook page, but I’m no longer watching my timeline or my notifications. Maybe I’ll be back, if the situation improves, but if anything it’s running even further off the rails.

As I wrote in my column for The Week, both Twitter and Facebook are going in the wrong direction in dealing with trollery. Rather than just offer more sophisticated tools to allow the users to shape their own environments, they’re both imposing top-down controls that threaten to punish users for their viewpoints rather than incivility:


This cycle of mining old tweets and Facebook posts to get people fired or marginalized has begun to dominate social media. It is fueled in part by a reliance on user feedback to determine how the platforms treat their customers. Twitter, for instance, defended itself recently from accusations by Vice News of “shadow banning” by revealing that it deprioritizes search results for certain users based on how many other users block and/or mute them. In other words, Twitter has set up incentives for hecklers’ vetoes to determine who does and does not get full participation on their platform — weaponizing mob mentality even further.

Twitter is doubling down on this approach, too. On the same day Gunn’s colleagues gamely campaigned for his rehabilitation, Twitter announced it would outsource the moderation of speech on its platform. The new effort aims to “bridg[e] gaps between communities on Twitter” by having third parties break up “echo chambers” and “uncivil discourse” via algorithm-based reporting. But this leaves users with even less confidence in the platform, as ever more arbitrary methods get put in place to punish those who simply want to have conversations and debates. …

Instead of these interventions on behalf of pseudotolerance, these platforms should try to defend the principles on which they are based, and exercise actual tolerance. Lay out specific rules about what can and can’t be posted, and then enforce them directly rather than allow mobs to form and force suspensions and shadow bans. Let people form their own judgments about associations and arguments. Rather than “safe spaces,” allow for real speech and debate and force people to defend their ideas. The follow, block, and mute functions on social media allow users to set their own environments, so why not trust them to do so wisely?

We don’t need a ruling body to make those decisions, nor do we need to keep handing torches and pitchforks to mobs to impose “healthy conversations” on us. Until social media platforms put their consumers in control, they will continue to see both their users and their investors walk away.


The selection of these specific experts on “conversational health” makes that direction even clearer. It also convinces me further that there’s not much point in engaging any longer on a platform I really enjoyed for most of the last ten years.
