Lose section 230? Be careful what you wish for

We’ve known for a while now that President Trump has been threatening to veto the NDAA unless it includes a provision to repeal section 230 of the 1996 Communications Decency Act. Personally, this has seemed like a rather ill-conceived notion from the start, particularly since it’s likely to backfire in a big way. There’s enough support for the “must-pass” nature of the NDAA and for section 230 itself that the veto could very well be overridden, handing a win to the bipartisanship crowd in open defiance of the President’s goals.

Now, to be clear, I do not at all agree with Taylor’s premise that section 230 actually “prevents Big Tech from exercising excessive censorship.” That’s not the purpose of section 230 in the least. Meanwhile, the social media giants are busily censoring whoever they feel like on a daily basis. We can argue over how much censorship should be considered “excessive” another day, though my default answer is “any.”

The two problems I have with this whole push to terminate section 230 are that it wouldn’t do anything to address the current situation and that it would likely cause more “problems” than it would “cure.” It could also potentially come down much harder on conservatives than on liberals were it to happen. That’s something that was brought up last week in an article written by Robert H. Bork Jr. at RealClearPolitics. Bork notes that the last time this issue came up during hearings, it was Ben Sasse who stepped into the fray and pointed out that no matter how much you may fume over Twitter and Facebook’s blackout policies, eliminating section 230 not only won’t fix the issue but could make things worse.

Sasse started by voicing skepticism about the way the two companies moderate content “because I don’t think the standards are very transparent and I don’t think the execution is very consistent.” At first, it seemed as if Sasse was going to follow his partisan colleagues across the goal line. Then this …

“I am more skeptical than a lot of my colleagues on both sides of the aisle about whether or not there is a regulatory fix that will make it better instead of worse,” Sasse said. “I think it is very odd that so many in my party are zealous to do this right now when you have an incoming administration of the other party that would be writing the new rules and regulations.”

Sasse then noted that his Democratic colleague, Sen. Blumenthal, was “giddy” about creating “a new government agency to police online speech.” Republicans and conservatives, Sasse said, “should take pause.” Support for this point came from an odd corner, Twitter’s Jack Dorsey, who drolly said that “a centralized global control moderation system does not scale.”

As Sasse noted, the social media companies have made “content moderation” decisions that seem to fall harder on conservatives than on liberals.

Here’s the shorter version of why the wheels come off this wagon. Section 230 was put in place to protect information service providers from liability over content generated by other parties, specifically the user community in these cases. In other words, contra Taylor’s argument, the existence of section 230 actually gives the tech giants free rein to ignore the posts of their users entirely, even if they are outrageously offensive. (Though they would still need to cooperate with law enforcement if users are employing the platform to break the law.)

As such, removing section 230 would only make the social media giants even more nervous, and they would likely develop a tendency to black out even more user content to avoid facing lawsuits over it. That would be pretty much the opposite of what most of their critics are hoping for, at least as I understand them.

There’s also a fundamental question of responsibility here. I will once again draw on one of my favorite analogies here. Twitter and Facebook and all the rest of the social media platforms are like giant, global versions of an old-fashioned corkboard. (Ask your parents if you’ve never seen one.) The big tech companies put up the corkboard and it’s initially empty. People come along and post their own information, just like students in a dorm pinning messages on slips of paper to the board. If someone pins up a really hateful message or some threat of violence, who should the police come looking for? The student who posted the message or the manufacturer of the corkboard? It’s really as simple as that.

We’re not dealing with a problem of Twitter and Facebook potentially being sued here. We’re dealing with the established fact that they have been censoring posts – almost entirely from conservative users – in an inconsistent, one-sided fashion even though they can’t really be held accountable for the content of those posts to begin with. So what’s the answer? I’m unsure. Asking the government to come in and start dictating how those private companies determine when they block user content and on what basis is not only problematic from a small-government conservative philosophical standpoint, but it obviously opens up a whole new can of worms. If we allow the government to determine which sort of voices Twitter and Facebook can or can’t silence, at least half of you aren’t going to be very happy with the result.

Perhaps the best cure is once again to be found in the free market. If a sufficient number of people bail out on Twitter in favor of Gab or Parler or some other platform, they may begin to get the message. Or, failing that, perhaps they lose so much market share to less restrictive platforms that they’ll go away and cease to be a problem. But if enough people keep using their services, that means that a sufficient number of people are willing to put up with the censorship. I realize that’s a rather bleak way to look at it, but I’m just talking about reality here, no matter how harsh it may seem.