Repealing Section 230 remains a lose-lose proposition

One of the recurring themes of the past few months has been an increasing clamor (oddly enough, at times on both sides of the aisle) for Congress to repeal Section 230 of the Communications Decency Act of 1996. Curiously, many conservatives are pushing for this because of the blindingly obvious and successful efforts of Big Tech to squelch conservative voices by deleting posts, or even entire accounts, on platforms like Facebook and Twitter. In more sinister fashion, Apple, Google, and Amazon have deplatformed entire applications such as Parler when they are viewed as insufficiently woke. Conversely, some liberal/socialist groups have called for the repeal of 230 as punishment for what they apparently view as insufficient censorship of conservative voices.

Writing at Barron’s this week, Eric Savitz argues that this is entirely the wrong approach. It’s one of those cases of cautioning people to be careful what they wish for. If you think the social media world is bad now, dumping 230 could actually make the internet worse, if such a thing is possible.

I’m not defending the status quo. The last several years—and the last few weeks in particular—have revealed significant issues wrought by social media. But repealing Section 230 isn’t the solution. In fact, it could make things worse.

[Chris] Cox has noted that before Section 230 New York state courts developed the theory that internet platforms had no liability for illegal user content—unless they moderated the content. “Only if a platform made no effort to enforce rules of online behavior would it be excused from liability for its users’ illegal content,” Cox wrote in August.

“This created a perverse incentive. To avoid open-ended liability, internet platforms would need to adopt what the New York Supreme Court called the ‘anything goes’ model for user-created content.”

Savitz envisions a host of problems flowing from the repeal of Section 230, all of which seem both plausible and prescient. At the top of the list: once 230 is gone, the platforms become legally responsible for the content that appears on them. That means they would have to move to censor user-generated content even more aggressively. This is pretty much the opposite of the change most conservatives seem to be hoping for.

On top of that, Twitter and Facebook in particular are simply too big to monitor everything that gets posted in real time, and their algorithms are never going to catch everything. So when offensive or even illegal content sneaks through, they could wind up paying for the violations, potentially being sued or fined into oblivion. If that’s your real goal, then I suppose you’re fine with it, but then you need to find a “better” replacement for those platforms, and the problems begin all over again.

As Savitz notes, other platforms that many of you use and enjoy could be negatively impacted as well. Yelp, Rotten Tomatoes, and even Uber and Lyft could be sued for negative reviews posted on their sites. The removal of 230 could produce a host of unintended consequences.

But there’s another angle to this that the author doesn’t touch on. Notice in the excerpt above that, before Section 230, platforms faced two basic choices. Either they would be held responsible for user-generated content and be required to moderate everything, or they could go completely “hands-off” and choose the anything-goes model. In the latter scenario, they couldn’t be held accountable for anything their users generated, but they wouldn’t be able to moderate the content either.

This raises an immediate question: someone else would have to take responsibility for eliminating forbidden content. But who? Law enforcement? If Twitter can’t police its own system, who in the government could?

But even that’s not the major bug in the system being overlooked here. Under the original “anything goes” theory, the platforms weren’t supposed to be held accountable, but they also weren’t supposed to be moderating the content their users created. Somehow, Facebook, Twitter, and all the rest were released from responsibility yet left free to moderate anyway. And as private companies, they are under no obligation to moderate “fairly” if they don’t feel like it. And they don’t. That’s what brought us to the impasse we face today: the major social media networks are run by liberals who are happy to unabashedly squelch the voices of conservatives they disagree with while almost entirely ignoring the sins of members of their own tribe.

I don’t know if there’s a solution to that problem aside from fleeing to other platforms with a less biased approach (at least until they, too, are deplatformed), but the answer isn’t going to be repealing 230. That’s not going to fix anything.