One way to do this is through simple opt-in buttons. Mr. Musk could keep in place all of Twitter’s offensive-speech protocols, but give every user the ability to opt in or out of them. If a user doesn’t want to see hate speech, there’s no reason he should have to. The same goes for constitutionally protected sexually explicit material.
A more ambitious option would be to harness artificial intelligence and develop an individualized filtering mode. Each user would decide for himself whether to remove certain posts, and an AI algorithm would learn from his choices, creating a personalized filter. If Michael flags racial epithets or Laura deletes certain images, Twitter’s algorithms would learn not to show each of them such content in the future. Both would remain free to change their minds and adjust their settings accordingly. Mr. Musk could poke fun at other Big Tech platforms for employing an outmoded centralized censorship model that is a relic of broadcast media when the technology now exists to run personalized AI models.
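The learning loop described above can be sketched in miniature. The following is a hypothetical toy illustration, not Twitter’s actual system: each user’s flag-and-allow decisions train a crude per-user word-count model, standing in for whatever AI classifier a real platform would deploy.

```python
from collections import defaultdict

class PersonalFilter:
    """Toy per-user filter: learns from one user's own flag/allow
    choices rather than a platform-wide blocklist. A real system
    would use a trained classifier; word counts are a stand-in."""

    def __init__(self):
        self.flagged = defaultdict(int)  # word -> count in posts the user hid
        self.allowed = defaultdict(int)  # word -> count in posts the user kept

    def flag(self, post):
        # The user chose to hide this post; remember its words.
        for word in post.lower().split():
            self.flagged[word] += 1

    def allow(self, post):
        # The user chose to keep seeing content like this.
        for word in post.lower().split():
            self.allowed[word] += 1

    def hides(self, post):
        # Hide a post whose words appear more often in the user's
        # flagged history than in the allowed history.
        score = sum(self.flagged[w] - self.allowed[w]
                    for w in post.lower().split())
        return score > 0
```

Because each `PersonalFilter` instance belongs to a single user, Michael’s flags never affect what Laura sees, and either can reset the filter simply by changing their choices going forward.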
One objection to this approach is that it might exacerbate online echo chambers. But users who wish to see contrary viewpoints could instruct their filter to keep showing them challenging opinions and facts. Users who choose otherwise would be no worse off than cable-news viewers. Such self-siloing may be an inevitable product of 21st-century media and civic culture, beyond any company’s power to counteract. But if we’re stuck with such echo chambers, better that they be ones of our own creation rather than imposed on us by a central authority.