Not only has QAnon led to intense online harassment of innocent parties, and not only has it led to physical violence, but Americans also can’t ignore QAnon because adherents to some form of the theory may soon represent them in Congress. More than 60 candidates this fall have expressed their sympathies with the cause. Fourteen have clinched a place on the ballot. Mr. Trump himself has been known to retweet QAnon-adjacent content, and on Friday, when he was asked about the phenomenon, he sidestepped the inquiry. This sent believers into paroxysms.
That QAnon is tiptoeing ever closer to the political mainstream is only one of many challenges for social media sites. These sites can’t ignore QAnon, but neither can they simply ban it — not really. Platforms prefer to focus on behavior rather than content, so they have ready-made recourse in their terms of service when they want to act against manipulation of algorithms or tactics such as “swarming” (systematically attacking targets of the conspiracy theory for, say, being baby-eaters). When platforms do focus on content, they are far more likely to act when there is a risk of real-world harm. QAnon has certainly caused real-world harm. But not every post related to the theory runs that risk, and swinging the moderator’s mallet could needlessly squelch speech — perhaps fueling the same accusations of a scheming liberal conglomerate that are the movement’s raison d’être. And even if platforms did decide QAnon as a whole was too much of a menace to countenance, they’d run into trouble determining which posts qualified.