The biggest long-term danger in all of this has always centered on a unique situation: media distribution is now concentrated in the hands of a relatively small number of companies. Instead of breaking up these oligopolies, or finding more transparent ways of dealing with speech issues, governments now face a temptation to leave the power of these opaque behemoths intact and appropriate their influence for their own ends.
As we’ve seen abroad, a relatively frictionless symbiosis can result: the platforms keep making monster sums, while security services, if they can wriggle inside the tent of these distributors, have an opportunity to control information in previously unheard-of ways. Particularly in a country like the United States, which has never had a full-time federal media regulator, such official leverage would represent a dramatic change in our culture. As one law professor put it to me when I first started writing about the subject two years ago, “What government doesn’t want to control what news you see?”
The sheer scale of the logistical task involved in sorting through billions of pieces of content a day makes any hope of even-handed moderation a fantasy. Once companies go down the road of quashing “harm,” there are really only two possible outcomes: an ever-expanding game of speech Whac-a-Mole, or a double standard. In the best-case scenario, companies like Facebook will be relying on a combination of AI and human subject-matter experts to answer such questions as “What is journalistically true?” and “What is dangerous?” across far too much material to review responsibly.