Do Internet platforms need Section 230 to shield them from liability? Should it be scrapped as social-media platforms increasingly apply editorial control over content, arguably transforming them from platforms into publishers? Don’t expect explicit answers from the two decisions the Supreme Court handed down today, both of which unanimously rejected liability claims without invoking Section 230 protections.
However, that may be its own answer. Both Google and Twitter had reason to breathe a sigh of relief, and the two cases turned out to be anything but divisive:
The Supreme Court rejected an effort to hold Twitter and other social-media websites liable for an Islamic State attack, in a unanimous decision that clarifies the duties of online platforms to remove terrorist propaganda but avoids larger questions of their liability for posted content.
In a separate brief order, the court sidestepped questions about Section 230, a foundational internet law that shields platforms from liability for user-generated content. In that three-page, unsigned order, the court said it didn’t need to decide whether Section 230 shields Alphabet’s YouTube from potential liability for recommending Islamic State recruitment videos to users.
The decisions represent a significant but limited win for big tech companies in their battle to curb their liability for users’ actions on their platforms. Alphabet’s Google unit—which includes YouTube—had warned that a successful challenge to Section 230 could “upend the internet.”
Neither case produced a successful challenge to Section 230, but neither produced an explicit endorsement of it either. The court brusquely shrugged off Gonzalez v. Google with a three-page per curiam decision that explicitly declined to consider the Section 230 issues because the underlying complaint failed on its own terms:
We need not resolve either the viability of plaintiffs’ claims as a whole or whether plaintiffs should receive further leave to amend. Rather, we think it sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail under either our decision in Twitter or the Ninth Circuit’s unchallenged holdings below. We therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief. Instead, we vacate the judgment below and remand the case for the Ninth Circuit to consider plaintiffs’ complaint in light of our decision in Twitter.
The real action came in Twitter v. Taamneh, a lawsuit alleging that the social-media platform bears liability for a terrorist attack on a nightclub in Istanbul. That ISIS attack took place, the plaintiffs allege, thanks to algorithms within Twitter (along with Facebook and Google, two other targets of the litigation) that allowed users to exploit the platforms for recruitment, radicalization, and organizing attacks. This is precisely where Section 230 protections should come into play, allowing the platforms to defend themselves as content-neutral conduits and shifting liability to the users engaging in illegal or tortious behavior.
Once again, though, the unanimous court held that such lawsuits fail to properly state a claim under the governing anti-terrorism statute itself:
Held: Plaintiffs’ allegations that these social-media companies aided and abetted ISIS in its terrorist attack on the Reina nightclub fail to state a claim under 18 U. S. C. §2333(d)(2). Pp. 6–31.
(a) In 2016, Congress enacted the Justice Against Sponsors of Terrorism Act (JASTA) to impose secondary civil liability on anyone “who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.” §2333(d)(2). The question here is whether the conduct of the social-media company defendants gives rise to aiding-and-abetting liability for the Reina nightclub attack. Pp. 6–8.
(b) The text of JASTA begs two questions: What does it mean to “aid and abet”? And, what precisely must the defendant have “aided and abetted”? Pp. 8–21.
The answer, the justices unanimously argue, is that aiding and abetting requires something more overt and intentional than merely operating a common-use platform that is widely and overwhelmingly used for benign purposes. It is not enough that the platforms knew malevolent organizations were also using them. Applying Halberstam and other precedents, the justices ruled in Twitter that liability would require a showing of actual cooperation with such plots rather than mere “passive nonfeasance”:
None of plaintiffs’ allegations suggest that defendants culpably “associate[d themselves] with” the Reina attack, “participate[d] in it as something that [they] wishe[d] to bring about,” or sought “by [their] action to make it succeed.” Nye & Nissen, 336 U. S., at 619 (internal quotation marks omitted). Defendants’ mere creation of their media platforms is no more culpable than the creation of email, cell phones, or the internet generally. And defendants’ recommendation algorithms are merely part of the infrastructure through which all the content on their platforms is filtered. Moreover, the algorithms have been presented as agnostic as to the nature of the content. At bottom, the allegations here rest less on affirmative misconduct and more on passive nonfeasance. To impose aiding-and-abetting liability for passive nonfeasance, plaintiffs must make a strong showing of assistance and scienter. Plaintiffs fail to do so.
This seems like rather obvious wisdom. If passive nonfeasance alone were enough to trigger liability, few businesses or industries could survive the financial implications. The reference to “scienter” is important here; it requires a showing of specific intent and knowledge of assisting wrongdoing, a showing this case does not make.
So this case never reaches the point of a Section 230 challenge. In fact, while the opinion in Gonzalez v. Google mentions Section 230 six times as it defers to the Twitter decision, the opinion in Twitter mentions it not at all, not a single time in 38 pages.
The decision in Twitter then prompts the question: does Section 230 even matter? The line drawn in these two cases suggests that liability claims against platforms over their content would have to clear a very high bar. After all, if any liability issues could successfully challenge the big platforms, the life-and-death stakes presented in these cases would almost certainly provide the impetus. And yet the nine justices came together unanimously (unusual these days) to erect a high wall around such claims, even without addressing the question of publisher versus platform. To some extent, that may be because §2333 deals specifically with terrorism, but some of its language applies in other areas where liability hinges on aiding and abetting damaging behavior.
We may yet see a challenge to Section 230, but it’s getting tougher to imagine how one would reach that stage. And this unanimous decision more than suggests that Section 230 could be redundant, at least as it stands.