
What Do the NetChoice Cases Mean for Online Speech?


With the release of the U.S. Supreme Court’s NetChoice opinion (along with some other boring case people seem to want to talk about), opinions for the October 2023 term appear to be complete. After discussing what Murthy v. Missouri means for online speech, it only feels right to discuss the other big social-media case of the term.

Much as in Murthy, the Court didn’t ultimately decide the merits, but its opinion does establish this much: it will be very hard for Texas and Florida to defend their laws successfully against as-applied First Amendment challenges from social-media companies. 

This is a win for the marketplace of ideas protected by the First Amendment. As we at the International Center for Law & Economics (ICLE) argued in our amicus brief, the marketplace-of-ideas metaphor means that “private actors get to decide what speech is acceptable. It is not the government’s place to censor speech or to require private actors to open their property to unwanted speech. The market process determines speech rules on social-media platforms just as it does in the offline world.” 

The Court largely agreed, finding that:

  1. The First Amendment protects private entities’ ability to curate the speech of others and create an expressive product (which includes the right to exclude);
  2. This remains the case even if most content is included and very little is excluded; and
  3. The government may not overcome this right to editorial discretion through an interest in “better balancing the marketplace of ideas.”

Below, I will analyze the Court’s opinion in more detail, offering my thoughts on what this does and doesn’t mean for online speech going forward.

A Narrow But Consequential Opinion

The Court’s holding is narrow. The unanimous judgment is that the lower courts did not correctly analyze the facial challenge brought by NetChoice. Written by Justice Elena Kagan and joined by a majority of the Court, the opinion also outlines why the 5th U.S. Circuit Court of Appeals got the First Amendment analysis wrong in upholding Texas’s HB 20. The concurrence from Justice Samuel Alito (joined by Justices Clarence Thomas and Neil Gorsuch) concurs only in the judgment that “NetChoice failed to prove that the Florida and Texas laws they challenged are facially unconstitutional.”

The Court unanimously agreed that NetChoice failed to make out a facial challenge to the Florida and Texas laws, because the lower courts analyzed the laws as if they applied only to social-media platforms’ ability to curate their feeds.

“The first step in the proper facial analysis is to assess the state laws’ scope. What activities, by what actors, do the laws prohibit or otherwise regulate?” The lower courts focused only on the core of what these laws appear to regulate, which is the presentation of content on the feeds of major social-media platforms:

The next order of business is to decide which of the laws’ applications violate the First Amendment, and to measure them against the rest. For the content-moderation provisions, that means asking, as to every covered platform or function, whether there is an intrusion on protected editorial discretion… Curating a feed and transmitting direct messages, one might think, involve different levels of editorial choice, so that the one creates an expressive product and the other does not.

Here, the lower courts failed to consider that there may be “a sphere of other applications—and constitutional ones—that would prevent the laws’ facial invalidation.” In other words, there is a constitutional difference between transmitting private messages and curating content in public-facing social-media feeds. The Court thus concludes:

Neither the Eleventh Circuit nor the Fifth Circuit performed the facial analysis in the way just described. And even were we to ignore the value of other courts going first, we could not proceed very far. The parties have not briefed the critical issues here, and the record is underdeveloped. So we vacate the decisions below and remand these cases. That will enable the lower courts to consider the scope of the laws’ applications, and weigh the unconstitutional as against the constitutional ones. 

What NetChoice Means Going Forward

While the holding itself is narrow, to treat the majority’s opinion in Part III as mere “dicta,” as Justice Alito’s concurrence suggests, would be a big mistake. The First Amendment analysis the Court outlined will have an obvious effect on these cases going forward: the laws will not be able to survive as-applied challenges by social-media platforms. As the Court put it:

The Fifth Circuit was wrong in concluding that Texas’s restrictions on the platforms’ selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas’s interest in changing the content of the platforms’ feeds. Explaining why that is so will prevent the Fifth Circuit from repeating its errors as to Facebook’s and YouTube’s main feeds.

The Court outlined three major First Amendment principles that apply in this context:

  1. “First, the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude”;
  2. “Second, none of that changes just because a compiler includes most items and excludes just a few”; and
  3. “Third, the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas.” 

Translating these principles into the language of our amicus brief, the First Amendment protects social-media platforms’ private ordering of speech, even though those platforms necessarily open their property to the speech of others. The government may not treat social-media platforms as “company towns” that must open their property to others to promote the marketplace of ideas.

While the Court didn’t go into detail about why social-media platforms engage in content moderation, it did spend some time detailing what YouTube and Facebook do, concluding that “[t]he platforms thus unabashedly control the content that will appear to users, exercising authority to remove, label, or demote messages they disfavor.”

Social-media platforms engage in content moderation to balance the diverse speech interests of their users. Both by matching content to users’ interests and by demoting or removing unwanted content, the platforms attempt to provide a better product. It may, in fact, be the case that the vast majority of content receives very little attention from human moderators. But this, in itself, is a protected editorial decision in the marketplace of ideas.

Much as we argued in our amicus brief (and many times elsewhere), the Court rightly analogizes the situation of social-media platforms to newspapers (Tornillo), parade organizers (Hurley), and cable operators (Turner). Each of those entities takes messages from others but makes them into their own expressive offerings. As the Court states:

The individual messages may originate with third parties, but the larger offering is the platform’s. It is the product of a wealth of choices about whether—and, if so, how—to convey posts having a certain content or viewpoint. Those choices rest on a set of beliefs about which messages are appropriate and which are not (or which are more appropriate and which less so). And in the aggregate they give the feed a particular expressive quality.

Finally, even when individual social-media platforms fail to serve users well, there is no reason to believe the government can do better by compelling carriage of particular speech. As I’ve argued before, even where the marketplace of ideas “fails,” the threat of government failure is worse. As the Court states:

However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others. 

Another interest, such as the interest in limiting gatekeeping power at issue in Turner, could be asserted in cases like this going forward (see the Alito concurrence). In fact, the majority makes clear that Turner could be read to allow a “[g]overnment interest… relating to competition policy” if it is unrelated to the suppression of speech. But it argues that this is most assuredly not the case for the Texas law in particular. In language emphasizing a strong distinction between state action and private action, the Court states:

The interest Texas asserts is in changing the balance of speech on the major platforms’ feeds, so that messages now excluded will be included. To describe that interest, the State borrows language from this Court’s First Amendment cases, maintaining that it is preventing “viewpoint discrimination.” Brief for Texas 19; see supra, at 26–27. But the Court uses that language to say what governments cannot do: They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content-moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose. 

In sum, the First Amendment applies to social-media platforms’ core functions. Because the laws here are designed to interfere with those functions, they will likely fail as applied to those platforms. There is no reason to expect another outcome after the lower courts come to grips with the majority opinion.

What NetChoice Doesn’t Mean

What the case doesn’t mean is that the debate about the First Amendment protection of speech carried by online platforms is completely over. 

For instance, the concurrence by Justice Amy Coney Barrett notes that the use of artificial intelligence to perform content moderation could affect the First Amendment analysis, insofar as algorithmic moderation may not reflect a platform’s own expressive choices. She also questions whether foreign ownership (probably thinking of TikTok) affects a platform’s First Amendment rights.

In her concurrence, Justice Ketanji Brown Jackson emphasizes that the Court didn’t need to preview how the analysis would apply to Facebook’s News Feed or YouTube’s home page, which should have been left to the lower courts. In more depth, Justice Alito (joined by Justices Thomas and Gorsuch) argues the same, but adds that the Court failed to adequately consider whether social-media platforms act as common carriers, which could subject them to a lower standard of First Amendment scrutiny:

Most notable is the majority’s conspicuous failure to address the States’ contention that platforms like YouTube and Facebook—which constitute the 21st century equivalent of the old “public square”—should be viewed as common carriers. See Biden v. Knight First Amendment Institute at Columbia University, 593 U. S. ___, ___ (2021) (Thomas, J., concurring) (slip op., at 6). Whether or not the Court ultimately accepts that argument, it deserves serious treatment. 

It’s possible that these arguments will be further considered as the cases return to the lower courts. But the majority’s opinion seems to foreclose the strongest form of the argument: that social-media platforms, as common carriers, have no right to editorial discretion in how they design their public-facing feeds. Whether to apply a lower form of scrutiny, like that in Turner, remains an open question, as the Alito concurrence argues.

It is worth reiterating, however, that a majority of the Court does not consider “better balancing” the marketplace of ideas a valid government interest. This means that laws premised on such an interest wouldn’t survive any level of First Amendment scrutiny:

In the usual First Amendment case, we must decide whether to apply strict or intermediate scrutiny. But here we need not. Even assuming that the less stringent form of First Amendment review applies, Texas’s law does not pass. Under that standard, a law must further a “substantial governmental interest” that is “unrelated to the suppression of free expression.” United States v. O’Brien, 391 U. S. 367, 377 (1968). Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects. But the interest Texas has asserted cannot carry the day: It is very much related to the suppression of free expression, and it is not valid, let alone substantial. (emphasis added).

In cases dealing with issues outside these core social-media-platform functions, like private messaging, a lower level of scrutiny may be very important to the First Amendment analysis. Protecting the privacy of such messages would likely be a substantial government interest unrelated to the suppression of speech, which could sustain rules preventing social-media platforms from asserting a right to editorial discretion over such messages, assuming such a right exists at all.

Conclusion

While the Court’s holding in the NetChoice cases is a rejection of the facial challenge, it is still a big win for the First Amendment’s protection of private ordering. In fact, insofar as these cases are about what the parties, lower courts, and most amici thought they were about, the Florida and Texas laws are likely unconstitutional. The open questions have more to do with what the First Amendment would allow in terms of regulation of other online speech, such as instant messages, or what tier of scrutiny would apply in given situations. But this much is clear: the First Amendment does protect social-media platforms’ right to serve their consumers through content-moderation policies.
