On October 20, 2023, the Knight Institute will host a closed convening to explore the question of jawboning: informal government efforts to persuade, cajole, or strong-arm private platforms to change their content-moderation policies. Participants in that workshop have written short notes to outline their thinking on this complex topic, which the Knight Institute is publishing in the weeks leading up to the convening. This blog post is part of that series.

***

Courts, applying legal doctrines to novel situations, inadvertently create ambiguities. Scholars and commentators highlight these ambiguities. Ideologues then drive trucks through them. 

Much of the commentary around the issue of jawboning, as well as the First Amendment cases brought by deplatformed speakers against the government using a jawboning theory, elides a distinction that the relevant legal doctrines recognize: the difference between a government request and a government threat. This elision is accomplished by arguing that government requests are inherently coercive and are thus materially indistinguishable from threats in the eyes of the receiver. In other words—and to use the language of the claim—when the government “significantly encourages” a platform to take down speech that the government disagrees with or thinks is harmful, the platform actually has little choice in the matter. That lack of choice makes the platform’s removal state action, and then a First Amendment violation, by dint of the fact that the government requested it. For a demonstration of this strategy, look to the Missouri v. Biden complaint, where the plaintiffs alleged a constitutional violation by arguing there was “open collusion” between government and social media platforms, claiming government “pressured, cajoled, and colluded with” those platforms, and alleging an “ongoing, close, and continuing collaboration” between platforms and government with respect to the plaintiffs’ speech.

These allegations fail to recognize how government actually works. Two of the government’s primary functions are to persuade and to protect. Neither is possible without robust protection for government speech, and a “significant encouragement” test is directly at odds with that protection. Allegations based on government encouragement alone, even “significant” encouragement, cannot form the basis for a First Amendment claim. As Judge Posner said in 2015 in Backpage.com v. Dart, and as Ashutosh Bhagwat makes clear in his contribution to this series: “what matters is the distinction between attempts to convince and attempts to coerce.” Or put differently, between a request and a demand. And a demand, even one from the government, means nothing under the First Amendment when there is no power behind it to follow through. When the threat does have official, exercisable power and regulatory authority behind it, then the threat of punishment for speech is real, and to quote Dart again, “the causality [between the threat and compliance] is obvious.” But where real official power behind the threat is lacking, all that is left is a claim that the government violates the First Amendment when it jawbones—simply because it is the government.

But the Fifth Circuit, even while narrowing the broad injunction entered by the district court in the case, took the bait. After applying the “significant encouragement” test to the government discussions at issue, the court enjoined the White House and the CDC not just from “coercing social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech,” but also from “significantly encouraging” the platforms to do so. And with respect to the FBI, even though the plaintiffs did not allege that any threat was made, the court found that a request from law enforcement might be “inherently” coercive. But in fact, the requests that the Fifth Circuit found to be coercive were barely threatening at all. It should have been difficult to find that the White House coerced platforms to take down speech when, as TechDirt has shown, the majority of its takedown requests were ignored. Any real or imagined threat of distributor liability would be, by definition, ineffective, given the platforms’ Section 230 immunity for distributing user speech. And as Professor Bhagwat also points out, the White House statements that the plaintiffs claim were veiled threats to revoke Section 230 itself—a threat that the Fifth Circuit said would be inherently coercive, even if it was “unspoken” (to repeat, the governing theory here is that any government request is an implied threat)—were even more hollow, since no single congressperson, let alone one bureaucrat in the executive branch, can change the law.

This all demonstrates why a “significant encouragement”-based test, or even a coercion test that relies on a presumption that government requests are inherently coercive, simply cannot be a part of applicable First Amendment doctrine. The 1982 case Blum v. Yaretsky, from which the “significant encouragement” test for state action comes, did not deal with the First Amendment or government speech rights—no expressive interests by either the encourager or the encouraged were at stake. It is hardly out of bounds for commentaries on jawboning to consider a “significant encouragement” test as a possible decision rule, since courts, including the Fifth Circuit in Missouri v. Biden itself, have applied it in that context. But the current debate fails to adequately recognize that the continued application of the “significant encouragement” test to jawboning problems would have serious negative consequences for the First Amendment.

First, the claim that government’s exercise of its own speech rights is inherently coercive, or that private content moderation by distributors becomes unconstitutional when platforms are “significantly encouraged” by the government to moderate, does serious damage to the government’s own speech rights. Speakers need to know where their government stands on contested matters. Finding that the government coerces every time it expresses a view on whether private speech should be distributed—even where the government concedes that the private speech is protected by the First Amendment, as is the case in Missouri v. Biden—would result in less government speech. The government has as much of a right to engage in counterspeech as private citizens, and we want to know when it does. If President Joe Biden believes Facebook is killing people, or if Dr. Anthony Fauci and Dr. Francis Collins believe the herd immunity-based theories advocated in the Great Barrington Declaration are fringe or dangerous, the relevant doctrinal rule should not put potential First Amendment liability in the way of their saying so, including to the distributors of that speech. These are official positions that the public may want and need to know.

And more importantly, many of these conversations between government and distributors were undertaken in the interest of saving lives. One of the claims that the Fifth Circuit allowed to go forward involved communications between the platforms and the Surgeon General’s office and the CDC about COVID-19 disinformation. In the words of the Fifth Circuit, and in the light most favorable to the plaintiffs, the “CDC’s guidance informed, if not directly affected, the platforms’ moderation decisions,” and “the platforms sought answers from the officials as to whether certain controversial claims were ‘true or false.’” This was enough, according to the Fifth Circuit, to demonstrate “significant encouragement.” To find that it violates the First Amendment when speech distributors collaborate with government disease experts during a pandemic on whether the speech they distribute might be harmful to the public is—not to put too fine a point on it—frightening.

Second, if the inherently coercive theory of jawboning continues to be recognized, the extent of the remedy for speakers who have been deplatformed pursuant to government requests is unclear. In Missouri v. Biden, the district court deemed injunctive relief against the government to be the proper remedy, and the Fifth Circuit was right to narrow that relief. But is it so difficult to imagine a future court finding that the remedy should also include court-ordered “replatforming”—ordering the platform to restore the removed speech or speaker? The Fifth Circuit has already shown itself to be dismissive of the claim that platform moderation of user speech is protected by the First Amendment, and the Supreme Court has just agreed to consider that issue. The ultimate goal for the Missouri v. Biden plaintiffs and their allies is social media platform must-carry; whether they arrive there by getting laws like Texas's and Florida's passed or by obtaining injunctive relief in individual cases likely doesn’t matter.

Third, as I noted above, insufficient attention is being given to the fact that the current climate around jawboning is highly ideologically charged. We need not look far into the past to see examples not just of soft power, but of soft power combined with hard power, exercised against speech the government disliked. In these cases, the government threatened not the distributors but the speakers themselves:

  • In October 2017, President Trump said that NBC’s FCC licenses should be revoked for spreading “fake news” about his requests to increase the U.S. nuclear weapons stockpile. Similarly, in September 2018, he said that CNN’s licenses should be revoked as well (CNN does not in fact have any broadcast licenses). And as a candidate in 2023, Trump threatened media outlets again, this time saying that Comcast and NBC would be investigated for “Country Threatening Treason,” and would “pay a big price” upon his reelection—tying his threats against disfavored speakers to a direct exercise of government power.
  • In 2019, the Department of Defense declined to award Amazon a $10 billion cloud-computing contract, soon after President Trump made a number of statements about punishing the “Amazon Washington Post.” Amazon challenged the denial.
  • In 2020, Twitter labeled a tweet by President Trump, and shortly thereafter the White House issued an executive order seeking to have the FCC interpret Section 230 narrowly (whatever that was intended to mean), because Twitter’s decision to label his tweets “clearly reflects political bias.”

And more directly on the jawboning front:

  • During the 2020 protests against police brutality and racism, acting Secretary of Homeland Security Chad Wolf pressured Facebook, YouTube, and Twitter to take down posts encouraging the toppling of Confederate statues, claiming the platforms were being “misused to promote, incite, and coordinate criminal activity that threatens the security of all Americans.”
  • In March 2017, President Trump took credit for the NFL’s blacklisting of quarterback Colin Kaepernick, whom Trump had relentlessly attacked for kneeling during the National Anthem before NFL games while he was an active player.

Are these government statements and actions as much of a threat to free speech as anything the government and platforms are alleged to have done in the Missouri v. Biden litigation? Although in all but the Amazon and Colin Kaepernick cases there was no actual direct punishment for speech or coerced deplatforming, that does not seem to be a requirement under the Fifth Circuit’s theory of standing in the Missouri case, where the plaintiffs had standing because they “continued to face the very real and imminent threat of government censorship.”

Additionally, the larger story being told by the Missouri plaintiffs and their supporters about the current state of free speech, or the courts’ concerns about a public-private Orwellian operation focused on rooting out and censoring conservative speech, simply do not ring true. The notion that speech against vaccine mandates or the efficacy of masking, about the lab-leak theory, or even about Hunter Biden’s laptop is underdiscussed is farcical. Facebook removed lab-leak claims on the advice of the World Health Organization for a grand total of about four months, and as Yoel Roth (who was there and would know) notes, the lab-leak theory was not removed or demoted by Twitter at all. The terms “Hunter Biden” and “laptop” have already appeared together more than 60 times in the Congressional Record, with many dozens more surely to come. And the Great Barrington Declaration has almost a million signatures. Being initially “suppressed” by social media platforms was likely an important factor in the Declaration’s having achieved that reach. These claims of censorship are part of the plaintiffs’ and their allies’ larger project; by painting these viewpoints as suppressed, those advocating them can point to that suppression to make a greater claim to truth. Everyone who disagrees with the official story and is punished for doing so is a potential Galileo, to be proven both heroic and right in the longer span of time.

There are important First Amendment issues here worthy of examination. As should be clear by now, my view is that the examination should focus on a strict definition of government coercion, one that requires a close connection between official threats and private action and that examines the government’s ability to carry through on the threat. Given that the focus should be on threats that truly coerce, another underexamined issue in this area is the appropriate level of causation (i.e., what a claimant should have to show with respect to whether the threat actually caused the private action). Arguing instead about how government’s “significant encouragement” of a distributor to take down user speech can violate the speech clause is a diversion that is unsupported by the First Amendment. But the most important way to understand the Missouri v. Biden litigation is as an attempt to use the courts to open more fronts in two related wars: the politicization of content moderation, and the populist war on government-produced expertise.