On October 20, 2023, the Knight Institute will host a closed convening to explore the question of jawboning: informal government efforts to persuade, cajole, or strong-arm private platforms to change their content-moderation policies. Participants in that workshop have written short notes to outline their thinking on this complex topic, which the Knight Institute is publishing in the weeks leading up to the convening. This blog post is part of that series.

***

When does government influence over speech edge into coercion? This question is at the core of Missouri v. Biden, a case steadily ascending toward the U.S. Supreme Court. The plaintiffs allege that government communications to platforms about proactive and reactive steps to suppress misinformation affecting public health and electoral processes violated the First Amendment: that they were, in effect, unconstitutional acts of jawboning, the improper use of government authority to coerce intermediaries to limit certain kinds of speech. The case took on a dramatic flair in July 2023, when a federal judge issued an injunction effectively barring most of the federal government from communicating with social media companies. Two months later, the Fifth Circuit Court of Appeals significantly narrowed that ruling, staying it for further review.

What happens next is anyone’s guess. Rather than dwell here on legal questions that other authors in this collection are better equipped to answer, this essay explores the political tightrope platforms must walk between demands for greater online safeguards against misinformation and propaganda and allegations of censorship from parties aggrieved by those same safeguards. Observations from elsewhere in the world suggest the latter camp poses the graver threat to contemporary democracy. Moreover, platforms have no obvious means of escape: The only way out of this political thicket is through. Until the rise of radical populism and its attendant threats to democracy are addressed, platforms will continue to face these kinds of demands.

It is ironic that Missouri v. Biden has become the focal point for concerns over jawboning, because it is not the first, best, or most successful example of U.S. government pressure on platforms over content moderation. To claim otherwise would be to ignore years of complaints by conservative figures that platforms are staffed by biased, left-wing professionals who suppress conservative speech. These critiques persist despite their misalignment with the facts: Research shows that conservative content often outperforms other content in algorithmic amplification and user engagement. Still, the political risk from this line of argument meant that in 2020, platforms needed to demonstrate serious steps to promote public health during a global pandemic and to protect election integrity without feeding into conservative grievances. In its investigation into social media policies during the 2020 election, the Select Committee to Investigate the January 6th Attack on the United States Capitol found that fear of conservative backlash constrained corporate leaders in their responses to anti-vaccine conspiracies, violent incitement, hate speech, fact-checking of conservative politicians and media, and, most infamously, the treatment of former President Donald Trump’s accounts. (Full disclosure: I was an analyst working on that part of the Committee’s investigation.)

This is not to say platforms took no action: Trump’s posts were labeled several times, perhaps most famously his May 2020 post regarding the protests after George Floyd’s murder. But internal company communications show the degree of executive hand-wringing as leaders searched for excuses and exceptions to avoid harder measures, and Trump retaliated against such labeling with an “Executive Order on Preventing Online Censorship” that directed a review of liability protections for social media platforms, a clear case of government coercion.

This tension between conservative grievance and lack of evidence continues today. In an essay for this collection, former Twitter (now “X”) Head of Site Integrity Yoel Roth describes how rulings in Missouri v. Biden have rested on inaccurate portrayals of his interactions with the federal government. Similarly, Stanford University filed an amicus brief in the case asserting that the initial July injunction attributed to researchers statements they never made. Researchers at the University of Washington who work in partnership with Stanford have also publicly disputed prominent conservative portrayals of their work.

The sum of this is that today, government pressure on platforms to carry or amplify speech they would otherwise remove or demote is at least as great a threat to the public square as pressure to suppress it. This long-running effort is also a form of jawboning, and controversial laws on social media content moderation in Florida and Texas and congressional hearings over unsubstantiated allegations based on the “Twitter Files” are extensions of it. Censorship and propaganda are complementary: Political actors use both to control the public square, and rarely does one exist without the other. This logic is what allows conservatives to decry content moderation and cancel culture even as they pull books from library shelves, or burn them. The two positions are connected, not contradictory, and together they represent a far greater threat to public discourse than anything in the Missouri v. Biden case. This is evident from the consequences platforms have already faced in practice: Executives involved in the decision to remove Trump from Facebook, X, and YouTube would probably agree, at least in private, that they paid a higher price for that decision than for any failure to suppress misinformation in 2020 (especially at X, where many of them were fired after Elon Musk purchased the company). Tellingly, Trump is allowed back on all three platforms today.

Other countries demonstrate the growing danger this trend represents to free expression worldwide, as well as the bind faced by social media companies. Consider that Indian law now requires platforms to appoint in-country “grievance officers” who can be held criminally liable for failure to comply with government takedown requests; X sued the Indian government over the law, and lost. Another rule requires platforms to follow the decisions of government fact-checkers. In Uganda, the government banned Facebook in retaliation for the company’s removal of fake accounts run by the ruling party during an election.

In the past, social media’s economic importance and the relatively small number of major platforms gave companies more leverage, but today the state’s vise grip is harder to escape. Coercing platforms was once the sole province of economic and demographic giants like India, China, Brazil, and the U.S.; today even governments of smaller nations are emboldened to try. Consider that when Meta’s Oversight Board recommended that the company suspend Cambodian Prime Minister Hun Sen’s account for six months over incitement to violence against his political opponents, the Prime Minister threatened to ban the platform and move to the less-moderated Telegram.

Where does this leave platforms? It is not reasonable to expect them to subject their employees to prison time for routine trust and safety decisions (as I’m sure the employees themselves would agree). They can pull out of markets where political risk is rising; this solves the immediate conundrum but entails a financial hit and does little to improve access to information. Furthermore, it may not be sustainable. How many markets are too many to leave? If the situation in the U.S. continues to decline, what would pulling out of the U.S. (or several individual states) even look like for companies staffed primarily by Americans and headquartered in California? Perhaps enough shows of platform resolve over time will dissuade governments from hardball tactics, but it is a gamble.

The truth is that there is no easy logistical or technological fix to the political problem of jawboning. To their chagrin, platforms remain enmeshed in political disputes around the world. Some tech executives hope to eschew politics in the future; when Instagram Head Adam Mosseri launched a new X competitor called Threads, he said it would “not do anything to encourage” politics and “hard news.” Executives also see greater user control of content moderation as a path out; speaking on the occasion of Threads’ launch, Meta Global Affairs President Nick Clegg said that the company hopes to give users more control over what posts they see. But no consequential forum for public communication can avoid politics entirely (just ask TikTok). As for user control, Threads’ commitment to decentralization is interesting, but it ultimately pushes responsibility for social media’s negative social impact onto average users who will probably never use such tools to fine-tune their feeds. As such, it can be read as capitulation to demands that the company curtail content moderation, not an evolutionary step forward.

To date, the judiciary has been the strongest defense against political efforts to control speech online. But prudent courts are a levee that can break under pressure; informal efforts to encourage platform self-censorship may ultimately succeed through sheer persistence. In a practical sense, this is true because platforms have every incentive to make political problems go away quietly. But over the long term, the independence of the judiciary relies on the health of the rest of our democratic system. Wise judges may gradually be replaced, through election and appointment, with figures more sympathetic to conservative demands. Decades of judicial capture and high-stakes fights over control of the courts make this easy to imagine. Can anyone name a country where democracy has withered without the courts eventually succumbing to the same fate?

The situation brings to mind George Orwell’s 1945 essay, “Freedom of the Park,” in which he wrote that “the relative freedom which we enjoy depends on public opinion. The law is no protection.” In the long term, social deliberation over content moderation, and over the rules and norms governing online speech, is likely to be the ultimate determinant of contests over content.

To this end, process improvements could marginally improve the situation. One common-sense suggestion speaks directly to the concerns in Missouri v. Biden: Governments and platforms should be more transparent about their conversations concerning content moderation. It is important to drag instances of informal government coercion, to which platforms have every interest in capitulating quietly, into broad daylight for public scrutiny. This would, at minimum, provide journalists, academics, lawyers, and other audiences with evidence that these exchanges are largely anodyne and permissible. Clearer rules about how the government should communicate could help ensure those exchanges stay permissible.

Over the longer term, mechanisms for greater public input into the rules and norms of online speech seem essential for providing some sorely needed legitimacy to the exercise of content moderation. More democratic deliberation, not just about individual decisions but over the rules and practices guiding them, could help gradually build consensus around those norms. One example is the Oversight Board, which, while rightly criticized as too slow and too limited in its purview, provides precedent for future experiments. Others have suggested that Reddit’s approach to content moderation, which sets a policy “floor” and then allows discrete communities to self-moderate above it, could provide a better governance model for a potentially decentralized future.

While worthwhile, procedural improvements like these are unlikely to satiate aggrieved ideologues convinced that platforms are out to get them, nor will they stay the ambitions of populists and autocrats. Will anything? In the long run, finding constitutionally permissible means of reducing the social, cultural, and political influence of bad-faith partisans and conflict entrepreneurs is the best way to safeguard democracy and the rule of law. Everything else is merely buying time.