On October 20, 2023, the Knight Institute will host a closed convening to explore the question of jawboning: informal government efforts to persuade, cajole, or strong-arm private platforms to change their content-moderation policies. Participants in that workshop have written short notes to outline their thinking on this complex topic, which the Knight Institute is publishing in the weeks leading up to the convening. This blog post is part of that series.

***

Over the last few months, I’ve had the somewhat surreal experience of learning that my decisions are not my own.

Specifically, I’ve been repeatedly told—by the authors of the Twitter Files; by members of Congress; and by Judge Terry A. Doughty of the United States District Court for the Western District of Louisiana—that the policies I enacted and enforcement decisions I carried out in my role as Twitter’s head of site integrity during the 2020 election and COVID-19 pandemic were the product of an elaborate, multi-year pressure campaign, meant to coerce me (and my counterparts at other technology platforms) to censor content in a manner favorable to the government’s interests.

This comes as some surprise to me, having been there. But, as the Fifth Circuit points out in its decision in Missouri v. Biden, demonstrating unconstitutional jawboning does not require “that the recipient admit that it bowed to government pressure.” In essence, the argument here is that my interactions, and those of other platform staffers, with representatives of the government could have been coercive whether or not we were aware of the coercion happening at the time (or even necessarily saw it as coercion at all). Perhaps I was so caught up in the frenzy of “Russiagate” and the COVID-19 pandemic that I failed to notice just how readily I acquiesced to the government’s demands.

Much has been, and will be, written about the doctrinal conclusions (or ambiguities) stemming from this case; I worry, though, that the factual foundation of those conclusions is flawed. The relationship between technology platforms and the government is fraught and complex; jurisprudence that hopes to draw lines of appropriate conduct must be based on an accurate accounting of what actually happened—an accounting that is, at least in part, missing from the Fifth Circuit’s ruling in Missouri v. Biden.

I’m focused, for the purposes of this article, on the portion of the Fifth Circuit ruling concerning the FBI primarily because I have the most direct knowledge of Twitter’s interactions with the FBI from 2018 onwards. I hope others who are similarly situated with regard to interactions with the White House, CDC, and other parts of the executive branch will also come forward to share their experiences and recollections in service of informing—among other things—the Supreme Court’s potential upcoming review.

I.

Let me state my operating assumptions and analytic biases upfront: 

  • Russia interfered in the 2016 U.S. elections, and that interference was consequential. This shouldn’t be controversial to say, but a key premise of the pushback on government interactions with tech platforms is a recurring argument that the harms of foreign interference are largely illusory, and don’t warrant the broad administrative overreach that has spun up as a reaction. The truth is more complex: Specific elements of Russia’s active measures—like their use of fake accounts on social media to spread divisive messages and idiotic memes—may not have measurably changed any voters’ minds, but the best available social science research indicates that the hack-and-leak campaign carried out by Russian hacking groups had an effect. I say this not to relitigate the election results, but to point out that it wasn’t completely unreasonable for Twitter, Facebook, and the rest of the social media industry to enact aggressive election security measures post-2016. Millions of Americans were deeply outraged at revelations of Russian interference, and their outrage directly impacted the corporate bottom lines of Silicon Valley firms. Setting aside that it was the right thing to do for democracy, we invested in election security because we had a fiduciary duty to do so.
  • Effectively combating malign foreign interference in elections, as well as other collective security challenges, requires some degree of collaboration and information-sharing between elements of the government and private sector. When the U.S. Intelligence Community published its assessment of Russian activity during the 2016 election in January 2017, part of the reason the report landed with such a thud was that it came well past the point when the platforms targeted by Russia’s activities could have actually done anything about it. Real-time information sharing from the government to platforms addresses that gap, and can help empower platforms to disrupt the operations of malign entities, ideally before their activities have an impact on public discourse.
  • Some of the decisions platforms made in service of promoting election security and public health went too far, and resulted in notable content moderation failures. I testified to the House Oversight Committee that Twitter’s decision to restrict the New York Post’s coverage of Hunter Biden’s laptop was a mistake; I thought it was a mistake at the time, and continue to think so. Much hay has similarly been made of Facebook’s decision to restrict the so-called “lab leak” theory of COVID-19’s origins—so much hay, in fact, that the preliminary injunction in Missouri v. Biden incorrectly asserted that Twitter censored discussions of the theory as well. (We didn’t.) These moderation failures matter, and in service of building a safer, more trustworthy internet, we should diagnose why these failures happened. But the fact that platforms failed in their moderation efforts doesn’t require platforms to have erred because the government pressured them to do so; content moderation at scale fails all on its own, all the time.
  • Finally, and most importantly, I agree with the basic premise of Missouri v. Biden and related cases: Government jawboning of social media companies is a clear and present problem, and we should take seriously the task of understanding and preventing it. The “Facebook Files” shared as part of Rep. Jordan’s amicus brief show the coercive effects of the executive branch throwing elbows at a tech company. Apart from the First Amendment concerns stemming from this conduct, I also worry about this coercion because it undermines the openness and diversity of the internet as we know it. Evelyn Douek has written critically about the cartel-like behavior of social media companies; jawboning only exacerbates these tendencies, driving platforms to enforce not their own values, principles, and vision for the internet, but rather a homogeneous one held by outside parties. Put plainly: I think it’s a good thing that Facebook chooses to block some types of content that Twitter doesn’t, and vice versa; consumers should have meaningfully distinct choices when selecting what social media to use. That diversity requires companies to be able to exercise their independent judgment about content moderation policies and actions.

II.

The Fifth Circuit’s analysis of the FBI’s conduct rests on three core arguments:

  1. In its interactions with platforms, the FBI not only sought to understand platform policies, but acted to influence and shape them as well.
  2. The FBI’s interactions with platforms exceeded merely sharing “strategic information” with platforms, and crossed the line into coercive “requests.”
  3. The stated scope of the FBI’s efforts—combating foreign interference in American elections—crossed into the targeting of “domestically sourced ‘disinformation’” as well, significantly broadening the aperture of platform engagements with law enforcement.

The Fifth Circuit concludes that, taken together, these factors meet the standard of a “close nexus” rendering the government responsible for the actions of a private actor, and that the FBI “likely (1) coerced the platforms into moderating content, and (2) encouraged them to do so by effecting changes to their moderation policies, both in violation of the First Amendment.”

I leave the parsing of exactly where to draw the line between coercion and free-willed agreement to the constitutional lawyers—though, judging by Jameel Jaffer’s comments that the decision is a doctrinal “dog’s breakfast,” early views seem to be that the distinctions here are blurrier than ever. My concerns, however, are with the evidence backing the court’s analysis in this case.

The FBI acted to shape platform policy

The Fifth Circuit asserts that the FBI’s interactions with platforms went beyond merely understanding what platform policies were, and extended to proactively shaping them:

Per their operations, the FBI monitored the platforms’ moderation policies, and asked for detailed assessments during their regular meetings. The platforms apparently changed their moderation policies in response to the FBI’s debriefs. For example, some platforms changed their “terms of service” to be able to tackle content that was tied to hacking operations.

The claim here, if true, is a dramatic one: that platforms adopted novel content moderation policies concerning the distribution of hacked materials on the basis of the FBI “entangling themselves in the platforms’ decision-making processes.”

This narrative is not supported by the timeline of when platforms, including Twitter, rolled out their policies concerning the distribution of hacked materials.

The first organized meeting between the tech platforms and law enforcement took place on May 23, 2018, at Facebook’s headquarters in Menlo Park. During the meeting, representatives of platforms spoke only briefly, describing at a high level the measures we were implementing to protect the upcoming midterm elections from Russian meddling. In the afternoon, representatives of the platforms and government went into separate rooms, discussing strategies for sharing information more effectively; the challenges of overclassification of relevant information emerged as a repeated theme. The day concluded with a happy hour held on the roof of Facebook’s offices; a Facebook staffer proudly told me about the family of foxes living on the building’s green roof—a fact that prompted me to speculate that Facebook had spent more money landscaping the roof of their office than Twitter had made in profit over its entire existence as a company. To the best of my recollection, hack-and-leak campaigns were never mentioned by anyone in attendance.

Even if the topic of hack-and-leaks had been discussed in the May 2018 meeting, the timeline presented in Missouri v. Biden doesn’t add up. Judge Doughty’s original opinion asserts that “platforms updated their policies in 2020 to provide that posting ‘hacked materials’ would violate their policies”; this is, on its face, inaccurate. Facebook’s policy prohibiting posts including “content claimed or confirmed to come from a hacked source” was in place at least as early as April 2018, if not prior. Twitter introduced similar policy language regarding the “distribution of hacked materials” in an update to its rules on October 1, 2018, alongside a host of other election-related changes that had been in the works for months.

Platforms were robustly aware of the risk of hack-and-leak campaigns following the events of 2016—and had policies and procedures for managing them in place well ahead of discussions of the subject with representatives of the government. Claims in Missouri v. Biden that meetings with the government coerced platforms to change their rules don’t withstand scrutiny.

The FBI went beyond strategic information sharing and made direct moderation demands

In the Fifth Circuit’s analysis, the line between permissible information sharing and threat alerting, on the one hand, and coercive moderation demands, on the other, is a porous one. If representatives of the government share indicators of compromise (IOCs) with private sector entities, is a coercive demand to take some kind of action implicit in that sharing? The Fifth Circuit takes this to be the case:

… we do find the FBI’s requests came with the backing of clear authority over the platforms. After all, content moderation requests “might be inherently coercive if sent by . . . [a] law enforcement officer.”

… although the FBI’s communications did not plainly reference adverse consequences, an actor need not express a threat aloud so long as, given the circumstances, the message intimates that some form of punishment will follow noncompliance.

The implications of this argument are striking. The Fifth Circuit appears to hold that any information-sharing from the FBI to platforms is de facto coercive, simply by virtue of its source. Even in situations where platforms actively solicit information from the government—as was regularly the case on election security matters between platforms and the FBI—the resulting communications are taken to be coercive because of the FBI’s standing as a law enforcement entity.

The substance of platforms’ interactions with the FBI paints a somewhat different picture. Drawing on a deposition of FBI Special Agent Elvis Chan, Judge Doughty’s original opinion in Missouri v. Biden offers a broad overview of how “industry working group” meetings typically played out:

At the USG-Industry (“the Industry”) meetings, social-media companies shared disinformation content, providing a strategic overview of the type of disinformation they were seeing. The FBI would then provide strategic, unclassified overviews of things they were seeing from Russian actors.

This description dovetails closely with my own recollections, and with the account of these meetings I provided in a December 2020 declaration to the Federal Election Commission. Across both bilateral (company-FBI) and multilateral (multiple companies and multiple government agencies) meetings, the various public and private sector stakeholders involved in securing American elections voluntarily briefed each other on their efforts, and as appropriate (under privacy regulations and classification regimes) shared information about their findings and concerns. Even in the evidence selectively presented in the Missouri v. Biden opinions and amici, it’s hard to find any signs of coercion—or conduct even beginning to approach coercion—in the government’s no-strings-attached sharing of largely abstract threat information with platforms. While it’s certainly possible that my experience was anomalous and that such coercive conduct took place between the FBI and platforms other than Twitter, none of the conversations I had with any of my peers at any point from 2018 onwards suggested that this was even remotely the case. 

The manner of the government’s sharing of information is likewise meaningful. The Twitter Files report that, between January 2020 and November 2022, I exchanged over 150 emails with representatives of the FBI. In one representative email, the FBI writes:

FBI San Francisco is notifying you of the below accounts which may potentially constitute violations of Twitter’s Terms of Service for any action or inaction deemed appropriate within Twitter policy.

In several others, the FBI passes lists of accounts that they “believe are violating your terms of service” or “may be subject any actions [sic] deemed appropriate by Twitter.” The FBI fastidiously—and I would argue conspicuously, in the evidence presented—avoids both assertions that they’ve found platform policy violations, and requests that Twitter do anything other than assess the reported content under the platform’s applicable policies.

Receiving and acting on external reports is a core function of platform content moderation teams, and the essential nature of this work is an independent evaluation of reported content under the platform’s own policies. The fact, cited in Missouri v. Biden, that platforms only acted on approximately half of reports from the FBI shows clearly that the standards platforms applied were not wholly, or even mostly, the government’s.
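For readers less familiar with how that intake process works, here is a minimal, hypothetical sketch of the flow described above: external reports, whoever they come from, evaluated only against the platform’s own rules. Every name, policy, and report below is invented for illustration; nothing here reflects any real platform or government system.

```python
# A hypothetical sketch only: external reports, regardless of reporter,
# are evaluated solely against the platform's own policies.
from dataclasses import dataclass


@dataclass
class Report:
    reporter: str  # "user", "FBI", "researcher", ... — recorded, but not decisive
    account: str
    reason: str


# The platform's own, independently determined policies (invented for illustration).
PLATFORM_POLICIES = {
    "platform_manipulation": lambda account: account.endswith("_botnet"),
    "voter_suppression": lambda account: account.startswith("textyourvote"),
}


def evaluate(report: Report) -> str:
    """The outcome turns only on whether the platform's rules are violated."""
    for policy_name, violates in PLATFORM_POLICIES.items():
        if violates(report.account):
            return f"actioned under {policy_name}"
    return "no violation found; no action taken"


reports = [
    Report(reporter="FBI", account="spam_botnet", reason="possible ToS violation"),
    Report(reporter="FBI", account="ordinary_user", reason="possible ToS violation"),
]

for r in reports:
    print(f"{r.account} (reported by {r.reporter}): {evaluate(r)}")
```

The point of the sketch is simply that the reporter’s identity is metadata, not a verdict: the same evaluation runs whether a report arrives from a user, a researcher, or a law enforcement agency.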

Finally, it does not withstand factual scrutiny that platforms were so petrified of adverse consequences from the FBI that they uncritically accepted and acted on information sent to them by the government. The Twitter Files themselves document clearly at least two instances in which, presented with low-quality information or questionable demands, Twitter pushed back on the FBI’s requests. In one case, the FBI passes on a request—seemingly from the NSA—that Twitter “revis[e] its terms of service” to allow an open-source intelligence vendor to collect data from the Twitter APIs to inform the NSA’s activities. This request is arguably as close to jawboning as any interaction between Twitter and the FBI gets; yet, in response, I summarily dismissed not only the request for a meeting to discuss the topic, but the entire premise of the request, writing, “The best path for NSA, or any part of government, to request information about Twitter users or their content is in accordance with valid legal process.” The question was not raised again.

This friction, in fact, was par for the course in interactions between platforms and law enforcement. Even as both sides made efforts to improve working dynamics, the relationships between tech companies and the government—and especially, the FBI—remained frosty through much of the 2020 election cycle. One industry working group meeting, hosted by Facebook on March 18, 2019, was nearly derailed by a confrontation between a Twitter lawyer and a representative of the Office of the Director of National Intelligence over the need for subpoenas for user data. While these interactions ultimately yielded meaningful positive results—the disruption of Russian perception hacking efforts just prior to election day in 2018, and rapid interventions against an Iranian government-backed voter intimidation campaign in 2020—the relationship between platforms and the government, at least in Twitter’s case, remained arms-length and mutually wary.

The FBI exceeded its mandate to address foreign disinformation campaigns and monitored domestic speech

In the aftermath of the 2016 elections, platforms and government officials were primarily focused on the risk of foreign disinformation campaigns—those originating in Russia, Iran, China, and elsewhere. But, over the course of the 2018 midterms, both platforms and the federal government realized that domestic speech represented a potentially serious factor in election integrity efforts. In Twitter’s retrospective review of the 2018 midterms, we observed that the most prominent types of harmful content we identified and removed were domestic attempts at voter suppression—not foreign disinformation campaigns. Platforms identified sustained domestic campaigns—building on the meme warfare tactics of 2016—to disseminate misleading content ahead of key election milestones, including encouragement for people to vote by text message. Other efforts advanced misleading claims that Immigration and Customs Enforcement officers would be patrolling polling places on election day—a clear voter suppression narrative.

Platforms chose to draw a hard line on this type of content, ruling that even memes posted in jest could not cross into the territory of misleading voters about when or how to vote, or the conditions they could expect at polling places. This viewpoint was not uncontroversial, and Twitter’s decisions to enforce this policy faced a range of challenges, including in court in Germany. But, federal prosecutors pointed out the very real harms caused by even joking campaigns: The 2016 campaign by notorious Twitter troll Douglass Mackey (better known as “Ricky Vaughn”) is alleged to have persuaded more than 4,900 people to attempt to vote by text message using misleading memes on Twitter.

The Justice Department’s interest in these kinds of cases was not new, or limited to social media activity. The definition of federal election crimes has long included civil rights violations—including schemes to prevent minorities from voting, and attempts to prevent qualified voters from participating in elections. Following the 2016 election and prominent attempts to weaponize social media to spread overtly voter suppressive content, the DOJ’s attention turned to monitoring platforms for such behavior in the future. The Twitter Files argue that a focus by the FBI’s Foreign Influence Task Force (FITF) on domestic content was a bizarre over-extension of its mandate to focus, as its name suggests, on foreign influence; but platform engagements with the FBI were never strictly limited to foreign activity (nor to engagements with the FITF), and included engagements covering domestic illegal activities—of which voter suppression was a notable example in the 2018 and 2020 elections.

Despite the DOJ’s long-running interest in these issues, they represented a relatively small proportion of overall requests sent to Twitter by the FBI. And, in spite of the fact that the content in question was potentially illegal under federal election law, the FBI never demanded—or even requested—that Twitter remove it; all reports were phrased as requests for platforms to assess the content against our own, independently determined policies.

As I testified to the House Oversight Committee, it is my view that time spent monitoring social media platforms for low-circulation voter suppression memes is perhaps not the best use of Department of Justice resources. My team at Twitter found the FBI’s escalations of foreign influence activity substantially more valuable and actionable than their flags of policy-violating voter intimidation schemes. Nevertheless, given the DOJ’s scope of responsibilities concerning election crimes, coupled with the overt policies regarding voting misinformation established by virtually every platform following the 2016 elections, I struggle to see these engagements as far outside the realm of reasonable conduct by the FBI and other government actors. The question of whether the FBI is wasting its time is meaningfully distinct from whether its activities run contrary to the First Amendment.

III.

The nontrivial matter of the decision’s factual foundations aside, I do share the overall concern raised in Missouri v. Biden that tech platforms as they exist today are ill-equipped to manage the influence of government actors on their content moderation actions. Despite these concerns, I also believe that platforms and government must, in some fashion, work together to address clear and present threats to the security of elections in the United States. Outright prohibitions on executive branch interactions with platforms are heavy-handed solutions that, in my view, do more harm than good. What are alternative approaches that could leave open the door to collaboration while making platforms more resilient to outside pressure?

I offer three possible solutions:

Separate government relations and trust and safety organizations

The decisions tech companies make are informed by the structure of the teams that make them—and the present organization of most platforms leaves significant room for conflicts of interest to emerge within the teams responsible for arbitrating terms of service violations. At Meta, for example, content moderation decisions fall under the purview of the company’s public policy team, led by executive Joel Kaplan.

The tensions here are clear, and leave significant space for government jawboning to effectively influence platforms. Lobbying teams, like Meta’s public policy organization, are responsible for maintaining positive relationships with government stakeholders to advance company interests; they are not incentivized to take actions that could upset those relationships. Trust and safety teams, in contrast, are typically incentivized to consistently apply their policies as written. If and when these goals come into conflict with each other—as inevitably will happen when platforms have to moderate the speech of political figures—the key question is: whose priorities are more important? For platforms that subordinate trust and safety to public policy, appeasing elected officials may win out in moments of ambiguity.

An alternate model is possible. At Twitter, we maintained a strict separation between the teams responsible for lobbying and government relations and the teams responsible for direct content moderation activities. During my hearing before the House Oversight Committee, when I was asked by Rep. Byron Donalds if anyone at Twitter had contact with the DNC or the Biden campaign team, I was able to honestly answer that I didn’t know, and that those functions were intentionally kept separate from teams like mine that enforced the site’s rules. The Twitter Files paint a similar picture: While members of Twitter’s public policy and legal teams are shown receiving a wide range of reports, their actions in every case are to funnel those reports into operational processes that result in independent review and evaluation.

There are, of course, unavoidable limits to this separation of powers within any corporate environment. Ultimate responsibility for corporate decisions rests with the CEO, and their decisionmaking could well be shaped by jawboning even if the organizations they supervise maintain clear distinctions between the teams responsible for lobbying and those responsible for content moderation. But, as a matter of day-to-day operations, I believe companies would be well-served by establishing and maintaining firm boundaries, and ensuring their moderation teams are charged only with the task of enforcing the company’s rules as written.

Separate speech and non-speech engagements

Arguably, the types of government/private sector collaboration that have proven to be most productive are shared efforts to detect and disrupt platform manipulation and other forms of malign influence in American politics. The government has access to unique information that can help platforms take action against these campaigns—and in turn, platforms have the operational capacity to actually disrupt what troll farms, hacking groups, and coordinated mobs are doing in real time. Remedies to jawboning should take care not to throw out the baby with the bathwater with regard to these interactions.

The original injunction granted by Judge Doughty contemplates these tradeoffs, but offers only muddled recommendations. The injunction explicitly does not prohibit “informing social media companies of postings involving criminal activity or criminal conspiracies” or “contacting and/or notifying social media companies about criminal efforts to suppress voting, to provide illegal campaign contributions, of cyber-attacks against election infrastructure, or foreign attempts to influence elections”—in short, virtually all of the efforts the FBI and other parts of the federal government had been engaged in with platforms from 2018 onwards. Where should the line here properly be drawn? Neither the original ruling nor the Fifth Circuit’s judgment offer a clear perspective.

A more usable framework for parsing government/platform interactions could draw on the “actors, behavior, content” taxonomy developed by Camille François. In place of focusing on permitting or disallowing discussion of the outcomes of malign foreign influence campaigns—undoubtedly, a nebulous category—courts could focus instead on the vectors by which influence campaigns are carried out, and constrain the types of information that the government is permitted to discuss with platforms. As François writes:

While the public debate in the U.S. has been largely concerned with actors (who is a Russian troll online?), the technology industry has invested in better regulating behavior (which accounts engage in coordinated and inauthentic behavior?) while governments have been most preoccupied with content (what is acceptable to post on social media?).

A relatively coherent, First Amendment-protective approach could permit discussion of the actors and behaviors involved in foreign influence, while prohibiting information-sharing about the content of messages. Such an approach is imperfect; as François notes, viral deception plays out across actors, behaviors, and content, and it isn’t possible to fully characterize a malign campaign without touching on all three. But, in the interest of drawing clear lines while ensuring platforms retain access to essential technical indicators and threat intelligence about the activities of troll farms and government-backed hacking groups, such an approach could create a common vocabulary within which to ground permissible information-sharing and collaboration.
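To make the distinction concrete, here is a minimal, hypothetical sketch of how such an actors/behaviors/content gate might be expressed. The categories and examples are invented for illustration; this is not a description of any actual policy, agency practice, or platform system.

```python
# A hypothetical sketch of an "actors, behaviors, content" gate on
# government-to-platform information sharing. All names are illustrative.
from dataclasses import dataclass
from enum import Enum


class Vector(Enum):
    ACTOR = "actor"        # e.g., attribution of infrastructure to a troll farm
    BEHAVIOR = "behavior"  # e.g., indicators of coordinated, inauthentic activity
    CONTENT = "content"    # e.g., the substance of what an account posted


@dataclass
class SharedItem:
    description: str
    vector: Vector


def permissible(item: SharedItem) -> bool:
    """Permit actor- and behavior-level sharing; decline content-level sharing."""
    return item.vector in (Vector.ACTOR, Vector.BEHAVIOR)


items = [
    SharedItem("Infrastructure linked to a state-backed hacking group", Vector.ACTOR),
    SharedItem("Accounts posting in coordinated bursts from shared IPs", Vector.BEHAVIOR),
    SharedItem("A specific post an agency considers misleading", Vector.CONTENT),
]

for item in items:
    print(f"{item.description}: {'share' if permissible(item) else 'decline'}")
```

Under this toy rule, technical indicators about actors and coordinated behavior flow to platforms, while judgments about the acceptability of particular speech do not—the line the François taxonomy helps articulate.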

Adopt new mechanisms for radical transparency about platform/government engagement

Finally, both platforms and government entities can take meaningful steps to foster public trust in their interactions—trust that is sorely lacking on both sides. The perception of meetings between the FBI and tech platforms as shadowy and secretive drives this distrust. In the depiction of these discussions in the Twitter Files and various amici in Missouri v. Biden, the meetings are characterized as a multi-year, clandestine effort by the Intelligence Community to infiltrate social media content moderation practices at Twitter and other companies. These conspiratorial characterizations aren’t exactly true to the facts—Twitter, Meta, and other companies regularly put out joint statements following election security calls ahead of the 2020 elections—but it’s clear these high-level press releases haven’t gone far enough.

Radical transparency may be one way to rebuild this lost trust. I’ve argued before that platforms ought to create the position of a “public editor” for their content moderation work—establishing a role dedicated to building public awareness of how and why platforms enforce their rules, and the context and influences driving their decisions. An individual in such a role would be well-positioned to describe at an appropriate level of detail what platforms are talking about when they get together with the FBI and other parts of the government. Alternatively, the platforms and government stakeholders could appoint a rapporteur to serve as an independent observer of interactions during each election cycle, releasing a report about their observations after the fact; academic researchers working with Facebook adopted a similar approach to their collaboration to help ensure impartiality. Third-party conveners, like the information sharing and analysis centers (ISACs) common across the cybersecurity industry, could play a similar role.

Ultimately, whatever form this transparency takes, it’s imperative that both platforms and the government recognize that their interactions are broadly mistrusted—and that mistrust has very real, operational consequences. Whether or not the muddled precedents of the Fifth Circuit’s decision in Missouri v. Biden are allowed to stand, the success of future election security efforts requires both the government and technology companies to reassess the scope of the work they’re doing, and how that work is explained to an increasingly apprehensive public.