New York University
Law Review
The Law Review Forum

How Social Media Platforms Can Promote Compliance with the First Amendment

Sarah Ludington, Lauren Smith, Christian Bale

September 30, 2022

Sarah Ludington is a Clinical Professor of Law and the Director of the First Amendment Clinic at the Duke University School of Law. Lauren Smith is a lawyer in Washington, D.C., and a former member of the Duke First Amendment Clinic. Christian Bale is a lawyer in Wilmington, Delaware, and a part-time PhD student at the University of Oxford.


The ongoing Elon Musk-Twitter saga has reignited debate about the First Amendment’s application to social media. While Musk has signaled a preference to protect “free speech” on Twitter, stating that he opposes “censorship that goes far beyond the law,” critics have expressed concern that deregulation of the platform could transform it into a haven for “extremist views.” As a private entity, Twitter, like other social media companies, is not bound by the First Amendment and may adopt its own content moderation policies. The same is not true, however, for a subset of social media users. When public officials choose to engage on social media, the First Amendment imposes constraints on their online behavior.

Over the past several years, prospective clients have reached out to the Duke Law First Amendment Clinic with what is becoming a common complaint: a public official—typically a local government actor such as a mayor, county commissioner, or school board member—has censored their comments on the official’s social media page, or worse, has blocked them from participating on the page entirely. Such behavior by public officials effectively creates two classes of citizens—those who can communicate easily with public officials through social media and those who cannot.

This stratification of access is anathema to the First Amendment, which is premised on the notion that the free exchange of information, ideas, and opinions is crucial to our system of democratic self-governance. As the Supreme Court recognized in 2017, social media websites like Facebook and Twitter are, for many, “the principal sources for knowing current events” and “speaking and listening in the modern public square.” They “provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.” If social media websites have become a modern marketplace of ideas, and an essential medium of communication between the public and their public officials, then burdening access to this channel contravenes the principle that government entities and public officials must not discriminate against speech in public forums based on the content or viewpoint of the ideas expressed (more on this below).

To make matters worse, the number of news deserts in North Carolina—and nationwide—is increasing, meaning that fewer communities benefit from “credible and comprehensive news and information” about local issues. Across the country, nearly twenty-five percent of all newspapers that existed in 2004 have disappeared. The people who live in these news deserts rely almost entirely on social media to learn about the policies and activities of their local officials. Being blocked from this source of information impairs the ability of constituents to follow local politics and express their views on important issues, fundamentally undermining our democracy. Fewer news sources also mean fewer resources to hold municipal officials accountable for their actions, including the officials’ social media (mis)behavior.

Our clinic plays a significant role in pushing back against these and other First Amendment violations. When we engage with a local official who has—often unknowingly—violated a client’s First Amendment right to free speech, it presents an opportunity to improve constitutional literacy for all parties involved. However, the settings on popular social media platforms do not currently provide the tools that the clinic needs to effectively represent our clients and educate these officials. In this essay, we propose a number of changes that social media platforms can make to their settings that will help the public hold its officials accountable and also help well-intentioned officials comply with their constitutional obligations. It is far from an exhaustive list, but our hope is that the essay will start a dialogue that helps social media platforms better assist government officials, attorneys, clinics like ours, and ultimately the court system in considering and resolving First Amendment disputes online.

Content and Viewpoint Discrimination in Public Forums

Before discussing our proposals, it is important to understand where the social media pages of public officials fit in the First Amendment landscape. When evaluating the constitutionality of a government official’s conduct on social media, the place to start is the First Amendment’s public forum doctrine. This is the framework that courts use to determine if and how the government can regulate speech on government property, virtual or otherwise.

In Perry Education Ass’n v. Perry Local Educators’ Ass’n, 460 U.S. 37 (1983), the Supreme Court explained that there are three categories of government property. The first category, known as traditional public forums, are locations that have traditionally been devoted to assembly and debate, such as streets and parks. Nearly a half-century before the public forum doctrine gained a foothold in Supreme Court jurisprudence, Justice Owen J. Roberts articulated the classic rationale for open speech in these areas in Hague v. Committee for Industrial Organization, 307 U.S. 496 (1939):

Wherever the title of streets and parks may rest, they have immemorially been held in trust for the use of the public and, time out of mind, have been used for purposes of assembly, communicating thoughts between citizens, and discussing public questions. Such use of the streets and public places has, from ancient times, been a part of the privileges, immunities, rights, and liberties of citizens.

Traditional public forums enjoy the strongest First Amendment protections: the government may bar neither certain perspectives nor whole topics of conversation. At most, the government can impose time, place, and manner limitations (e.g., limitations designed to protect public safety or reasonable limits on sound amplification).

The list of traditional public forums is static. For instance, in International Society for Krishna Consciousness, Inc. v. Lee, 505 U.S. 672 (1992), the Court held that because airports are relatively new public spaces, they do not have traditional public forum status. In Chief Justice Rehnquist’s words, “Given the lateness with which the modern air terminal has made its appearance, it hardly qualifies for the description of having ‘immemorially . . . time out of mind’ been held in the public trust and used for the purposes of expressive activity.”

On the other end of the spectrum are nonpublic forums. This category covers property that lacks a free speech tradition and has not been designated by the government as a space for speech and expressive conduct. In these forums, like military bases or airport terminals, the government’s right to control speech is akin to that of an owner of private property. As Justice Byron White explained in Perry Education Ass’n:

In addition to time, place, and manner regulations, the state may reserve [a nonpublic forum] for its intended purposes, communicative or otherwise, as long as the regulation on speech is reasonable and not an effort to suppress expression merely because public officials oppose the speaker’s view.

Justice White added a significant caveat—the government’s decision to disallow speech cannot be a veiled attempt to suppress a particular speaker’s viewpoint. This practice, known as “viewpoint discrimination,” is considered the most pernicious speech restriction under the public forum doctrine and is always prohibited.

The last category, and the one most relevant to social media platforms, is that of “designated public forums.” These are forums that, while not traditionally and historically open to the public, have been designated as open to speech. These include, for example, the public comment segment of a meeting held by a county commission or school board. A government body can impose neutral limitations on speech in these forums, such as requiring the public to sign up in advance or limiting the amount of time during which they can speak. It can even impose some subject-matter limitations, such as by limiting topics of discussion to those listed on a meeting agenda.

But as in traditional public forums, subject-matter, or content-based, restrictions in designated public forums are suspect: restrictions must be “narrowly drawn to effectuate a compelling state interest.” And, as is true for all government property, viewpoint discrimination is strictly prohibited. In practice, this means that a speaker cannot be prevented from standing during the designated time for public comments and delivering blistering criticism of the government’s policies.

Emerging Guidance from the Federal Courts

As public officials increasingly turn to social media to communicate information and engage with the public, federal courts have begun to chart a course for how the public forum doctrine applies to the social media accounts of government officials. At this point, the case law is of recent vintage and there are varying analyses across the five circuits that have decided cases on appeal. The Fourth, Second, and Ninth Circuits have signaled receptivity to the idea that social media accounts function as public forums, while the Sixth and Eighth remain skeptical.

In those circuits that have found that an official created a designated public forum on her social media account, the courts conducted a holistic appraisal of the account to determine whether the actions of the public official could be “fairly attributable to the state.” (These cases all arose in actions brought under 42 U.S.C. § 1983, which requires the plaintiff to establish that the official was acting “under color of” state law.) Broadly speaking, the analysis focuses on three categories of information: (1) the identification and appearance of the social media site, (2) its interactivity, and (3) its stated purpose and the way that the official actually used the site.

For example, in Davison v. Randall, 912 F.3d 666 (4th Cir. 2019), the Fourth Circuit Court of Appeals held that a county commissioner created a designated public forum on Facebook when she created the page the day after winning the election, draped it with “the trappings of her office” (including her official government contact information), enabled all of the interactive features of the page and invited the public to use it as a means of communicating with her, and used the page to provide information to the public about commission activities, solicit feedback on policy, and keep the public informed about ongoing community events, such as snow removal efforts. Under these circumstances, the Fourth Circuit held that the commissioner’s banning of a profile for posting allegations with which she disagreed constituted a violation of the First Amendment.

Similarly, in Garnier v. O’Connor-Ratcliff, 2022 WL 2963453 (9th Cir. 2022), the Ninth Circuit Court of Appeals held that two school board trustees created a designated public forum when they identified themselves as public officials on their Facebook and Twitter accounts, the content of which was “overwhelmingly geared” towards providing information about the Board’s activities and soliciting input from the public (e.g., by posting interactive polls) on policy issues. In Garnier, the trustees initially enabled all of the interactive features of the sites but then gradually began paring back that interactivity—by deleting or hiding lengthy and repetitive comments, applying word filters, and eventually blocking some users. The Ninth Circuit noted that the trustees could have limited the interactivity of the forum by establishing “formal rules of decorum or etiquette” to regulate the content posted to the accounts in definite and objectively non-discriminatory ways, such as by using word filters to effectively prevent all comments.

Finally, though the judgment was later vacated as moot, in Knight First Amendment Institute at Columbia University v. Trump, 928 F.3d 226 (2d Cir. 2019), the Second Circuit Court of Appeals found that President Donald Trump created a public forum in his Twitter feed. The site was clearly identified as belonging to the “45th President of the United States of America” and—before Twitter suspended the account—was used as an important tool of governance and executive outreach. Trump tweeted almost daily and famously used the account to engage with members of the public, to introduce policy initiatives, and to announce the appointments and firings of cabinet-level executives. Trump therefore violated the rights of users when he blocked them based on his disagreement with their critical comments. It bears noting that Trump operated his Twitter account for many years prior to becoming president, during which time he enjoyed the right, as a private citizen, to block and delete other users from his feed.

However, as already stated, federal courts have not uniformly found the social media accounts of public officials to be public forums. The Sixth and Eighth Circuits have expressed skepticism that a previously private account, and therefore a nonpublic forum, can be transformed into a public forum by virtue of its owner’s election or appointment to public office. In a split decision, the Eighth Circuit in Campbell v. Reisch, 986 F.3d 822 (8th Cir. 2021), held that a Missouri state legislator did not violate the First Amendment when she blocked individuals from her Twitter account. Reisch had started the Twitter account when she was a candidate for office and continued to use it after her election. The panel majority found that the account was used “overwhelmingly for campaign purposes,” not for her official duties, and thus was not converted into a public forum—a determination disputed by the dissenting judge. Reisch raises important and undecided questions about balancing the free speech rights of incumbents—who must run reelection campaigns while conducting the duties of their office—with the public’s interest in interacting with its elected officials and in obtaining information about the incumbent’s policy positions and voting record prior to an election.

The Sixth Circuit, in Lindke v. Freed, 37 F.4th 1199 (6th Cir. 2022), similarly balked at finding that an official’s social media activity was state action (i.e., that the government was responsible for the action). James Freed operated a public Facebook page with a large following before being appointed the city manager of Port Huron, Michigan. After the appointment, he updated the page to identify himself as City Manager and provided his government email address as his contact information. His posts thereafter included a “medley” of personal and public information, including policy initiatives and Covid-19 information. Instead of examining the site’s “appearance or purpose” or its interactive features, the Sixth Circuit looked at whether social media activity was part of Freed’s “official duties” and whether he used government employees or resources to run the account, ultimately concluding that Freed maintained his page in his personal capacity—as he did not operate the page to “fulfill any actual or apparent duty of his office,” nor did he “use his governmental authority to maintain it.”

As these cases illustrate, much about the application of the public forum doctrine to social media remains unclear, including vitally important questions such as how to assess whether the social media activity of government officials is state or private action, and what indicia courts should assess to determine whether an official’s social media account is a designated public forum. Perhaps the most complicated questions arise where, as in Campbell and Lindke, public officials use their social media sites for mixed purposes. On the one hand, as private citizens, candidates for office have maximum free speech rights to use social media for their campaigns as they see fit. On the other hand, allowing incumbents to later delete comments and block users from their campaign pages obscures the official’s record and positions—and the public’s response to them—from the view of potential voters. Courts have not yet defined a clear line between campaign business and official business, or how to weigh and balance the interests at stake.

What is clear, however, is that an official’s decision to prohibit or limit certain users from viewing or interacting with the official’s social media page seriously impairs those users’ abilities to engage with the official and learn about matters of public importance. When the decision is based on the official’s disagreement with a user’s point of view, it amounts to the type of viewpoint discrimination against which the First Amendment should protect.

A Few Proposals to Help Resolve These Contentious Disputes

When a public official deletes comments or blocks individuals from interacting on the official’s page, the threat of litigation is sometimes the only recourse available to users to vindicate their First Amendment rights. Unfortunately, establishing that the official deleted comments or previously blocked users is easier said than done. Various features and settings on social media platforms make it challenging for users to obtain the evidence they need to bring a viable claim in court. What follows is a description of the challenges presented by social media settings and a list of recommendations for social media platforms like Twitter and Facebook that would increase the transparency and accountability of public officials on social media, help our clients to secure their rights, and help the clinic to educate and train elected officials.

Often, it is very difficult to prove that a user’s comment has been deleted. Neither Facebook nor Twitter keeps a record of deleted comments in the publicly accessible archives of the page, and it is not clear whether either keeps an internal log that records the insertion and deletion of comments. In other words, once a public official deletes a comment, it does not appear in a download of the page’s data. A commenter must have the foresight to screenshot her own post on an official’s page before it is deleted in order to later establish that she has been censored. Platforms could easily fix this problem by maintaining a list of deleted comments and other historical information that could be downloaded with the page’s current data.
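
To make this proposal concrete, the sketch below shows one form such a deleted-comment record could take if it were included in a page’s data download. The format, field names, and example values are hypothetical, invented here purely for illustration; no platform currently exports anything like this.

```python
# Hypothetical sketch of a deleted-comment record that a platform could
# include in a page's data export. All field names are our own invention;
# neither Facebook nor Twitter currently exposes records like these.
import json
from dataclasses import dataclass, asdict

@dataclass
class DeletedCommentRecord:
    comment_id: str     # the platform's identifier for the comment
    post_id: str        # the post the comment appeared under
    author_handle: str  # who wrote the comment
    text: str           # the comment as originally written
    posted_at: str      # ISO 8601 timestamp of the original comment
    deleted_at: str     # ISO 8601 timestamp of the deletion
    deleted_by: str     # the page administrator who deleted it

def export_deletions(records: list[DeletedCommentRecord]) -> str:
    """Serialize deletion records for inclusion in a page's data download."""
    return json.dumps([asdict(r) for r in records], indent=2)

# Example: the kind of record a censored commenter could point to in court,
# instead of relying on a screenshot taken before the deletion.
record = DeletedCommentRecord(
    comment_id="c-1001",
    post_id="p-2002",
    author_handle="concerned_constituent",
    text="Why was the budget vote held without public notice?",
    posted_at="2022-06-01T14:03:00+00:00",
    deleted_at="2022-06-01T15:30:00+00:00",
    deleted_by="page_admin",
)
print(export_deletions([record]))
```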

Facebook presents unique difficulties in monitoring First Amendment violations because of the complexity of its settings. First, a user can create two different types of accounts—a public “page” or a private “profile”—each of which has multiple and differing options for account administrators to block, mute, or filter access to the site or its comments. A user can share content from her profile to her page, and vice versa, and some of the settings from a user’s private profile will flow through to her public page. Moreover, Facebook, unlike Twitter, does not notify users when they have been blocked from a page or profile.

To illustrate the complexity, consider that public officials on Facebook can use the following options to limit the ways that the public can view or interact with their pages:

      • An official can block or remove particular users from the page. Removing users means that they no longer “like” the page (meaning they no longer follow the page and the page’s content won’t show up in their news feeds). A removed person, however, can still see the page and “like” it again. When users are blocked from a page, they can view the page and share content from it, but they are “no longer able to publish to [the] Page, like or comment on [the] Page’s posts, message [the] Page or like [the] Page.”
      • Officials can use “page moderation” to designate a list of words, phrases, or emojis that they want hidden from the page. If a comment contains one of these designated words, it is automatically hidden on the post. Confusingly, the commenter and her Facebook friends can still see the comment, but no one else can. Moreover, the commenter is not notified that her comment has been hidden. While it may be justifiable for an official to screen comments for profanity, public officials can easily abuse this feature by filtering for words frequently used by potential critics. Texas A&M University, for example, recently settled a lawsuit asserting First Amendment violations after it filtered for words such as “testing” to automatically hide critical comments from animal rights activists. (A sketch of this filtering logic appears after this list.)
      • Officials can manually hide or delete individual comments on each post by clicking on the ellipses to the right side of the comment, which reveals a dropdown menu that permits the hiding or deletion of the comment. Deleted comments are removed from the post completely. Hidden comments are still visible to the commenter and her friends, but they are no longer visible publicly to anyone else. Commenters are not notified when their comments have been hidden or deleted.
      • Officials can control which users are permitted to comment on a post. After posting, an official can permit comments from: the public (meaning that anyone can comment), pages followed, or pages and profiles mentioned (tagged) in the post. To effectively prevent anyone from commenting on a post, the official can select the last category and simply not tag any other pages or profiles in the post.
      • Officials can also prevent users from seeing posts and comments on their public Facebook pages by originating posts or comments on their private Facebook profiles and then sharing the content to their public page. In one case that the clinic handled, a town official maintained a private profile, from which he blocked a significant number of his constituents, and a public page, which he believed that everyone could view. The official frequently posted about town business on his private profile and shared it to his public page, not understanding that the content would still not be visible to anyone blocked from his profile because the blocking flowed through to the public page.
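
To make the mechanics of word filtering concrete, the following is a minimal sketch of the hiding logic described in the second bullet above. It is our own illustration, not Facebook’s actual implementation, and the filtered words are hypothetical examples loosely modeled on the Texas A&M matter.

```python
# Illustrative sketch of how a page-moderation word filter can silently
# suppress criticism. This is our own model of the logic, not Facebook's
# actual implementation.
FILTERED_WORDS = {"testing", "cruelty", "lab"}  # words chosen by the page administrator

def is_hidden(comment_text: str) -> bool:
    """Return True if the comment would be automatically hidden by the filter."""
    words = {w.strip(".,!?").lower() for w in comment_text.split()}
    return not FILTERED_WORDS.isdisjoint(words)

# A critical comment is hidden; a supportive one is not. In either case the
# commenter receives no notification and still sees her own comment.
print(is_hidden("Stop the animal testing on campus!"))  # True
print(is_hidden("Great job on the new stadium."))       # False
```

Because the commenter and her friends still see the hidden comment, the filter’s effect is invisible to the very person it silences.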

Finally, the options available to page administrators differ significantly when accessed through the mobile app as opposed to through a desktop browser, making it harder for officials to maintain the appropriate settings and for users to understand their status vis-à-vis the official’s page. Adding to this complexity, Facebook can—and frequently does—change its settings and controls.

To increase transparency and accountability for public officials online, Facebook and other social media platforms should consider the following proposals:

      • Facebook should notify individuals who have been blocked or removed from a public page. This would enable users to take prompt action to reinstate their access to the page if they were wrongfully removed. Likewise, individuals whose comments are hidden or deleted from a post on a public page should be notified of this action. This would go a long way towards easing confusion: Currently, it is difficult to know when comments have been hidden because the commenter and her friends—but no one else—can still see the comment on the page. It would also lessen the need for users to constantly monitor a public official’s page for potential censorship.
      • Facebook should simplify the options for administrators of public pages, allowing them to toggle comments on or off for all users and making those controls consistent whether accessed through the browser or the mobile app. While the clinic prefers that officials allow comments in order to maximize speech and interaction with government officials, if an official prefers to not create a public forum for comments on her page, turning off commenting ability should be made simpler so that officials do not need to use workarounds like extensive word filters.
      • Facebook and other social media platforms should adopt features that increase the transparency of public pages. Currently, it is extremely difficult for users to assemble evidence that they have been removed from a page or that their comments have been hidden and/or deleted. Instead, they are forced to engage in constant monitoring of the page, take multiple screenshots, or even create additional accounts that allow them to interact with pages after they have been blocked. Facebook could adopt a number of features to improve transparency, such as by revealing the settings chosen by the administrators of public pages.
      • Finally, Facebook and other social media platforms should maintain more detailed records about the administration of pages, so that this data is available upon request (whether by public records request or court subpoena). Recorded data should include a historical record of users who have been blocked (and unblocked) or removed, and a record of the content of posts and comments that have been deleted or hidden—all with the relevant timestamps. A similar log should also track historical setting selections for interactive features. Currently, a download of a Facebook page reveals the current settings, but it does not preserve a history of previous settings. This makes it easy for page administrators to simply unblock a user upon request, and re-block that user at a future time, without leaving an easily accessible data trail. (A sketch of such an audit log follows this list.)
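
As a rough illustration of this last proposal, the sketch below models an append-only audit log of page-administration actions. The structure and names are hypothetical assumptions of ours, not any platform’s actual schema; the key property is that because entries are never edited or removed, a block-then-unblock cycle leaves a visible, timestamped trail.

```python
# Minimal sketch of the proposed moderation audit log. All names and fields
# are hypothetical; no platform currently keeps or exposes such a history.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    action: str     # e.g., "block", "unblock", "hide_comment", "change_setting"
    target: str     # user handle, comment id, or setting name
    detail: str     # e.g., text of a hidden comment, or an old/new setting value
    timestamp: str  # ISO 8601, recorded by the platform, not the administrator

@dataclass
class PageAuditLog:
    page_id: str
    entries: list[AuditEntry] = field(default_factory=list)

    def record(self, action: str, target: str, detail: str = "") -> None:
        """Append an entry; entries are never edited or removed."""
        now = datetime.now(timezone.utc).isoformat()
        self.entries.append(AuditEntry(action, target, detail, now))

    def history_for(self, target: str) -> list[AuditEntry]:
        """Everything that has ever happened to a given user or setting."""
        return [e for e in self.entries if e.target == target]

# An unblock no longer erases the earlier block: both entries survive with
# timestamps, leaving the data trail our proposal calls for.
log = PageAuditLog(page_id="town-manager-page")
log.record("block", "critic_01")
log.record("unblock", "critic_01")
print([e.action for e in log.history_for("critic_01")])  # ['block', 'unblock']
```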

Though the focus of our proposals is on public pages (and not private profiles) run by public officials, we acknowledge that implementing them is not without complication. Sweeping settings changes would likely mean that some of the features mentioned above are applied to public pages run by clearly private persons. Indeed, because part of the purpose of these proposals is to help identify who is a state actor and who is not, certain transparency features would necessarily apply to all pages.

Conclusion

One of the benefits of social media is that it has increased our capacity to make connections and communicate with one another about all sorts of subject matter. It is unsurprising, then, that so many government entities and public officials have turned to popular platforms to engage with their constituents on matters of public interest. In theory, this means that We the People now have more venues to engage with our government. But government use of social media raises complications about the extent of expressive freedom in the digital space. As the internet’s role as a communal gathering space continues to expand, social media platforms must help to hold public officials accountable by adopting proposals like those discussed above that make it easier for officials to comply with the First Amendment and for citizens to enforce their rights.