New York University Law Review

Statutory Interpretation

Textual Gerrymandering: The Eclipse of Republican Government in an Era of Statutory Populism

William N. Eskridge, Jr., Victoria F. Nourse

We have entered the era dominated by a dogmatic textualism—albeit one that is fracturing, as illustrated by the three warring original public meaning opinions in the blockbuster sexual orientation case, Bostock v. Clayton County. This Article provides conceptual tools that allow lawyers and students to understand the deep analytical problems faced and created by the new textualism advanced by Justice Scalia and his heirs. The key is to think about choice of text—why one piece of text rather than another—and choice of context—what materials are relevant to confirm or clarify textual meaning. Professors Eskridge and Nourse apply these concepts to evaluate the new textualism’s asserted neutrality, predictability, and objectivity in its canonical cases, as well as in Bostock and other recent textual debates.

The authors find that textual gerrymandering—suppressing some relevant texts while picking apart others, as well as cherry-picking context—has been pervasive. Texts and contexts are chosen to achieve particular results—without any law-based justification. Further, this Article shows that, by adopting the seemingly benign “we are all textualists now” position, liberals as well as conservatives have avoided the key analytic questions and have contributed to the marginalization of the nation’s premier representative body, namely, Congress. Today, the Supreme Court asks how “ordinary” populist readers interpret language (the consumer economy of statutory interpretation) even as the Court rejects the production economy (the legislative authors’ meaning).

Without returning to discredited searches for ephemeral “legislative intent,” we propose a new focus on legislative evidence of meaning. In the spirit of Dean John F. Manning’s suggestion that purposivists have improved their approach by imposing text-based discipline, textualists can improve their approach to choice of text and choice of context by imposing the discipline of what we call “republican evidence”—evidence of how the legislative authors explained the statute to ordinary readers. A republic is defined by law based upon the people’s representatives; hence the name for our theory: “republican evidence.” This Article concludes by affirming the republican nature of Madisonian constitutional design and situating the Court’s assault on republican evidence as part of a larger crisis posed by populist movements to republican democracies today.

Cracking the Whole Code Rule

Anita S. Krishnakumar

Over the past three decades, since the late Justice Scalia joined the Court and ushered in a new era of text-focused statutory analysis, there has been a marked move towards the holistic interpretation of statutes and “making sense of the corpus juris.” In particular, Justices on the modern Supreme Court now regularly compare or analogize between statutes that contain similar words or phrases—what some have called the “whole code rule.” Despite the prevalence of this interpretive practice, however, scholars have paid little attention to how the Court actually engages in whole code comparisons on the ground.

This Article provides the first empirical and doctrinal analysis of how the modern Supreme Court uses whole code comparisons, based on a study of 532 statutory cases decided during the Roberts Court’s first twelve-and-a-half Terms. The Article first catalogues five different forms of whole code comparisons employed by the modern Court and notes that the different forms rest on different justifications, although the Court’s rhetoric has tended to ignore these distinctions. The Article then notes several problems, beyond the unrealistic one-Congress assumption identified by other scholars, that plague the Court’s current approach to most forms of whole code comparisons. For example, most of the Court’s statutory comparisons involve statutes that have no explicit connection to each other, and nearly one-third compare statutes that regulate entirely unrelated subject areas. Moreover, more than a few of the Court’s analogies involve generic statutory phrases—such as “because of” or “any”—whose meaning is likely to depend on context rather than some universal rule of logic or linguistics.

This Article argues that, in the end, the Court’s whole code comparisons amount to judicial drafting presumptions that assign fixed meanings to specific words, phrases, and structural choices. The Article critiques this judicial imposition of drafting conventions on Congress—noting that it is unpredictable, leads to enormous judicial discretion, reflects an unrealistic view of how Congress drafts, and falls far outside the judiciary’s institutional expertise. It concludes by recommending that the Court limit its use of whole code comparisons to situations in which congressional drafting practices, rule of law concerns, or judicial expertise justify the practice—e.g., where Congress itself has made clear that one statute borrowed from or incorporated the provisions of another, or where judicial action is necessary to harmonize two related statutes with each other.

Restoring the Historical Rule of Lenity as a Canon

Shon Hopwood

In criminal law, the venerated rule of lenity has been frequently, if not consistently, invoked as a canon of interpretation. Where criminal statutes are ambiguous, the rule of lenity generally posits that courts should interpret them narrowly, in favor of the defendant. But the rule is not always reliably used, and questions remain about its application. In this Article, I examine how the rule of lenity should apply and whether it should be given the status of a canon.

First, I argue that federal courts should apply the historical rule of lenity (also known as the rule of strict construction of penal statutes) that governed before the 1970s, when the Supreme Court significantly weakened the rule. The historical rule requires a judge to consult the text, the linguistic canons, and the structure of the statute and then, if reasonable doubts remain, to interpret the statute in the defendant’s favor. Conceived this way, the historical rule excludes statutory purpose and legislative history from the analysis and places a thumb on the scale in favor of interpreting statutory ambiguities narrowly, in proportion to the severity of the punishment the statute imposes. As compared to the modern version of the rule of lenity, the historical rule of strict construction better advances democratic accountability, protects individual liberty, furthers the due process principle of fair warning, and aligns with the modified version of textualism practiced by much of the federal judiciary today.

Second, I argue that the historical rule of lenity should be deemed an interpretive canon and given stare decisis effect by all federal courts. If courts consistently applied historical lenity, it would require more clarity from Congress and less guessing from courts, and it would ameliorate some of the worst excesses of the federal criminal justice system, such as overcriminalization and overincarceration.

An Empirical Study of Statutory Interpretation in Tax Law

Jonathan H. Choi

A substantial academic literature considers how agencies should interpret statutes. But few studies have considered how agencies actually do interpret statutes, and none has empirically compared the methodologies of agencies and courts in practice. This Article conducts such a comparison, using a newly created dataset of all Internal Revenue Service (IRS) publications ever released, along with an existing dataset of court decisions. It applies natural language processing, machine learning, and regression analysis to map methodological trends and to test whether particular authorities have developed unique cultures of statutory interpretation. 
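To make the abstract’s method concrete, the following is a minimal, purely illustrative Python sketch of one way such trend-mapping can work: a bag-of-words classifier labels passages as “textualist” or “purposivist,” and the purposivist share is then tracked by year. Every passage, label, and date below is hypothetical; the Article’s actual dataset, feature set, and models are not reproduced here.

    # Illustrative sketch only: all passages, labels, and dates are hypothetical.
    from collections import Counter
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hand-labeled training set (a real study would use many examples).
    train_texts = [
        "The plain meaning of the statutory text controls.",
        "Dictionary definitions fix the ordinary meaning of the term.",
        "Congress's evident purpose was to close this loophole.",
        "The legislative history reveals the provision's remedial aim.",
    ]
    train_labels = ["textualist", "textualist", "purposivist", "purposivist"]

    # Bag-of-words classifier: TF-IDF features feeding a logistic regression.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    # Hypothetical dated passages standing in for IRS or court documents.
    corpus = [
        (1985, "The ruling rests on the ordinary meaning of 'income'."),
        (1995, "The regulation advances the statute's anti-abuse purpose."),
        (2005, "Committee reports confirm the provision's purpose."),
    ]

    # Trend measure: share of passages classified purposivist, per year.
    purposivist, totals = Counter(), Counter()
    for year, text in corpus:
        totals[year] += 1
        if model.predict([text])[0] == "purposivist":
            purposivist[year] += 1
    for year in sorted(totals):
        print(year, purposivist[year] / totals[year])

In practice, a study of this kind would then regress the yearly shares on time and on the issuing authority to test whether the trends the abstract describes are statistically significant.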

It finds that, over time, the IRS has increasingly made rules on normative policy grounds (like fairness and efficiency) rather than merely producing rules based on the “best reading” of the relevant statute (under any interpretive theory, like purposivism or textualism). Moreover, when the IRS does focus on the statute, it has grown much more purposivist over time. In contrast, the Tax Court has not grown more normative and has followed the same trend toward textualism as most other courts. But although the Tax Court has become more broadly textualist, it prioritizes different interpretive tools than other courts do, such as Chevron deference and holistic-textual canons of interpretation. This suggests that each authority adopts its own flavor of textualism or purposivism.

These findings complicate the literature on tax exceptionalism and the judicial nature of the Tax Court. They also inform ongoing debates about judicial deference and the future of doctrines like Chevron and Skidmore deference. Most broadly, they provide an empirical counterpoint to the existing theoretical literature on statutory interpretation by agencies.