New York University Law Review

Judicial Process

Choosing Interpretive Methods: A Positive Theory of Judges and Everyone Else

Alexander Volokh

In this Article, I propose a theory of how rational, ideologically motivated judges might choose interpretive methods, and how rational, ideologically motivated laymen—legislators, litigation organizations, lobbyists, scholars, and citizens—might respond. I assume, first, that judges not only have ideological preferences but also want to write plausible opinions. Second, I assume that every method of statutory or constitutional interpretation has a “most plausible point” along a spectrum of possible decisions in a given case. As a result, if a judge decides to use any particular interpretive method, that method will pull him towards its “most plausible point,” possibly making him deviate from his own ideal point.

When a judge can choose an interpretive method, he selects the one that (taking these deviations into account), among other things, allows him to stay as close as possible to his favored outcome. Thus, any given method is chosen only by judges whose ideal points, roughly speaking, are not too distant from that method’s most plausible point. This behavior creates a selection bias. An interpretive method’s political valence under a regime of free interpretive choice thus differs systematically from what it would look like if that method were mandatory. As a result, one might favor mandating an interpretive method even though one is politically closer to the current practitioners of a different method.

A judge can choose not only which interpretive method to use but also whether to use the same method from case to case. This Article argues that an individual judge’s choice of interpretive method usually has little effect on the methods that other judges use. Therefore, even though ideologically motivated judges (or litigation groups) might want to make the method they prefer in most cases mandatory for everyone, it can often be rational for these judges to deviate from that preferred method in instances where a different method would produce a more appealing outcome.

Toward One America: A Vision in Law

The Honorable J. Harvie Wilkinson III

Madison Lecture

In his Madison Lecture, Judge Wilkinson urges a new purpose for American law: the explicit promotion of a stronger sense of national cohesion and unity. He argues that the judicial branch should actively seek to promote this nationalizing purpose and suggests seven different ways for federal courts to do so. He contends further that a nationalizing mission for law is needed at this moment in American history to counteract the demographic divisions and polarizing tendencies of our polity. This purpose need not entail the abdication of traditional values of judicial restraint, should not mean the abandonment of the traditional American credo of unity through pluralism, and must not require the sacrifice of the law’s historic commitment to the preservation of order and the protection of liberty. But the need for a judicial commitment to foster a stronger American identity is clear. The day when courts and judges could be indifferent to the dangers of national fragmentation and disunion is long gone.

Accuracy Counts: Illegal Votes in Contested Elections and the Case for Complete Proportionate Deduction

Kevin J. Hickey

Contested elections in which the number of illegal votes exceeds the purported winner’s margin of victory present courts with difficult choices. Simply certifying the result risks denying the true winner his victory, while ordering a new election leaves the choice to a changed electorate. Adjusting the results is also problematic, as it may create a perception that judges, and not voters, have decided the election. This Note argues that courts should be more willing to use statistical techniques to resolve this type of election dispute. It critiques the various remedial measures that courts have employed, as well as the rejection of statistical methods in existing case law and legal commentary. The author concludes that a statistics-based remedy—termed “complete proportionate deduction”—best balances the values of accuracy, finality, and public faith in the democratic process.

The Costs of “Discernible and Manageable Standards” in Vieth and Beyond

Joshua S. Stillman

This Note argues against the use of the prudential political question doctrine (PPQD), as exemplified by the Vieth v. Jubelirer plurality opinion. In Vieth, the Supreme Court avoided formulating a standard for adjudicating the constitutionality of partisan gerrymandering, citing the lack of a “discernible and manageable standard.” According to the plurality, no proposed doctrinal test was both concrete enough to be workably deployed by lower courts and discernible enough in the constitutional text, history, and structure, among other sources. Although the Vieth plurality opinion presents itself as resting on a universally applicable metadoctrine for determining what is and is not a discernible and manageable doctrinal test, this Note argues that the Court’s use of the PPQD is ultimately based on a gestalt prudential judgment about the wisdom of intervention in the particular area of partisan gerrymandering.

This Note then argues that the PPQD leads to negative consequences for future litigants and judicial legitimacy. The PPQD sends litigants on a wild goose chase for a perfect doctrinal standard, when it seems clear that no standard will satisfy the Vieth plurality. It also invites litigants to argue about what a discernible and manageable doctrinal test is in the abstract, rather than to address the particular legal issue at hand. These diversions insulate the judiciary from legitimate criticism of the grounds of its decisions. This Note then compares the PPQD to another option for judicial avoidance: a merits standard that is almost impossible for plaintiffs to meet in practice, such as rational basis review. This Note concludes that a stringent merits standard is a superior mechanism for judicial avoidance because it does not carry the same high costs for litigants and judicial legitimacy as the PPQD. Additionally, it allows the Court to exit from active adjudication of an issue while still preserving its ability to intervene in egregious cases.

Securing Fragile Foundations: Affirmative Constitutional Adjudication in Federal Courts

The Honorable Marsha S. Berzon

Madison Lecture

In this speech, delivered as the annual James Madison Lecture, Judge Marsha Berzon discusses the availability of judicial remedies for violations of the Constitution. Judge Berzon reflects on the federal courts’ tradition of allowing litigants to proceed directly under the Constitution—that is, without a statutorily based cause of action. This tradition extends back much further than the mid-twentieth-century cases most commonly associated with affirmative constitutional litigation—Brown, Bolling, and Bivens, for example—and has its roots in cases from the nineteenth and early twentieth centuries. Against this long historical tradition of courts recognizing nonexpress causes of action for violations of the Constitution, Judge Berzon surveys the modern Supreme Court’s jurisprudence, a jurisprudence that sometimes requires constitutional litigants to base their claims on the same sort of clear congressional intent to permit judicial redress now required before courts will recognize so-called “implied” statutory causes of action. Judge Berzon suggests that requiring litigants seeking to enforce constitutional norms to point to evidence of congressional intent regarding the availability of judicial redress misapplies separation-of-powers concerns.

Categoricalism and Balancing in First and Second Amendment Analysis

Joseph Blocher

The least discussed element of District of Columbia v. Heller might ultimately be the most important: the battle between the majority and dissent over the use of categoricalism and balancing in the construction of constitutional doctrine. In Heller, Justice Scalia’s categoricalism essentially prevailed over Justice Breyer’s balancing approach. But as the opinion itself demonstrates, Second Amendment categoricalism raises extremely difficult and still-unanswered questions about how to draw and justify the lines between protected and unprotected “Arms,” people, and arms-bearing purposes. At least until balancing tests appear in Second Amendment doctrine—as they almost inevitably will—the future of the Amendment will depend almost entirely on the placement and clarity of these categories. And unless the Court better identifies the core values of the Second Amendment, it will be difficult to give the categories any principled justification.

Heller is not the first time the Court has debated the merits of categorization and balancing, nor are Justices Scalia and Breyer the tests’ most famous champions. Decades ago, Justices Black and Frankfurter waged a similar battle in the First Amendment context, and the echoes of their struggle continue to reverberate in free speech doctrine. But whereas the categorical view triumphed in Heller, Justice Frankfurter and the First Amendment balancers won most of their battles. As a result, modern First Amendment doctrine is a patchwork of categorical and balancing tests, with a tendency toward the latter. The First and Second Amendments are often presumed to be close cousins, and courts, litigants, and scholars will almost certainly continue to turn to the First Amendment for guidance in developing a Second Amendment standard of review. But while free speech doctrine may be instructive, it also tells a cautionary tale: Above all, it suggests that unless the Court better identifies the core values of the Second Amendment, the Second Amendment’s future will be even murkier than the First Amendment’s past.

This Article draws the Amendments together, using the development of categoricalism and balancing tests in First Amendment doctrine to describe and predict what Heller’s categoricalism means for the present and future of Second Amendment doctrine. It argues that the Court’s categorical line drawing in Heller creates intractable difficulties for Second Amendment doctrine and theory and that the majority’s categoricalism neither reflects nor enables a clear view of the Amendment’s core values, whatever they may be.

In Goodridge’s Wake: Reflections on the Political, Public, and Personal Repercussions of the Massachusetts Same-Sex Marriage Cases

The Honorable Roderick L. Ireland

Brennan Lecture

In the Sixteenth Annual Justice William J. Brennan, Jr. Lecture on State Courts and Social Justice, Roderick L. Ireland, Senior Associate Justice of the Massachusetts Supreme Judicial Court, discusses the seminal case Goodridge v. Department of Public Health and a judge’s role in controversial decisions. Justice Ireland explains the rationale behind his majority vote in Goodridge, as well as his dissent in Cote-Whitacre v. Department of Public Health, and the intense public backlash that followed the same-sex marriage cases. Through the personal lens of his own experience dealing with the extreme reaction to Goodridge, Justice Ireland addresses how judges should handle such controversial cases while remaining true to the role of the judiciary.

Safety in Numbers? Deciding When DNA Alone Is Enough to Convict

Andrea Roth

Fueled by police reliance on offender databases and advances in crime scene recovery, a new type of prosecution has emerged in which the government’s case turns on a match statistic explaining the significance of a “cold hit” between the defendant’s DNA profile and the crime-scene evidence. Such cases are unique in that the strength of the match depends on evidence that is almost entirely quantifiable. Despite the growing number of these cases, the critical jurisprudential questions they raise about the proper role of probabilistic evidence, and courts’ routine misapprehension of match statistics, no framework—including a workable standard of proof—currently exists for determining sufficiency of the evidence in such a case. This Article is the first to interrogate the relationship between “reasonable doubt” and statistical certainty in the context of cold hit DNA matches. Examining the concepts of “actual belief” and “moral certainty” underlying the “reasonable doubt” test, I argue that astronomically high source probabilities, while fallible, are capable of meeting the standard for conviction. Nevertheless, the starkly numerical nature of “pure cold hit” evidence raises unique issues that require courts to apply a quantified threshold for sufficiency purposes. I suggest as a starting point—citing recent juror studies and the need for uniformity and systemic legitimacy—that the threshold should be no less favorable to the defendant than a 99.9% source probability.

The Trial of Alberto Fujimori: Navigating the Show Trial Dilemma in Pursuit of Transitional Justice

Christina T. Prusak

Alberto Fujimori is the first democratically elected leader to be tried and convicted of human rights violations in the domestic courts of his own country. As forgoing prosecution and granting amnesty in exchange for a more peaceful democratic transition has fallen increasingly out of favor, Fujimori’s trial comes at an opportune time to reevaluate the role of criminal trials in national reconciliation and transitional justice. In this Note, I argue that Fujimori’s human rights trial demonstrates that head-of-state trials, particularly domestic ones, can contribute valuably to larger transitional justice projects, despite their inherent limitations and challenges. Situating my analysis within the transitional justice and show trial literature, I analyze both procedurally and substantively how effectively Fujimori’s human rights trial has navigated its “constitutive paradox,” or the tension, inherent in transitional criminal proceedings, between strict adherence to the rule of law and the extrajudicial objective of delivering a coherent moral message. I conclude that the trial demonstrates that courts can effectively navigate this paradox, even in the midst of institutional weakness and societal cleavages. Moreover, I suggest that domestic tribunals may be particularly well suited to navigate the constitutive paradox of transitional trials.

Partial Unconstitutionality

Kevin C. Walsh

Courts often hold legislation unconstitutional, but nearly always only part of the statute offends. The problem of partial unconstitutionality is therefore pervasive and persistent. Yet the exclusive doctrinal tool for dealing with this problem—severability doctrine—is deeply flawed. To make matters worse, severability doctrine is purportedly necessary for any workable system of judicial review. The accepted view is that severance saves: A court faced with a partially unconstitutional law must sever and excise the unconstitutional provisions or applications so that the constitutional remainder can be enforced going forward. Absent severance and excision, a law must fall in its entirety. This excision-based understanding of judicial review is supposedly traceable to Marbury v. Madison. In fact, this attribution is anachronistic. Moreover, the prevailing view is wrong about the distinctive function of modern severability doctrine, which is not to save, but to destroy. This Article retrieves the original approach to partial unconstitutionality and develops a proposal for implementing a version of that approach. The proposal, displacement without inferred fallback law, is simultaneously ambitious and modest. It is ambitious because it proposes a shift in the general framework for judicial review in every case; it is modest because the proposed shift would change case outcomes in only a small set of highly consequential cases.
