Empirical Legal Studies

Crystal S. Yang

Few decisions in the criminal justice process are as consequential as the determination of bail. Indeed, recent empirical research finds that pre-trial detention imposes substantial long-term costs on defendants and society. Defendants who are detained before trial are more likely to plead guilty, less likely to be employed, and less likely to access social safety net programs for several years after arrest. Spurred in part by these concerns, critics of the bail system have urged numerous jurisdictions to adopt bail reforms, which have led to growing momentum for a large-scale transformation of the bail system. Yet supporters of the current system counter that pre-trial detention reduces flight and pre-trial crime—recognized benefits to society—by incapacitating defendants. Despite empirical evidence in support of both positions, however, advocates and critics of the current bail system have generally ignored the real trade-offs associated with detention.

This Article provides a broad conceptual framework for how policymakers can design a better bail system by weighing both the costs and benefits of pre-trial detention—trade-offs that are historically grounded in law, but often disregarded in practice. I begin by presenting a simple taxonomy of the major categories of costs and benefits that stem from pre-trial detention. Building from this taxonomy, I conduct a partial cost-benefit analysis that incorporates the existing evidence, finding that the current state of pre-trial detention is generating large social losses. Next, I formally present a framework that accounts for heterogeneity in both costs and benefits across defendants, illustrating that detention on the basis of “risk” alone can lead to socially suboptimal outcomes.

In the next part of the Article, I present new empirical evidence showing that a cost-benefit framework has the potential to improve accuracy and equity in bail decision-making, where currently bail judges are left to their own heuristics and biases. Using data on criminal defendants and bail judges in two urban jurisdictions, and exploiting variation from the random assignment of cases to judges, I find significant judge differences in pre-trial release rates, the assignment of money bail, and racial gaps in release rates. While there are any number of reasons why judges within the same jurisdiction may vary in their bail decisions, these results indicate that judges may not all be setting bail at the socially optimal level.
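
The design can be illustrated with a minimal sketch (invented data and names, not the Article's): because cases are randomly assigned, judges within a jurisdiction face comparable caseloads, so a wide spread in per-judge release rates points to the judges rather than the cases.

```python
# Sketch of the judge-variation check (invented data and names): with random
# case assignment, judges see comparable caseloads, so differences in release
# rates across judges reflect judge behavior, not case composition.

from collections import defaultdict
from typing import Dict, List, Tuple

def release_rates(cases: List[Tuple[str, bool]]) -> Dict[str, float]:
    """cases: (judge_id, released) pairs. Returns per-judge release rate."""
    counts = defaultdict(lambda: [0, 0])  # judge_id -> [n_released, n_total]
    for judge, released in cases:
        counts[judge][0] += released
        counts[judge][1] += 1
    return {judge: released / total for judge, (released, total) in counts.items()}

# Invented caseloads: judge J1 releases 70% of defendants, judge J2 only 45%.
# Under random assignment, a gap this large suggests the judges are not all
# applying a single (let alone the socially optimal) standard.
cases = [("J1", True)] * 70 + [("J1", False)] * 30 \
      + [("J2", True)] * 45 + [("J2", False)] * 55
print(release_rates(cases))  # {'J1': 0.7, 'J2': 0.45}
```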

The conceptual framework developed in this Article also sheds light on the ability of recent bail reforms to increase social welfare. While the empirical evidence is scant, electronic monitoring holds promise as a welfare-enhancing alternative to pre-trial detention. In contrast, application of the conceptual framework cautions against the expanding use of risk-assessment instruments. These instruments, by recommending the detention of high-risk defendants, overlook the possibility that these high-risk defendants may also be “high-harm” such that they are most adversely affected by a stay in jail. Instead, I recommend that jurisdictions develop “net benefit” assessment instruments by predicting both risk and harm for each defendant in order to move closer toward a bail system that maximizes social welfare.
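
To make the proposed "net benefit" instrument concrete, here is a minimal decision-rule sketch; all probabilities, dollar figures, and names are invented for illustration and are not drawn from the Article.

```python
# Hypothetical sketch of a "net benefit" assessment: detain only when the
# predicted benefit of detention (flight and pre-trial crime averted) exceeds
# its predicted cost (harm to the defendant plus the cost of jailing).
# Every input and dollar value below is an invented assumption.

from dataclasses import dataclass

@dataclass
class Defendant:
    p_flight: float          # predicted probability of flight if released
    p_crime: float           # predicted probability of pre-trial crime if released
    harm_if_detained: float  # predicted harm of detention to the defendant ($)

COST_FLIGHT = 50_000.0     # assumed social cost of a failure to appear ($)
COST_CRIME = 80_000.0      # assumed social cost of one pre-trial offense ($)
COST_JAIL_STAY = 10_000.0  # assumed direct cost of detaining one defendant ($)

def net_benefit_of_detention(d: Defendant) -> float:
    """Incapacitation benefit minus harm to the defendant and jail cost."""
    benefit = d.p_flight * COST_FLIGHT + d.p_crime * COST_CRIME
    cost = d.harm_if_detained + COST_JAIL_STAY
    return benefit - cost

def should_detain(d: Defendant) -> bool:
    return net_benefit_of_detention(d) > 0

# A high-risk but "high-harm" defendant: a risk-only instrument would detain,
# but the net-benefit rule releases, because detention destroys more value
# ($70,000) than it saves ($31,000).
print(should_detain(Defendant(p_flight=0.3, p_crime=0.2, harm_if_detained=60_000.0)))
```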

Adam M. Samaha

Observers have suggested that adding sources of interpretation tends to increase interpreter discretion. The idea is embedded in a quip, attributed to Judge Harold Leventhal, that citing legislative history is like “looking over a crowd and picking out your friends.” Participants in debates over interpretive method have applied the idea to the proliferation of other sources as well, including canons of construction and originalist history. But the logic of “more sources, more discretion” has escaped serious testing. And predicting the effect of source proliferation is not a matter of logic alone. The empirical study of how information loads affect behavior has grown dramatically in recent decades, though almost without notice in legal scholarship on interpretive method.

This Article tests the logic and evidence for “more sources, more discretion.” The idea turns out to be incorrect, without more, as a matter of logic. In a simple model of interpretation, adding sources tends to reduce the chance of discretion. This starter model depicts judges as aggregators of source implications, and it draws on basic probability theory and computer simulations to illustrate the point. The analysis does change if we allow judges to “spin” or “cherry pick” sources, but without much hope for limiting discretion by limiting sources. Of course, judges will not always behave like machines executing instructions or otherwise follow the logic of these models. Thus the Article goes on to spotlight provocative empirical studies of information-load effects, develop working theories of interpreter behavior, and present new evidence.
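
The starter model's core claim can be reproduced with a few lines of simulation (a sketch of the same logic, not the Article's code; treating an exact tie among sources as the moment of discretion is a simplifying assumption):

```python
# Monte Carlo sketch of the starter model: a judge reads n independent
# sources, each favoring outcome A with probability p, and follows the
# majority. "Discretion" is operationalized here as an exact tie, where
# aggregation alone cannot decide. Ties become rarer as sources are added.

import random

def tie_probability(n_sources: int, p: float = 0.5, trials: int = 100_000) -> float:
    ties = 0
    for _ in range(trials):
        votes_for_a = sum(random.random() < p for _ in range(n_sources))
        if 2 * votes_for_a == n_sources:  # exact tie (possible only for even n)
            ties += 1
    return ties / trials

for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} sources -> P(tie) ~ {tie_probability(n):.3f}")
# Roughly: 0.500, 0.375, 0.273, 0.196, 0.140 -- more sources, less discretion.
```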

After emphasizing that interpreters might ignore additional information at some point, the Article tests three other theories. First, an extended dataset casts doubt on an earlier study that linked a growing stock of precedents to increased judicial discretion. Adding to the pile of precedents seems to have no simple pattern of effect on discretion. Second, existing studies indicate that increasing information loads might prompt judges to promote the status quo, and new data suggest that this effect depends on the type of information added. The number of sources cited in appellant briefs appears to have no effect on judges’ willingness to affirm—in contrast with the number of words and issues presented, which may have opposing effects. Third, an expanded dataset supports an earlier finding that judges who face a large number of doctrinal factors might weight those factors in a quasi-legal fashion. This time-saving prioritization does not seem to follow conventional ideological lines.

With simple intuitions in doubt, thoughtful work remains to be done on the effects of source proliferation. Observers interested in judicial discretion have good reason to look beyond source proliferation to find it. And observers interested in institutional design have good reason to rethink the range of consequences when information is added to our judicial systems. 

Wendy Wagner, William West, Thomas McGarity & Lisa Peters

In administrative law, it is generally assumed that once an agency promulgates a final rule, its work on that project—provided the rule is not litigated—has come to an end. In order to ensure that these static rules adjust to the times, therefore, both Congress and the White House have imposed a growing number of formal requirements on agencies to “look back” at their rules and revise or repeal ones that are ineffective.

Our empirical study of the rulemaking process in three agencies (N = 462 revised rules to 183 parent rules) reveals that—contrary to conventional wisdom—agencies face a variety of incentives to revise and update their rules outside of such formal requirements. Not the least of these is pressure from those groups that are affected by their regulations. There is in fact a vibrant world of informal rule revision that occurs voluntarily and through a variety of techniques. We label this phenomenon “dynamic rulemaking.” In this Article, we share our empirical findings, provide a conceptual map of this unexplored world of rule revisions, and offer some preliminary thoughts about the normative implications of dynamic rulemaking for regulatory reform.

J.J. Prescott & Kathryn E. Spier

A settlement is an agreement between parties to a dispute. In everyday parlance and in academic scholarship, settlement is juxtaposed with trial or some other method of dispute resolution in which a third-party factfinder ultimately picks a winner and announces a score. The “trial versus settlement” trope, however, represents a false choice; viewing settlement solely as a dispute-ending alternative to a costly trial leads to a narrow understanding of how dispute resolution should and often does work. In this Article, we describe and defend a much richer concept of settlement, amounting in effect to a continuum of possible agreements between litigants along many dimensions. “Fully” settling a case, of course, appears to completely resolve a dispute, and if parties to a dispute rely entirely on background default rules, a “naked” trial occurs. But in reality virtually every dispute is “partially” settled. The same forces that often lead parties to fully settle—joint value maximization, cost minimization, and risk reduction—will under certain conditions lead them to enter into many other forms of Pareto-improving agreements while continuing to actively litigate against one another. We identify three primary categories of these partial settlements: award-modification agreements, issue-modification agreements, and procedure-modification agreements. We provide real-world examples of each and rigorously link them to the underlying incentives facing litigants. Along the way, we use our analysis to characterize unknown or rarely observed partial settlement agreements that nevertheless seem theoretically attractive, and we allude to potential reasons for their scarcity within the context of our framework. Finally, we study partial settlements and how they interact with each other in real-world adjudication using new and unique data from New York’s Summary Jury Trial Program. Patterns in the data are consistent with parties using partial settlement terms both as substitutes and as complements for other terms, depending on the context, and suggest that entering into a partial settlement can reduce the attractiveness of full settlement. We conclude by briefly discussing the distinctive welfare implications of partial settlements.
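
One familiar instance of an award-modification agreement, the "high-low," can be sketched as follows (invented numbers; a minimal illustration of the risk-reduction incentive rather than the Article's formal analysis): the parties keep litigating but agree that the eventual payment will be clamped between a floor and a cap.

```python
# Minimal sketch of a high-low agreement, one form of award-modification
# agreement: the case is still tried, but the payment is the verdict clamped
# to an agreed band [low, high]. All figures are invented.

def high_low_payment(verdict: float, low: float, high: float) -> float:
    """Payment under a high-low agreement: the verdict, clamped to [low, high]."""
    return min(max(verdict, low), high)

# Invented band: floor $50k, cap $500k. A defense verdict still pays $50k and
# a runaway $2M verdict pays only $500k, so both sides shed tail risk while
# continuing to litigate -- a Pareto-improving partial settlement.
for verdict in (0.0, 120_000.0, 2_000_000.0):
    payment = high_low_payment(verdict, 50_000.0, 500_000.0)
    print(f"verdict {verdict:>11,.0f} -> payment {payment:>9,.0f}")
```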

Charlotte S. Alexander, Zev J. Eigen & Camille Gear Rich

In recent years, antidiscrimination scholars have focused on the productive possibilities of the “universal turn,” a strategy that calls on attorneys to convert particularist claims, like race discrimination claims, into broader universalist claims that secure basic dignity, liberty, and fairness rights for all. Scholars have urged litigators to employ universalist strategies in constitutional and voting rights cases, as well as in employment litigation. Thus far, however, arguments made in favor of universalism have largely been abstract and theoretical and therefore have failed to fully consider the second-order effects of universalist strategies on the ground. In this Article, we challenge the prevailing arguments in favor of universalism by exploring the market consequences as lawyers shift from particularist Title VII race discrimination claims to universalist Fair Labor Standards Act claims. Drawing on a review of case filing statistics and an inductive, purposeful sample of attorney interviews, we describe a phenomenon we call “post-racial hydraulics,” which are a set of non-ideological, economic, and pragmatism-based drivers produced by the trend toward universalism. Post-racial hydraulics must be understood as key but previously unexplored factors in racial formation. Left unchecked, these non-ideological drivers will have substantive ideological effects, as they threaten to fundamentally reshape the employment litigation market and alter our understanding of race discrimination.

Raphael Holoszyc-Pimentel

Traditionally, rational-basis scrutiny is extremely deferential and rarely invalidates legislation under the Equal Protection Clause. However, a small number of Supreme Court cases, while purporting to apply rational-basis review, have held laws unconstitutional under a higher standard often termed “rational basis with bite.” This Note analyzes every rational-basis-with-bite case from the 1971 through 2014 Terms, identifying nine factors that appear to recur throughout these cases. This Note argues that rational basis with bite is most strongly correlated with laws that classify on the basis of an immutable characteristic or burden a significant right. These two factors are particularly likely to be present in rational-basis-with-bite cases, which can be explained on both doctrinal and prudential grounds. This conclusion upends the conventional wisdom that animus is the critical factor in rational basis with bite and reveals that other routes to rational basis with bite exist. Finally, this Note observes that applying at least rational basis with bite to discrimination against gay, lesbian, bisexual, and transgender individuals is consistent with the pattern of cases implicating immutability and significant rights.

Nicholas O. Stephanopoulos

There is a hole at the heart of equal protection law. According to long-established doctrine, one of the factors that determine whether a group is a suspect class is the group’s political powerlessness. But neither courts nor scholars have reached any kind of agreement as to the meaning of powerlessness. Instead, they have advanced an array of conflicting conceptions: numerical size, access to the franchise, financial resources, descriptive representation, and so on.

My primary goal in this Article, then, is to offer a definition of political powerlessness that makes theoretical sense. The definition I propose is this: A group is relatively powerless if its aggregate policy preferences are less likely to be enacted than those of similarly sized and classified groups. I arrive at this definition in three steps. First, the powerlessness doctrine stems from Carolene Products’s account of “those political processes ordinarily to be relied upon to protect minorities.” Second, “those political processes” refer to pluralism: the idea that society is divided into countless overlapping groups, from whose shifting coalitions public policy emerges. And third, pluralism implies a particular notion of group power—one that (1) is continuous rather than binary, (2) spans all issues, (3) focuses on policy enactment, (4) controls for group size, and (5) controls for group type. These are precisely the elements of my suggested definition.

But I aim not just to theorize but also to operationalize in this Article. In the last few years, datasets have become available on groups’ policy preferences at the federal and state levels. Merging these datasets with information on policy outcomes, I am able to quantify my conception of group power. I find that blacks, women, and the poor are relatively powerless at both governmental levels, while whites, men, and the non-poor wield more influence. These results both support and subvert the current taxonomy of suspect classes.
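
The measure can be sketched in a few lines (invented data and field names, not the Article's merged datasets): a group's power is the rate at which policy outcomes match its aggregate preferences, compared across similarly sized and classified groups.

```python
# Hypothetical sketch of the proposed power measure: for each policy, record
# whether the group's majority favored it and whether it was enacted; a
# group's "power" is its preference-enactment rate, to be compared against
# similarly sized, similarly classified groups. All data below are invented.

from typing import List, Tuple

def enactment_rate(records: List[Tuple[bool, bool]]) -> float:
    """records: (group_majority_supported, policy_enacted) per policy.
    Returns the share of policies on which the group got its way."""
    wins = sum(supported == enacted for supported, enacted in records)
    return wins / len(records)

# Invented example: across five policies, group A prevails twice while a
# similarly sized comparison group prevails four times.
group_a = [(True, False), (False, False), (True, False), (True, True), (False, True)]
group_b = [(True, True), (False, False), (True, True), (True, True), (False, True)]
print(enactment_rate(group_a))  # 0.4 -> relatively powerless
print(enactment_rate(group_b))  # 0.8 -> relatively powerful
```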

Margaret H. Lemos & Kevin M. Quinn

An important strain of federalism scholarship locates the primary value of federalism in how it carves up the political landscape, allowing groups that are out of power at the national level to flourish—and, significantly, to govern—in the states. On that account, partisanship, rather than a commitment to state authority as such, motivates state actors to act as checks on federal power. Our study examines partisan motivation in one area where state actors can, and do, advocate on behalf of state power: the Supreme Court. We compiled data on state amicus filings in Supreme Court cases from the 1979–2013 Terms and linked them with data on the partisanship of state attorneys general (AGs). Focusing only on merits-stage briefs, we looked at each AG’s partisan affiliation and the partisanship of the AGs who either joined, or explicitly opposed, her briefs. If partisanship drives amicus activity, we should see a strong negative correlation between the partisan affiliations of AGs who oppose one another and a strong positive correlation between the affiliations of AGs who cosign briefs.

What we found was somewhat surprising. States agreed far more often than they disagreed, and—until recently—most multistate briefs represented bipartisan, not partisan, coalitions of AGs. Indeed, for the first twenty years of our study, the cosigners of these briefs were generally indistinguishable from a random sampling of AGs then in office. The picture changes after 2000, when the coalitions of cosigners become decidedly more partisan, particularly among Republican AGs. The partisanship picture is also different for the 6% of cases in which different states square off in opposing briefs. In those cases, AGs do tend to join together in partisan clusters. Here, too, the appearance of partisanship becomes stronger after the mid-1990s.
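
The baseline comparison in the study, whether a brief's cosigners look like a random sampling of the AGs then in office, can be sketched as a permutation test (a hypothetical reconstruction, not the authors' code or data):

```python
# Permutation-test sketch of the study's baseline (hypothetical reconstruction,
# invented data): is a brief's coalition of cosigning AGs more same-party than
# a random sample of the same size drawn from the AGs then in office?

import random
from typing import List

def same_party_share(parties: List[str]) -> float:
    """Share of cosigner pairs that share a party affiliation."""
    pairs = [(a, b) for i, a in enumerate(parties) for b in parties[i + 1:]]
    return sum(a == b for a, b in pairs) / len(pairs)

def permutation_test(coalition: List[str], bench: List[str], draws: int = 10_000) -> float:
    """Share of random same-size coalitions at least as same-party as the
    observed one; a small value indicates partisan clustering."""
    observed = same_party_share(coalition)
    hits = sum(
        same_party_share(random.sample(bench, len(coalition))) >= observed
        for _ in range(draws)
    )
    return hits / draws

# Invented example: 8 cosigners, 7 of them Republican, drawn from a bench of
# 50 AGs split 26 R / 24 D. A random coalition's expected same-party share is
# about 0.49; the observed coalition's is 0.75.
bench = ["R"] * 26 + ["D"] * 24
print(permutation_test(["R"] * 7 + ["D"], bench))  # small -> partisan clustering
```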

Daniel E. Ho & Frederick Schauer

Oliver Wendell Holmes's notion of the marketplace of ideas—that the best test of truth is the power of an idea to get itself accepted in the competition of the market—is a central idea in free speech thought. Yet extant social science evidence provides at best mixed support for the metaphor's veracity, and thus for the view that the truth of a proposition has substantial explanatory force in determining which propositions will be accepted and which not. But even if establishing an open marketplace for ideas is unlikely to produce a net gain in human knowledge, it may have other consequences. We illustrate how to empirically study the consequences of establishing or restricting a communicative domain. Our focus is on time, place, and manner restrictions, and we examine two potential natural experiments involving speech buffer zones around polling places and health care facilities providing abortions. Using a regression discontinuity design with geocoded polling information for over 1.3 million voters in two high-density jurisdictions (Hudson County and Manhattan), we provide suggestive evidence that speech restrictions in Hudson County reduced turnout among voters in the buffer zone. By failing to cue voters about the election, speech restrictions may have unanticipated costs. And using difference-in-differences and synthetic control matching with state-level data from 1973 to 2011, we illustrate how one might study the impact of speech restrictions around health care facilities. Although the evidence is limited, Massachusetts’s restrictions were accompanied, if anything, by a decrease in the abortion rate. Buffer zones might channel speech toward more persuasive forms, belying the notion that the cure for bad speech is plainly more speech.
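
The polling-place design can be sketched as a simple discontinuity comparison (invented data, names, and radii; the study itself uses geocoded voter files and a full regression discontinuity specification): voters whose homes fall just inside the buffer zone are compared with voters just outside it.

```python
# Sketch of the regression discontinuity comparison (invented data and names,
# not the study's voter file): voters living just inside the buffer-zone
# radius around their polling place are compared to voters living just
# outside it, on the assumption that falling a few meters on either side of
# the cutoff is as good as random.

from typing import List, Tuple

BUFFER_RADIUS_M = 30.0  # assumed buffer-zone radius around the polling place
BANDWIDTH_M = 15.0      # compare voters within this distance of the cutoff

def rd_turnout_gap(voters: List[Tuple[float, bool]]) -> float:
    """voters: (distance_to_polling_place_m, voted) pairs. Returns turnout
    just inside the buffer minus turnout just outside it."""
    inside = [voted for dist, voted in voters
              if BUFFER_RADIUS_M - BANDWIDTH_M <= dist < BUFFER_RADIUS_M]
    outside = [voted for dist, voted in voters
               if BUFFER_RADIUS_M <= dist < BUFFER_RADIUS_M + BANDWIDTH_M]
    return sum(inside) / len(inside) - sum(outside) / len(outside)

# Invented toy data: turnout is lower just inside the buffer, the direction
# of the effect the Article reports for Hudson County.
voters = [(20.0, True), (24.0, False), (28.0, False),
          (33.0, True), (38.0, True), (42.0, True)]
print(f"{rd_turnout_gap(voters):+.3f}")  # -0.667
```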
