A settlement is an agreement between parties to a dispute. In everyday parlance and in academic scholarship, settlement is juxtaposed with trial or some other method of dispute resolution in which a third-party factfinder ultimately picks a winner and announces a score. The “trial versus settlement” trope, however, represents a false choice; viewing settlement solely as a dispute-ending alternative to a costly trial leads to a narrow understanding of how dispute resolution should and often does work. In this Article, we describe and defend a much richer concept of settlement, amounting in effect to a continuum of possible agreements between litigants along many dimensions. “Fully” settling a case, of course, appears to completely resolve a dispute, and if parties to a dispute rely entirely on background default rules, a “naked” trial occurs. But in reality virtually every dispute is “partially” settled. The same forces that often lead parties to fully settle—joint value maximization, cost minimization, and risk reduction—will under certain conditions lead them to enter into many other forms of Pareto-improving agreements while continuing to actively litigate against one another. We identify three primary categories of these partial settlements: award-modification agreements, issue-modification agreements, and procedure-modification agreements. We provide real-world examples of each and rigorously link them to the underlying incentives facing litigants. Along the way, we use our analysis to characterize unknown or rarely observed partial settlement agreements that nevertheless seem theoretically attractive, and we allude to potential reasons for their scarcity within the context of our framework. Finally, we study partial settlements and how they interact with each other in real-world adjudication using new and unique data from New York’s Summary Jury Trial Program. 
Patterns in the data are consistent with parties using partial settlement terms both as substitutes and as complements for other terms, depending on the context, and suggest that entering into a partial settlement can reduce the attractiveness of full settlement. We conclude by briefly discussing the distinctive welfare implications of partial settlements.
Empirical Legal Studies
In recent years, antidiscrimination scholars have focused on the productive possibilities of the “universal turn,” a strategy that calls on attorneys to convert particularist claims, like race discrimination claims, into broader universalist claims that secure basic dignity, liberty, and fairness rights for all. Scholars have urged litigators to employ universalist strategies in constitutional and voting rights cases, as well as in employment litigation. Thus far, however, arguments made in favor of universalism have largely been abstract and theoretical and therefore have failed to fully consider the second-order effects of universalist strategies on the ground. In this Article, we challenge the prevailing arguments in favor of universalism by exploring the market consequences that arise as lawyers shift from particularist Title VII race discrimination claims to universalist Fair Labor Standards Act claims. Drawing on a review of case filing statistics and an inductive, purposeful sample of attorney interviews, we describe a phenomenon we call “post-racial hydraulics,” which are a set of non-ideological, economic, and pragmatism-based drivers produced by the trend toward universalism. Post-racial hydraulics must be understood as key but previously unexplored factors in racial formation. Left unchecked, these non-ideological drivers will have substantive ideological effects, as they threaten to fundamentally reshape the employment litigation market and alter our understanding of race discrimination.
When it wants to be, the federal government is good at counting things. It tracks average daily caffeine intake (300 milligrams per adult older than twenty-two in 2008), weekly instances of the flu (875 reported by public health laboratories in the week ending January 14, 2017), monthly production of hens’ eggs (8.97 billion in December 2016), and annual bicycle thefts (204,984 in 2015). But it currently cannot provide a comprehensive count of how often police officers use lethal force against its citizens. The deaths of Michael Brown, Walter Scott, Tamir Rice, Laquan McDonald—all unarmed, black, and shot by police officers—and far too many others have forced the issue of lethal police use of force into the national consciousness. But while many recent reports have focused on the unreliability of current data, there has been relatively little consideration of how, exactly, the federal government might go about getting it. This Note seeks to fill this gap by laying out the contours within which the federal government can act to incentivize states to collect more and better data. After highlighting the need for robust data collected at the federal level and describing various issues with the current state of federal collection of law enforcement data, this Note outlines the legal landscape legislators considering such a policy must grapple with: the combination of federalism concerns that are particularly acute in the sphere of state and local law enforcement, and the Supreme Court’s somewhat ambiguous conditional spending jurisprudence. Finally, it explains how the federal government might incentivize data collection without running afoul of the law, proposing a legislative scheme for federal collection of law enforcement data that combines national guidelines, conditional spending requirements, and competitive grant funding.
Few decisions in the criminal justice process are as consequential as the determination of bail. Indeed, recent empirical research finds that pre-trial detention imposes substantial long-term costs on defendants and society. Defendants who are detained before trial are more likely to plead guilty, less likely to be employed, and less likely to access social safety net programs for several years after arrest. Spurred in part by these concerns, critics of the bail system have urged numerous jurisdictions to adopt bail reforms, which have led to growing momentum for a large-scale transformation of the bail system. Yet supporters of the current system counter that pre-trial detention reduces flight and pre-trial crime—recognized benefits to society—by incapacitating defendants. Despite empirical evidence in support of both positions, however, advocates and critics of the current bail system have generally ignored the real trade-offs associated with detention.
This Article provides a broad conceptual framework for how policymakers can design a better bail system by weighing both the costs and benefits of pre-trial detention—trade-offs that are historically grounded in law, but often disregarded in practice. I begin by presenting a simple taxonomy of the major categories of costs and benefits that stem from pre-trial detention. Building from this taxonomy, I conduct a partial cost-benefit analysis that incorporates the existing evidence, finding that the current state of pre-trial detention is generating large social losses. Next, I formally present a framework that accounts for heterogeneity in both costs and benefits across defendants, illustrating that detention on the basis of “risk” alone can lead to socially suboptimal outcomes.
In the next part of the Article, I present new empirical evidence showing that a cost-benefit framework has the potential to improve accuracy and equity in bail decision-making, where currently bail judges are left to their own heuristics and biases. Using data on criminal defendants and bail judges in two urban jurisdictions, and exploiting variation from the random assignment of cases to judges, I find significant judge differences in pre-trial release rates, the assignment of money bail, and racial gaps in release rates. While there are any number of reasons why judges within the same jurisdiction may vary in their bail decisions, these results indicate that judges may not all be setting bail at the socially optimal level.
The conceptual framework developed in this Article also sheds light on the ability of recent bail reforms to increase social welfare. While the empirical evidence is scant, electronic monitoring holds promise as a welfare-enhancing alternative to pre-trial detention. In contrast, application of the conceptual framework cautions against the expanding use of risk-assessment instruments. These instruments, by recommending the detention of high-risk defendants, overlook the possibility that these high-risk defendants may also be “high-harm” such that they are most adversely affected by a stay in jail. Instead, I recommend that jurisdictions develop “net benefit” assessment instruments by predicting both risk and harm for each defendant in order to move closer toward a bail system that maximizes social welfare.
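The Article's contrast between risk-only detention and a "net benefit" rule can be illustrated with a minimal sketch using hypothetical numbers (ours for illustration, not the Article's data): a rule that weighs the expected cost a defendant's release would impose against the harm detention inflicts on that defendant can release a high-risk but high-harm defendant whom a risk threshold would detain.

```python
# Hypothetical defendants: (id, risk of flight/pre-trial crime,
# social cost if that risk is realized, harm detention imposes on the
# defendant). All figures are invented for illustration.
defendants = [
    ("A", 0.6, 10_000, 30_000),  # high-risk, but also high-harm
    ("B", 0.2, 10_000, 2_000),   # low-risk, low-harm
    ("C", 0.5, 10_000, 1_000),   # moderate-risk, low-harm
]

def detain_by_risk(d, threshold=0.4):
    """Risk-only rule: detain anyone above a fixed risk threshold."""
    return d[1] > threshold

def detain_by_net_benefit(d):
    """Net-benefit rule: detain only when the expected cost averted
    by detention exceeds the harm detention causes the defendant."""
    expected_cost_averted = d[1] * d[2]
    return expected_cost_averted > d[3]

risk_rule = [d[0] for d in defendants if detain_by_risk(d)]
net_rule = [d[0] for d in defendants if detain_by_net_benefit(d)]
print(risk_rule)  # ['A', 'C']: risk threshold detains both A and C
print(net_rule)   # ['C']: net-benefit rule releases high-harm A
```

Under these invented numbers, defendant A is the Article's "high-risk, high-harm" case: the risk-only rule detains A, while the net-benefit rule does not, because A's expected cost averted (0.6 × 10,000 = 6,000) falls well short of the 30,000 harm detention would impose.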
Observers have suggested that adding sources of interpretation tends to increase interpreter discretion. The idea is embedded in a quip, attributed to Judge Harold Leventhal, that citing legislative history is like “looking over a crowd and picking out your friends.” Participants in debates over interpretive method have applied the idea to the proliferation of other sources as well, including canons of construction and originalist history. But the logic of “more sources, more discretion” has escaped serious testing. And predicting the effect of source proliferation is not a matter of logic alone. The empirical study of how information loads affect behavior has grown dramatically in recent decades, though almost without notice in legal scholarship on interpretive method.
This Article tests the logic and evidence for “more sources, more discretion.” The idea turns out to be incorrect, without more, as a matter of logic: under a simple model of interpretation, adding sources tends to reduce the chance of discretion. This starter model depicts judges as aggregators of source implications, and it draws on basic probability theory and computer simulations to illustrate. The analysis does change if we allow judges to “spin” or “cherry pick” sources, but without much hope for limiting discretion by limiting sources. Of course, judges will not always behave like machines executing instructions or otherwise follow the logic of these models. Thus the Article goes on to spotlight provocative empirical studies of information-load effects, develop working theories of interpreter behavior, and present new evidence.
After emphasizing that interpreters might ignore additional information at some point, the Article tests three other theories. First, an extended dataset casts doubt on an earlier study that linked a growing stock of precedents to increased judicial discretion. Adding to the pile of precedents seems to have no simple pattern of effect on discretion. Second, existing studies indicate that increasing information loads might prompt judges to promote the status quo, and new data suggest that this effect depends on the type of information added. The number of sources cited in appellant briefs appears to have no effect on judges’ willingness to affirm—in contrast with the number of words and issues presented, which may have opposing effects. Third, an expanded dataset supports an earlier finding that judges who face a large number of doctrinal factors might weight those factors in a quasi-legal fashion. This time-saving prioritization does not seem to follow conventional ideological lines.
With simple intuitions in doubt, thoughtful work remains to be done on the effects of source proliferation. Observers interested in judicial discretion have good reason to look beyond source proliferation to find it. And observers interested in institutional design have good reason to rethink the range of consequences when information is added to our judicial systems.
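The aggregator model's counterintuitive result can be sketched with a small simulation. This is one possible operationalization (ours, not necessarily the Article's exact model): each source independently implies one of two outcomes, the judge follows the majority of sources, and "discretion" arises only when the sources exactly balance. Under that assumption, the chance of a tie falls as sources are added.

```python
import random

def tie_rate(n_sources, trials=20_000, seed=0):
    """Fraction of simulated cases in which equally weighted sources
    exactly balance, leaving the aggregating judge free to choose."""
    rng = random.Random(seed)
    ties = 0
    for _ in range(trials):
        # Each source points toward one party (+1) or the other (-1).
        total = sum(rng.choice((-1, 1)) for _ in range(n_sources))
        ties += (total == 0)
    return ties / trials

# For even numbers of sources, the exact tie probability is
# C(n, n/2) / 2**n, which shrinks as n grows: ~0.50, ~0.27, ~0.14.
rates = {n: tie_rate(n) for n in (2, 8, 32)}
print(rates)
```

The simulation matches the binomial calculation: with two sources a judge faces a tie half the time, but with thirty-two sources only about 14% of the time, so under this model source proliferation constrains rather than frees the aggregating judge.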
In administrative law, it is generally assumed that once an agency promulgates a final rule, its work on that project—provided the rule is not litigated—has come to an end. In order to ensure that these static rules adjust to the times, therefore, both Congress and the White House have imposed a growing number of formal requirements on agencies to “look back” at their rules and revise or repeal ones that are ineffective.
Our empirical study of the rulemaking process in three agencies (N = 462 revised rules to 183 parent rules) reveals that—contrary to conventional wisdom—agencies face a variety of incentives to revise and update their rules outside of such formal requirements. Not the least of these is pressure from those groups that are affected by their regulations. There is in fact a vibrant world of informal rule revision that occurs voluntarily and through a variety of techniques. We label this phenomenon “dynamic rulemaking.” In this Article, we share our empirical findings, provide a conceptual map of this unexplored world of rule revisions, and offer some preliminary thoughts about the normative implications of dynamic rulemaking for regulatory reform.
Testing the Constitution
We live in the age of empiricism, and in that age, constitutional law is a relative backwater. Although quantitative methods have transformed entire fields of scholarly inquiry, reshaping what we ask and what we know, those who write about the Constitution rarely resort to quantitative methodology to test their theories. That seems unfortunate, because empirical analysis can illuminate important questions of constitutional law. Or, at least, that is the hypothesis to be tested in this Symposium.
We brought together a terrific group of scholars with a unique assignment. We paired distinguished constitutional thinkers with equally accomplished empiricists. We asked the law scholars to identify a core question, assumption, or doctrine from constitutional law, and we asked the empiricist to take a cut at answering it, or at least at figuring out how one might try to answer it. We understood that their answers might be preliminary at best, that the questions might be resistant to easy answers. This is so, in part, because empiricism is as much a means of refining questions as it is a way of answering them.
The balance of this Foreword is, in a sense, an introduction to the idea that more serious empirical analysis can further both constitutional law scholarship and constitutional law decisionmaking. Hence our title: Testing the Constitution.
Attorneys General as Amici
An important strain of federalism scholarship locates the primary value of federalism in how it carves up the political landscape, allowing groups that are out of power at the national level to flourish—and, significantly, to govern—in the states. On that account, partisanship, rather than a commitment to state authority as such, motivates state actors to act as checks on federal power. Our study examines partisan motivation in one area where state actors can, and do, advocate on behalf of state power: the Supreme Court. We compiled data on state amicus filings in Supreme Court cases from the 1979–2013 Terms and linked them with data on the partisanship of state attorneys general (AGs). Focusing only on merits-stage briefs, we looked at each AG’s partisan affiliation and the partisanship of the AGs who either joined, or explicitly opposed, her briefs. If partisanship drives amicus activity, then we should see a strong negative relationship between the partisan affiliations of AGs who oppose one another’s briefs and a strong positive relationship among those who cosign them.
What we found was somewhat surprising. States agreed far more often than they disagreed, and—until recently—most multistate briefs represented bipartisan, not partisan, coalitions of AGs. Indeed, for the first twenty years of our study, the cosigners of these briefs were generally indistinguishable from a random sampling of AGs then in office. The picture changes after 2000, when the coalitions of cosigners become decidedly more partisan, particularly among Republican AGs. The partisanship picture is also different for the 6% of cases in which different states square off in opposing briefs. In those cases, AGs do tend to join together in partisan clusters. Here, too, the appearance of partisanship becomes stronger after the mid-1990s.
Oliver Wendell Holmes’s notion of the marketplace of ideas—that the best test of truth is the power of an idea to get itself accepted in the competition of the market—is central to free speech thought. Yet extant social science evidence provides at best mixed support for the metaphor’s veracity, and thus for the view that the truth of a proposition has substantial explanatory force in determining which propositions will be accepted and which will not. But even if establishing an open marketplace for ideas is unlikely to produce a net gain in human knowledge, it may have other consequences. We illustrate how to empirically study the consequences of establishing or restricting a communicative domain. Our focus is on time, place, and manner restrictions, and we examine two potential natural experiments involving speech buffer zones around polling places and health care facilities providing abortions. Using a regression discontinuity design with geocoded polling information for over 1.3 million voters in two high-density jurisdictions (Hudson County and Manhattan), we provide suggestive evidence that speech restrictions in Hudson County reduced turnout among voters in the buffer zone. By failing to cue voters that an election is underway, speech restrictions may have unanticipated costs. And using difference-in-differences and synthetic control matching with state-level data from 1973 to 2011, we illustrate how one might study the impact of speech restrictions around health care facilities. Although the evidence is limited, Massachusetts’s restrictions were accompanied, if anything, by a decrease in the abortion rate. Buffer zones might channel speech toward more persuasive forms, belying the notion that the cure for bad speech is plainly more speech.
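The regression discontinuity logic behind the buffer-zone turnout finding can be sketched on synthetic data. Everything here is illustrative and assumed (invented parameters, a naive difference-in-means estimator); it is not the Article's geocoded data or its actual specification. The design compares voters just inside the buffer-zone boundary with voters just outside it.

```python
import random

def simulate_voter(rng):
    """Synthetic voter: distance in meters from the buffer-zone boundary
    (negative = inside the zone) and a turnout draw with a built-in
    3-percentage-point drop inside the zone. Illustrative only."""
    dist = rng.uniform(-100, 100)
    base = 0.60 + 0.0001 * dist            # gentle trend in distance
    p = base - (0.03 if dist < 0 else 0.0) # discontinuity at the boundary
    return dist, rng.random() < p

def rd_estimate(voters, bandwidth=25):
    """Naive RD estimator: mean turnout just outside the cutoff minus
    mean turnout just inside, within a fixed bandwidth."""
    inside = [voted for d, voted in voters if -bandwidth <= d < 0]
    outside = [voted for d, voted in voters if 0 <= d <= bandwidth]
    return sum(outside) / len(outside) - sum(inside) / len(inside)

rng = random.Random(1)
voters = [simulate_voter(rng) for _ in range(200_000)]
effect = rd_estimate(voters)  # recovers roughly the simulated 3-point drop
print(effect)
```

Because the untreated turnout trend is smooth through the boundary, the jump in turnout at the cutoff is attributable to the buffer zone; a real application would use local linear regression rather than this raw difference of means, which absorbs a small trend bias within the bandwidth.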
An Empirical Study of the Roberts Court
Constitutional law casebooks, generations of constitutional lawyers, and the Justices themselves say that the Court is more likely to depart from precedent in constitutional cases than in other types. We test this assumption in cases decided by the Roberts Court and find, at odds with earlier studies, that the data provide inconclusive support for it. Other factors, especially criticism of precedent by lower courts and lawyers, are more consistent and stronger predictors of the Court’s decisions to depart from precedent. These findings have interesting implications for lawyering, teaching, and judging in the constitutional law context.