Empirical Legal Studies
We live in the age of empiricism, and in that age, constitutional law is a relative backwater. Although quantitative methods have transformed entire fields of scholarly inquiry, reshaping what we ask and what we know, those who write about the Constitution rarely resort to quantitative methodology to test their theories. That seems unfortunate, because empirical analysis can illuminate important questions of constitutional law. Or, at least, that is the hypothesis to be tested in this Symposium.
We brought together a terrific group of scholars with a unique assignment. We paired distinguished constitutional thinkers with equally accomplished empiricists. We asked the law scholars to identify a core question, assumption, or doctrine from constitutional law, and we asked the empiricists to take a cut at answering it, or at least at figuring out how one might try to answer it. We understood that their answers might be preliminary at best, and that the questions might be resistant to easy answers. This is so, in part, because empiricism is as much a means of refining questions as it is a way of answering them.
The balance of this Foreword is, in a sense, an introduction to the idea that more serious empirical analysis can further both constitutional law scholarship and constitutional law decisionmaking. Hence our title: Testing the Constitution.
The Federal Sentencing Guidelines were promulgated in response to concerns about widespread disparities in sentencing. After almost two decades of determinate sentencing, the Guidelines were rendered advisory in United States v. Booker. How has greater judicial discretion affected interjudge disparities, or differences in sentencing outcomes that are attributable to the mere happenstance of the sentencing judge assigned? This Article uses new data covering almost 400,000 criminal defendants linked to sentencing judges to undertake the first national empirical analysis of interjudge disparities after Booker.
The results are striking: Interjudge sentencing disparities have doubled since the Guidelines became advisory. Some of the recent increase in disparities can be attributed to differential sentencing behavior associated with judge demographic characteristics, with Democratic and female judges being more likely to exercise their enhanced discretion after Booker. Newer judges appointed post-Booker also appear less anchored to the Guidelines than judges with experience sentencing under the mandatory Guidelines regime.
Disentangling the effects of various actors on sentencing disparities, I find that prosecutorial charging is likely a prominent source of disparities. Rather than charging mandatory minimums uniformly across eligible cases, prosecutors appear to selectively apply mandatory minimums in response to the identity of the sentencing judge, potentially through superseding indictments. Drawing on this empirical evidence, this Article suggests that recent sentencing proposals calling for a reduction in judicial discretion in order to reduce disparities may overlook the substantial contribution of prosecutors.
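The identification idea underlying this kind of analysis, that random case assignment makes each judge's caseload statistically similar, so systematic gaps between judges' average sentences reflect the judges rather than their cases, can be sketched in a few lines. Everything below (the judge count, severity shifts, and noise parameters) is hypothetical illustration, not the Article's actual specification or data.

```python
import random
import statistics

def interjudge_spread(sentences_by_judge):
    """Standard deviation of judges' mean sentences (in months).

    Under random case assignment, large gaps between judges' average
    sentences indicate interjudge disparity rather than case mix.
    """
    means = [statistics.mean(s) for s in sentences_by_judge.values()]
    return statistics.pstdev(means)

random.seed(0)
# Hypothetical court: 8 judges, 200 randomly assigned cases each,
# where each judge applies an idiosyncratic severity shift of 0-35 months.
disparate = {j: [random.gauss(60 + 5 * j, 12) for _ in range(200)]
             for j in range(8)}
# Counterfactual court with no interjudge disparity: identical judges.
uniform = {j: [random.gauss(60, 12) for _ in range(200)]
           for j in range(8)}

biased_spread = interjudge_spread(disparate)   # large: judge identity matters
uniform_spread = interjudge_spread(uniform)    # small: only sampling noise
```

Comparing the two spreads shows why the measure works: when judges differ only by sampling noise, the spread of their mean sentences collapses toward zero, so any excess spread is attributable to the judge assigned.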
This Article empirically examines an issue central to judicial and scholarly debate about civil rights damages actions: whether law enforcement officials are financially responsible for settlements and judgments in police misconduct cases. The Supreme Court has long assumed that law enforcement officers must personally satisfy settlements and judgments, and has limited individual and government liability in civil rights damages actions—through qualified immunity doctrine, municipal liability standards, and limitations on punitive damages—based in part on this assumption. Scholars disagree about the prevalence of indemnification: Some believe officers are almost always indemnified against settlements and judgments; others contend indemnification is not a certainty. In this Article, I report the findings of a national study of police indemnification. Through public records requests, interviews, and other sources, I have collected information about indemnification practices in forty-four of the largest law enforcement agencies across the country, and in thirty-seven small and mid-sized agencies. My study reveals that police officers are virtually always indemnified: During the study period, governments paid approximately 99.98% of the dollars that plaintiffs recovered in lawsuits alleging civil rights violations by law enforcement. Law enforcement officers in my study never satisfied a punitive damages award entered against them and almost never contributed anything to settlements or judgments—even when indemnification was prohibited by law or policy, and even when officers were disciplined, terminated, or prosecuted for their conduct. After describing my findings, this Article considers the implications of widespread indemnification for qualified immunity, municipal liability, and punitive damages doctrines; civil rights litigation practice; and the deterrence and compensation goals of 42 U.S.C. § 1983.
Stark racial disparities define America’s relationship with the death penalty. Though commentators have scrutinized a range of possible causes for this uneven racial distribution of death sentences, no convincing evidence suggests that any one of these factors consistently accounts for the unjustified racial disparities at play in the administration of capital punishment. We propose that a unifying current running through each of these plausible partial explanations is the notion that the human mind may unwittingly inject bias into the seemingly neutral concepts and processes of death penalty administration.
To test the effects of implicit bias on the death penalty, we conducted a study of 445 jury-eligible citizens in six leading death penalty states. We found that jury-eligible citizens harbored both kinds of implicit racial bias we tested: implicit racial stereotypes about Blacks and Whites generally, as well as implicit associations between race and the value of life. We also found that death-qualified jurors—those who expressed a willingness to consider imposing both a life sentence and a death sentence—harbored stronger implicit and self-reported (explicit) racial biases than excluded jurors. The results of the study underscore the potentially powerful role of implicit bias and suggest that racial disparities in the modern death penalty could be linked to the very concepts entrusted to maintain the continued constitutionality of capital punishment: its retributive core, its empowerment of juries to express the cultural consensus of local communities, and the modern regulatory measures that promised to eliminate arbitrary death sentencing.
This Article identifies how the current spate of state and local regulation is changing the way elected officials, scholars, courts, and the public think about the constitutional dimensions of immigration law and governmental responsibility for immigration enforcement. Reinvigorating the theoretical possibilities left open by the Supreme Court in its 1875 Chy Lung v. Freeman decision, state and local officials characterize their laws as unavoidable responses to the policy problems they face when they are squeezed between the challenges of unauthorized migration and the federal government’s failure to fix a broken system. In its 2012 decision in Arizona v. United States, the Court addressed, but did not settle, the difficult empirical, theoretical, and constitutional questions raised by these enactments and their attendant justifications. Our empirical investigation, however, discovered that most state and local immigration laws are not organic policy responses to pressing demographic challenges. Instead, such laws are the product of a more nuanced and politicized process in which demographic concerns are neither necessary nor sufficient factors and in which federal inactivity and subfederal activity are related phenomena, fomented by the same actors. This Article focuses on the constitutional and theoretical implications of these processes: It presents an evidence-based theory of state and local policy proliferation; it cautions legal scholars to rethink functionalist accounts for the rise of such laws; and it advises courts to reassess their use of traditional federalism frameworks to evaluate these subfederal enactments.
The reliability of eyewitness identification has been increasingly questioned in recent years. Despite acknowledgment that such evidence is not only unreliable but also overemphasized by judicial decisionmakers, antiquated procedural rules and a lack of guidance about how to weigh identification evidence produce unsettling results in some cases. Troy Anthony Davis was executed in 2011 amidst public controversy regarding the eyewitness evidence against him. At trial, nine witnesses identified Davis as the perpetrator. After his conviction, however, seven of those witnesses recanted. Bogged down by procedural restrictions and long-held judicial mistrust of recantation evidence, Davis never received a new trial, and his execution drew worldwide criticism.
On the 250th anniversary of Bayes’ Theorem, this Note applies Bayesian analysis to Davis’s case to demonstrate a potential solution to this uncertainty. By using probability theory and scientific evidence of eyewitness accuracy rates, it demonstrates how a judge might have incorporated the seven recanted identifications in determining the likelihood that the initial conviction was made in error. It shows that two identifications and seven nonidentifications result in only a 31.5% likelihood of guilt, versus the 99% likelihood represented by nine identifications. This Note argues that Bayesian analysis can, and should, be used to evaluate such evidence. Use of an objective method of analysis can ameliorate cognitive biases and implicit mistrust of recantation evidence. Furthermore, most arguments against the use of Bayesian analysis in legal settings do not apply to post-conviction hearings evaluating recantation evidence. Therefore, habeas corpus judges faced with recanted eyewitness identifications ought to consider implementing this method.
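The odds-form Bayesian updating this kind of analysis relies on can be sketched directly. The witness accuracy rates below are hypothetical placeholders, not the scientific rates the Note actually uses, so the resulting probabilities are illustrative only and will not match the Note's 31.5% and 99% figures.

```python
def posterior_guilt(prior, hit_rate, false_id_rate, n_ids, n_non_ids):
    """Posterior probability of guilt via odds-form Bayes' theorem.

    Each witness is treated as an independent observation, with
    P(ID | guilty) = hit_rate and P(ID | innocent) = false_id_rate.
    """
    likelihood_ratio = ((hit_rate / false_id_rate) ** n_ids
                        * ((1 - hit_rate) / (1 - false_id_rate)) ** n_non_ids)
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical rates for illustration: prior 0.5, hit rate 0.7,
# false-identification rate 0.3.
nine_ids = posterior_guilt(0.5, 0.7, 0.3, 9, 0)      # all nine identify
after_recant = posterior_guilt(0.5, 0.7, 0.3, 2, 7)  # seven recant
```

Treating a recantation as a nonidentification flips its likelihood ratio below one, so each recantation pulls the posterior sharply downward rather than merely subtracting one identification's worth of support, which is the quantitative heart of the Note's argument.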
Every year, medical error kills and injures hundreds of thousands of people and costs billions of dollars in lost income, lost household production, disability, and healthcare expenses. In recent years, hospitals have implemented multiple systems to gather information about medical errors, understand the causes of these errors, and change policies and practices to improve patient safety. The effect of malpractice lawsuits on these patient safety efforts is hotly contested. Some believe that the fear of malpractice liability inhibits the kind of openness and transparency needed to identify and address the root causes of medical error. Others believe that malpractice litigation brings crucial information about medical error to the surface and creates financial, political, and institutional pressures to improve. Yet neither side in this debate offers much evidence to support its claims.
Drawing on a national survey of healthcare professionals and thirty-five in-depth interviews of those responsible for managing risk and improving patient safety in hospitals across the country, I find reason to believe that malpractice litigation is not significantly compromising the patient safety movement’s call for transparency. In fact, the opposite appears to be occurring: The openness and transparency promoted by patient safety advocates appear to be influencing hospitals’ responses to litigation risk. Hospitals, once afraid of disclosing and discussing error for fear of liability, increasingly encourage transparency with patients and medical staff. Moreover, lawsuits play a productive role in hospital patient safety efforts by revealing valuable information about weaknesses in hospital policies, practices, providers, and administration. These findings should inform open and pressing questions about medical malpractice reform and the best ways to continue improving patient safety.
Standard-form contracting is the engine of the mass-market economy, yet we know little about what drives it and what factors are associated with its evolution. Understanding change and innovation in the substance, length, and complexity of fine print in the consumer context can help regulators identify sources of potential intervention as well as help them evaluate the effectiveness of mandatory disclosure regimes, which are commonly used as consumer protection tools. This Article studies the rate, direction, and determinants of change in consumer standard-form contracting. We examine what changed between 2003 and 2010 in the terms of 264 mass-market consumer software license agreements. Thirty-nine percent of contracts materially changed at least one term, and some changed as many as fourteen terms. The average contract became more pro-seller as well as several hundred words longer. The increase in length is not due to the use of simpler language. Contract readability has been constant: The average contract is as readable as an article in a scientific journal. The variance of contract length has grown, as has the variance in overall pro-seller bias, resulting in reduced contract standardization over time. Firms that were younger, larger, or growing, as well as firms with in-house counsel, were more likely to change existing terms and to introduce new terms to take advantage of technological and market developments. Contracts appear to respond to litigation outcomes: Terms that were increasingly enforced by courts were more frequently used in contracts, and vice versa. The results indicate that software license agreements are relatively dynamic and shaped by multiple factors over time. We discuss the consumer protection implications of contracts’ increased length and complexity over time.
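The readability comparison can be made concrete with a standard formula. The sketch below uses the Flesch Reading Ease score with a naive syllable heuristic; the Article does not say which readability metric it employed, so this is one plausible implementation rather than its actual method. On the Flesch scale, higher scores mean easier text, and scores near 30 or below correspond roughly to scientific-journal prose.

```python
import re

def count_syllables(word):
    # Naive heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease score: penalizes long sentences and long words."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

# Hypothetical contract language for comparison.
plain = "The seller may cancel the deal at any time."
legalese = ("Notwithstanding the aforementioned provisions, the seller "
            "hereby reserves the unilateral and irrevocable right to "
            "terminate this agreement at its sole discretion.")
```

Running both snippets through the scorer shows the mechanism behind the finding: added words only leave readability constant if sentence length and word complexity stay constant too, so longer contracts that hold a journal-article-level score are adding equally dense language, not simpler language.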
Scholars have catalogued rigidities in contract design. Some have observed that boilerplate provisions are remarkably resistant to change, even in the face of shocks such as adverse judicial interpretations. Empirical studies of debt contracts and collateral, in contrast, suggest that covenant and collateral terms are customized to the characteristics of the borrower and evolve in response to changes in market conditions, such as expansion and contraction in credit supply. Building on the adverse selection and moral hazard theories of covenants and collateral, we demonstrate that an expansion (contraction) of credit will lead not only to a decrease (increase) in the interest rate but also to a reduction (expansion) of covenants and collateral through lessening (worsening) adverse selection and moral hazard problems. We conclude with some empirical implications of this analysis.
Contract scholarship has given little attention to the production process for contracts. The usual assumption is that the parties will construct the contract ex nihilo, choosing all the terms so that they will maximize the surplus from the contract. In fact, parties draft most contracts by slightly modifying the terms of contracts that they have used in the past, or that other parties have used in related transactions. A small literature on boilerplate recognizes this phenomenon, but little empirical work examines the process. This Article provides an empirical analysis by drawing on a dataset of sovereign bonds. We show that exogenous factors are key determinants in the evolution of these contracts. We find an evolutionary pattern that roughly separates into three stages: stage one, in which a particular standard form dominates in the absence of external shocks; stage two, in which external shocks occur and marginal players experiment with deviations from the standard form; and stage three, in which a new standard emerges. We find that more marginal law firms are likely to be leaders in innovation at early stages of the innovation cycle but that dominant law firms are leaders at later stages.