New York University
Law Review

Volume 88, Number 5

November 2013
Articles

Judges and Their Papers

Kathryn A. Watts

Who should own a federal judge’s papers? This question has rarely been asked. Instead, it has generally been accepted that the Justices of the U.S. Supreme Court and other federal judges own their working papers, which include papers created by judges relating to their official duties, such as internal draft opinions, confidential vote sheets, and case-related correspondence. This longstanding tradition of private ownership has led to tremendous inconsistency. For example, Justice Thurgood Marshall’s papers were released just two years after he left the bench, revealing behind-the-scenes details about major cases involving issues such as abortion and flag burning. In contrast, Justice David Souter’s papers will remain closed until the fiftieth anniversary of his retirement, and substantial portions of Justice Byron White’s papers, including files relating to the landmark case of Miranda v. Arizona, were shredded. In addition, many collections of lower federal court judges’ papers have been scattered in the hands of judges’ families. Notably, this private ownership model has persisted despite the fact that our country’s treatment of presidential records shifted from private to public ownership through the Presidential Records Act of 1978. Furthermore, private ownership of judicial papers has endured even though it has proven ill-equipped to balance the many competing interests at stake, ranging from calls for governmental accountability and transparency on the one hand, to the judiciary’s independence, collegiality, confidentiality, and integrity on the other.

This Article is the first to give significant attention to the question of who should own federal judges’ working papers and what should happen to the papers once a judge leaves the bench. Upon the thirty-fifth anniversary of the enactment of the Presidential Records Act, this Article argues that judges’ working papers should be treated as governmental property—just as presidential papers are. Although there are important differences between the roles of president and judge, none of the differences suggest that judicial papers should be treated as a species of private property. Rather than counseling in favor of private ownership, the unique position of federal judges, including the judiciary’s independence in our constitutional design, suggests the advisability of crafting rules that speak to reasonable access to and disposition of judicial papers. Ultimately, this Article—giving renewed attention to a long-forgotten 1977 governmental study commissioned by Congress—argues that Congress should declare judicial papers public property and should empower the judiciary to promulgate rules implementing the shift to public ownership. These would include, for example, rules governing the timing of public release of judicial papers. By involving the judiciary in implementing the shift to public ownership, Congress would enhance the likelihood of judicial cooperation, mitigate separation of powers concerns, and enable the judiciary to safeguard judicial independence, collegiality, confidentiality, and integrity.

Convenient Facts: Nken v. Holder, the Solicitor General, and the Presentation of Internal Government Facts

Nancy Morawetz

In April 2012, facing a court order to disclose internal Justice Department e-mails, the Office of the Solicitor General (OSG) wrote to the United States Supreme Court to admit that it had made a factual statement to the Court three years earlier in Nken v. Holder about agency policy and practice that was not accurate. The statement had been based on e-mail communications between Justice Department and agency lawyers. In fact, the statement reflected neither the content of the e-mails nor the actual policy and practice of the relevant government agencies. The letter promised remedial measures and concluded by assuring the Court that the OSG took its responsibility of candor seriously. The underlying factual representation by the OSG in the Nken case was unusual because it attracted attention and lengthy Freedom of Information Act (FOIA) litigation that led to the disclosure of the communications that served as the basis of the statement. But it is not at all unusual as an example of unsupported factual statements by government lawyers that are used to support legal arguments. Indeed, unsupported statements appear in OSG briefs on a wide range of issues. These statements benefit from the unusual position of the government: It has access to information not available to other litigants, and it benefits from a presumption of candor that endows its statements with a claim of self-evident authority that no private litigant could match.

The Nken case provides a unique opportunity to explore the consequences of judicial acceptance of fact statements provided by the OSG. Because of FOIA litigation, we have an opportunity to examine how the OSG gathered information as well as the role played by government counsel at the Justice Department and the interested agencies. This examination shows multiple dangers with unsupported statements about internal government facts. It also demonstrates the difficulty of relying on lawyers representing the government to seek out and offer information that will undermine the government’s litigation position. Finally, it shows that it is dangerous to rely on the party that has misled the Court to develop an appropriate remedy.

Prevention of misleading statements could be pursued through greater self-regulation, through prohibition of extra-record factual statements, or through a model of disclosure and rebuttal. This Article argues that the experience in Nken reflects the grave danger in presuming that self-regulation is an adequate safeguard against erroneous statements. It further argues that despite the appeal of a rigid rule that prohibits such statements, such an approach ignores the Court’s interest in information about real world facts that are relevant to its decisions. The Article concludes by arguing that the best proactive approach is to adopt a formal system of advance notice combined with access to the basis of government representations of fact. It further argues that courts should refuse to honor statements in court decisions that are based on untested and erroneous statements of fact by the government.

Targeted Warfare: Individuating Enemy Responsibility

Samuel Issacharoff, Richard H. Pildes

Legitimacy of the use of military force is undergoing a fundamental but insufficiently appreciated moral and legal transformation. Whereas the traditional practices and laws of war defined enemy forces in terms of categorical, group-based judgments that turned on status—a person was an enemy not because of any specific actions he himself engaged in but because he was a member of an opposing military force—we are now moving to a world that, implicitly or explicitly, requires the individuation of the responsibility of enemy persons in order to justify the use of military force. Increasingly, the legitimate use of military force is tied to quasi-adjudicative judgments about the individual acts and roles of specific enemy figures; this is the case whether the use of force involved is military detention or targeted killing. This transformation transcends conventional debates about whether terrorist actions should be treated as acts of war or crime and is more profound in its implications.

This readjustment in the basic premises underlying the justified use of military force will have, and is already having, implications for all the institutions involved in the use of military force and in the processes by which decisions are made to use force. For the military, this change will generate pressures to create internal, quasi-adjudicative processes to ensure accurate, credible judgments about the individual responsibility of particular enemy fighters. For the executive, these changes will propel greater engagement in decisions that had previously been exclusively within the province of the military. For the courts, this transformation toward individuated judgments of responsibility will inevitably bring about a greater judicial role in assessing wartime judgments than in the past; this expansion has begun to occur already. These changes are not yet directly reflected (or at least fully reflected) in the formal laws of war, but we anticipate that, as these changes embed themselves in the practices of states, especially dominant states, they will eventually be embodied in the legal frameworks that regulate the use of force. This Article will identify this fundamental transformation as the central factor driving struggles over the proper boundaries of military force and then explore the ramifications of this change for issues like military detention and targeted killings.

Notes

Accounting for Punishment in Proportionality Review

Julia L. Torti

The Eighth Amendment has been interpreted to demand proportionality between an offender’s crime and his punishment. However, the current proportionality standard is widely regarded as meaningless. In weighing the severity of the crime against the harshness of the punishment, modern courts do not consider any aspect of the sentence beyond the number of years listed. This Note argues that a more comprehensive analysis of the features of a sentence that contribute to its severity has the potential to reinvigorate the proportionality principle by giving courts a fuller picture of the harshness of modern sentences. Although there are some hurdles to conducting this more robust analysis, this Note proposes methods by which courts could consider the true length of carceral sentences, the prison conditions in which the sentences are served, and the collateral consequences that accompany many criminal convictions. In so doing, this Note demonstrates that some methods of accounting more accurately for the harshness of punishments are neither impracticable nor in tension with other areas of Eighth Amendment doctrine.

Unscrambling the Egg: Social Constructionism and the Antireification Principle in Constitutional Law

Natasha J. Silber

Since the mid-twentieth century, the Court’s developing view on the social construction of identity has driven some of the most fundamental changes in modern equal protection jurisprudence. One of these transformations has been the development of what I call the “antireification principle” in the Court’s affirmative action cases. Under this principle, an important function of constitutional law is to regulate social meaning in accordance with the view that social categories like race are mere constructs. Guided by the antireification norm, the Court has used judicial review to block state action that, in its estimation, treats false constructs as real, important, or enduring. The Court, however, has been highly selective in its application of the principle outside of the race context. Where gender and sexuality are at issue, the Court has been more than willing to cast existing categories as real and even celebrate them.

This Note describes and questions the Court’s selective use of antireification, suggesting that there is no reason, per se, why antireification could not further the goal of social equality in the realms of gender and sexuality. By denying their bases in reality, the Court could, according to the logic of antireification, destabilize all such identity constructs and decrease the harms they cause. This Note proceeds to hypothesize a set of explanations for the Court’s selective application of the principle, but ultimately finds each unsatisfying. Finally, it suggests that selective deployment of antireification is symptomatic of inherent contradictions embedded in the structure of contemporary equal protection doctrine, which relies upon fixed identity categories at the same time that it seeks to destroy them.

Wrapped in Ambiguity: Assessing the Expressiveness of Bareback Pornography

Christopher A. Ramos

Contrary to popular belief, pornography has not won the culture war. Far from enjoying the spoils of victory, pornography instead faces legislative ire up to the point of absolute prohibition. On November 6, 2012, close to fifty-six percent of voters approved the County of Los Angeles Safer Sex in the Adult Film Industry Act (“Measure B”), completely prohibiting “bareback”—or condom-free—pornography production. An intuitive response to such an imposition is to raise a First Amendment claim. However, bareback pornography has yet to receive explicit protection from any legislature or court. This Note takes a step toward assessing bareback pornography’s First Amendment status by first arguing that bareback pornography is sufficiently expressive to merit First Amendment protection under traditional theoretical justifications, doctrine, and emerging arguments for an expanded interpretation of First Amendment protection. This Note then argues that Measure B is a content-based restriction on protected expression and, therefore, should receive the Court’s most demanding scrutiny. Under such a test, Measure B should be deemed unconstitutional.

U.S. Agency Independence and the Global Democracy Deficit

Paul E. Hubble

Critics have accused transnational regulatory networks (TRNs) such as the Basel Committee on Banking Supervision of being undemocratic, but they rarely step back and ask if democracy is the right criterion for evaluating regulatory networks. Such critics often point to the seemingly robust checks of domestic administrative law and argue that similar mechanisms should constrain TRNs. However, the Federal Reserve Board of Governors, a significant banking regulator in the United States, is not democratic. Using the Federal Reserve Board as a case study, this Note challenges critics’ claims that there is such a wide gulf between domestic and global procedures.

Toward a Bayesian Analysis of Recanted Eyewitness Identification Testimony

Kristy L. Fields

The reliability of eyewitness identification has been increasingly questioned in recent years. Despite acknowledgment that such evidence is not only unreliable but also overly emphasized by judicial decisionmakers, antiquated procedural rules and a lack of guidance as to how to properly weigh identification evidence still produce unsettling results in some cases. Troy Anthony Davis was executed in 2011 amidst public controversy regarding the eyewitness evidence against him. At trial, nine witnesses identified Davis as the perpetrator. However, after his conviction, seven of those witnesses recanted. Bogged down by procedural restrictions and long-held judicial mistrust of recantation evidence, Davis never received a new trial, and his execution produced worldwide criticism.

On the 250th anniversary of Bayes’ Theorem, this Note applies Bayesian analysis to Davis’s case to demonstrate a potential solution to this uncertainty. By using probability theory and scientific evidence of eyewitness accuracy rates, it demonstrates how a judge might have included the weight of seven recanted identifications to determine the likelihood that the initial conviction was made in error. This Note demonstrates that two identifications and seven nonidentifications result in only a 31.5% likelihood of guilt, versus the 99% likelihood represented by nine identifications. This Note argues that Bayesian analysis can, and should, be used to evaluate such evidence. Use of an objective method of analysis can ameliorate cognitive biases and implicit mistrust of recantation evidence. Furthermore, most arguments against the use of Bayesian analysis in legal settings do not apply to post-conviction hearings evaluating recantation evidence. Therefore, habeas corpus judges faced with recanted eyewitness identifications ought to consider implementing this method.
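The kind of updating the Note describes can be sketched in a few lines of code. This is a minimal illustration only: the prior and the witness accuracy rates below are hypothetical placeholders, not the empirically derived figures the Note uses, so the outputs will not match its 31.5% and 99% results. It simply shows how each identification or nonidentification shifts the posterior probability of guilt under Bayes’ Theorem, treating witnesses as independent.

```python
# Illustrative sketch of Bayesian updating on eyewitness evidence.
# All rates below are assumed for demonstration; the Note's actual
# accuracy figures (and hence its 31.5% result) are not reproduced here.

def posterior_guilt(ids, non_ids, prior=0.5, hit_rate=0.8, false_id_rate=0.2):
    """P(guilty | evidence), treating each witness as independent.

    hit_rate:      P(witness identifies defendant | defendant guilty)
    false_id_rate: P(witness identifies defendant | defendant innocent)
    """
    # Likelihood of this pattern of identifications under each hypothesis
    like_guilty = (hit_rate ** ids) * ((1 - hit_rate) ** non_ids)
    like_innocent = (false_id_rate ** ids) * ((1 - false_id_rate) ** non_ids)
    numerator = like_guilty * prior
    return numerator / (numerator + like_innocent * (1 - prior))

# Nine identifications drive the posterior toward certainty; recasting
# seven of them as nonidentifications after recantation collapses it.
print(posterior_guilt(ids=9, non_ids=0))
print(posterior_guilt(ids=2, non_ids=7))
```

Under these assumed rates the nine-identification posterior exceeds 0.99 while the post-recantation posterior falls well below one half, which is the directional point the Note makes with its own figures.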

Randomizing Immigration Enforcement: Exploring a New Fourth Amendment Regime

Cynthia Benin

This Note draws upon immigration law to analyze a new Fourth Amendment regime put forth by criminal law scholars Bernard Harcourt and Tracey Meares. In Randomization and the Fourth Amendment, Harcourt and Meares propose a model for reasonable searches and seizures that dispenses with individualized suspicion in favor of random, checkpoint-like stops. Randomization, the authors contend, will ensure that enforcement is evenhanded and will alleviate burdens that result from discriminatory targeting. This Note explores the possibility of randomization in immigration enforcement, a useful context to test the Harcourt-Meares model because it exemplifies the ills the authors seek to address. Though the analysis demonstrates that randomization falls far short of its goals, its failures are instructive. Indeed, the lens of immigration enforcement illuminates essential conditions that must exist in order for randomization to be viable.