Volume 92, Number 5


Articles

The Scope of Strong Marks: Should Trademark Law Protect the Strong More than the Weak?

Barton Beebe & C. Scott Hemphill

At the core of trademark law has long been the blackletter principle that the stronger a trademark is, the greater the likelihood that consumers will confuse similar marks with it and thus the wider the scope of protection the mark should receive. On this view, the relation between trademark strength and trademark scope is always positive: the strongest marks receive the widest scope of protection.

In this article, we challenge this conventional wisdom. We argue that as a mark achieves very high levels of strength, the relation between strength and confusion turns negative. The very strength of such a superstrong mark operates to ensure that consumers will not mistake other marks for it. Thus, the scope of protection for such marks ought to be narrower than that accorded to merely strong marks. If we are correct, then numerous trademark disputes involving the best-known marks should be resolved differently—in favor of defendants. Our approach draws support from case law of the Federal Circuit—developed but then suppressed by that court—and from numerous foreign jurisdictions.

As we show, some courts justify the conventional wisdom on the alternative ground that, whatever the likelihood of confusion, defendants with similar marks should not reap where they have not sown. This misplaced concern with free riding suffers from multiple analytical flaws and is contrary to trademark policy. These flaws are compounded where the mark owner sues a competitor, claiming expansive scope over similar but non-confusing marks. The fundamental change in trademark doctrine that we propose not only conforms to the empirical realities of consumer perception, but also advances the overarching policy goal of trademark law, which is not to enable the strongest to grow even stronger, but rather to promote effective competition. 

Toward an Optimal Bail System

Crystal S. Yang

Few decisions in the criminal justice process are as consequential as the determination of bail. Indeed, recent empirical research finds that pre-trial detention imposes substantial long-term costs on defendants and society. Defendants who are detained before trial are more likely to plead guilty, less likely to be employed, and less likely to access social safety net programs for several years after arrest. Spurred in part by these concerns, critics of the bail system have urged numerous jurisdictions to adopt bail reforms, which have led to growing momentum for a large-scale transformation of the bail system. Yet supporters of the current system counter that pre-trial detention reduces flight and pre-trial crime—recognized benefits to society—by incapacitating defendants. Despite empirical evidence in support of both positions, however, advocates and critics of the current bail system have generally ignored the real trade-offs associated with detention.

This Article provides a broad conceptual framework for how policymakers can design a better bail system by weighing both the costs and benefits of pre-trial detention—trade-offs that are historically grounded in law, but often disregarded in practice. I begin by presenting a simple taxonomy of the major categories of costs and benefits that stem from pre-trial detention. Building from this taxonomy, I conduct a partial cost-benefit analysis that incorporates the existing evidence, finding that the current state of pre-trial detention is generating large social losses. Next, I formally present a framework that accounts for heterogeneity in both costs and benefits across defendants, illustrating that detention on the basis of “risk” alone can lead to socially suboptimal outcomes.

In the next part of the Article, I present new empirical evidence showing that a cost-benefit framework has the potential to improve accuracy and equity in bail decision-making, where currently bail judges are left to their own heuristics and biases. Using data on criminal defendants and bail judges in two urban jurisdictions, and exploiting variation from the random assignment of cases to judges, I find significant judge differences in pre-trial release rates, the assignment of money bail, and racial gaps in release rates. While there are any number of reasons why judges within the same jurisdiction may vary in their bail decisions, these results indicate that judges may not all be setting bail at the socially optimal level.

The conceptual framework developed in this Article also sheds light on the ability of recent bail reforms to increase social welfare. While the empirical evidence is scant, electronic monitoring holds promise as a welfare-enhancing alternative to pre-trial detention. In contrast, application of the conceptual framework cautions against the expanding use of risk-assessment instruments. These instruments, by recommending the detention of high-risk defendants, overlook the possibility that these high-risk defendants may also be “high-harm” such that they are most adversely affected by a stay in jail. Instead, I recommend that jurisdictions develop “net benefit” assessment instruments by predicting both risk and harm for each defendant in order to move closer toward a bail system that maximizes social welfare.

 

Notes

Was I Speaking to You?: Purely Functional Source Code as Noncovered Speech

Mark C. Bennett

This Note asks whether computer source code, when developed as a means to an end—as distinct from source code intended for third-party review—is covered speech under the First Amendment. I argue it is not. My argument has two parts. First, I describe case law treating First Amendment challenges to regulations of source code to demonstrate courts’ failure to address the status of purely functional source code. Second, I describe how courts should address such a question, by referencing an array of theories used to explain the scope of the First Amendment. I conclude no theory alone or in combination with others justifies the constitutional coverage of purely functional source code. I thereby undermine a key constitutional argument by technology manufacturers contesting, in the context of criminal investigations, the government-compelled creation of software to circumvent encryption technologies. 

Trial Judges and the Forensic Science Problem

Stephanie L. Damon-Moore

In the last decade, many fields within forensic science have been discredited by scientists, judges, legal commentators, and even the FBI. Many different factors have been cited as the cause of forensic science’s unreliability. Commentators have gestured toward forensic science’s unique development as an investigative tool, cited the structural incentives created when laboratories are either literally or functionally an arm of the district attorney’s office, accused prosecutors of being overzealous, and attributed the problem to criminal defense attorneys’ lack of funding, organization, or access to forensic experts.

But none of these arguments explains why trial judges, who have an independent obligation to screen expert testimony presented in their courts, would routinely admit evidence devoid of scientific integrity. The project of this Note is to understand why judges, who effectively screen evidence proffered by criminal defendants and civil parties, fail to uphold their gatekeeping obligation when it comes to prosecutors' forensic evidence, and how judges can overcome the obstacles that stand in the way of keeping bad forensic evidence out of court.

 

Standing, Legal Injury Without Harm, and the Public/Private Divide

William S. C. Goldstein

Legal injury without harm is a common phenomenon in the law. Historically, legal injury without harm was actionable for at least nominal damages, and sometimes other remedies. The same is true today of many “traditional” private rights, for which standing is uncontroversial. Novel statutory claims, on the other hand, routinely face justiciability challenges: Defendants assert that plaintiffs’ purely legal injuries are not injuries “in fact,” as required to establish an Article III case or controversy. “Injury in fact” emerges from the historical requirement of “special damages” to enforce public rights, adapted to a modern procedural world. The distinction between public and private rights is unstable, however, with the result that many novel statutory harms are treated as “public,” and thus subject to exacting justiciability analysis, when they could easily be treated as “private” rights for which legal injury without harm is sufficient for standing. Public and private act as rough proxies for “novel” and “traditional,” with the former subject to more judicial skepticism. Applying “injury in fact” this way is hard to defend as a constitutional necessity, but might make sense prudentially, depending on the novelty and legal source of value for the harm. Taxonomizing these aspects of “harm” suggests that, even with unfamiliar harms, judicial discretion over value lessens the need for exacting injury analysis.

 

Compliant Subversion

Jacob Hutt

Compliance and subversion are not mutually exclusive. Police officers can comply with Miranda requirements while subverting their purpose through creative workarounds; individuals facing deportation can comply with immigration procedures while clogging them up with frivolous claims; anti-death penalty activists can avoid violation of Eighth Amendment doctrine while undermining the executions it approves. These and other deliberate actions to obstruct judicial protections of rights and powers fall in a gray area between compliance and noncompliance. This Note articulates a transsubstantive legal theory underlying these actions, referred to as “compliant subversion”: attempts to make judicial protections of rights or powers unworkable while maintaining facial compliance with the law. After defining this concept and exploring its manifestations across different areas of law, the Note examines how courts constrain compliant subversion with reference to the subversive intent underlying it. Finally, the Note presents a normative critique of when judicial consideration of compliant subversion is inappropriate.

 

Final Agency Action in the Administrative Procedure Act

Stephen Hylas

Under section 704 of the Administrative Procedure Act, courts can review agency actions only when they are "final." In Bennett v. Spear, the Supreme Court put forth a seemingly simple two-part test for assessing final agency action. However, the second prong of that test—which requires agency actions to "create rights or obligations from which legal consequences flow" to be final—poses several problems. Most importantly, because it overlaps with the legal tests for whether a rule is a legislative rule or a nonbinding guidance document, it seems to effectively bar courts from reviewing nonlegislative rules before agencies have taken enforcement action. Because of this overlap, the Bennett test conflicts with—and thus undercuts—other principles of administrative law that seem to promote a pragmatic, flexible approach for courts to use in determining whether, when, and how to review agency rules. The result is a confusing standard of review that can prevent plaintiffs from challenging agency rules in court, especially when those plaintiffs are beneficiaries of regulation who will never be subject to enforcement action down the road. At the same time, however, courts should not be able to review every single agency rule before it is enforced. Agencies should be able to experiment, but should not be permitted to indefinitely shield potentially dangerous deregulatory programs from judicial review, as Bennett seems to allow. Accordingly, this Note argues that to be faithful to the Court's commitment to "pragmatic" interpretation of the finality requirement, lower courts should follow a two-pronged approach to analyzing questions of final agency action. When courts can compel an agency to finalize its allegedly temporary action because of "unreasonable delay," they should interpret Bennett's second prong formally, holding that only truly legally binding action can be final. If this bars some plaintiffs from suing now, they will be able to challenge the rule later when the agency's process is finished. But when courts cannot force agencies to finalize their rules, they should construe Bennett functionally, conceptualizing the agency's allegedly temporary action under a "practically binding" standard. Under this framework, if the agency's "temporary" action in practice consistently follows certain criteria, it should be viewed as binding and final under Bennett, and thus subject to judicial review, regardless of what the agency or its employees are legally required to do. This two-pronged approach would help to strike the right balance between the private party and the agency in a practical manner that depends upon the context.

 

A Qualified Defense of the Insular Cases

Russell Rennie

The Insular Cases have, since 1901, granted the political branches significant flexibility in governing U.S. territories like American Samoa and Puerto Rico—flexibility enough, indeed, to ignore certain constitutional provisions that are not "fundamental" or which would be "impractical" to enforce in the territories. Long maligned as judicial ratification of empire, predicated on racist assumptions about territorial peoples and a constitutional theory alien to the United States, the Insular Cases had a curious renaissance in the late twentieth century. As local territorial governments began to exercise greater self-rule, newly enacted local laws in the territories posed constitutional issues, but courts generally acquiesced in these constitutional deviations. This Note argues that this accommodationist turn in Insular doctrine complicates the legacy of the cases—that their use to enable local peoples to govern themselves as they desire, and to protect their cultures, means the Insular doctrine is not merely defensible but perhaps even necessary, and finds support in arguments from political theory. Moreover, the Note contends, such constitutional accommodation has a long pedigree in the American constitutional system.

 

The Right to Remain a Child: The Impermissibility of the Reid Technique in Juvenile Interrogations

Ariel Spierer

Police interrogations in the United States are focused on one thing: getting a confession from the suspect. The Reid Technique, a guilt-presumptive nine-step method and the most common interrogation technique in the country, is integral to fulfilling this goal. With guidance from the Reid Technique, interrogators use coercion and deceit to extract confessions—regardless of the costs. When used with juvenile suspects, this method becomes all the more problematic. The coercion and deception inherent in the Reid Technique, coupled with the recognized vulnerabilities and susceptibilities of children as a group, have led to an unacceptably high rate of false confessions among juvenile suspects. And, when a juvenile falsely confesses as the result of coercive interrogation tactics, society ultimately suffers a net loss.

In the Eighth Amendment context, the Supreme Court has recognized that children are different from adults and must be treated differently in various areas of the criminal justice system. The Court’s recent Eighth Amendment logic must now be extended to the Fifth Amendment context to require that juveniles be treated differently in the interrogation room, as well. This Note suggests that the Reid Technique be categorically banned from juvenile interrogations through a constitutional ruling from the Court. Doing so would not foreclose juvenile interrogation; rather, a more cooperative and less coercive alternative could be utilized, such as the United Kingdom’s PEACE method. Nonetheless, only a categorical constitutional rule that prohibits the use of the Reid Technique in all juvenile interrogations will eliminate the heightened risk of juvenile false confessions and truly safeguard children’s Fifth Amendment rights.