Antitrust and Competition Law

Jennifer L. Graber

There is growing concern over the pharmaceutical industry’s ability to set and raise drug prices as it sees fit. The price of a drug whose patent protection expired decades ago can suddenly increase—or “ratchet”—by as much as 10,000%. This Note identifies the problem of ratcheting drug prices and considers whether these abrupt price changes derive from a longstanding problem inherent in the United States’ pharmaceutical regulatory regime. It then considers the most commonly suggested mechanism for countering high drug prices—stimulating competition in the pharmaceutical market—but ultimately concludes that focusing solely on increasing competition offers an overly simplistic view of ratcheting drug prices. To find an effective solution to unexpected increases in drug prices, this Note evaluates a small subset of pharmaceuticals that have recently undergone a sudden price increase and separates the ratcheting events into two categories: (1) those that occur as a result of natural deviations in the market, and (2) those that occur due to business tactics that take advantage of vulnerabilities in the drug market. It concludes that under this categorization, antitrust law may provide an effective solution specifically directed at ratcheting events of the second category—those driven by anticompetitive behavior.

Monica L. Smith

The cost of prescription drugs, a function of the nexus of patent law and antitrust law, has recently been thrust into the spotlight. In the shadow of the Federal Trade Commission’s vigorous challenges to anticompetitive agreements between branded manufacturers and their potential generic competitors, a new player entered the administrative patent invalidity arena—noncompetitors, such as hedge fund managers, who, despite their reputation for seeking profit at all costs, asserted a seemingly puzzling altruistic interest in invalidating certain patents that prevent generic competitors from entering the market. In light of “abuse of process” accusations and calls for sanctions, this Note suggests that corporate law may facilitate an understanding of the role of noncompetitors in patent invalidation. Using the corporate law phenomenon of greenmail as an analogy, this Note argues that noncompetitors may actually facilitate competition and, as such, should be permitted to continue filing administrative patent challenges.
Shaun E. Werbelow

Accountable Care Organizations (ACOs), a major component of the Affordable Care Act, seek to provide patients with better quality health care at a lower cost and have been praised for their ability to help repair our country’s broken health care system. Despite their potential benefits, however, ACOs also raise significant antitrust concerns—concerns that may pit consumer surplus and total surplus against one another. In an attempt to address these concerns, the Department of Justice and Federal Trade Commission announced that they will use market share screens and rule of reason treatment to evaluate ACOs participating in the Medicare Shared Savings Program. The use of market share screens and rule of reason treatment allows the antitrust agencies to avoid prioritizing either consumer surplus or total surplus in the first instance but leaves open two critical questions: What will the rule of reason treatment afforded to ACOs look like? And how will the antitrust agencies ultimately determine whether ACOs benefit or harm consumers? To address these questions, this Note proposes that the antitrust agencies use the “big data” collected under the Affordable Care Act to conduct a structured rule of reason review of ACOs that takes into account both consumer surplus and total surplus through a burden-shifting framework.

Joanna Warren

In its en banc decision in LePage's Inc. v. 3M, the Third Circuit held that a 3M loyalty rebate program, which provided above-cost price discounts to customers who purchased multiple 3M product lines, violated section 2 of the Sherman Act. Prior to this decision, many practitioners and scholars understood the antitrust case law to hold that a strategic pricing scheme would not violate section 2 so long as the discounted prices remained above cost. The Third Circuit found that this test applies only to predatory pricing cases, and ruled that claims alleging exclusionary conduct other than predatory pricing—as it characterized 3M's loyalty rebate program—are cognizable under section 2 even without a showing of below-cost pricing. The Supreme Court recently denied certiorari in LePage's, leaving the issue in the hands of the lower courts. In this Comment, Joanna Warren criticizes the Third Circuit's decision as lacking sufficient economic analysis of the rebate scheme and providing unclear guidance for addressing future claims. She argues for the adoption of a test that would recognize above-cost pricing as generally legitimate while invalidating schemes that threaten to eliminate equally efficient competitors from the marketplace.

Amy Marshak

With its creation of a statutory mandate to ban all “unfair methods of competition,” Congress granted the Federal Trade Commission broad power to reach antitrust violations, as well as conduct that violates the “spirit” of the antitrust laws and conduct that is against public policy more broadly. The breadth of the Commission’s use of this statutory authority has ebbed and flowed over time, and recent indications signal that the FTC may be entering a period of expansion in attacking new forms of anticompetitive conduct. In light of this development, a renewed debate over the appropriate use of section 5 of the FTC Act has arisen: How can the Commission best use its broad section 5 authority to protect against consumer harm while avoiding the risk of deterring procompetitive conduct through arbitrary and standardless enforcement? This Note argues that the FTC should focus on tackling “frontier cases”—cases that meet all the legal requirements of the Sherman Act but involve new forms of anticompetitive conduct that fall outside traditional categories of antitrust law, such that there may be little precedent to guide the Commission’s analysis. This Note then expands the frontier rationale beyond the selection of cases involving new forms of anticompetitive conduct, as previously advocated, to include efforts to integrate developing models of economic thought in order to influence the theoretical underpinnings of antitrust law and to insert the Commission’s voice in developing the contours of evolving Sherman Act doctrine.

Alison M. Hashmall

The goal of any financial regulatory system should be to enable well-functioning markets. Meeting this goal requires reducing the impact and frequency of financial institution failures that cause systemic risk. Any regulatory structure, however, inevitably involves tradeoffs. A policy that effectively reduces systemic risk and its associated costs might also increase moral hazard. Similarly, a policy that seeks to reduce moral hazard and maintain market discipline—for example, by allowing a large interconnected institution such as Lehman Brothers to fail—might also create uncertainty, which can harm markets by creating panic. In this Note, I argue that our current regulatory structure is suboptimal in its regulation of systemic risk. A different regulatory structure could more effectively reduce the systemic risk caused by failing non-bank financial institutions, while minimizing the attendant problems caused by the regulations themselves—moral hazard and uncertainty. The federal government could strike a superior balance by establishing more stringent ex ante prudential regulations of systemically important non-bank financial institutions aimed at curbing excessive risk-taking and by implementing a regulatory process to resolve the failure of such institutions. The Obama Administration has proposed regulatory reform that endorses such beneficial changes, but certain details in the proposal fall short. I propose specific modifications to the Administration’s proposal to produce an improved regulatory framework. By pinpointing and examining the strengths and weaknesses of the Administration’s approach, I formulate a regulatory framework that more effectively contains systemic risk, avoids increasing moral hazard, and reduces excessive uncertainty caused by regulation.

Alan J. Meese

The last several years have seen a vigorous debate among antitrust scholars and practitioners about the appropriate standard for evaluating the conduct of monopolists under section 2 of the Sherman Act. While most of the debate over possible standards has focused on the empirical question of each standard’s economic utility, this Article undertakes a somewhat different task: It examines the normative benchmark that courts have actually chosen when adjudicating section 2 cases. This Article explores three possible benchmarks—producer welfare, purchaser welfare, and total welfare—and concludes that courts have opted for a total welfare normative approach to section 2 since the formative era of antitrust law. Moreover, this Article will show that the commitment to maximizing total social wealth is not a recent phenomenon associated with Robert Bork and the Chicago School of antitrust analysis. Instead, it was the Harvard School that led the charge for a total welfare approach to antitrust generally and under section 2 in particular. The normative consensus between Chicago and Harvard and parallel case law is by no means an accident; rather, it reflects a deeply rooted desire to protect practices—particularly “competition on the merits”—that produce significant benefits in the form of enhanced resource allocation, without regard to the ultimate impact on purchasers in the monopolized market. Those who advocate repudiation of the longstanding scholarly and judicial consensus reflected in the total welfare approach to section 2 analysis bear the heavy burden of explaining why courts should, despite considerations of stare decisis, suddenly reverse themselves and adopt such a different approach for the very first time, over a century after passage of the Act.

Thomas B. Bennett

What motivates substantive presumptions about how to interpret statutes? Are they like statistical heuristics that aim to predict Congress’s most likely behavior, or are they meant to protect certain underenforced values against inadvertent legislative encroachment? These two rationales, fact-based and value-based, are the extremes of a continuum. This Note uses the presumption against extraterritoriality to demonstrate this continuum and how a presumption can shift along it. The presumption operates to diminish the likelihood that a federal statute will be read to extend beyond the borders of the United States. The presumption has been remarkably stable for decades despite watershed changes in the principles—customary international law and conflict of laws—that once supported it. As the presumption’s normative justifications have diminished, a new justification has grown in importance. Today, the presumption is often justified as a stand-in for how Congress typically legislates. This Note argues that this change makes the presumption less defensible but even harder to overcome in individual cases.