David Bosco’s new book tells the history of the International Criminal Court. It is nicely done and will be a reference for everyone who does work in this area. The conclusion will not surprise any observers: the ICC survived efforts at marginalization by great powers but only by confining its investigations to weak countries. Thus, the ICC operates de facto according to the initial U.S. proposal, rejected by other countries, to make ICC jurisdiction conditional on Security Council (and hence U.S.) approval.
Bosco seems to think this equilibrium can persist, but the book only touches on (perhaps because it was completed too recently) the growing resentment of weak countries, above all African countries, which have woken up to the fact that the Court is used only against them, and have begun to murmur about withdrawing. The Court now faces political pressure to avoid trying not only Westerners, but also Africans. Meanwhile, the Kenya trials are heading toward debacle, while the ICC is unable to address international criminals like Assad. The Court’s room to maneuver is shrinking rapidly, and one wonders whether it can sustain its snail’s pace (one conviction over a decade) much longer. The book might have been called “Just Roughness.”
What should originalists do about precedent? If they respect it, then the original meaning will be lost as a result of erroneous or non-originalist decisions that must be obeyed. If they disregard it, then Supreme Court doctrine is always up for grabs, subject to the latest historical scholarship or good-faith judicial disagreement (as illustrated by the competing Heller opinions). One can imagine intermediate approaches: for example, defer only to good originalist precedents, or defer only when a precedent has become really, really entrenched. But while such approaches may delay the eventual disappearance of original meaning behind the encrustation of subsequent opinions, they cannot stop it, sooner or later. Our readings–Lawson, McGinnis & Rappaport, Nelson–provide no way out that I can see. (Lawson dismisses the problem, while the others propose intermediate approaches.) Originalism has an expiration date.
Another issue is raised by McDonald–the gun control case. In Heller, Scalia disregards precedent in order to implement what he thinks was the original understanding of the Second Amendment. In McDonald, he writes a concurrence that cheerfully combines Heller with the anti-originalist incorporation decisions. Why doesn’t he feel constrained to revisit those decisions? Instead, he joins a holding that generates constitutional doctrine that is, in practical terms, more remote from the original understanding (gun rights that constrain the states) than the doctrine he would have produced had he gone the other way in Heller (no gun rights at all), given the greater importance of the state governments for policing both at the founding and today. This is akin to the second-best problem in economics: partial originalism–originalism-and-precedent–may lead to outcomes that are less respectful of original understandings than non-originalist methodologies would be.
*** Will responds; his VC colleague David Bernstein’s post about the clause-by-clause problem is also worth reading.
All originalists acknowledge the “dead hand” problem, and so all agree that the normative case for originalism depends on the amendment procedure being adequate for keeping the constitution up to date. Or at least all of the originalists I have talked to (n=1). Yet it can be shown that the Article V amendment procedure is unlikely to be adequate, and the probability that it is adequate across time is virtually nil.
The reason is that outcomes produced by voting rules depend on the number of voters (and also the diversity of their interests, but I will ignore that complication since it only reinforces the argument). An easy way of seeing this is to consider the strongest voting rule—unanimity—and imagine that people flip a coin when they vote (the coin flip reflects the diversity of their interests, not a failure to vote their interests), and can agree to change a law only when all voters produce heads. The probability of achieving unanimity with a population of 2 is 1/4 (only one of the four possible combinations yields two heads), with a population of 3 is 1/8, and so on.
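The coin-flip argument above can be sketched in a few lines (my own illustration, not drawn from the cited papers): the chance that all n voters simultaneously flip heads is (1/2)^n, which collapses toward zero as the population grows.

```python
# Minimal sketch of the unanimity argument: each of n voters
# independently flips heads with probability 1/2, and the law
# changes only if all n flip heads at once.

def p_unanimous_change(n_voters: int) -> float:
    """Probability that all n voters flip heads simultaneously: (1/2)^n."""
    return 0.5 ** n_voters

for n in (2, 3, 10, 100):
    print(n, p_unanimous_change(n))
```

With n = 2 the probability is 0.25 and with n = 3 it is 0.125, matching the text; by n = 100 it is effectively zero.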
For a more rigorous formulation, consider a spatial model from 0 to 1, with a 2/3 supermajority rule. The status quo is chosen randomly (on average 1/2), and the population chooses whether to change it. If the population is 3, voters will change the outcome with probability of (near) 1, because at least 2 of the 3 voters will fall on the same side of 1/2 with probability of (near) 1. If the population is 6, there is now a non-trivial probability that 3 of the 6 people will be on one side of 1/2, and 3 people on the other side, so a 2/3 majority (4 people) will be unable to change the status quo.
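The same point can be computed directly (again my own simplified sketch, with the status quo fixed at 1/2 and each voter independently landing on either side with probability 1/2): an amendment passes only if at least 2/3 of the voters sit on the same side, and the chance of that falls as the population grows.

```python
import math

def p_supermajority_on_one_side(n: int, rule: float = 2 / 3) -> float:
    """Probability that at least ceil(rule * n) of n fair-coin voters
    land on the same side of the status quo (valid for rule > 1/2,
    where the two sides' supermajority events are disjoint)."""
    need = math.ceil(rule * n)
    one_side = sum(math.comb(n, k) for k in range(need, n + 1)) / 2 ** n
    return 2 * one_side  # either side can host the supermajority

print(p_supermajority_on_one_side(3))  # 1.0: 2 of 3 always share a side
print(p_supermajority_on_one_side(6))  # 0.6875: a 3-3 split blocks change
```

With 3 voters some 2/3 coalition exists with probability 1; with 6 voters the probability already drops to about 0.69, because the 3-3 split (probability 20/64) leaves no 4-voter coalition on either side.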
The U.S. population has increased from 4 million at the time of the founding to 300 million today. If the amendment rules were optimal in 1789, they are not optimal today. If they are optimal today, then they won’t be optimal in a few years. Originalism with a fixed amendment process can be valid only with a static population.
This argument comes from Richard Holden, Supermajority Voting Rules; and Rosalind Dixon and Richard Holden, Constitutional Amendments: The Denominator Problem (who supply empirical evidence).
There is a related argument that one can make based on the Buchanan/Tullock analysis of optimal voting rules. Thanks to Richard for a helpful email exchange.
Sources: Thomas Miles, Racial Disparities in the Allocation of Wiretap Applications Across Federal Judges, JLS 2012; Wikipedia. Domestic 1997-2007; FISA 2004-2012.
The data source is the new Comparative Constitutions Project website. Compare to the growth of international human rights:
The graph shows increases in the number of human rights as recognized by the various human rights treaties when they went into force.
This short, good draft by Balkin (with a very high ratio of ideas to words) seeks to explain the rise of originalism. A starting point is that originalism is (virtually) uniquely an American phenomenon, and a national (not state) phenomenon. Next there is the central role of the founding as a unifying origin myth; the huge impact of the founders in our cultural memory; and the Protestant tradition with its preoccupation with sacred texts and yearning for a purifying return to the fundamentals. But why now? Jack thinks that originalism avant la lettre got started after the New Deal when Justice Hugo Black felt that he needed a justification for overcoming the New Deal quasi-tradition of judicial deference, and took off when conservatives realized that they could use it to bash Warren Court precedents plus Roe v. Wade. Maybe also it is a reaction to modernist anxieties about political foundations provoked by the radical constitutional innovations of the twentieth century.
I like this piece but my initial reaction is that Jack treats originalism as too much of an idiom–a cafeteria meal from which one may pick and choose–and doesn’t take seriously its constraining power in legal argument. While it’s true that Scalia and Thomas invoke it selectively, and the historical materials are frequently ambiguous, there is a reason it took off in the 1980s rather than the 1950s, and that is that originalism really is conservative, whereas the Supreme Court in the 1950s was liberal. The founders were conservative by today’s standards. They cared a lot about property rights, for example, and very little about discrimination against ethnic groups, sexual freedom, and so on (to say nothing of slavery). So I do not think originalism is as malleable as Jack thinks it is; it’s not like choosing to speak Italian rather than French; it exerts a right-wing gravitational pull. There is a reason (as the data show) that conservative justices are more likely to cite The Federalist than liberal justices are.
Adrian Vermeule’s new book, The Constitution of Risk, argues that much constitutional thinking follows a model of “precautionary constitutionalism,” where doctrines are designed to avoid worst-case outcomes. A better approach is what he calls “optimizing constitutionalism,” where such “political risks” are traded off rather than minimized. The Court of Appeals in Noel Canning, for example, appeared to be driven by a fear that if it upheld President Obama’s recess appointments, then presidents could tyrannize by avoiding the Senate altogether. It ignored the countervailing risk that if the recess appointment power is limited, important offices would go unfilled. As Cass Sunstein has written in the area of regulation, the precautionary principle makes little sense on its own terms since there are always risks on all sides, and leads to pretty unattractive outcomes even when it can be applied. It’s as if we should all stay in our basements rather than take the risk that a flower pot will fall on our heads if we go outside.
Among the many excellent insights, the one I found most striking was the claim that much traditional constitutional thinking and doctrine has a precautionary-principle cast to it, and is thus vulnerable to the same criticisms as that principle is.
How did the Republic survive its first 190 years? Source: Westlaw.
John Coates recently posted a paper on SSRN entitled Cost-Benefit Analysis of Financial Regulation: Case Studies and Implications. This topic has been important ever since the D.C. Circuit struck down an SEC regulation for inadequate cost-benefit analysis (CBA) in Business Roundtable in 2011. Glen Weyl and I held a conference on the topic last fall, and have written several papers arguing that, whatever one thinks of the reasoning in the (justly criticized) Business Roundtable case, CBA is the way to go.
The core of Coates’ paper is a description of efforts by agencies (and other institutions or persons) to perform CBAs of six major financial regulations. Coates persuasively argues that the existing CBAs fall flat and expresses skepticism that it is even possible, given the current state of knowledge, for meaningful CBAs of financial regulations (or certain financial regulations) to be conducted. What valuation should we assign to an avoided financial crisis? Hard to say. Coates concludes that while it makes sense for agencies to engage in cost-benefit balancing using rough guesstimates, their efforts should not be subject to judicial review.
Coates’ argument is sensible, but I am more optimistic than he is about the capacity of agencies to come up with numbers. This criticism–and many others that Coates makes–could be (and have been) made about other forms of CBA, for example, CBA of environmental regulation, which has improved greatly over the last 30 years but remains far from perfect. The fact is that when an agency proposes a capital adequacy rule, it is using an implicit valuation for an avoided crisis. The agency should be required to make that valuation explicit, and defend it. We can agree that a wide range of valuations is acceptable and also believe that public discussion of the range is useful.
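The “implicit valuation” point can be made concrete with back-of-the-envelope arithmetic (the numbers here are hypothetical, my own illustration): adopting a rule that costs a given amount per year in exchange for a given reduction in annual crisis probability implies the agency values an avoided crisis at no less than the cost divided by the risk reduction.

```python
# Hypothetical numbers for illustration only: a capital adequacy rule
# costing $5 billion per year that is thought to cut the annual
# probability of a financial crisis by 0.1 percentage point.
annual_cost = 5e9        # compliance cost, dollars per year (assumed)
risk_reduction = 0.001   # drop in annual crisis probability (assumed)

# Break-even valuation the agency implicitly places on an avoided crisis.
implied_crisis_valuation = annual_cost / risk_reduction
print(f"${implied_crisis_valuation:,.0f}")  # $5,000,000,000,000
```

Forcing the agency to state and defend that $5 trillion figure explicitly, rather than leaving it buried in the rule, is exactly the kind of public discussion the paragraph above has in mind.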
I have mixed feelings about his criticisms of judicial review. On the one hand, it seems likely that if courts rigorously applied CBA to new financial regulations, we would not have any new financial regulations (a bad thing), at least not for some time. On the other hand, I’m not sure what will encourage agencies to perform CBAs properly–and this means paying money to consulting firms to generate valuations–if judicial review does not take place. OIRA has encouraged non-financial agencies to use CBA, but OIRA has more limited authority over the financial agencies–none at all in the case of the Fed.
Will asks whether originalists should be heartened or troubled by Campbell’s debunking of Justice Scalia’s historical analysis in Printz. However, the majority opinion is not originalist at all. Scalia doesn’t address the historical materials with any rigor; he argues (quite candidly) that the anti-commandeering principle is consistent with the historical record, not that it emerges from the best reading of the historical record. Where does that principle come from? Precedent: “Finally, and most conclusively in the present litigation, we turn to the prior jurisprudence of this Court.” Scalia thinks that New York v. U.S. controls, and the weak foundation-era history provides no basis for overturning it. Would Scalia have switched sides if Campbell’s work had been before him? Hard to know, but I would not advise Congress that it is now free to commandeer.
On The Federalist Papers, given the specific political use for which they were written and published, there is little reason to believe that they represent a general account of the original understanding as it existed in all 13 states. The reason that The Federalist has fetish-value today is simple: it has been cited over and over by the Supreme Court, and so has become a source of constitutional law that supplements the text and other materials.
Source: Westlaw, Supreme Court Compendium (thanks Lee and Bill). N.B.: I’m not distinguishing majority and other opinions. I suspect JM appears in more majority opinions as time passes. (The y variable is the number of cases in which at least one opinion cites JM divided by total number of cases for that year.)
Glen Weyl and I argue in Regulation Magazine that financial regulators should use cost-benefit analysis. We have written academic treatments here (AER P&P) and here (JLS, under review).
For a skeptical view, see this piece by John Coates. I will comment on it soon.
The number of opinions is for a specific year, not for the entire decade. Source: Westlaw
This is the title of a new book edited by Cary Coglianese, Adam M. Finkel, and Christopher Carrigan. The book contains a chapter by me and Jonathan Masur, which builds on, and responds to criticisms of, our earlier article, Regulation, Unemployment, and Cost-Benefit Analysis. In the chapter and article, we argue that when agencies conduct cost-benefit analysis, they should take into account the effects of regulations on job loss.
The book has a broader scope. It also focuses on the larger question of whether regulation kills jobs. The answer, unavoidably, is sometimes. Our view is that even when it does, regulations are justified when the benefits are large enough, and agencies need to make sure this is the case by conducting cost-benefit analysis properly.
Here is my Slate piece criticizing the originalist argument for ruling that President Obama violated the recess appointments clause. A while back I criticized the “the” argument. (Slate used the genius-level headline “Indefinite Articles”). Will Baude co-authored an interesting amicus brief that makes the originalist case for affirming and further argues that if you’re not an originalist, you should defer to the Senate’s right to define what a session is, so either way the president loses.
New paper on SSRN:
The standard model of judicial behavior suggests that judges primarily care about deciding cases in ways that further their political ideologies. But judicial behavior seems much more complex. Politicians who nominate people for judgeships do not typically tout their ideology (except sometimes using vague code words), but they always claim that the nominees will be competent judges. Moreover, it stands to reason that voters would support politicians who appoint competent as well as ideologically compatible judges. We test this hypothesis using a dataset consisting of promotions to the federal circuit courts. We find, using a set of objective measures of judicial performance, that competence seems to matter in promotions in that the least competent judges do not get elevated. But the judges who score the highest on our competence measures also do not get elevated. So, while there is no loser’s reward, there may be something of a winner’s curse, where those with the highest levels of competence hurt their chances of elevation.