As I note in a comment on NYT’s Room for Debate, the “executive order” imbroglio coming out of the State of the Union speech is strange. The White House told newspapers before the speech that the president planned to sling about executive orders like Zeus with his thunderbolts, and they duly reported it on their front pages. Republicans duly exploded with outrage. The speech itself has a single mention of executive orders (“I will issue an executive order requiring federal contractors to pay their federally-funded employees a fair wage of at least $10.10 an hour”). The president continues in this vein, saying that he is going to do a bunch of other extremely minor things using his existing statutory authority, though it would be better if Congress would chip in with some legislation. The resulting controversy about presidential power is entirely manufactured–by both sides. Maybe the president’s strategy was to look fierce to his supporters while not actually doing anything that might get him in trouble with Congress.
In my response to their thought-provoking paper, I argued that the supposed fallacy that Eric and Adrian identify depends on empirical claims about judicial behavior in a way that they denied. My point was that although the targets of their critique may make different assumptions about what motivates judges and what motivates political actors in the other branches, those assumptions are not necessarily “inconsistent” if the different treatment is justified by the different institutional norms and constraints that operate on judges, as compared to other political actors (which I consider to be at least in part an empirical question). Neither this point – nor any other one I made – depends on controversial claims about the nature of truth or logical consistency, postmodern or otherwise.
In their brief rejoinders, Eric and Adrian continue to insist that their argument does not depend on any empirical claims about what motivates judges. But in so arguing, each of them contradicts himself and concedes my original point in the process.
Adrian first says of the kind of argument they were examining that “it is caught in a dilemma — it can survive filter (1) only by taking a form that causes it to be weeded out by filter (2).” I take Adrian to mean here that the argument can avoid the charge of inconsistency (filter (1)) but only at the cost of making implausible empirical assumptions about how judges act (filter (2)). But then he goes on to say that by the time we are considering the empirical question (filter (2)), “the fallacy has already dropped out by that point; it is not affected at all by whatever happens in the debate at the second stage.” But how can it be that the fallacy is “not affected” by what happens at the second stage if, as he has just said, it can “survive filter (1)” by making empirical claims that filter (2) then “weeds out”?
Eric makes the same error in even more efficient fashion. He says, “we sometimes argue that they escape the problem only by making implausible arguments. But the inside-outside problem does not depend on our skepticism about these specific arguments being correct.” Eric’s second sentence contradicts his first. He acknowledges that the targets of their critique can “escape the problem” (of inconsistency) by making what he considers to be implausible empirical arguments. But then he insists that their charge of inconsistency does not depend on those empirical arguments about judicial behavior being implausible. But how can that be the case if, as he has just said, the scholars can avoid inconsistency if those empirical arguments are correct?
I don’t think I’m the postmodernist in this debate.
I generally follow Johnson’s advice never to respond to critics, but this is the season for breaking resolutions. So let me offer a brief rejoinder to Charles Barzun’s response to the Posner/Vermeule paper on the Inside/Outside Fallacy; both are recently published by the University of Chicago Law Review.
Eric and I suppose that successful arguments (in constitutional theory, inter alia) must pass through two separate, independent and cumulative filters: (1) a requirement of logical consistency (the inside/outside fallacy is one way of violating this requirement); (2) a requirement of substantive plausibility (not ultimate correctness).
With respect to some of the particular arguments we discuss in the paper, we say that the argument is caught in a dilemma — it can survive filter (1) only by taking a form that causes it to be weeded out by filter (2). Now in some of those cases, I take it, Charles disagrees with us that the argument fails the second filter. He is of course entitled to his views about that. But the inside/outside fallacy — which is the first filter — is strictly about the logical consistency of assumptions, not their plausibility. Thus the fallacy has already dropped out by that point; it is not affected at all by whatever happens in the debate at the second stage. It’s just a muddle to say that because Eric and I do happen to have substantive views about what counts as plausible for purposes of the second filter, we are therefore smuggling substantive content into the first filter. Not so — unless one subscribes to the postmodern view that logical consistency is itself a substantive requirement, thereby jettisoning the distinction between validity and truth. (In some passages, Charles seems willing to abandon himself utterly to that hideous error, but for charity’s sake we ought not read him so, if we can help it).
So when Charles says that the inside/outside fallacy smuggles in substantive assumptions, I think that’s a confusion that arises from failing to understand the distinction between the two filters. The reader of Charles’s piece should be alert to the way it skips to and fro between these distinct questions of logical consistency and plausibility.
(And see Eric’s earlier reply.)
I examine the grudging case at Slate.
International organizations use a bewildering variety of voting rules. Courts, commissions, councils, and the General Assembly use majority rule. The WTO, the International Seabed Authority, the IMF, the World Bank, and the Security Council use various types of supermajority rule, sometimes with weighted voting, sometimes with voters divided into chambers that vote separately, sometimes with vetoes. The voting rules in the EU defy any simple description. Al Sykes and I try to bring some order to this mess in our new paper.
This book provides a nice history of the evolution of voting rules, with emphasis on supermajority rules, but is less successful in its attempt to argue that supermajority rule should presumptively be replaced with majority rule. Schwartzberg simultaneously argues that majority rule is superior to supermajority rule because the latter creates a bias in favor of the status quo, and acknowledges that a status quo bias is justified so that people can plan their lives. Her solution–what she calls “complex majoritarianism”–is the manipulation of majority rules so that they are applied in ways that favor the status quo. For example, she favors constitutional amendment procedures requiring a temporally separated majority vote in the legislature (plus subsequent ratification), but the effect is just a bias in favor of the status quo except in the unlikely event that preferences don’t change. She argues that this approach advances deliberation, but deliberation can be encouraged in other ways, and in any event the status quo bias is not resolved.
The book is right to emphasize historical, empirical, and institutional factors as opposed to the sometimes tiresome analytics of social choice theory–as emphasized by this enthusiastic review here–but Schwartzberg’s argument against supermajority rule is ultimately analytic itself, based on abstract considerations of human dignity, rather than grounded in history or empiricism. The empirical fact that the book doesn’t come to terms with is that supermajority rule is well-nigh universal, not only in constitutions but in virtually every organization–clubs, corporations, civic associations, nonprofits–where people voluntarily come together and use supermajority rules to enhance stability and to prevent situational majorities from expropriating from minorities.
Simon Caney argues, in a welcome departure from the usual claims in this area of philosophy, that negotiating a climate treaty is not just a matter of distributing burdens fairly, but also requires a climate treaty that countries are actually willing to enter–“feasible,” to use the word that David Weisbach and I use in our book Climate Change Justice.
But he rejects our argument that the only feasible treaty is one that makes every state better off by its own lights relative to the world in which no treaty exists, and that if advocates, ethicists, and (more to the point) government officials insist that a treaty be fair (in the sense of forcing historical wrongdoers to pay, redistributing to the poor, or dividing burdens equally), there will never be such a treaty.
He says that if a government refuses to enter a fair but burdensome treaty because it knows that voters will punish it for complying, then that just means that voters have a duty not to punish the government, and instead to compel the government to act according to the philosopher’s sense of morality. But because voters don’t recognize such a duty, we are back where we started. His underlying assumption seems to be that voters will cause governments to act morally; ours is that voters will (at best) acquiesce in a treaty that avoids harms that are greater than the costs of compliance. So while, unlike many philosophers, he recognizes a feasibility constraint, he waters it down beyond recognition.
The EU’s recent backpedaling on climate rules shows once again that feasibility, not ethics, should be a necessary condition for proposals for distributing the burdens of a climate treaty.
Source: Cingranelli-Richards Human Rights Dataset (0 (bad) to 2 (good)).
Charles Barzun argues that Adrian Vermeule and I smuggled substantive assumptions into what we characterized as a methodological criticism about legal scholarship and judging. I don’t think he’s right. Our major example is the judge who says that he may settle a dispute between the executive and legislature based on the Madisonian theory that “ambition must be made to counteract ambition.” In appealing to Madison, the judge implicitly puts into question his own impartiality, as Madison was referring to the judiciary as well as the other branches. We didn’t argue that judges should never resolve disputes between the executive and the legislative, just that the judge (or, more plausibly, an academic) must supply a theory that does not set the judge outside the system.
Barzun thinks that we make “controversial claims about the nature of law and how judges decide cases,” in particular, that we make excessively skeptical assumptions about judicial motivation. I don’t think we do, but the major point is that our argument in this paper does not depend on such claims.
For example, suppose the judge responds to our argument by saying that he is in fact public-spirited, and only presidents and members of Congress are ambitious. That may well be so, but then he must abandon Madison’s argument and make his own as to why these are plausible assumptions about political behavior. If one shares the judge’s optimism about human nature, one might believe that the president and members of Congress are also public-spirited, in which case judicial intervention in an inter-branch clash may not be warranted. The judge can also, of course, make arguments about different institutional constraints, public attitudes, and so on, which may justify judicial intervention. But that is a different theory, different from the Madisonian theory that he and many scholars propound.
In the course of describing the various ways that scholars respond to the inside-outside problem, we sometimes argue that they escape the problem only by making implausible arguments. But the inside-outside problem does not depend on our skepticism about these specific arguments being correct, and our real point is that most of the time scholars and (especially) judges do not try to make such arguments but instead ignore the contradictions in which they entangle themselves.
David Bosco’s new book tells the history of the International Criminal Court. It is nicely done and will be a reference for everyone who does work in this area. The conclusion will not surprise any observers: the ICC survived efforts at marginalization by great powers but only by confining its investigations to weak countries. Thus, the ICC operates de facto according to the initial U.S. proposal, rejected by other countries, to make ICC jurisdiction conditional on Security Council (and hence U.S.) approval.
Bosco seems to think this equilibrium can persist, but the book only touches on (perhaps because it is too recently written) the growing resentment of weak countries, above all African countries, which have woken up to the fact that the Court is used only against them, and have begun to murmur about withdrawing. The Court now faces political pressure to avoid trying not only westerners, but also Africans. Meanwhile, the Kenya trials are heading toward debacle, while the ICC is unable to address international criminals like Assad. The Court’s room to maneuver is shrinking rapidly, and one wonders whether it can sustain its snail’s pace (one conviction over a decade) much longer. The book might have been called “Just Roughness.”
What should originalists do about precedent? If they respect it, then the original meaning will be lost as a result of erroneous or non-originalist decisions that must be obeyed. If they disregard it, then Supreme Court doctrine is always up for grabs, subject to the latest historical scholarship or good-faith judicial disagreement (as illustrated by the competing Heller opinions). One can imagine intermediate approaches: for example, defer only to good originalist precedents, or defer only when a precedent has become deeply entrenched. But while such approaches may delay the eventual disappearance of original meaning behind the encrustation of subsequent opinions, they cannot, sooner or later, stop it. Our readings–Lawson, McGinnis & Rappaport, Nelson–provide no way out that I can see. (Lawson dismisses the problem, while the others propose intermediate approaches.) Originalism has an expiration date.
Another issue is raised by McDonald–the gun control case. In Heller, Scalia disregards precedent in order to implement what he thinks was the original understanding of the Second Amendment. In McDonald, he writes a concurrence that cheerfully combines Heller with the anti-originalist incorporation decisions. Why doesn’t he feel constrained to revisit those decisions? Instead, he joins a holding that generates constitutional doctrine that in practical terms is more remote from the original understanding (gun rights that constrain the states) than the doctrine he would have produced had he gone the other way in Heller (no gun rights at all), given the greater importance for policing of the state governments both at the founding and today. This is akin to the second-best problem in economics: partial originalism–originalism-and-precedent–may lead to outcomes that are less respectful of original understandings than non-originalist methodologies would be.
All originalists acknowledge the “dead hand” problem, and so all agree that the normative case for originalism depends on the amendment procedure being adequate for keeping the constitution up to date. Or at least all of the originalists I have talked to (n=1). Yet it can be shown that the Article V amendment procedure is unlikely to be adequate, and the probability that it is adequate across time is virtually nil.
The reason is that outcomes produced by voting rules depend on the number of voters (and also the diversity of their interests but I will ignore that complication since it only reinforces the argument). An easy way of seeing this is to consider the strongest voting rule—unanimity—and imagine that people flip a coin when they vote (the coin flip reflects the diversity of their interests, not a failure to vote their interests), and can agree to change a law only when all voters produce heads. The probability of achieving unanimity with a population of 2 is 1/4 (only one chance of two heads out of four possible combinations), with a population of 3 is 1/8, and so on.
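The coin-flip arithmetic above is just (1/2)^n: each additional voter halves the chance of unanimity. A minimal sketch of the calculation (the function name is mine, not the post’s):

```python
# Probability that all n voters flip heads, i.e. that a unanimity
# rule permits change under the coin-flip model described above.
def p_unanimity(n: int) -> float:
    return 0.5 ** n

print(p_unanimity(2))  # 0.25 -- the 1/4 figure for a population of 2
print(p_unanimity(3))  # 0.125 -- 1/8 for a population of 3
```

The point generalizes: for any fixed voting threshold, growing the electorate drives the probability of agreement toward zero.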
For a more rigorous formulation, consider a spatial model on the interval from 0 to 1, with a 2/3 supermajority rule. The status quo is chosen randomly (on average 1/2), and the population chooses whether to change it. If the population is 3, voters will change the outcome with probability of (near) 1, because at least 2 of the 3 voters will fall on the same side of 1/2 with probability of (near) 1, and 2 of 3 satisfies the 2/3 rule. If the population is 6, there is now a non-trivial probability that 3 of the 6 voters will be on one side of 1/2 and 3 on the other, so that no 2/3 majority (4 voters) can change the status quo.
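The deadlock probability follows from the binomial distribution: change is blocked when neither side of 1/2 reaches the supermajority threshold. A quick sketch under the uniform, independent-draw assumptions of the model (names are illustrative):

```python
from math import comb

def p_blocking_split(n: int, threshold: int) -> float:
    """Probability that no coalition reaches the supermajority threshold,
    assuming each voter independently lands on either side of 1/2
    with probability 1/2."""
    return sum(comb(n, k) * 0.5 ** n
               for k in range(n + 1)
               if k < threshold and n - k < threshold)

# n=3 with a 2/3 rule (2 votes needed): one side always has 2, never blocked.
print(p_blocking_split(3, 2))  # 0.0
# n=6 with a 2/3 rule (4 votes needed): only a 3-3 split blocks change.
print(p_blocking_split(6, 4))  # 0.3125 (= 20/64)
```

So merely doubling the population takes the model from deadlock being impossible to deadlock occurring almost a third of the time.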
The U.S. population has increased from 4 million at the time of the founding to 300 million today. If the amendment rules were optimal in 1789, they are not optimal today. If they are optimal today, then they won’t be optimal in a few years. Originalism with a fixed amendment process can be valid only with a static population.
There is a related argument that one can make based on the Buchanan/Tullock analysis of optimal voting rules. Thanks to Richard for a helpful email exchange.
The data source is the new Comparative Constitutions Project website. Compare to the growth of international human rights:
The graph shows increases in the number of human rights as recognized by the various human rights treaties when they went into force.
As I argue in Slate.
This short, good draft by Balkin (with a very high ratio of ideas to words) seeks to explain the rise of originalism. A starting point is that originalism is (virtually) uniquely an American phenomenon, and a national (not state) phenomenon. Next there is the central role of the founding as a unifying origin myth; the huge impact of the founders in our cultural memory; and the Protestant tradition with its preoccupation with sacred texts and yearning for a purifying return to the fundamentals. But why now? Jack thinks that originalism avant la lettre got started after the New Deal when Justice Hugo Black felt that he needed a justification for overcoming the New Deal quasi-tradition of judicial deference, and took off when conservatives realized that they could use it to bash Warren Court precedents plus Roe v. Wade. Maybe also it is a reaction to modernist anxieties about political foundations provoked by the radical constitutional innovations of the twentieth century.
I like this piece but my initial reaction is that Jack treats originalism as too much of an idiom–a cafeteria meal from which one may pick and choose–and doesn’t take seriously its constraining power in legal argument. While it’s true that Scalia and Thomas invoke it selectively, and the historical materials are frequently ambiguous, there is a reason it took off in the 1980s rather than the 1950s, and that is that originalism really is conservative, whereas the Supreme Court in the 1950s was liberal. The founders were conservative by today’s standards. They cared a lot about property rights, for example, and very little about discrimination against ethnic groups, sexual freedom, and so on (to say nothing of slavery). So I do not think originalism is as malleable as Jack does; it is not a neutral choice, like speaking Italian rather than French, but exerts a right-wing gravitational pull. There is a reason (as the data show) that conservative justices are more likely to cite The Federalist than liberal justices are.
Adrian Vermeule’s new book, The Constitution of Risk, argues that much constitutional thinking follows a model of “precautionary constitutionalism,” where doctrines are designed to avoid worst-case outcomes. A better approach is what he calls “optimizing constitutionalism,” where such “political risks” are traded off rather than minimized. The Court of Appeals in Noel Canning, for example, appeared to be driven by a fear that if it upheld President Obama’s recess appointments, then presidents could tyrannize by avoiding the Senate altogether. It ignored the countervailing risk that if the recess appointment power were limited, important offices would go unfilled. As Cass Sunstein has written in the area of regulation, the precautionary principle makes little sense on its own terms since there are always risks on all sides, and leads to pretty unattractive outcomes even when it can be applied. It’s as if we should all stay in our basements rather than take the risk that a flower pot will fall on our heads if we go outside.
Among the many excellent insights, the one I found most striking was the claim that much traditional constitutional thinking and doctrine has a precautionary-principle cast to it, and is thus vulnerable to the same criticisms as that principle is.