Voting rules in international organizations

International organizations use a bewildering variety of voting rules. Courts, commissions, councils, and the General Assembly use majority rule. The WTO, the International Seabed Authority, the IMF, the World Bank, and the Security Council use various types of supermajority rule, sometimes with weighted voting, sometimes with voters divided into chambers that vote separately, sometimes with vetoes. The voting rules in the EU defy any simple description. Al Sykes and I try to bring some order to this mess in our new paper.

Melissa Schwartzberg’s Counting the Many

This book provides a nice history of the evolution of voting rules, with emphasis on supermajority rules, but is less successful in its attempt to argue that supermajority rule should presumptively be replaced with majority rule. Schwartzberg simultaneously argues that majority rule is superior to supermajority rule because the latter creates a bias in favor of the status quo, and acknowledges that a status quo bias is justified so that people can plan their lives. Her solution–what she calls “complex majoritarianism”–is the manipulation of majority rules so that they are applied in ways that favor the status quo. For example, she favors constitutional amendment requiring temporally separated majority votes in the legislature (plus subsequent ratification), but the effect is just a bias in favor of the status quo except in the unlikely event that preferences don’t change. She argues that this approach advances deliberation, but deliberation can be encouraged in other ways, and in any event the problem of status quo bias is not resolved.

The book is right to emphasize historical, empirical, and institutional factors as opposed to the sometimes tiresome analytics of social choice theory–as emphasized by an enthusiastic review of the book–but Schwartzberg’s argument against supermajority rule is ultimately analytic itself, based on abstract considerations of human dignity rather than grounded in history or empiricism. The empirical fact that the book doesn’t come to terms with is that supermajority rule is well-nigh universal, not only in constitutions but in virtually every organization–clubs, corporations, civic associations, nonprofits–where people voluntarily come together and use supermajority rules to enhance stability and to prevent situational majorities from expropriating from minorities.

Response to Simon Caney on feasibility and climate change justice

Simon Caney argues, in a welcome departure from the usual claims in this area of philosophy, that negotiating a climate treaty is not just a matter of distributing burdens fairly, but also requires a climate treaty that countries are actually willing to enter–“feasible,” to use the word that David Weisbach and I use in our book Climate Change Justice.

But he rejects our argument that the only feasible treaty is one that makes every state better off by its own lights relative to the world in which no treaty exists, and that if advocates, ethicists, and (more to the point) government officials insist that a treaty be fair (in the sense of forcing historical wrongdoers to pay, redistributing to the poor, or dividing burdens equally), there will never be such a treaty.

He says that if a government refuses to enter a fair but burdensome treaty because it knows that voters will punish it for complying, then that just means that voters have a duty not to punish the government, and instead to compel the government to act according to the philosopher’s sense of morality. But because voters don’t recognize such a duty, we are back where we started. His underlying assumption seems to be that voters will cause governments to act morally; ours is that voters will (at best) acquiesce in a treaty that avoids harms that are greater than the costs of compliance. So while, unlike many philosophers, he recognizes a feasibility constraint, he waters it down beyond recognition.

The EU’s recent backpedaling on climate rules shows once again that feasibility, not ethics, should be a necessary condition for any proposal to distribute the burdens of a climate treaty.

Reply to Barzun on Inside/Outside

Charles Barzun argues that Adrian Vermeule and I smuggled substantive assumptions into what we characterized as a methodological criticism of legal scholarship and judging. I don’t think he’s right. Our major example is the judge who says that he may settle a dispute between the executive and legislature based on the Madisonian theory that “ambition must be made to counteract ambition.” In appealing to Madison, the judge implicitly puts into question his own impartiality, as Madison was referring to the judiciary as well as the other branches. We didn’t argue that judges should never resolve disputes between the executive and the legislature, just that the judge (or, more plausibly, an academic) must supply a theory that does not set the judge outside the system.

Barzun thinks that we make “controversial claims about the nature of law and how judges decide cases,” in particular, that we make excessively skeptical assumptions about judicial motivation. I don’t think we do, but the major point is that our argument in this paper does not depend on such claims.

For example, suppose the judge responds to our argument by saying that he is in fact public-spirited, and only presidents and members of Congress are ambitious. That may well be so, but then he must abandon Madison’s argument and make his own case as to why these are plausible assumptions about political behavior. If one shares the judge’s optimism about human nature, one might believe that the president and members of Congress are also public-spirited, in which case judicial intervention in an inter-branch clash may not be warranted. The judge can also, of course, make arguments about different institutional constraints, public attitudes, and so on, which may justify judicial intervention. But that is a different theory from the Madisonian one that he and many scholars propound.

In the course of describing the various ways that scholars respond to the inside-outside problem, we sometimes argue that they escape the problem only by making implausible arguments. But the inside-outside problem does not depend on that skepticism being correct, and our real point is that most of the time scholars and (especially) judges do not try to make such arguments at all, but instead ignore the contradictions in which they entangle themselves.

David Bosco’s Rough Justice

David Bosco’s new book tells the history of the International Criminal Court. It is nicely done and will be a reference for everyone who does work in this area. The conclusion will not surprise any observers: the ICC survived efforts at marginalization by great powers but only by confining its investigations to weak countries. Thus, the ICC operates de facto according to the initial U.S. proposal, rejected by other countries, to make ICC jurisdiction conditional on Security Council (and hence U.S.) approval.

Bosco seems to think this equilibrium can persist, but the book only touches on (perhaps because it was completed too recently) the growing resentment of weak countries, above all African countries, which have woken up to the fact that the Court is used only against them and have begun to murmur about withdrawing. The Court now faces political pressure to avoid trying not only westerners but also Africans. Meanwhile, the Kenya trials are heading toward a debacle, while the ICC is unable to reach international criminals like Assad. The Court’s room to maneuver is shrinking rapidly, and one wonders whether it can sustain its snail’s pace (one conviction over a decade) much longer. The book might have been called “Just Roughness.”

Originalism class 3: precedent

What should originalists do about precedent? If they respect it, then the original meaning will be lost as a result of erroneous or non-originalist decisions that must be obeyed. If they disregard it, then Supreme Court doctrine is always up for grabs, subject to the latest historical scholarship or good-faith judicial disagreement (as illustrated by the competing Heller opinions). One can imagine intermediate approaches: for example, defer only to good originalist precedents, or defer only when a precedent has become really, really entrenched. But while such approaches may delay the eventual disappearance of original meaning behind the encrustation of subsequent opinions, they cannot stop it. Our readings–Lawson, McGinnis & Rappaport, Nelson–provide no way out that I can see. (Lawson dismisses the problem, while the others propose intermediate approaches.) Originalism has an expiration date.

Another issue is raised by McDonald–the gun control case. In Heller, Scalia disregards precedent in order to implement what he thinks was the original understanding of the Second Amendment. In McDonald, he writes a concurrence that cheerfully combines Heller with the anti-originalist incorporation decisions. Why doesn’t he feel constrained to revisit those decisions? Instead, he joins a holding that generates constitutional doctrine that, in practical terms, is more remote from the original understanding (gun rights that constrain the states) than the doctrine he would have produced had he gone the other way in Heller (no gun rights at all), given the greater importance of the state governments for policing both at the founding and today. This is akin to the second-best problem in economics: partial originalism–originalism-and-precedent–may lead to outcomes that are less respectful of original understandings than non-originalist methodologies would be.

*** Will responds; his VC colleague David Bernstein’s post about the clause-by-clause problem is also worth reading.

A simple (and serious) puzzle for originalists

 All originalists acknowledge the “dead hand” problem, and so all agree that the normative case for originalism depends on the amendment procedure being adequate for keeping the constitution up to date. Or at least all of the originalists I have talked to (n=1). Yet it can be shown that the Article V amendment procedure is unlikely to be adequate, and the probability that it is adequate across time is virtually nil.

The reason is that outcomes produced by voting rules depend on the number of voters (and also on the diversity of their interests, but I will ignore that complication since it only reinforces the argument). An easy way of seeing this is to consider the strongest voting rule–unanimity–and imagine that people flip a coin when they vote (the coin flip reflects the diversity of their interests, not a failure to vote their interests), and can agree to change a law only when all voters produce heads. The probability of achieving unanimity with a population of 2 is 1/4 (one chance of two heads out of four equally likely combinations), with a population of 3 is 1/8, and so on.
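To make the arithmetic concrete, here is a minimal sketch (my own illustration, not drawn from the papers cited below): with n coin-flipping voters, the probability of unanimity is (1/2)^n, which collapses toward zero as n grows.

```python
from fractions import Fraction

def p_unanimity(n):
    """Probability that n independent fair-coin voters all flip heads."""
    return Fraction(1, 2) ** n

for n in (2, 3, 10, 100):
    print(n, float(p_unanimity(n)))  # 2 -> 0.25, 3 -> 0.125, ...
```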

For a more rigorous formulation, consider a spatial model on the interval from 0 to 1, with a 2/3 supermajority rule. The status quo is chosen randomly (on average 1/2), and the population chooses whether to change it. If the population is 3, voters will change the outcome with probability (near) 1, because with probability (near) 1 at least 2 of the 3 voters will draw ideal points on the same side of 1/2. If the population is 6, there is now a non-trivial probability that 3 of the 6 voters will fall on one side of 1/2 and 3 on the other, so that a 2/3 majority (4 people) will be unable to change the status quo. And as the population grows, the share of voters on each side converges to 1/2, so the probability that a 2/3 bloc exists on either side shrinks toward zero.
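A quick Monte Carlo sketch of this model (my own illustration of Holden’s argument; the parameter choices are mine):

```python
import math
import random

def p_change(n, q=2/3, trials=100_000):
    """Estimate the probability that at least a q-supermajority of n voters
    with uniform ideal points falls on the same side of a status quo at 1/2,
    so that a change can pass."""
    needed = math.ceil(q * n)
    wins = 0
    for _ in range(trials):
        left = sum(random.random() < 0.5 for _ in range(n))
        if max(left, n - left) >= needed:
            wins += 1
    return wins / trials

for n in (3, 6, 12, 60):
    print(n, round(p_change(n), 3))
# Roughly: 3 -> 1.0, 6 -> 0.69, 12 -> 0.39, 60 -> 0.01. Holding the 2/3
# threshold fixed, the chance that any amendment can pass falls toward zero
# as the electorate grows.
```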

The U.S. population has increased from 4 million at the time of the founding to 300 million today. If the amendment rules were optimal in 1789, they are not optimal today. If they are optimal today, then they won’t be optimal in a few years. Originalism with a fixed amendment process can be valid only with a static population.

This argument comes from Richard Holden, Supermajority Voting Rules; and Rosalind Dixon and Richard Holden, Constitutional Amendments: The Denominator Problem (who supply empirical evidence).

There is a related argument that one can make based on the Buchanan/Tullock analysis of optimal voting rules. Thanks to Richard for a helpful email exchange.


Rights and More Rights

[Figure: growth of rights in national constitutions]

The data source is the new Comparative Constitutions Project website. Compare to the growth of international human rights:

[Figure: growth of international human rights]

The graph shows increases in the number of human rights as recognized by the various human rights treaties when they went into force.

Jack Balkin, Why Are Americans Originalist?

This short, good draft by Balkin (with a very high ratio of ideas to words) seeks to explain the rise of originalism. A starting point is that originalism is (virtually) uniquely an American phenomenon, and a national (not state) phenomenon. Next there is the central role of the founding as a unifying origin myth; the huge impact of the founders in our cultural memory; and the Protestant tradition, with its preoccupation with sacred texts and yearning for a purifying return to the fundamentals. But why now? Jack thinks that originalism avant la lettre got started after the New Deal, when Justice Hugo Black felt that he needed a justification for overcoming the New Deal quasi-tradition of judicial deference, and took off when conservatives realized that they could use it to bash Warren Court precedents plus Roe v. Wade. Maybe it is also a reaction to modernist anxieties about political foundations provoked by the radical constitutional innovations of the twentieth century.

I like this piece, but my initial reaction is that Jack treats originalism too much as an idiom–a cafeteria meal from which one may pick and choose–and doesn’t take seriously its constraining power in legal argument. While it’s true that Scalia and Thomas invoke it selectively, and the historical materials are frequently ambiguous, there is a reason it took off in the 1980s rather than the 1950s: originalism really is conservative, whereas the Supreme Court in the 1950s was liberal. The founders were conservative by today’s standards. They cared a lot about property rights, for example, and very little about discrimination against ethnic groups, sexual freedom, and so on (to say nothing of slavery). So I do not think originalism is as malleable as Jack thinks: it is not like choosing to speak Italian rather than French; it exerts a right-wing gravitational pull. There is a reason (as the data show) that conservative justices are more likely to cite The Federalist than liberal justices are.

Adrian Vermeule on The Constitution of Risk

Adrian Vermeule’s new book, The Constitution of Risk, argues that much constitutional thinking follows a model of “precautionary constitutionalism,” in which doctrines are designed to avoid worst-case outcomes. A better approach is what he calls “optimizing constitutionalism,” in which such “political risks” are traded off rather than minimized. The Court of Appeals in Noel Canning, for example, appeared to be driven by a fear that if it upheld President Obama’s recess appointments, then presidents could tyrannize by avoiding the Senate altogether. It ignored the countervailing risk that if the recess appointment power is limited, important offices will go unfilled. As Cass Sunstein has written in the area of regulation, the precautionary principle makes little sense on its own terms, since there are always risks on all sides, and it leads to pretty unattractive outcomes even when it can be applied. It’s as if we should all stay in our basements rather than take the risk that a flower pot will fall on our heads when we go outside.

Among the many excellent insights, the one I found most striking was the claim that much traditional constitutional thinking and doctrine has a precautionary-principle cast to it, and is thus vulnerable to the same criticisms as that principle is.

John Coates on cost-benefit analysis of financial regulation

John Coates recently posted a paper on SSRN entitled Cost-Benefit Analysis of Financial Regulation: Case Studies and Implications. This topic has been important ever since the D.C. Circuit struck down an SEC regulation for inadequate cost-benefit analysis in Business Roundtable in 2011. Glen Weyl and I held a conference on the topic last fall, and we have written several papers arguing that, whatever one thinks of the reasoning in the (justly criticized) Business Roundtable case, CBA is the way to go.

The core of Coates’ paper is a description of efforts by agencies (and other institutions or persons) to perform CBAs of six major financial regulations. Coates persuasively argues that the existing CBAs fall flat, and he is skeptical that, given the current state of knowledge, meaningful CBAs of financial regulations (or at least of certain financial regulations) can be conducted at all. What valuation should we assign to an avoided financial crisis? Hard to say. Coates concludes that while it makes sense for agencies to engage in cost-benefit balancing using rough guesstimates, their efforts should not be subject to judicial review.

Coates’ argument is sensible, but I am more optimistic than he is about the capacity of agencies to come up with numbers. His criticism–like many others that he makes–could be (and has been) made about other forms of CBA, for example, CBA of environmental regulation, which has improved greatly over the last 30 years though it remains far from perfect. The fact is that when an agency proposes a capital adequacy rule, it is relying on an implicit valuation of an avoided crisis. The agency should be required to make that valuation explicit, and to defend it. We can agree that a wide range of valuations is acceptable and still believe that public discussion of the range is useful.
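A back-of-the-envelope illustration of that implicit valuation (the numbers below are invented, purely for the sake of the arithmetic):

```python
# Hypothetical inputs, for illustration only.
annual_cost = 5e9          # assumed annual compliance cost of the rule ($)
delta_crisis_prob = 0.001  # assumed reduction in annual crisis probability

# The rule passes a cost-benefit test only if an avoided crisis is worth at
# least cost / probability reduction, so adopting it implicitly asserts a
# valuation at least this large.
implied_valuation = annual_cost / delta_crisis_prob
print(f"Implied minimum value of an avoided crisis: ${implied_valuation:,.0f}")
# -> Implied minimum value of an avoided crisis: $5,000,000,000,000
```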

I have mixed feelings about his criticisms of judicial review. On the one hand, it seems likely that if courts rigorously applied CBA to new financial regulations, we would not have any new financial regulations (a bad thing), at least not for some time. On the other hand, I’m not sure what will encourage agencies to perform CBAs properly–and this means paying money to consulting firms to generate valuations–if judicial review does not take place. OIRA has encouraged non-financial agencies to use CBA, but OIRA has more limited authority over the financial agencies–none at all in the case of the Fed.

Originalism class 2: Printz as a paean to the living constitution

Will asks whether originalists should be heartened or troubled by Campbell’s debunking of Justice Scalia’s historical analysis in Printz. But the majority opinion is not originalist at all. Scalia doesn’t address the historical materials with any rigor; he argues (quite candidly) that the anti-commandeering principle is consistent with the historical record, not that it emerges from the best reading of that record. Where does the principle come from? Precedent: “Finally, and most conclusively in the present litigation, we turn to the prior jurisprudence of this Court.” Scalia thinks that New York v. U.S. controls, and that the weak founding-era history provides no basis for overturning it. Would Scalia have switched sides if Campbell’s work had been before him? Hard to know, but I would not advise Congress that it is now free to commandeer.

On The Federalist Papers, given the specific political use for which they were written and published, there is little reason to believe that they represent a general account of the original understanding as it existed in all 13 states. The reason that The Federalist has fetish-value today is simple: it has been cited over and over by the Supreme Court, and so has become a source of constitutional law that supplements the text and other materials.

Kirkland and Ellis Distinguished Service Professor, University of Chicago Law School