Vigna & Casey’s The Age Of Cryptocurrency

This book, written by two Wall Street Journal reporters, is the first journalistic account of the rise of bitcoin and related cryptocurrency technologies. The authors write well and clearly, and the book is illuminating. And the authors try hard to bring journalistic objectivity to the extreme claims of bitcoin proponents. But they mostly give in. One can only cringe at sentences like this one:

 We may well be on the verge of a profound societal upheaval, perhaps the most significant since the sixteenth century…. (p. 278)

 We’re not. Or if we are, it’s not because of bitcoin. Even if the most extreme and implausible claims of bitcoin proponents (or “evangelists” as they are aptly called) came true, and bitcoin became a worldwide currency, we’d just be back in the nineteenth century, when countries were on the gold standard, albeit a digital version of it. Bitcoin just is the gold standard with bits rather than gold. (The authors, who gently mock goldbugs, don’t seem to realize that they are themselves “bitbugs.”) To be sure, transactions would be cheaper—we’d save some of the 1 to 3 percent that we now pay to use credit cards. That would help out a lot of people, but most people wouldn’t notice. And we wouldn’t worry about inflation (but we would worry about deflation and financial panics). Maybe life would be a bit better, or (as I suspect) a bit worse, but it wouldn’t be much different.

The book revolves around a number of themes: the role of trust in the financial system; the forces of decentralization; and the relationship between cryptocurrencies and the law. These are interesting issues, and all deserving of careful thought. But while the authors have sensible things to say about them, and try to carefully weigh the arguments on each side, in the end I believe they come down on the wrong side on nearly every issue. I will post some observations about this book over the course of this week.

Who is the meanest supreme court justice of all time?

Scalia, right? Nope. Scalia barely cracks the top ten, behind Alito, Kennedy, Thomas, and even Breyer. The actual measure is “friendliness” rather than meanness, and these guys have among the lowest friendliness scores: a justice’s score is the percentage of positive words used in his opinions minus the percentage of negative words. (Negative and positive words taken from here.)
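The score is simple enough to sketch in a few lines of Python. The word lists below are invented stand-ins for the actual lexicon (which is linked above), so the numbers are purely illustrative:

```python
# A minimal sketch of the friendliness score described above: the
# percentage of positive words minus the percentage of negative
# words in an opinion. These tiny word lists are invented stand-ins
# for the lexicon the paper actually uses.

POSITIVE = {"agree", "fair", "just", "reasonable", "sound"}
NEGATIVE = {"err", "wrong", "absurd", "frivolous", "meritless"}

def friendliness(opinion_text: str) -> float:
    words = [w.strip(".,;:()\"").lower() for w in opinion_text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100.0 * (pos - neg) / len(words)

print(friendliness("The argument is reasonable and fair, not frivolous."))  # 12.5
```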

The friendliness score comes from A Quantitative Analysis of Trends in Writing Style on the U.S. Supreme Court, a new paper by Keith Carlson, Michael Livermore, and Daniel Rockmore, and the paper contains all kinds of other fun stuff, like the influence of law clerks on judicial writing style. The authors are pioneers in applying textual analysis to supreme court opinions. One of their findings is that opinions of modern justices are a lot less friendly than the opinions of earlier justices. (They are also written at a lower grade level.)

The friendliest justice–by a long shot–is John Jay, reflecting perhaps his experience as a diplomat. But he wrote very few opinions. I’m therefore handing the title to #2, Oliver Ellsworth. And the meanest? An obscure, one-term justice named Thomas Johnson. [N.B.: an earlier version of this post confused him with William Johnson. The ABA Journal correctly identified him.]

Guest Post: More on absolute bans on torture

Guest post by Ryan Doerfler, Bigelow Fellow and Lecturer in Law, The University of Chicago Law School:

I too find the position Eric discusses (absolute prohibition against torture plus judicial leniency for justified instances) puzzling, or at least frustrating.

My sense is that there are two explanations for the position.  The first, which Eric discussed, has to do with incentives or, as I would put it, epistemic reliability (maybe these are the same at the end of the day).  The argument here is an application of the more general argument for rule utilitarianism: Because individuals will overestimate systematically the considerations speaking in favor of particular sorts of action (e.g., torture, lying) if allowed to reason on a case-by-case basis, better to adhere to absolute prohibitions as a bulwark against bad reasoning.  As is obvious, one would have to do the math to determine whether an absolutist regime is preferable to a case-by-case regime in a given instance since there will be errors under both.  The suggestion of judicial leniency in the case of torture indicates that even those advocating an absolute prohibition do not think the math comes out favorably if the prohibition is really absolute.
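The “do the math” point can be made concrete with a toy expected-cost comparison between the two regimes. Every probability and cost below is an invented placeholder, purely for illustration:

```python
# Toy expected-cost comparison of an absolute prohibition vs a
# case-by-case regime, in the spirit of the rule-utilitarian argument
# above. All probabilities and costs are invented placeholders.

def regime_costs(p_justified: float, benefit_forgone: float,
                 p_false_positive: float, harm_wrongful: float):
    # Absolute ban: forgo the benefit whenever the act would in fact
    # have been justified.
    ban_cost = p_justified * benefit_forgone
    # Case-by-case: agents sometimes wrongly judge a wrongful act to
    # be justified and carry it out.
    case_cost = p_false_positive * harm_wrongful
    return ban_cost, case_cost

ban, case = regime_costs(p_justified=0.001, benefit_forgone=100.0,
                         p_false_positive=0.05, harm_wrongful=10.0)
print(ban, case)  # the regime with the lower expected cost wins
```

With these made-up numbers the absolute ban comes out ahead; change the inputs and the answer flips, which is exactly why the comparison has to be done rather than assumed.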

Thinking about non-repeat players, I guess I do not think of the suggestion as arbitrary.  Plausibly, the tendency to overestimate the considerations speaking in favor of torture is greater in the heat of the interrogation chamber than in the cool of the courtroom.  And, so long as would-be torturers are unaware of the prospect of judicial leniency (plausible, in the case of non-repeat players), one might get decent results under this regime (e.g., one would torture only when the apparent need to torture was so great as to warrant personal sacrifice) without human sacrifice.  This is all speculative, of course, but at least not implausible.  The problem is that, in the real world, would-be torturers are almost all repeat players (or at least members of repeat-play institutions).  Hence, the prospect of judicial leniency would be well known.

The other explanation for the position, I think, has to do with the impulse to preserve both absolutist (e.g., ‘Thou shalt not kill’) and non-absolutist (‘Thou shalt not kill, unless …’) moral intuitions.  My sense is that this impulse has to do not with accuracy or expected outcomes but instead with bedrock intuitions about moral decency or something like that (e.g., a common attitude is that one should cringe at images of killing or torture, even if the killing or torture in question is justified).  That impulse manifests in various places in moral philosophy.  Where it is plainest, though, is in discussions of so-called “dirty hands” cases, i.e. cases in which a particular action is justified but somehow morally problematic, regrettable, etc.

I have always found this idea hard to understand (e.g., If an action is justified, how could it be regrettable?).  But, for whatever reason, it has real popular appeal.  One high-visibility, non-scholarly example is the television show 24.  On the one hand, 24 is written in such a way that the audience can be expected to think of Jack Bauer’s actions as justified (as Justice Scalia said, what jury is going to convict him?).  On the other hand, the show is written such that (and the showrunners are expressly of the view that) Jack Bauer must suffer so that the rest of us can be safe.  I suppose one could interpret this as a metaphor for the psychological costs torturers must incur, which are real.  More plausible, though, is that the underlying attitude is that Jack Bauer in some way should suffer for what he has done, even though what he has done is justified.  Again, I think this is confused.  But it is a pervasive sentiment.  What Jack Bauer does is right … but also wrong.  Good … but also bad.  Alas.

[N.B.: there is an ancient literary and artistic theme that the person who saves the community by breaking its norms must himself be expelled from the community, or otherwise suffer and be made an outcast. This person must be a hero who follows a higher morality and accepts the sacrifice. I think philosophers like McMahan unconsciously reproduce this logic, not realizing that institutions cannot themselves be designed to permit the exceptional act. In real-life institutions, if you tell agents they will be punished for doing X, and they believe you, they won’t do X. — EP]

Uber and the law of large numbers

An interesting article here in the NYT about the “Uber model.” Uber drivers enjoy flexibility–they can drive whenever they want–partly because the app connects them to customers but mainly because there are so many drivers. People who want rides can get them because of the high probability of a nearby riderless Uber car. The author argues that this model can be applied to many other settings, including legal services and medicine. A doc with a bit of spare time can make himself available via app and you might consult him if you happen to be nearby.

The relevant law here is not the law of taxi or doctor licensing but the law of large numbers. It’s what ensures that someone is nearby when you need him, even though drivers and doctors have all kinds of other unpredictable commitments, given a large enough pool. I tell my students that the most important law in banking regulation is the law of large numbers. It’s what makes it possible for a bank to offer money in a steady way to borrowers when the bank’s own lenders–short-term depositors–might need their money on a moment’s notice. The Uber model is, at bottom, the bank model.
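The point can be sketched with a small simulation: each provider is independently available with some probability, and the chance that at least one is available rises rapidly with the size of the pool. All parameters below are made up for illustration:

```python
# A hedged sketch of the law-of-large-numbers point: individual
# providers (drivers, doctors, depositors) are unpredictable, but a
# large pool makes aggregate availability nearly certain. The
# availability probability and pool sizes are invented.

import random

random.seed(0)  # reproducible runs

def available_fraction(pool_size: int, p_available: float = 0.3,
                       trials: int = 2000) -> float:
    """Fraction of trials in which at least one provider is available."""
    hits = sum(
        any(random.random() < p_available for _ in range(pool_size))
        for _ in range(trials)
    )
    return hits / trials

for n in (1, 5, 50):
    print(f"pool of {n}: someone available {available_fraction(n):.0%} of the time")
```

With one driver you get a ride about a third of the time; with fifty, essentially always, even though each individual driver is no more reliable than before.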

If torture can ever be morally justified, why should the ban on it be absolute?

In the NYT, the philosopher Jeff McMahan argues that torture is almost always morally wrong, but he believes that in certain cases–when it is used to prevent a greater evil like killing or mass killing–it is morally permissible or even morally obligatory. Yet he believes that torture should be banned even then. That doesn’t seem to follow. If McMahan’s moral position is correct, shouldn’t the law permit torture just in those conditions when it is morally permissible, and otherwise ban it?

As far as I can tell, his answer is that morally justified torture may be so rare that it can be safely ruled out by an absolute ban. That may well be right; maybe that is the lesson of the Bush torture debacle. But it does seem puzzling. Consider shooting-to-kill. Shooting-to-kill is also a horrible thing to do to people (worse than all but the most extreme forms of torture), and it is rarely justified. Yet police officers are permitted to shoot-to-kill in order to prevent greater evils. Even after recent events suggesting abuse  and discrimination in police shootings, no one wants to impose an absolute ban on them.

McMahan, like other philosophers (such as Henry Shue, who is mentioned in the piece) who want to ban torture but believe that it is sometimes morally justified, can’t bring himself to enforce the absolute ban with absolute strictness. Instead, courts should be allowed to exercise leniency. Yet McMahan appears to believe that the agent cannot be excused. Some punishment (a very light punishment?) must be imposed.

But why exactly? Why would you punish (even lightly, as in this famous German case) someone for engaging in a (by hypothesis) morally permissible or even morally obligatory act of torture? The answer seems to be that if you don’t, then other agents will engage in torture that is morally wrong. I confess I have never understood this argument. Doesn’t the logic of it suggest that we should prohibit the police from morally justified shootings because if we don’t, police will engage in morally unjustified shootings?

Or taken from the other side, if leniency is permitted, why shouldn’t we worry that the prospect of leniency will encourage agents to engage in wrongful acts of torture? After all, if non-punishment of morally justified torture will encourage wrongful torture, as McMahan claims, why wouldn’t lenient punishment of morally justified torture also encourage wrongful torture? The effort to split the difference by banning torture but providing for leniency seems arbitrary.

Let’s go back to police shootings. Possibly, torture can be distinguished. One distinction is that police shootings are usually (almost always?) morally justified, whereas acts of torture are almost never morally justified. But do we actually know this? The reason that police shootings are usually morally justified, or seem to be, is that the police are given training, and police shootings are always investigated carefully. There is no comparable institutional infrastructure for torture. Maybe if there were, acts of torture would seem as morally justified as police shootings (although no doubt much rarer).

Arguments like McMahan’s, which are scattered throughout the philosophy literature, always seem to be based on psychological assumptions (about how people respond to incentives) and institutional assumptions (about how organizations operate) that are not articulated.

More flip-flops

At Slate. This is based on the paper I’ve written with Cass Sunstein. The Slate piece discusses some surveys that we did (the paper provides more detail). If you have any comments on them (or the paper), please email me. We could do some more surveys if you have ideas.

Foreign Sovereign Immunity & Comparative Institutional Competence


A Guest Post by Adam Chilton and Christopher Whytock:

Discretion to make political or legal decisions is frequently given to one branch of government based on the belief that it is better suited to make a particular kind of decision than another branch. For example, much of administrative law is premised on the idea that administrative agencies are better positioned to make decisions about how to carry out their missions than judges. When these claims of comparative institutional competence are made, however, they are rarely based on systematic empirical evidence. This is in part because there are rarely opportunities to evaluate what happens when two different branches of government are tasked with making the same kind of decision.

 In a paper published last week, Foreign Sovereign Immunity and Comparative Institutional Competence, we empirically evaluate one comparative institutional competence claim by taking advantage of a situation where Congress moved the authority to make a certain kind of decision from the State Department to the courts. That situation was created by the passage of the Foreign Sovereign Immunities Act (FSIA) of 1976.

There is a longstanding principle of customary international law that governments should not be subject to suit in the courts of a foreign country. In 1952, however, the United States adopted a new, restrictive theory of when sovereign immunity should be granted. This paved the way for foreign governments to be sued in American courts over their “commercial activities” (like breaking a contract with an airplane manufacturer) but not their “public acts” (like passing legislation that limits what kinds of airplanes are allowed to fly in their airspace).

Of course, whether any particular action taken by a foreign government constitutes a commercial activity or a public action is not always clear. When suits against foreign governments arose, initially it was the State Department that was forced to make these calls. It was not long though before critics began to argue that the State Department was making politically motivated decisions—for example, that immunity was awarded to important countries even though a specific suit was clearly based on commercial activities that should prevent immunity from being granted. In 1976, Congress responded to these criticisms by moving the authority to make foreign sovereign immunity decisions from the State Department to the courts.

To leverage this change in authority, we built a database of immunity decisions made before and after the passage of the FSIA. By controlling for the facts underlying each dispute and the characteristics of the parties involved, we are able to gain some traction on how these two different branches of government have made foreign relations decisions. In contrast to previous studies that evaluated a small number of cases qualitatively, our study does not reveal evidence of systematic bias in the State Department’s immunity decisionmaking, while it does identify potential political influences on the courts’ decisionmaking. Although there are admittedly some limitations to our data and approach, these results still challenge the frequently made argument that the State Department is worse at making legal determinations free from political interference than other branches of government. If you want to read more about our method and results, you can find the paper at SSRN.

 

Institutional flip-flops

People constantly accuse politicians, judges, and commentators of flip-flopping on institutional issues. Republicans who objected to filibusters of Bush’s nominees now defend the practice as applied to Obama’s–and the Democrats who defended filibustering then attack it now. Most of the liberal commentators who accused Bush of abusing executive power have now fallen silent, while the earlier Republican defenders of Bush have now, under Obama, discovered the dangers of the imperial presidency. Justices who appeal to the majesty of democratic rule in the course of upholding a statute today turn around and strike down a statute despite majority support for it tomorrow. And so on.

Many flip-flops reflect meaningless political posturing, but so do many of the accusations of flip-flopping. An apparent flip-flop can turn out to be nothing of the sort once one pierces the often sloppy rhetoric. Perhaps real flip-flops can be justified as the result of learning, at least to a limited extent. But beneath the surface, there is much of interest. Flip-flopping can result from an ambiguous or unsettled institutional norm. People are not just posturing but trying to get the norm settled in a way that advances their interests.

Much more can be said, and is said, in a new paper that I have written with Cass Sunstein, available at SSRN.

Reply to Coates on financial cost-benefit analysis

Glen Weyl and I have been going back and forth with John Coates on the question whether financial regulators should use cost-benefit analysis. Weyl and I defended CBA of financial regulation here and here. Coates wrote an article criticizing CBA here. Our response is now posted, as is his reply to our response. Below is our reply to his response to our response to his response to our earlier arguments.

In his (latest) response, Coates usefully narrows the focus to the crucial issue: is there any reason to think that CBA of financial regulation and CBA of other types of regulations (like safety or environmental regulations) are different? Weyl and I agreed with Coates that lots of efforts to perform CBA of financial regulations have been shoddy, but that’s just because the methodology is at an early stage. Early environmental CBAs were shoddy as well, but they have improved greatly over the years, thanks to pressure from the White House, which ultimately forced regulators to enlist economists to help them improve environmental CBA.

Weyl and I think that, on theoretical grounds, CBA should be a lot easier for financial regulations (which are, after all, all about money, with tons of data) than environmental regulations (which involve many difficult-to-monetize valuations). Coates makes the opposite argument. In his latest reply, he makes the following points.

1. Coates rejects our merger guidelines as an example where cost-benefit principles have informed market regulation. His major point seems to be that those guidelines are not themselves instructions to perform CBAs, or the result of formal CBAs that were reviewed by courts, and that they are implemented loosely rather than in a rigid way. We just don’t understand the force of this argument. The guidelines are the result of economic analysis (Weyl participated in writing them), and they basically enable the government to do cost-benefit analyses of mergers by creating certain economically informed presumptions. Mergers generate the same kind of problems about cost and benefit estimation that financial regulation does. At a minimum, this suggests that these valuation problems are not insuperable.

Coates also says that academics and practitioners admire the guidelines because they prefer rules (“constrained discretion”) over ad hoc judgments. But there is no reason to think that rules in general are better than discretion. Bad rules are worse than discretion, at least if discretion is used in good faith. The guidelines are good both because they are rules (or presumptions) and because they are good rules grounded in economic principles.

2. Coates also repeats his argument that (if we understand it correctly) the natural sciences play a greater role in other forms of regulation than in financial regulation. The merger guidelines example was intended to show that this assumption is false: “market regulation” under the rubric of antitrust law is social science all the way down. But let’s consider his argument from safety regulation: the rear-facing camera. Coates argues that the main issue is one of natural science and engineering, which makes it easy to determine whether a mandate is cost-benefit justified.

A simple response to this argument is just to acknowledge that some kinds of regulation are easier than others. We certainly do not deny that. We do doubt whether the camera mandate is as simple as he says. Everything depends on how people respond to the new technology, and we know that how people respond to new technology is often difficult to predict. And, of course, safety regulations raise difficult issues about valuing human life. But the broader point is that many types of regulation seem easy just because we’ve already advanced down the learning curve. That will be true for financial regulation just as for any other type of regulation.

3. Finally, Coates seems to back off from his claim that financial regulation is special, and to argue that, across all areas of regulation, we need to distinguish between areas of regulation that have what he calls “non-stationary” (which seems to mean rapidly evolving) features and those areas that do not. And so his critique may turn out to apply to regulation of drugs, for example, or regulation of any activity where technology is changing at a rapid pace. Maybe he thinks that non-stationary features dominate the financial system but not other systems, but he doesn’t show this. Weyl and I are much more optimistic, based on recent developments in academic economics, including IO and antitrust, where (to repeat) the object of regulation is an incredibly non-stationary phenomenon–the market itself.

But if Coates is right, what does this mean? He advocates regulation based on “conceptual CBA,” which as far as we can tell, means CBA based on guesses rather than reasonable estimates.  We suspect that a more plausible response to his skepticism is not regulation but deregulation. If the government cannot explain why it is imposing costly constraints on the market, then regulation will be difficult to defend politically as well as legally.

Podcast on Charlie Hebdo and freedom of speech

I discuss this topic with Jonathan Rauch, with Jeffrey Rosen moderating. Rauch and I disagreed about (almost) everything.

1. Should The Times have republished the latest Charlie Hebdo cover?

Rauch: yes, because of its news value.

Me: no, if The Times reasonably feared retaliation against its reporters. I also say that the news value of the cover is minimal because anyone can see images of it on the web.

2. Did European media that failed to republish the offending Charlie Hebdo cartoons act wrongly?

Rauch: not if they reasonably feared retaliation, but still it would have been better if everyone had republished in order to strengthen free speech norms.

Me: not if they reasonably sought to avoid provoking additional violence, against themselves or others.

3. Does the Charlie Hebdo attack show that European hate-speech laws are a bad idea?

Rauch: hate-speech laws cannot be enforced neutrally, resulting in hypocrisy and chilling effects. Hate-speech laws do not improve safety.

Me: if hate-speech laws had been enforced against Charlie Hebdo, then this attack would not have happened. So at a minimum, there is some evidence that they reduce violence. Rauch is right that hate-speech laws cannot be applied “neutrally.” But they can be enforced sensibly, to censor low-value speech that offends groups to the extent that violence may result.

4. Should Europe adopt U.S.-style free-speech law?

Rauch: yes, noting that we have more social peace in the United States than in Europe, and arguing that the First Amendment may account for this. People have no incentive to use violence to force the government to censor offending speech because they know that the First Amendment blocks the government from accommodating them.

Me: [bungling my description of French law, but anyway–] no, the U.S. is an outlier, strongly suggesting that what works for us (or currently works for us) is not ideal for other countries. Specifically, First Amendment law in the United States reflects various pragmatic compromises among groups in a pluralistic society that are different from the compromises that must be made in other countries, which have different groups with different views and interests. Our tendency to think that U.S. law reflects universal principles should be resisted.

5. Will the Charlie Hebdo attacks strengthen freedom of speech in France?

Rauch: yes, as illustrated by the marches and rallies, the outpouring of support for Charlie Hebdo.

Me: no, the government will crack down on hate speech in order to reduce violence, and in a (perhaps futile) effort to repair the frayed bond between French Muslims and the state.

The Charlie Hebdo attack and liberty-liberty tradeoffs

Terrorist attacks generate a familiar pattern in public debate. First, conservatives (and often middle-of-the-road types) argue that the government’s failure to stop the terrorist attack shows that counterterrorism policy is too weak. Then, liberals (and often other middle-of-the-road types) argue that we should not strengthen counterterrorism measures if doing so will sacrifice our civil liberties to security. This sets up a debate about security versus liberty. Typically, civil libertarians argue that there really is no tradeoff (an argument I have never understood), or (more plausibly but I think wrongly) that the government will inevitably put too much weight on security and not enough on liberty. An important subtheme, one that resonates with American historical experience and mythology, is that the people who put more weight on security are cowards who sell our liberties too cheaply.

Thus, the rhetoric. In truth, there is liberty on both sides of the equation. People who fear terrorist attacks lose some of their liberty as they avoid airplanes and public places; and the people who die in those attacks lose their liberty along with their lives. Nonetheless, it is undeniable that the civil libertarian position is understood to place greater weight on due process than on security, and that position has very powerful resonance in our society, perhaps because of distrust of the government.

The Charlie Hebdo attack has not followed this pattern for an interesting reason. The attack, both by design and in effect, was targeted at a liberty–freedom of expression. In this respect, the attack is unique among all the terrorist attacks since 9/11, none of which singled out freedom of expression as a target among all the western vices. The planned French crackdown on civil liberties thus sets up a clearer, harder-to-deny, liberty-liberty tradeoff: liberty from surveillance, arbitrary detention, and the like, versus liberty to speak one’s mind. It’s harder for a civil libertarian to argue that “mere” security is at stake, that principled people must oppose stricter counterterrorism measures.

This tradeoff has not yet received much attention, though it is implicit in the debate about whether Charlie Hebdo’s speech was really worth defending. Civil libertarians should ask themselves: if greater censorship in France made the French safer, with the result that they don’t need to give police greater surveillance and detention powers, would they be better off or worse off?

This is the most important policy question that has emerged from the attack. Why has no one asked it?

The Supply Side of Compliance with the WTO


A guest post by Rachel Brewster and Adam Chilton:

One of the primary questions studied by scholars of international law is whether countries comply with their international legal commitments. For example, scholars study whether countries comply with treaties they have signed that regulate the conduct of war or mandate the protection of human rights.

In most of these studies, the focus is on assessing whether the national government of a given country complies with some obligation. Of course, national governments are comprised of many institutions and, depending on the obligation, different institutions must take actions to comply with international law. A topic that has received little attention, however, is how the likelihood that the country will comply with international law is affected by which particular institution is required to take action on behalf of the national government.

In a recently published paper, Supplying Compliance: Why and When the United States Complies with WTO Rulings (available here), we argue that when the United States loses trade disputes, the particular domestic institution required to act is an important predictor of whether (and when) the U.S. will comply with the ruling. In fact, it was the most important factor.

Our paper empirically studies this topic by analyzing compliance with legal challenges brought against the United States at the World Trade Organization (WTO). The WTO allows members to bring disputes against other members that arguably aren’t complying with various trade agreements. Since the WTO was created twenty years ago, countries from around the world have brought over one hundred cases against the United States. In cases that the United States has lost, different branches of government have been required to act to cure the violations. For example, the President responded to some complaints by issuing executive orders and Congress responded to other complaints by amending sections of the tax code.

The dataset we built for our project includes all WTO complaints made against the United States before 2012. For each complaint, we tracked down the policy changes that the United States made after the dispute. We also collected data on the characteristics of the countries that filed the complaints and the topics of the disputes. After controlling for a number of variables, we found that the United States was more likely to comply (and to comply more quickly) with a WTO decision when the executive alone could bring the country into compliance than when Congressional action was needed.

Although there are a number of confounding factors that may influence this result, as well as limits to the generalizability of our findings—both of which we discuss in the paper—we think these results suggest that having a complete understanding of compliance with international law requires paying more attention to the specific domestic institutions that are involved.

The case for Uber-regulation

I make the case in Slate, which is that the market for short-term, on-demand car rides is inherently monopolistic. That is in fact why taxi regulation exists, and always has, virtually everywhere. The Slate piece arose from some initial thoughts in this blog post, further stimulated by Ilya Somin’s criticisms of that post. One point of disagreement centers around how to interpret people who consent to and then complain about surge pricing. Somin thinks they are irrational. I think they are reasonably concerned that they are being overcharged. The underlying problem is the high cost of search in this market, as explained in the Gallick & Sisk JLEO paper I cite. There is an interesting sense in which Uber’s disruption of the taxi market replays an earlier disruption in the 1920s when mass-produced automobiles threatened to unravel taxi pricing with the introduction of part-time drivers who skimmed off the best fares.

Tyler Cowen on quadratic voting

Cowen believes that QV would encourage extreme preferences:

I would gladly have gay marriage legal throughout the United States.  But overall, like David Hume, I am more fearful of the intense preferences of minorities than not.  I do not wish to encourage such preferences, all things considered.  If minority groups know they have the possibility of buying up votes as a path to power, paying the quadratic price along the way, we are sending intense preference groups a message that they have a new way forward.  In the longer run I fear that will fray democracy by strengthening the hand of such groups, and boosting their recruiting and fundraising.  Was there any chance the authors would use the anti-abortion movement as their opening example?

There are two possible interpretations of this argument.

First, QV would encourage people with extreme preferences to engage in activities that are disruptive of democracy. But the opposite is more likely the case. The problem with one-person-one-vote majority rule is that minorities are shut out unless they can organize. This is why minority groups so often resort to civil disobedience, protest marches, strikes, and boycotts. They can vindicate their preferences in the political arena only by making life miserable for the majority. By contrast, QV allows them to vindicate their intense preferences, and in such a way that partly compensates the majority.

Maybe a more attractive version of this argument is that people with intense preferences would lose the incentive to try to persuade the majority to agree with them. Under QV, they can pay off the majority instead. But the difference between the two systems is marginal along this dimension. The cost of buying votes rises very quickly (ten votes cost one hundred times as much as one); so if persuasion can be effective, then minorities will adopt that strategy instead or (more likely) in conjunction with voting.

Second, Cowen might believe that QV would actually change people’s preferences, causing moderate people to become extreme. There are no good theories about how preferences change, so it is hard to evaluate this claim. Perhaps his idea is that under ordinary voting systems people with extreme preferences who are always outvoted somehow become persuaded that their preferences are wrong and drop them. Maybe. But it is just as likely that they give up on a political system that disregards their deepest commitments and search for extra-legal or disruptive means to vindicate them.

We can’t wish away people with intense preferences, and shouldn’t want to. Indeed, nearly everyone has intense preferences along different dimensions; that is why there is a sense in which our rights-based system, which provides judicial protection to minorities with intense preferences under certain conditions, is supported by the majority. But QV provides a better way to incorporate intense preferences into the social welfare function.

Is opposition to Uber’s surge pricing irrational?

Everyone says it is. Here, for example, is a representative statement from Ilya Somin. The argument is just that unregulated markets are efficient and therefore price caps are inefficient. Somin concludes that everyone opposed to surge pricing is irrational or ignorant.

I don’t oppose surge pricing but I don’t think Somin is right.

1. Although a practice may be efficient, it doesn’t follow that everyone is made better off by it. People rationally oppose surge pricing as long as they value the dollar savings resulting from a price cap more than the extra time they spend waiting for an Uber car or taxi to show up. These people oppose an efficient practice that happens to harm them. What’s so irrational about that? In fact, the contrary view would be irrational.

2. When surge pricing comes into effect, there is an undersupply of cars. This means that Uber has market power. Taxis can’t raise their prices, and Lyft apparently won’t match Uber’s price above a threshold. This means that for the class of people who will pay the surge price, there is no meaningful competition. It doesn’t take much imagination to believe that Uber–which takes infantile pride in its ruthlessness–will charge a price that eats up as much of the consumer surplus as possible, in the process pricing some passengers out of the market.

This is not an argument for banning surge pricing or even regulating it. It’s possible that Uber’s monopoly profits will bring in additional competition, or that the threat of additional competition keeps Uber in line, and that’s what we usually assume in antitrust law, so we let the Ubers of the world charge supracompetitive prices except in egregious circumstances. But people’s efforts to shame Uber into lowering its surge prices may not only be in their self-interest; they may serve social welfare as well.

3. Everyone thinks of Uber as an app that allows drivers and passengers to match. But also think of Uber as an efficient way of cartelizing drivers and obtaining and analyzing the data of passengers so as to maximize revenues. (Can Uber determine your price sensitivity based on past trips? Probably. Does it exploit that information by varying prices by passenger type? Maybe not. Yet.) True, Uber is a better cartel than the taxis. True also, many Uber drivers did not offer services before Uber organized them. But not everything Uber does is by definition a good thing.

Quadratic voting

Glen Weyl has uploaded a new version of his paper, Quadratic Voting (written with Steven Lalley), to SSRN, which now includes the completed proofs. Quadratic voting is the most important idea for law and public policy that has emerged from economics in (at least) the last ten years.

Quadratic voting is a procedure that a group of people can use to jointly choose a collective good for themselves. Each person can buy votes for or against a proposal by paying into a fund the square of the number of votes that he or she buys. The money is then returned to voters on a per capita basis. Weyl and Lalley prove that the collective decision rapidly approximates efficiency as the number of voters increases. By contrast, no extant voting procedure is efficient. Majority rule based on one-person-one-vote notoriously results in tyranny of the majority–a large number of people who care only a little about an outcome prevail over a minority that cares passionately, resulting in a reduction of aggregate welfare.
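The mechanism can be sketched in a few lines of Python. (This is my own toy illustration with hypothetical numbers, not the formal model in the Lalley & Weyl paper; the function name and scenario are invented for the example.)

```python
# Toy quadratic voting on a yes/no proposal: each voter buys a signed number
# of votes (+ for yes, - for no) and pays the SQUARE of the votes bought;
# the pot is then refunded to all voters on a per capita basis.

def quadratic_vote(vote_counts):
    """vote_counts: list of signed vote purchases, one entry per voter.
    Returns (outcome, per-capita refund)."""
    total_paid = sum(v * v for v in vote_counts)   # each voter pays votes^2
    refund = total_paid / len(vote_counts)         # pot split equally
    outcome = "pass" if sum(vote_counts) > 0 else "fail"
    return outcome, refund

# Nine voters mildly opposed (1 vote each, cost 1 apiece) face one voter who
# cares intensely and buys 10 votes (cost 100). The intense minority prevails,
# and the per-capita refund partly compensates the outvoted majority.
votes = [-1] * 9 + [10]
outcome, refund = quadratic_vote(votes)
# outcome == "pass"; refund == 10.9, so each opposing voter paid 1
# and receives 10.9 back.
```

Note how the quadratic cost does the work: expressing an intense preference is possible but increasingly expensive, so voters reveal roughly how much they care rather than simply which side they are on.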

The applications to law and public policy are too numerous to count. In many areas of the law, we rely on highly imperfect voting systems (corporate governance, bankruptcy) that are inferior to quadratic voting. In other areas of the law, we require judges or bureaucrats to make valuations while knowing they are not in any position to do so (environmental regulation, eminent domain). Quadratic voting can be used to supply better valuations that aggregate private information of dispersed multitudes. But the most important setting is democracy itself. An incredibly complicated system of institutional self-checking (separation of powers, federalism) and judicially enforced constitutional rights try to correct for the defects of one-person-one-vote, but do so very badly. Can quadratic voting do better? Glen and I argue that it can.

Debate with Kenneth Roth about human rights treaties

You can read the debate here. Since having a blog means always having the last word, I add a few further responses to his last entry.

1. I don’t actually advocate the repeal of human rights treaties. It is enough to ignore them, or even just to recognize that they allow almost unlimited discretion because violation of them is unavoidable.

2. Ken argues correctly that the mere fact that a law (for example, the law against murder) is frequently violated is not an argument for repealing the law. But that’s not my argument. I think governments frequently violate human rights law for good reasons–having to do with the limits of their capacity and the rigidity of the law. I don’t think anyone has a good reason to violate laws against murder or rape.

3. Ken recounts a number of anecdotes where he says that treaty ratification led to a change in the behavior of states. I never claim that literally no one pays attention to specific treaty obligations, and several of his examples (the landmine treaty, the European Convention, and so on) go beyond the scope of my claims, which are restricted to the universal human rights treaties. Beyond that, while his anecdotes are compelling, I have seen too many examples of anecdotal arguments falling apart on close inspection to be willing to take them at face value.

4. Finally, in the battle of reductios, Ken argues that I must believe that countries should be permitted to enslave their workers because I reject economic rights embodied in the ICESCR. My actual argument is that if the ICESCR is interpreted as giving migrant workers in Qatar western-style employment rights, that could very well hurt many more people than it helps. In actual fact, the ICESCR is ambiguous, so it is HRW that is urging Qatar to recognize minimum wages or collective bargaining rights. Will this improve the lives of most workers or end up grievously harming many workers because of a reduction in the demand for labor? What bothers me is that HRW thinks or pretends that it knows the answer to this question, but it doesn’t.

Is the “norm” or taboo against torture dead (continued)?

As I noted a few days ago, Christopher Kutz argues that the anti-torture norm is (or might be) dead. Another way of putting this claim is that the longstanding taboo against torture has lapsed. A practice is taboo if not only the practice itself but open debate about it is forbidden. Anyone who challenges the taboo will be regarded as tainted or contaminated, as outside the community. Contrary to what we like to think, hundreds of taboos flourish in American society, as many I’m sure as in any of the tribal societies studied by the early anthropologists from whom the term was adopted. Our taboos surround not only religion, but also race, gender relations, and the treatment of children. Free speech is firmly entrenched in the law, but anyone who thinks that one can speak freely about these topics without risking significant social sanctions hasn’t been paying attention. Taboos constantly change (many sexual taboos have lapsed, just in the last few decades), but while they prevail they are extremely powerful.

The process by which taboos break down is mysterious; Kutz doesn’t really explain why the torture taboo has eroded, if it has. At least part of the explanation must lie with technological change that causes people to question traditional prohibitions. The invention of modern forms of birth control made many of the taboos surrounding sex, which may at some earlier time have been broadly functional (in the sense of protecting people from the burdens of unwanted children or quelling social conflict), seem nonsensical. Yet the erosion of those taboos (not yet complete) was complicated. People had to be motivated to challenge the taboos and endure social sanctions. Sexual desire is a potent motivation, and eventually the arguments could not be ignored. But if there is no strong incentive to challenge taboos–as may be the case with taboos that don’t ban behavior anyone really wants to engage in (like cannibalism)–then they are likely to persist.

If the torture taboo is eroding, then the explanation must be something other than technological change. The torture technologies used by the CIA are decades, even hundreds of years, old. And as is common with many taboos, the prohibition was never complete–the U.S. government has committed torture before (just as incest takes place despite the incest taboo); what’s new is that torture is openly discussed by some people as a legitimate policy option. In the 1990s and earlier, the U.S. engaged in torture through proxies, and no one talked about torture used by American combat soldiers in wartime. What seems to have happened is that an unusual configuration of events–the 9/11 attack, the earlier enactment of torture laws that forced the CIA to seek legal cover through a Justice Department opinion, relatively new norms of government openness, and so on–forced torture out into the open, where it could no longer be ignored.

You can see the persistent taboo-like character of torture in the debates surrounding the CIA’s interrogation practices. Many of the critics feel compelled both to argue against torture (“it doesn’t work,” “it violates our values”), and to argue that this argument is unnecessary because torture is plainly wrong or off the table (“it’s not who we are”). But the mere making of the first argument, which often requires elaborate claims about how institutions work, contradicts the second. Torture (unlike, say, cannibalism or incest) then becomes a matter of debate, perhaps like any other policy. The real force of the much-derided ticking time-bomb hypothetical is not that it provides a policy justification for institutionalized torture, but that it explodes the taboo. If you agree that torture may be acceptable in this setting, then you can argue against its expansion to less extreme scenarios only by making complicated empirical and institutional arguments that can be debated by people who have different intuitions.

I wonder whether the prosecutions that the CIA’s critics desire would have the perverse effect, even if they are successful, of further unraveling the taboo. In a court of law, defense lawyers will argue that their clients acted reasonably, and to do so, they will elicit testimony that some interrogation practices that amount to torture are actually effective. Whether or not this testimony is persuasive, the mere fact that it is introduced and debated will help remove torture from the realm of the taboo. As with so many (actually nearly all) police practices, there is just no reliable evidence of efficacy one way or the other, and in such cases courts tend to defer to the judgment of experts. Whatever the outcome of the prosecutions, the efficacy of torture becomes merely an empirical question, deserving of further study perhaps, one about which reasonable people may differ–in which case it can’t be taboo.

This is, I think, what happened in the gay marriage cases, which helped destroy another taboo that until recently was extremely powerful. The importance of the evidence introduced in those cases was not so much that it supported the case for same-sex marriage but that it showed the question of same-sex marriage is an empirical one. Once empirical doubts are recognized, they cannot displace powerful equality norms.

Vermeule replies to Baude: A Pre-Chevron mind?

From Adrian Vermeule:

Thanks to Will Baude for his thoughts on our paper project (see here and here for our puzzles and conjectures). It’s interesting that the proposal for judges to take into account the votes of other judges provokes a kind of instinctive resistance. But it’s not clear what exactly the objection is. Some possibilities:

(1) Will seems implicitly to assume that “textualists” and “purposivists” inhabit different methodological universes, so that judges in one camp would obtain no information from considering the views of judges in the other. That’s not how interpretation works, however. Purposivist judges are certainly interested in text and canons, in part because those things supply evidence of the purposes that a reasonable legislator might have. Conversely, many textualist judges, like Holmes, have been willing to examine legislative history and other extra-textual sources as evidence that might shed light on the ordinary meaning of text.

But even when textualist judges eschew legislative history altogether, that does not mean there is no overlap between their approach to interpretation and that of purposivist judges. Schematically, it is not the case that textualist judges consider sources or arguments {A, B, C} while purposivist judges consider sources or arguments {D, E, F}. Rather, closer to the truth is a schema in which textualists consider {A, B, C} while purposivists consider {B, C, D}, or even {A, B, C, D}. This implies that judges in both camps will often gain relevant information — relevant even on their own theories — from observing the votes of other judges, even judges in other camps. And, of course, most judges are not theoretical at all, and just consider all sources and arguments in a sort of promiscuous jumble.

(2) Will also seems to think it important that judges in each camp think their own theory “correct” (Will’s italics). Under the Chevron framework, however, even if I think I am correct, the question I have to answer is whether I think the other person’s view is not only wrong, all things considered, but is actually unreasonable. The whole point of Chevron is to create space for that distinction. It is a symptom of a pre-Chevron mind (sub-Chevron mind?) to conflate these two questions, assuming that if my view is correct, yours must be beyond the pale. There is an interesting, under-explored question whether Chevron implies that agencies should have a kind of meta-discretion to choose among reasonable theories of interpretation. But the fact that proponents of competing views think their views correct will not help us figure that out.

(3) Yet another question, which we flagged in our opening posts, is whether and under what conditions it is systemically desirable for a given judge to take any of this information into account. We think that is the critical question for the paper, which will attempt to sift out the conditions under which it is or is not desirable. Will points out that sometimes it is better for decisionmakers not to attempt to consider all available information; certainly that is true. But he seems to assume that throwing away this particular category of information is necessarily desirable in all settings. His confidence in that approach seems to outrun the available evidence and theory, as far as we can see. It’s an interesting puzzle why our proposal provokes such a reaction.

Kirkland and Ellis Distinguished Service Professor, University of Chicago Law School