All posts by Eric Posner

Is Brett Kavanaugh an originalist?

If there is one thing that Kavanaugh’s critics and most ardent supporters agree on, it is that he is an “originalist,” someone who interprets the Constitution according to the public understanding of it at the time of ratification (and, in the case of amendments, adoption).

But there is, in fact, no evidence—at least, none I can find—that Kavanaugh considers himself an originalist. At the White House, he says only “a judge must interpret the Constitution as written, informed by history and tradition and precedent”—a standard line that could be given by anyone at all. In a video, he is asked point blank about his originalism, and he simply fails to answer. Instead, he talks about interpretation of statutes (which is not what he was asked), and only at the very end says this about the Constitution: “you start with the text but there are whole bodies of precedent on all of these areas or most of all of these areas of constitutional interpretation.” Not much of an answer, and certainly not a ringing endorsement of originalism.

In fact, in his writings, Kavanaugh hardly mentions originalism at all. A textualist, yes. An enthusiastic fan of Justice Scalia, yes. But also a fan of William Rehnquist, no one’s idea of an originalist.

(For people who don’t follow legal debates, a “textualist” is someone who, when interpreting statutes, places primary weight on the normal meanings of the words, rather than on legislative history, the purpose of the statute, public policy, etc. A textualist is not necessarily an originalist; indeed, the two ideas are in tension, as the originalist tends to fall back on constitutional purposes as reflected in the contemporary public debate because the constitutional text is so often vague.)

What has Kavanaugh (as opposed to his supporters and critics) said on the topic? In his judicial opinions, next to nothing. He has cited The Federalist Papers, a favorite of originalists, a few times. But I haven’t found an opinion in which he engages in serious originalist analysis. To be sure, as a Court of Appeals judge, Kavanaugh would have had few opportunities to do so, but it is still surprising that an originalist wouldn’t take the occasion, in a concurrence or dissent, to make originalist arguments in favor of changing doctrines that he believes the Supreme Court has gotten wrong. Kavanaugh, who has had some success in influencing the Supreme Court, would have been in an ideal position to do this.

Kavanaugh has discussed originalism in two lectures he published in the Notre Dame Law Review. They are:

Our Anchor for 225 Years and Counting: The Enduring Significance of the Precise Text of the Constitution, 89 Notre Dame L. Rev. 1907 (2014)

Two Challenges for the Judge as Umpire: Statutory Ambiguity and Constitutional Exceptions, 92 Notre Dame L. Rev. 1907 (2017)

The articles make an interesting pair. In the 2014 lecture, Kavanaugh takes a Hugo Black-like stance on the Constitution, arguing that “one factor matters above all in constitutional interpretation and in understanding the grand sweep of constitutional jurisprudence—and that one factor is the precise wording of the constitutional text.” He then discusses a few separation-of-powers cases which, he claims, were resolved entirely by the constitutional text. He admits that some clauses of the Constitution (“due process,” “equal protection”) are hard to parse. “But in far fewer places than one would think.” Thus, he argues, a “textualist” approach to the Constitution is the best one. Whether or not he thinks this is the same as originalism is not clear. Perhaps you think it might be, but this is not the end of the story.

By 2017, Kavanaugh seems to have had second thoughts. In truth, a great many separation-of-powers cases were not decided on textualist grounds; scholars have often scratched their heads about why the Court sometimes uses a kind of textualist approach (the cases that Kavanaugh discussed) and sometimes not (cases he did not mention). Maybe Kavanaugh realized this problem, and sought to address it head on. In the second piece, he turns his attention to the vaguer clauses of the Constitution. He acknowledges up front that it is impossible to interpret them in a textualist fashion (which by now he calls “absolutist”):

No exceptions [to the First Amendment prohibition on restriction of “freedom of speech,” as applied by the Fourteenth Amendment to the states] would mean no libel laws and no defamation laws. Threats and incitements would be protected under the First Amendment. Traditional state restrictions on speech could be wiped away if the rights were absolute and incorporated such that they applied against both federal and state governments. And no one was prepared to do that, and no Justice has ever advocated such an approach, as far as I know. Indeed, even Justice Scalia—the foremost textualist and originalist—did not hold that view. No one—and I mean no one, not even Justice Black—articulates that view of the First Amendment.

So what does that mean? It means that there are exceptions to constitutional rights. But how do we determine what the exceptions are? And there it is. That’s the battleground. That’s the difficulty. That’s the threat to the rule of law as a law of rules. That’s the threat to the judge as umpire.

Kavanaugh goes on to acknowledge (unlike in his first piece) that the text of the Constitution has not determined the Supreme Court’s jurisprudence:

In all of these examples, what I want to emphasize is that the exceptions here are ultimately a product of common-law-like judging, with different Justices emphasizing different factors: history and tradition, liberty, and judicial restraint and deference to the legislature being three critical factors that compete for primacy of place in different areas of the Supreme Court’s jurisprudence articulating exceptions to constitutional rights.

So what to do? To an originalist, the answer would be plain. You throw out all the cases that depart from the original meaning. Of course, some originalists say we should give weight to precedent, but this admission is largely fatal to the originalist enterprise, or at least the version that Kavanaugh addresses, since putting weight on precedent involves the subtle questions of balance that Kavanaugh wants to avoid.

Kavanaugh’s answer? “I do not have all the answers to those problems. But we should identify and study these issues.”

Whatever he is, he’s not an originalist, at least not by self-identification. Not yet, anyway.

[Update. A correspondent drew my attention to two opinions that, he argues, show that Kavanaugh is an originalist: the dissents in Free Enterprise Fund v. Public Co. Accounting Oversight Bd. (2008) and Heller v. District of Columbia (2011). I can see the argument that Kavanaugh’s opinion in FEF is originalist in approach. I don’t think his analysis is correct (the founding-era understanding was that the executive should have some degree of responsibility over employees in the executive branch, but it was not clear how much), but that’s not the issue. In Heller, Kavanaugh said that he was following Supreme Court precedent, and I wouldn’t put much weight on a footnote in which he says “post-ratification adoption or acceptance of laws that are inconsistent with the original meaning of the constitutional text obviously cannot overcome or alter that text.” He then cites Marbury v. Madison and Brown v. Board of Education, which I would prefer to chalk up to a bit of judicial trolling.]

The Decline of Supreme Court Deference to the President, Trump edition

The graphic comes from the New York Times, based on data from a paper written by Lee Epstein and me, plus new data for Trump. The Trump data confirms the trend that we discovered in our (pre-Trump) paper: the decline in the win-rate for presidents before the Supreme Court since Reagan. The question is what accounts for it. Our two preferred hypotheses are the growth of the private Supreme Court bar, and an increase in the self-confidence of the Court. Will the trend continue after Trump’s next appointment?

Radical Markets and cognitive load

One frequent response so far is skepticism that ordinary people can handle the cognitive load some of our proposals would impose on them. Here’s Ryan Avent, for example:

[T]he book seems to dramatically underestimate the cognitive load that is likely to be associated with its proposals, and the likely resistance to those programs on that basis. The authors reckon that apps can be used to make management of these markets as easy as possible. Even so, they are asking people to begin thinking in market-oriented ways about lots of things which don’t currently require such thinking. That, after all, is the point: that aggregating the considered, distributed reasoning of lots of people is likely to produce better outcomes. But contributing to that considered, distributed reasoning is a pain; even if it can all be done on an app, you have to sit, and weigh your actions, and worry that you made an error of judgment. To give just one example: Uber has become far more pleasant to use since surge pricing went away. The system “worked better” in some sense, when riders and drivers had to think harder about how much they actually valued the trip. But that thinking was itself a cost of the service.

This is an important point, and I agree with the Uber example [N.B, updated: actually, I don’t; the airline example below is a better one, as Uber continues to adjust pricing but in a more obscure way]. The same point can be made about the way airlines package and disaggregate different aspects of the service in their pricing decisions. But the problem turns out to be trickier than it first seems. Our COST proposal, for example, requires people to estimate the value of (say) their home, but people have to do that anyway when they sell their home, and also when they buy a home in the first place—and, at least in principle, when they take out mortgages, plan for their retirement, rent out space, etc. And while the COST requires repeated valuations over time, which may enhance the cognitive load for the possessor, this also means that the cognitive load is reduced for all potential buyers, who no longer need to bargain with sellers.
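To make that trade-off concrete, here is a toy one-period model of the COST mechanism as described above. The tax rate, the home value, and the one-shot setup are my own illustrative assumptions, not figures from the book:

```python
# Toy one-period model of a COST (common-ownership self-assessed tax).
# All parameters are illustrative assumptions, not figures from the book.

TAX_RATE = 0.07        # annual tax charged on the self-assessed value
TRUE_VALUE = 300_000   # what the home is actually worth to its possessor

def expected_payoff(declared, buyer_prob):
    """Possessor's expected payoff from posting a self-assessed price.

    The tax is owed on the declaration regardless. If a buyer arrives
    (probability buyer_prob) willing to pay the posted price, the sale is
    automatic: the possessor pockets `declared` but gives up an asset
    worth TRUE_VALUE.
    """
    tax = TAX_RATE * declared
    sale = buyer_prob * (declared - TRUE_VALUE)
    return TRUE_VALUE + sale - tax

def buyer_can_purchase(declared, offer):
    """Buyers face no bargaining at all: meet the posted price and it's yours."""
    return offer >= declared
```

Under-declaring cuts the tax bill but invites a below-value forced sale; over-declaring protects possession but raises the tax. That single trade-off is the whole of the possessor’s cognitive load, while the buyer’s side of the transaction reduces to a price comparison.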

In the case of quadratic voting, the cognitive load is reduced in a more direct way. Because QV effectively allows people to trade political influence over different domains, it allows me to focus on the issues I care about (say, data privacy), or the campaigns that matter to me, or the geographic unit of politics I’m comfortable with. I may be deeply immersed in, and affected by, decisions of the local school board, and I can (implicitly) trade for influence over it with someone who cares much more about the identity of the next president. You might think of it this way: we are all currently generalist producers of democratic outcomes, where QV allows division of labor and specialization in a natural, decentralized way.
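The specialization point falls out of the arithmetic of QV itself. In the standard formulation, casting n votes on an issue costs n² voice credits; a quick sketch with my own toy budget and issue names:

```python
# Quadratic voting sketch: n votes on an issue cost n**2 voice credits,
# so a fixed budget forces each voter to choose where to concentrate.
# The budget and issue names below are illustrative.

def qv_cost(votes):
    return votes ** 2

def credits_left(budget, allocation):
    """allocation maps issue -> votes cast; returns unspent credits."""
    return budget - sum(qv_cost(v) for v in allocation.values())

# With 100 credits, a specialist casts 10 votes on the one issue she knows:
specialist = credits_left(100, {"school_board": 10})              # -> 0
# A generalist spreading 5 votes over four issues spends the same budget:
generalist = credits_left(100, {"a": 5, "b": 5, "c": 5, "d": 5})  # -> 0
```

The convexity is the whole mechanism: the tenth vote on a single issue costs 19 credits at the margin, the same as casting a first vote on nineteen separate issues, so influence flows toward the voters who care most.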

Won’t we all be worse citizens then? The current system of democracy puts a massive cognitive load on all of us—we are expected to be informed about literally everything—all issues, national, state and local, dozens of candidates, etc., so that we can vote wisely and responsibly in countless elections and referenda. Of course, nearly all of us duck this load by remaining massively ill-informed, just as consumers do when confronted with complicated products and services. I suppose when democracy was first proposed, someone must have said—“you must be crazy: how are people going to bear the cognitive load?” Or the ancient Greek equivalent.

But I think the better way to think about this is to start with the general problem: all the goods and services in the economy must be allocated somehow. If we gave the task to a central planner, the cognitive load would be far too great for any single person. Central planners of the past tried to solve this problem by creating vast bureaucracies, enabling division of labor and specialization. But as von Mises, Hayek, and others pointed out long ago, the cognitive load (or what economists came to call information costs, but is better understood in psychological terms, I think) is best distributed among all citizens via the mechanism of the market. The more broadly shared the burden, the more easily it is borne by individual citizens.

What they didn’t establish is that the market institutions of their time were a superior bearer of the aggregate cognitive load than possible alternative market institutions, as their focus was the critique of central planning. But having accepted their critique, the next question is: what market design does the best job of distributing the cognitive load among the most people, and among the people best able to bear it?

Every argument is two arguments, or the rise of private-sector censorship

The first argument is the argument about policy; the second argument is about whether the first argument is permissible.

Thus, for Ezra Klein and Sam Harris, the first argument is the argument about the relationship between race and intelligence, and the second argument is about whether Sam Harris is allowed to interview Charles Murray about the relationship between race and intelligence. For Michelle Goldberg, the first argument is about whether women and men can work together in the workplace, and the second argument is about whether it is appropriate to debate Jordan Peterson about whether women and men can work together in the workplace. For Robin Hanson and Jordan Weissmann, the first argument is about whether sexual opportunities should be redistributed, and the second argument is about whether Hanson should have been allowed to make the first argument. For Jennifer Finney Boylan, the first argument is about the treatment of trans people and the second argument is about whether the treatment of trans people may be debated. In many cases, the second argument is couched in terms of the permissible ways of making the first argument (Hanson/Weissmann), including who should be allowed to make the first argument (Klein/Harris), rather than whether the first argument is permissible at all, but ultimately it amounts to the same thing.

It’s enough to make your head spin, but the pattern is pretty clear. Person #1 makes an argument. Person #2 might criticize the argument on the merits, in which case we have a rare single-argument encounter. But with increasing frequency, Person #2 says (in essence) that Person #1’s willingness to make the first argument shows that Person #1 is a “creep,” or a fascist, or is acting in bad faith, which is always meant to imply that Person #1 should be fired from his job, or deprived of a public forum, or not taken seriously—should be publicly shamed. Moreover, because of the vagaries of language and the nature of the current hothouse political and cultural climate, it is easy for Person #2 to mistakenly (or deliberately) believe that Person #1 made the second argument even if Person #1 did not, which leads Person #2 to make the second argument about Person #1, namely that Person #1 is trying to censor and shame Person #2 and for that reason Person #1 should be fired, or deprived of a public forum, or not taken seriously (see Harris/Klein). Sooner or later, someone will make the second argument.

Because our government doesn’t censor people, second-argument makers have sought speech restrictions from the private sector, and have succeeded in many ways:

1. Universities, which have increasingly enacted speech codes.

2. Employers, which have cracked down on dissenters.

3. Social media companies, from Facebook to Twitter, which have imposed numerous restrictions on content.

4. Even content sellers, like Spotify, have gotten into the act.

Failing all that, shame campaigns on social media may sometimes lead to 1, 2, 3, or 4.

While not all of this is new (employers have always regulated speech in workplaces), what is new is the contribution of technology, which has raised the stakes for everyone, and given tech giants quasi-monopolies over the public sphere, and ideological shifts, which have produced polarization (at least relative to the last 20-30 years). You can think of the second-argument phenomenon as the result of ideological polarization amplified by technological advances in communication.

Keeping expectations low

More Radical Markets. Glen on TV. Glen on radio (okay, “podcast”). Our screed against economics in the Chronicle. Nice reviews in the Economist (“arresting if eccentric”) and the Irish Times (“It made my head hurt, and then spin”). The Irish Times review was written by a government minister. Can you imagine an American politician admitting he has read a book, let alone reviewing one? Well, there’s this (“the key for me is to keep expectations low,” which is either self-refuting or self-reinforcing, or possibly both).

The cryptocurrency (and more broadly, blockchain) governance conundrum

The enthusiasm for cryptocurrencies, and blockchain more generally, derives from the sense, among enthusiasts, that an old dream is within reach, thanks to advances in computer technology: governance without (human) discretion. Consider our system of property and contract rights. Many people think these bodies of law allow us to organize our affairs in a fair and efficient way, and that the law can be reduced to a set of relatively simple rules that can, in principle, be applied mechanically. The frustration and tragedy is that we must rely on human beings—judges and other bureaucrats—to apply the law, and these human beings unavoidably make errors and, worse, may be biased or corrupt. The tension is even worse for more controversial areas of the law, such as tax law, where the suspicion that IRS officials enforce the law arbitrarily and with bias, illustrated by the Tea Party scandal, is widely held.

But if the laws could be enforced by computer, using open-source software that can be examined by anyone, the problem is solved. Hence the excitement about smart contracts, for example. But more than that: what is wanted is a system of law that is entirely self-contained—made, or at least approved, by the public rather than by politicians, and enforced automatically. This is, of course, the blockchain as applied to the (theoretically) self-enclosed system of currency.

But there is a problem—the governance problem. One source of perplexity is who designs the initial software program. What ensures that he or she designs a program in the public interest? Rousseau was so stymied by this problem in the context of political theory that he believed that the lawgiver must be seen as divine. Satoshi Nakamoto, who helped get Bitcoin started by shrouding his identity and achieving semi-mythical status, knows his Rousseau!

But even if this initial problem is solved, the second and more interesting problem is how the law (code) can be updated as events change. In the case of bitcoin, miners who collectively hold more than 50% of computer power can change the protocol. But this creates obvious incentives toward consolidation, which appears to have taken place. Bitcoin is a plutocracy. Other cryptocurrencies keep power in the hands of the founders and designated agents, creating, in other words, an oligarchy.

If we try to imagine democratic alternatives, we run into significant problems. It is tempting to think that we could give every (say) bitcoin user one vote. But bitcoin users are anonymous; the network can’t trace transactions to a single user (since a single user can use different keys for each transaction). Perhaps in a different version of the protocol, a vote could be attached to a bitcoin, so whoever happens to possess a bitcoin (or fraction) at any time could vote that bitcoin (or fraction). Still, the wealthy would possess most of the bitcoins and hence most of the political power, and strong incentives would exist to accumulate bitcoins in order to maximize power.
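A toy tally makes the plutocracy point concrete. This is a hypothetical coin-weighted voting scheme with made-up balances, not Bitcoin’s actual protocol-change process:

```python
# Hypothetical coin-weighted governance: one coin, one vote.
# Addresses and balances are invented for illustration.

def coin_weighted_tally(holdings, ballots):
    """holdings: address -> coins; ballots: address -> vote on a proposal."""
    yes = sum(holdings[a] for a, v in ballots.items() if v)
    no = sum(holdings[a] for a, v in ballots.items() if not v)
    return yes > no

# Two large holders outvote ten thousand small ones:
holdings = {"whale1": 600_000, "whale2": 500_000}
ballots = {"whale1": True, "whale2": True}
for i in range(10_000):
    holdings[f"user{i}"] = 100     # 10,000 users x 100 coins = 1,000,000
    ballots[f"user{i}"] = False

passes = coin_weighted_tally(holdings, ballots)  # True: 1.1M yes vs 1.0M no
```

A per-person tally over the same ballots would fail 2 to 10,000; the same proposal passes or fails depending entirely on how votes are denominated.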

A truly democratic cryptocurrency would be one in which all people (presumably, in the world, since national boundaries mean nothing in cyberspace) would possess equal voting power, including people with few coins or none at all.

Global voting seems scarcely imaginable, and also seems incompatible with anonymity. But put aside anonymity and imagine that every person in the world, or in the relevant jurisdiction if voting based on nationality or some other identity were possible, could vote on proposals to update the protocol. Formidable problems would continue to exist. Standard voting procedures—one-person-one-vote—have significant problems, and would be vulnerable to collusion. More sophisticated voting procedures—like quadratic voting—could make headway with this problem, but would still be vulnerable to the classic weakness of democracy: most people would be both incapable of, and uninterested in, making informed decisions on proposals to update a computer program, as the questions would involve technical issues of both finance and programming. Some kind of representative system might be necessary, but the representatives would necessarily have discretion, and be vulnerable to political influence, and we are back where we started.

For these and related reasons, I believe that the dream remains unattainable, at least in the near future, and with respect to currency supply and other public goods. A more realistic possibility is that relatively discrete blockchain systems would be governed based on users’ contributions in maintaining those systems, much as bitcoin is, but these systems would need to be narrowly tailored to some specific commercial problem and couldn’t be used for public goods, like currencies, that affect everyone. Even in the discrete case, however, discretion probably cannot be completely eliminated, as it is in games like chess. People seem to underestimate the extent to which even straightforward-seeming transactions, an interest-rate swap, for example, involve complex contracts with terms that are deliberately ambiguous so that they can be applied in a discretionary manner by judges if unforeseen circumstances arise. But the scope of discretion can be narrowed, and within the remaining field of decision-making, better or worse governance systems can be used. This is the area in which progress can be made.

Vitalik Buterin on Radical Markets

The inventor of Ethereum has much of interest to say about Radical Markets. An excerpt:

Radical Markets…could be best described as an interesting new way of looking at the subject that is sometimes called “political economy” – tackling the big questions of how markets and politics and society intersect. The general philosophy of the book, as I interpret it, can be expressed as follows. Markets are great, and price mechanisms are an awesome way of guiding the use of resources in society and bringing together many participants’ objectives and information into a coherent whole. However, markets are socially constructed because they depend on property rights that are socially constructed, and there are many different ways that markets and property rights can be constructed, some of which are unexplored and potentially far better than what we have today. Contra doctrinaire libertarians, freedom is a high-dimensional design space…All in all, I am optimistic that the various behavioral kinks around implementing “radical markets” in practice could be worked out with the help of good defaults and personal AIs…I particularly welcome the use of the blockchain and crypto space as a testing ground…Could decentralized institutions like these be used to solve the key defining challenges of the twenty first century: promoting beneficial scientific progress, developing informational public goods, reducing global wealth inequality, and the big meta-problem behind fake news, government-driven and corporate-driven social media censorship, and regulation of cryptocurrency products: how do we do quality assurance in an open society? All in all, I highly recommend Radical Markets…to anyone interested in these kinds of issues, and look forward to seeing the discussion that the book generates.

Sunstein wins Holberg prize

The Holberg is a kind of Nobel for fields not covered by the Nobel—arts, social sciences, law. Even before his contributions to behavioral science, everyone considered Sunstein one of the very top legal scholars by virtue of his contributions to constitutional and administrative law, among much else. His last 20 years of work, much but not all of it devoted to behavioral science and law, and Star Wars, has extended his influence across the galaxy. It is amazing to think that in his 40s, Sunstein was already a great scholar yet had not even begun the work that would prove his most influential. At this age, most scholars have settled into whatever groove will take them to their grave.

I tell aspiring legal academics to read Sunstein first. It is hard to replicate his brilliance and imagination, but it is possible, by reading him, to understand that even technical scholarship can be written in a fun, stylish, and engaging way. Everyone should imitate his efforts to reach out to people he disagrees with, and treat them with respect and decency.

Is the Stormy Daniels (alleged) hush agreement legally invalid?

I’m not going to teach the Stormy Daniels / “David Dennison” contract in my contracts class. Time is short. But there is some good stuff for law students to ponder—even an integration clause!

In the end, Daniels’ case is pretty weak.* In seeking a declaratory judgment that the “hush agreement” is invalid, she makes three, or possibly four, arguments.

1. No contract was formed because “Mr. Trump [the putative real identity of David Dennison] never signed the agreements. Nor did Mr. Trump provide any valid consideration. He thus never assented to the duties, obligations, and conditions the agreement purportedly imposed upon him” (para. 38).

2. The agreements “are invalid, unenforceable, and/or void under the doctrine of unconscionability” (para. 39).

3. The agreements are invalid because “they are illegal and/or violate public policy” (para. 40).

4. Elsewhere, in the factual recitation, she argues that because Trump (or actually his lawyer Cohen) breached the agreement by disclosing his payment of $130,000 to Daniels, Daniels is no longer obligated to keep her side of the bargain.

On #1, there is no legal requirement that Trump sign the agreement, as far as I know. He can simply consent to it through his lawyer. And even if there were, there’s no reason why the contract wouldn’t be valid as a deal between Daniels and Essential Consultants, the LLC apparently created for this purpose, with Trump as a third-party beneficiary. In other words, Essential Consultants pays Daniels to keep mum, and she agrees to keep mum. The $130,000 payment, whether from Trump himself or Cohen or EC, is valid consideration. Contract? Sure.

On #2, Daniels does not allege facts that make out a claim of unconscionability. She was represented by a lawyer in the negotiations, and she seemed, if anything, to be in the superior bargaining position. While she vaguely asserts that Trump and Cohen “aggressively sought to silence” her, she does not describe the aggression. If there was something amounting to unconscionability—significant threats, say—she would presumably have mentioned it. Moreover, the $130,000 payment would not likely be regarded by a court as unconscionably low.

On #3, public policy actually favors settlements. There has been some argument lately about whether non-disclosure agreements may violate public policy by hiding bad behavior, but the law, as far as I know, says otherwise.

On #4, Daniels would normally be entitled to damages if Trump/Cohen breached the agreement in this way. But, while I can’t say I’ve read every word of the agreement, virtually all the non-disclosure obligations are (not surprisingly) on Daniels’ part. Moreover, if, as Daniels argues, Cohen violated the non-disclosure provisions, it would seem that Trump (not Daniels) is the injured party, and that Trump would have a remedy against his attorney, rather than Daniels against Trump. Incidentally, Cohen is not listed as a defendant; and it does seem that Daniels alleges that Cohen’s disclosure was at the behest of Trump, suggesting that the breach was Trump’s rather than Cohen’s. Maybe, but the weakness of the contractual language will be a problem for Daniels; she would have difficulty showing that the disclosure was a material breach given that her major aim in the contract was apparently the monetary payment, which apparently took place; and it’s also not clear that she can prove she was injured by the disclosure.

[Additional thought: however, it is my impression that a court might refuse to enforce a non-disclosure agreement if the protected party did not take reasonable efforts to maintain the confidentiality of the information. The question then becomes why Cohen revealed the information he revealed, and whether Trump really ordered him to. If Cohen did so on his own, then that won’t invalidate the agreement, though it may diminish the harm from further disclosures, depending on what they are.]

Of further interest, the contract includes a $1 million liquidated damages clause that benefits Trump alone. Law students: is this clause enforceable or illegal as a penalty? I’m inclined to think the latter because the clause stipulates the same penalty for any violation, no matter how trivial. It is also large relative to the payment to Daniels.

*Under common law principles. Maybe there are idiosyncrasies of California law, especially in relation to the public policy claim, that I don’t know about.

What is “norming” and what does it have to do with regulation?

It’s a style of regulation that has received very little comment. It may be troubling. You can read an article I wrote about it, with Jonathan Masur. Abstract below.

How do regulatory agencies decide how strictly to regulate an industry? They sometimes use cost-benefit analysis or claim to, but more often the standards they invoke are so vague as to be meaningless. This raises the question whether the agencies use an implicit standard or instead regulate in an ad hoc fashion. We argue that agencies frequently use an approach that we call “norming.” They survey the practices of firms in a regulated industry and choose a standard somewhere within the distribution of existing practices, often no higher than the median. Such a standard burdens only the firms whose practices lag the industry. We then evaluate this approach. While a case can be made that norming is appropriate when a regulatory agency operates in an environment of extreme uncertainty, we argue that on balance norming is an unwise form of regulation. Its major attraction for agencies is that it minimizes political opposition to regulation. Norming does not serve the public interest as well as a more robust standard like cost-benefit analysis.
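The mechanism the abstract describes is simple enough to state in a few lines of code. This is my own sketch of the idea with hypothetical practice scores, not anything from the article itself:

```python
# Sketch of "norming": survey firms' practices and set the standard
# somewhere within the observed distribution, often no higher than the
# median. The practice scores below are hypothetical.

def norming_standard(practices, quantile=0.5):
    """Choose a standard from within the distribution of existing practices."""
    ranked = sorted(practices)
    return ranked[int(quantile * (len(ranked) - 1))]

def burdened_firms(practices, standard):
    """Only firms whose practice falls short of the standard must change."""
    return [p for p in practices if p < standard]

scores = [2.0, 3.5, 4.0, 5.5, 7.0]           # e.g., pollution controls, scored
standard = norming_standard(scores)          # median -> 4.0
laggards = burdened_firms(scores, standard)  # [2.0, 3.5]
```

Notice what the choice of quantile does: a median standard burdens only the two laggard firms, whereas a cost-benefit standard could in principle sit anywhere, including above every firm’s current practice.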

Can It Happen Here?: Authoritarianism in America

The book, edited by Cass Sunstein, is out. Buy a copy and another one for a friend.

The book exemplifies a paradox of aggregation. Individually, most authors (including me) answer “probably not.” Yet read every essay and close the book, and the impression you’re left with is more like “uh-oh.” Perhaps the reason is that the authors bring so much energy and varied perspective to describing the threats to constitutional democracy, and draw on different historical and comparative sources that reinforce each other. However, when explaining why those threats are likely to be countered, the authors converge on a series of familiar observations about the strength of our institutions and political culture. With each repetition, these observations sound more like hollow reassurances than irrefutable truths. Well, villains are always more interesting than heroes.

What would a useful Twitter look like?

When I opened an account on Twitter several years ago, I naively made the following assumptions.

1. I could follow newspapers and other publications so I would get stories I’m interested in.

2. I could follow worthy charitable institutions, cultural organizations, and entertainment venues, which would keep me up-to-date about events and projects.

3. I could follow colleagues and other academics, journalists, commentators, and experts of various sorts, who would keep me informed about developments in areas of expertise I share or wanted to learn about. Also, specialized academic journals and institutions.

In fact, nothing worked out.

1. News organizations bombarded me with the same headlines over and over. How many times do you want to read “Longtime Trump attorney says he made $130,000 payment to Stormy Daniels” on your feed?

2. Worthy charitable institutions showered me with pleas for money. Cultural organizations sent redundant notices of events I did not want to attend. Restaurants I might visit twice a year sent me their menus every day.

3. Most people in category #3 did not actually alert me to developments in their area of expertise. When not ranting and raving, they sent me links to #1 and #2 along with snippets of text expressing their outrage and indignation. Meanwhile, the tweets of people who tweeted responsibly got lost in the deluge. When you think about it, if you tweet once a day, or a couple times a week, how likely is it that people who receive tens of thousands of tweets daily will see yours? They’d have to watch their feed all day long.

As the years passed, I realized I had filtered out nearly everyone and everything, whereupon I deleted my account.

So what would a useful Twitter look like?

I would like a Twitter that, like a good personal (human) assistant, sent me an email once a week with:

(a) News related to my interests that I might have overlooked. No headlines; I see those on my own.

(b) Cultural events and entertainment options I might actually want to attend. Once.

(c) The latest academic scholarship related to my interests—this is likely to be no more than 10-20 articles or books per week.

Twitter is an enormously inefficient method for accomplishing (a), (b), and (c). Currently, it’s much easier to do periodic Google searches. Maybe after more years of AI development, Twitter could give me what I want. But I’m not holding my breath. Generating content that people scan once a week is no way to make money. Twitter can’t work unless it can make itself an obsession.

The worthy tweet

My friend and colleague Will Baude writes that Twitter need not be a black hole or a planetary-scale hate machine. We, or at least we academics and public-policy types, can save Twitter by adopting three simple rules:

  1. Aspire to inform, not to convince.
  2. Promote the kinds of things you’d like to see more of.
  3. Don’t promote the kinds of things you’d like to see less of.

Here’s the problem with Will’s proposal: it is entirely at variance with Twitter’s goal, which is to make money by generating content that vast numbers of people will read.

Will is one of thousands of Twitter’s unpaid laborers. Because Twitter pays Will nothing, it can’t dock his wages for producing content that few people read. But Twitter can downgrade his content in favor of the things people want to see: sarcasm, derision, humor, and images and videos rather than links to academic papers on SSRN.

Twitter figured this out years ago, when it stopped displaying tweets on your feed in the order that they arrive, and started tinkering with algorithms that promote tweets that are most likely to be read, liked, and retweeted (as well as ads and paid promotions), including tweets from people you do not even follow. Twitter’s description of the algorithm is cagey, to say the least. But we don’t need much imagination to understand what it is trying to do.

Will’s view is a bit like that of a conscientious casino operator who worries that slot machines addict people unfairly by relying too much on noise and colors. The operator turns off the sound and replaces the colorful images of fruits and dollars with a black-and-white sign that says “win” or “lose.” He should hardly be surprised if customers disappear the next day—even though the (very bad) odds remain unchanged. The whole point of the color and excitement is to lure the customers and open their wallets. The odds are beside the point.

Twitter cannot make profits unless people use it addictively. Not only will it never create a system that a conscientious academic might design. It also cannot tolerate the content that a conscientious academic wants to produce. If you don’t believe me, look at your analytics. You can still tweet, but hardly anyone will see your tweets. What’s the point of that?

A planetary-scale hate machine

Jack Dorsey:

We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.

Commenter zestyping at Hacker News:

Today, Twitter is a planetary-scale hate machine. By which I don’t mean “people post hateful things on Twitter.” I mean it literally generates hate, as in, put a bunch of people with diverse perspectives on Twitter and by the end of the day they hate each other more than when they started. Common ground might have existed, but they won’t find it, because Twitter, like any arms dealer, works better when they fight. It even benefits from collateral damage, when they hurt people they didn’t specifically intend to hurt.

Through its core design—short messages, retweets, engagement metrics—Twitter incapacitates the safeguards necessary for civil discussion. It eliminates context, encourages us to present each other out of context, prevents us from explaining ourselves, rewards the most incendiary messages and most impulsive reactions, drives us to take sides and build walls.

If Twitter is going to foster healthy conversation, it will have to change fundamentally. It won’t be a matter of tuning some filters and tweaking some ranking algorithms.

The best concise account of the problems with Twitter I have seen. Who are you going to believe?