The Dimension of the Supreme Court


It is a rare occurrence when the New York Times, (1) Washington Post, (2) NPR, (3) and even Jack Kilpatrick (4) discuss a political science paper. Nonetheless, that is what happened after A Pattern Analysis of the Second Rehnquist US Supreme Court, by Lawrence Sirovich, (5) was published in the Proceedings of the National Academy of Sciences in June 2003. Sirovich's paper applies two unusual mathematical techniques to the decisions of the Court with the aim of "extracting key patterns and latent information." (6) Using information theory, and in particular the idea of entropy, Sirovich claims that the "Court acts as if composed of 4.68 ideal Justices." (7) After applying a singular value decomposition to the decision data, he concludes that the Court's decisions can be accurately approximated by a suitably chosen two-dimensional space.

While some commentary has questioned whether Sirovich's conclusions are novel, at least one of the methods of analysis is new (in the context of political science) and might also prove useful in other circumstances. Moreover, the methods themselves raise interesting questions about the Court. It is therefore worthwhile to consider the methods more carefully.

Before discussing the methods themselves, we need to explore how Sirovich encodes data from the Court. He starts by listing the Justices in alphabetical order (although any order would work) and then encodes each decision by a vector with nine entries, in which a 1 signifies a Justice who was in the majority and a -1 signifies a Justice in the minority. For example, a case decided unanimously is coded (1,1,1,1,1,1,1,1,1), and a case decided by the classic 5-4 conservative-liberal split (say Garrett (8)) is coded (-1,-1,1,1,1,1,-1,-1,1), where the first -1 indicates that Breyer (the alphabetically first Justice) was in the minority and the last 1 indicates that Thomas (the alphabetically last Justice) was in the majority. (9) Thus, Sirovich reduces each case to a string of 1's and -1's of length 9. I will refer to these codings as vote-patterns.
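To make the encoding concrete, here is a minimal sketch in Python; the list of the nine second-Rehnquist-Court Justices and the helper function vote_pattern are my own illustration, not code from Sirovich's paper.

```python
# Illustrative sketch (not Sirovich's code): encode a decision as a vote-pattern
# vector of 1's and -1's, with the Justices taken in alphabetical order.
JUSTICES = ["Breyer", "Ginsburg", "Kennedy", "O'Connor", "Rehnquist",
            "Scalia", "Souter", "Stevens", "Thomas"]

def vote_pattern(majority):
    """Return the nine-entry vote-pattern: 1 for Justices in the majority, -1 otherwise."""
    return [1 if justice in majority else -1 for justice in JUSTICES]

# The Garrett 5-4 split reproduces the vector given in the text.
garrett_majority = {"Kennedy", "O'Connor", "Rehnquist", "Scalia", "Thomas"}
print(vote_pattern(garrett_majority))  # [-1, -1, 1, 1, 1, 1, -1, -1, 1]
```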

There are two things worth noting about Sirovich's data set. First, he records the decisions of the Court and not the opinions. For instance, Lawrence v. Texas (10) is recorded as (1,1,1,1,-1,-1,1,1,-1), with O'Connor listed in the majority even though she did not join the majority opinion. The second fact worth noting is that Sirovich discarded 30% of the cases because "the vote was incomplete or ambiguous (per curiam ... decisions furnished no details of the vote and were deemed inadmissible, as were cases in which a Justice was absent or voted differently on the parts of a case)." (11) Later I will reexamine his decision to exclude these cases.

ENTROPY

The most original part of Sirovich's paper is his use of information theory, and in particular the idea of entropy, to analyze the Supreme Court. Sirovich uses information theory to measure the variability of the set of vote-patterns of the Court. While others have discussed the distribution of decisions from the Court (12) and the correlation of votes among the Justices, (13) no one until now has proposed an overall measure of the variability of decisions. This fact alone makes Sirovich's paper worth reading.

Entropy is a measure of the total amount of variability in a situation. Suppose there are n different possible outcomes, which we list as 1, 2, ..., n, and that outcome j occurs with probability $p_j$. The entropy of this set of outcomes is defined to be

$$I = -\sum_{j=1}^{n} p_j \log p_j$$

where the logarithm is taken to be base 2. (14) Entropy measured in these terms can be interpreted as the smallest average code-word length needed to convey the outcomes. (15) It is infeasible to provide a complete explanation here, but a few examples should suffice to explain how it works.
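As a quick numerical illustration of the formula (my own sketch, not taken from Sirovich's paper), the entropy of a discrete distribution can be computed directly:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: I = -sum_j p_j * log2(p_j), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution over 8 outcomes carries 3 bits of variability;
# a distribution with a certain outcome carries none.
print(entropy([1/8] * 8))  # 3.0
print(entropy([1.0]))      # -0.0 (i.e., zero bits)
```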

First, a small example to help clarify the ideas. Suppose that when you talk to your stockbroker he recommends Buy with probability 1/2, Hold with probability 1/4 and Sell with probability 1/4. …
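For reference, plugging these probabilities into the definition above gives (my arithmetic)

$$I = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{4}\log_2\tfrac{1}{4} - \tfrac{1}{4}\log_2\tfrac{1}{4} = \tfrac{1}{2} + \tfrac{1}{2} + \tfrac{1}{2} = 1.5 \text{ bits.}$$

One code attaining this bound (again my illustration, not from the article) assigns Buy = 0, Hold = 10, Sell = 11, for an average code-word length of (1/2)(1) + (1/4)(2) + (1/4)(2) = 1.5 bits.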