Taking Disqualification Seriously


In 2007, the American Bar Association's (ABA) Standing Committee on Judicial Independence (SCJI) received an Enterprise Fund Grant from the ABA to undertake a project on judicial disqualification. The resulting Judicial Disqualification Project is currently preparing a report and resolution to the ABA's House of Delegates that surveys the state of disqualification rules and practice around the country, identifies problems, and proposes reforms.

This article is excerpted, in large part, from the Project's draft report. It first summarizes the history of disqualification, which highlights the movement toward more rigorous disqualification rules across the country, and then surveys the current state of disqualification rules and practice across the states. Despite the gradual evolution of robust and widely adopted disqualification rules, judges appear to remain reluctant to fully embrace a rigorous disqualification regime, for reasons discussed in the third part of the article. The article concludes by identifying two specific problem areas that the Project will seek to address in its effort to encourage effective disqualification standards and practice.

History and evolution

Under English common law, recusal was a decidedly limited practice guided by a single, pithy principle first articulated in 1609 by Sir Edward Coke in Dr. Bonham's Case: "No man shall be a judge in his own case." While a judge could not be challenged on grounds of bias, he could be removed for having an "interest" in a case's outcome. For example, in Dr. Bonham's Case, a judge was disqualified from a case in which he would receive the fines he assessed. However, disqualification for "interest" did not extend to relationships, such as where the judge was related to a party. As Professor John Frank explained: "English common law practice at the time of the establishment of the American court system was simple in the extreme. Judges disqualified for financial interest. No other disqualifications were permitted, and bias . . . was rejected entirely."2 In contrast to civil law systems, under which "a judge might be refused upon any suspicion of partiality," William Blackstone noted that in England "the law is otherwise," and "it is held that judges or justices cannot be challenged." He elaborated: "For the law will not suppose a possibility of bias or favour in a judge, who is already sworn to administer impartial justice, and whose authority greatly depends upon that presumption and idea."

In the United States, the law of disqualification began quietly but gained in complexity and strength over time:

In 1792, Congress enacted legislation that codified the common law by calling for disqualification of district judges who were "concerned in interest," but added that a judge could also be disqualified if he "has been of counsel for either party." In 1821, relationship to a party was added as another ground for disqualification. In 1891, Congress enacted legislation forbidding a judge from hearing the appeal of a case that the judge tried, and 20 years later, federal law was further amended to require disqualification from cases in which the judge was a material witness.

In 1911, Congress enacted legislation entitling a party to disqualify a judge by submitting an affidavit that the judge has "a personal bias or prejudice" against the affiant or for the opposing party. In 1921, in Berger v. United States, the Supreme Court found that this statute prohibited rulings on the truth of matters asserted in these affidavits and required automatic disqualification if the affidavit was facially sufficient.

Common law aversion to judicial bias as grounds for disqualification, however, continued to exert considerable influence. While the law ostensibly enabled a party to secure disqualification simply by submitting an affidavit alleging personal bias, Professor John Frank noted at the time:

Frequent escape from the statute has been effected through narrow construction of the phrase "bias and prejudice." . . .