Academic journal article Information Technology and Libraries

Webcats and Checklists: Some Cautionary Notes


Joan M. Cherry's September 1998 ITAL article does a careful job of evaluating Web online catalog displays based on checklist judging. This communication raises some questions about checklist judging in general and the Cherry checklist in particular, suggesting that checklist judging is inherently flawed and that we don't know enough to establish the ideal online catalog display (if such an animal exists). A digression discusses the draft IFLA guidelines for catalog displays and suggests that they may do more harm than good in recommending particular approaches for "standard" catalog displays.

Joan M. Cherry's article in the September 1998 Information Technology and Libraries ("Bibliographic Displays in OPACs and Web Catalogs: How Well Do They Comply with Display Guidelines?") offers a real-world catalog display evaluation based on years of theory and discussion. As I read it, I found myself alternating between appreciation for what the article did well and mild distress about its underpinnings.

These comments are not intended as an attack on Ms. Cherry's article. Instead, I hope to offer some cautionary notes about checklists in general and the checklist used in particular. Maybe it's time for someone to do a major new work on aspects of online catalog design; maybe some of the "old hands" should come together with new practitioners to improve our understanding of the world out there. Or maybe there isn't a single best answer, and no checklist can serve to judge a catalog.

Appreciating the Research

Cherry's article is very well done. The literature search is good, although I'm naturally disappointed that my 1992 book The Online Catalog Book: Essays and Examples wasn't used when refining the checklist. The checklist seems to have been used carefully and consistently. I have no doubt that the findings in both studies are legitimate. If the checklist is a legitimate approach to judging catalogs, the results appear to be sound.

I don't question the results as they stand. The only INNOPAC system came out best overall for bibliographic display, with a SIRSI system trivially behind and a university-developed system slightly behind that. Overall, the best systems only managed to succeed on about two-thirds of the applicable checklist items. Systems generally did worst on text handling and best on instructional information. Even though I haven't seen most of these systems in use, those results seem perfectly sensible from what I do know of the field.

I have good reasons to love the checklist and the analysis. Much of the checklist resembles points made in my own writing. Better yet, the end-user interface I designed (RLG's Eureka) comes out smelling like a rose in this evaluation. As far as I can tell, the current Eureka on the Web scores 85 percent for labels, 70 percent for text, 92 percent for instructional information, 93 percent for layout, and 83 percent overall--which is scarcely surprising, given the confluence of guidelines and designer.

Unfortunately, it isn't that simple. If it were, Eureka would score 100 percent and some of the systems reviewed would do better than they did. There are real-world problems with the checklist and with the whole concept of judging online catalogs (or catalog displays) using checklists.

Problems with Checklist Judging

One problem is common to all checklist judging: it encourages "featuritis," where features are added whether or not they make sense. Additionally, checklists deal with specifics, while great online systems require overall coherence. It's one thing to score 90 percent on a 200-item checklist, but I'd rather use a system that scored 70 percent and worked coherently as a whole. Excessive adherence to individual checklist items can get in the way of overall coherence--particularly if the checklist has been assembled from multiple sources.

A checklist can include internal contradictions: features that make sense in isolation but conflict with one another when combined. …
