Cleanup and Deduplication of an International Bibliographic Database

Article excerpt

A two-year project to improve the quality of a major bibliographic database resulted in corrections to 2.1 million data fields and removal of 8.2 percent of the records. Queries now retrieve 20 percent more hits than the same queries did before the project began, despite removal of duplicate records. The literature of duplicate removal is discussed, with emphasis on the trade-offs between human and computer methods.

INTRODUCTION

Description of Database

The Conservation Information Network (Network) was created by an international collaborative effort managed by the Getty Conservation Institute, an entity of the J. Paul Getty Trust. The Network consists of an electronic messaging service and access to the following three databases:

* The bibliographic database (BCIN), consisting of references to international conservation literature; all records contain abstracts

* The materials database, containing records on products relevant to conservation practice

* The product/supplier directory, containing names and addresses of manufacturers and suppliers of materials used in conservation

The institutions contributing data records are the following:

* Canadian Conservation Institute (CCI)

* Smithsonian Institution's Conservation Analytical Laboratory (CAL)

* Getty Conservation Institute (GCI)

* International Centre for the Study of the Preservation and Restoration of Cultural Property (ICCROM)

* International Council on Monuments and Sites (ICOMOS)

* International Council of Museums (ICOM)

Approximately four hundred institutions around the world use the Network regularly to support the conservation of cultural property, especially art, archaeological sites, buildings, and museum collections.

The Network is resident on a Control Data Corporation (CDC) mainframe managed by the Canadian Heritage Information Network (CHIN), a government installation in Ottawa.

BCIN contained about 140,000 bibliographic records when this project began. Because BCIN was initially formed through the machine conversion of diverse files from the participants, numerous anomalies, errors, and duplicate records resulted. Furthermore, differences in cataloging standards between countries and over time led to variations in style (capitalization, punctuation, etc.). These factors contributed to the need for cleanup and deduplication.
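
The article does not reproduce the cleanup programs, but stylistic variations of the kind described above are typically handled by normalizing fields before comparing them. The following Python sketch illustrates one such normalization; the specific rules shown (case folding, accent stripping, punctuation removal, whitespace collapsing) are illustrative assumptions, not the actual BCIN rules.

    import re
    import unicodedata

    def normalize_field(text):
        """Reduce a bibliographic field to a canonical form for comparison.
        The rules here (case folding, accent stripping, punctuation removal,
        whitespace collapsing) are illustrative, not the actual BCIN rules."""
        # Decompose accented characters, then drop the combining marks
        text = unicodedata.normalize("NFKD", text)
        text = "".join(c for c in text if not unicodedata.combining(c))
        # Fold case, turn punctuation into spaces, collapse whitespace
        text = re.sub(r"[^\w\s]", " ", text.lower())
        return re.sub(r"\s+", " ", text).strip()

    # Two stylistic variants of the same title now compare equal:
    assert normalize_field("Conservation of Mural Paintings.") == \
           normalize_field("CONSERVATION  OF  MURAL PAINTINGS")

With fields reduced to a canonical form, records that differ only in capitalization or punctuation can be matched mechanically rather than by eye.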

BCIN contains all of the data elements normally used to identify and describe bibliographic records, except that LCCNs are absent and ISBNs, ISSNs, and CODENs are rare. BCIN is not in a MARC format but is stored using Information Dimensions' BASIS database management system. The maximum possible record length is 15,000 bytes, although the longest actual record is 5,254 bytes. The shortest is 82 bytes; the mean length is 973 bytes.

Purpose of Project

The purpose of the cleanup and deduplication project was threefold:

1. To locate and correct data errors automatically, using computer programs

2. To flag records with data errors that the programs could detect but could not correct

3. To identify likely duplicate records for human review (a sketch of this three-way triage follows the list)
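
The three goals amount to a triage of every record: correct what can be corrected automatically, flag what cannot, and group likely duplicates for human review. A minimal Python sketch of that flow follows; the fix, check, and key functions are hypothetical placeholders, since the actual BCIN rules are described later in the article.

    def triage(records, auto_fixes, checks, dup_key):
        """Sort records into the three project outcomes.
        auto_fixes: functions returning a corrected record        (goal 1)
        checks:     functions returning an error message or None  (goal 2)
        dup_key:    function mapping a record to a match key      (goal 3)
        All three are hypothetical stand-ins for the real BCIN rules."""
        corrected, flagged, groups = [], [], {}
        for rec in records:
            for fix in auto_fixes:            # goal 1: correct automatically
                rec = fix(rec)
            errors = [msg for chk in checks if (msg := chk(rec))]
            if errors:                        # goal 2: flag what cannot be fixed
                flagged.append((rec, errors))
            groups.setdefault(dup_key(rec), []).append(rec)  # goal 3
            corrected.append(rec)
        # Only keys shared by two or more records need human review
        duplicates = [g for g in groups.values() if len(g) > 1]
        return corrected, flagged, duplicates

The key design point this sketch captures is that only the third outcome requires human judgment; the first two can run unattended.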

During the summer of 1989, the author conducted a feasibility study of how these goals could be accomplished. Programs implementing the study's findings were written during the winter of 1990 and were run against the database from April 1990 through March 1991.

Basic Procedure

An early decision was made to perform as much of the cleanup and deduplication as possible on PCs, on the assumption that this would be faster than developing programs to run on the CDC mainframe in Ottawa. In the end, the entire project was done on PCs.

However, processing the 140-megabyte database in a single batch would have required too much time and too much disk space for work files. …
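
The excerpt breaks off before describing the batching scheme, but the constraint itself is easy to picture: a PC of that era could not hold the full export plus its work files at once, so the data had to be processed in slices. The following sketch is purely illustrative; the file name, the record-per-line format, and the batch size are assumptions.

    def batches(path, size=10_000):
        """Stream a large record export in fixed-size slices so that
        work files stay small enough for a PC-class disk. The
        record-per-line format and the batch size are assumptions."""
        batch = []
        with open(path, encoding="latin-1") as f:
            for line in f:
                batch.append(line)
                if len(batch) == size:
                    yield batch
                    batch = []
        if batch:
            yield batch          # final partial batch

    for batch in batches("bcin_export.txt"):   # hypothetical file name
        pass  # run the cleanup programs on one manageable slice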