Cleanup and Deduplication of an International Bibliographic Database
Toney, Stephen R., Information Technology and Libraries
A two-year project to improve the quality of a major bibliographic database resulted in corrections to 2.1 million data fields and removal of 8.2 percent of the records. Queries now retrieve 20 percent more hits than the same queries did before the project began, despite removal of duplicate records. The literature of duplicate removal is discussed, with emphasis on the trade-offs between human and computer methods.
Description of Database
The Conservation Information Network (Network) was created by an international collaborative effort managed by the Getty Conservation Institute, an entity of the J. Paul Getty Trust. The Network consists of an electronic messaging service and access to the following three databases:
* The bibliographic database (BCIN), consisting of references to international conservation literature; all records contain abstracts
* The materials database, containing records on products relevant to conservation practice
* The product/supplier directory, containing names and addresses of manufacturers and suppliers of materials used in conservation
The institutions contributing data records are the following:
* Canadian Conservation Institute (CCI)
* Smithsonian Institution's Conservation Analytical Laboratory (CAL)
* Getty Conservation Institute (GCI)
* International Centre for the Study of the Preservation and Restoration of Cultural Property (ICCROM)
* International Council on Monuments and Sites (ICOMOS)
* International Council of Museums (ICOM)
Approximately four hundred institutions around the world use the Network regularly to improve their conservation of cultural property, especially art, archaeological sites, buildings, and museum collections.
The Network is resident on a Control Data Corporation (CDC) mainframe managed by the Canadian Heritage Information Network (CHIN), a government installation in Ottawa.
BCIN contained about 140,000 bibliographic records when this project began. Because BCIN was initially formed through the machine conversion of diverse files from the participants, numerous anomalies, errors, and duplicate records resulted. Furthermore, differences in cataloging standards between countries and over time led to variations in style (capitalization, punctuation, etc.). These factors contributed to the need for cleanup and deduplication.
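Style variations like these are exactly what record matching must neutralize before comparison. As a hedged illustration (not necessarily the project's actual method), a matching key can be built by folding case, converting punctuation to spaces, and collapsing whitespace:

```python
import string

def match_key(title):
    """Crude normalization for duplicate matching: lowercase,
    turn punctuation into spaces, and collapse runs of whitespace."""
    table = str.maketrans(string.punctuation, " " * len(string.punctuation))
    cleaned = title.lower().translate(table)
    return " ".join(cleaned.split())

# Two differently styled versions of the same title yield the same key:
a = match_key("The Conservation of Wall-Paintings.")
b = match_key("THE CONSERVATION OF WALL PAINTINGS")
print(a == b)  # True
```

Mapping punctuation to spaces (rather than deleting it) keeps hyphenated words comparable with their two-word forms.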
BCIN contains all of the normal data used for identifying and describing bibliographic records, except that LCCNs are absent, and ISBNs, ISSNs, and CODENs are rare. BCIN is not in a MARC format but is stored using Information Dimensions' BASIS database management system. Maximum possible record length is 15,000 bytes, although the longest record is 5,254. The shortest is 82 bytes; the mean length is 973 bytes.
Purpose of Project
The purpose of the cleanup and deduplication project was threefold:
1. To locate and correct data errors automatically, using computer programs
2. To flag records with data errors that the programs could detect but could not correct
3. To identify for human review records likely to be duplicates
During the summer of 1989, the author conducted a study of how these goals could be accomplished. Programs to implement the findings of the feasibility study were written during the winter of 1990 and were run against the database during April 1990-March 1991.
An early decision was made to perform as much of the cleanup and deduplication as possible on PCs, with the assumption that this would be more expeditious than developing programs to run on the CDC mainframe in Ottawa. In fact, it turned out that the entire project was done on PCs.
However, processing the 140-megabyte database in one batch would have required too much time and too much disk space for work files. Each record required about two seconds of processing just for cleanup, and matching time grows quadratically: it roughly quadruples each time the number of records doubles. Therefore, records were copied from the mainframe onto diskettes in pools of about five thousand records and sent from Ottawa to Systems Planning in Northern California. After cleanup and duplicate matching, the sets of matched duplicates were sent to GCI in Southern California, where the editors decided which sets were true duplicates. The results were sent back to Systems Planning for merging, after which the complete file was sent to CHIN for uploading to the online file, replacing the earlier versions of the records. Because of the size of the data sets, Federal Express was used for all transfers.
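The quadratic growth in matching cost is the reason for pooling; the arithmetic can be sketched as follows (the pool count and sizes here illustrate the scale, not the project's exact figures):

```python
def pairwise_comparisons(n):
    """Number of record pairs to compare when matching a pool of n records."""
    return n * (n - 1) // 2

# Doubling a pool roughly quadruples the matching work:
print(pairwise_comparisons(10_000) / pairwise_comparisons(5_000))  # ~4.0

# Matching 140,000 records in one batch versus 28 pools of 5,000:
one_batch = pairwise_comparisons(140_000)
pooled = 28 * pairwise_comparisons(5_000)
print(one_batch / pooled)  # pooling cuts comparisons by a factor of ~28
```

The trade-off, of course, is that pairwise matching within pools can only find duplicates that land in the same pool.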
The remainder of this paper reports on the cleanup and deduplication work in detail.
During the feasibility study for this project, three error types were defined:
1. Type 1 errors are those that a computer program can recognize and correct. For example, diacritical marks in five styles that …
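A Type 1 correction of the kind described above might map several inconsistent diacritic encodings to a single canonical form. The following sketch is hypothetical; the encoding styles and correction rules are invented for illustration and are not those used in the project:

```python
# Hypothetical: three encoding styles that might represent "é" in
# machine-converted records, mapped to one canonical character.
CANONICAL = {
    "e\u0301": "é",  # letter followed by a combining acute accent
    "'e": "é",       # accent keyed before the letter
    "e'": "é",       # accent keyed after the letter
}

def normalize_diacritics(field):
    """Replace each variant encoding with the canonical character."""
    for variant, canonical in CANONICAL.items():
        field = field.replace(variant, canonical)
    return field

print(normalize_diacritics("mus'ees"))       # musées
print(normalize_diacritics("muse\u0301es"))  # musées
```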
Publication information: Toney, Stephen R. "Cleanup and Deduplication of an International Bibliographic Database." Information Technology and Libraries 11, no. 1 (March 1992): 19+. © 1991 American Library Association.