Information Technology and Libraries

An Overview of Applications of Automation to Special Collections: Rare Books and Art Collections

Automation traditionally has been viewed as an inappropriate means of control of the unique item in special collections libraries because of its fundamental requirement of standardization. However, curators who view the special collections library as a system have come to regard automation, precisely because of that standardization, as an excellent means of control of and access to these items. This study looks at the issue of standardization in the application of computerized automation--specifically to rare books and art objects--and at some recent examples of those applications in both North America and Western Europe.

Historically, automation has come late to special collections, even later than to libraries generally. The primary reason for this tardiness, which may turn out to be advantageous as computer applications become more refined and more economical over time, lies in the apparent nature of the two entities: special collections materials are by definition rare or unique items that do not at first sight lend themselves readily to the standardization characteristic of computer technology. For example, the benefits derived from the shared cataloging of current materials via national bibliographic database utilities like OCLC and RLIN are not easily applicable to unique copies of rare books, manuscripts, or archives. Furthermore, standards of bibliographic control become irrelevant when applied to works of art and other museum objects.

Additionally, curators of special collections have been characterized, accurately or not, as obstructionists when confronted with the possibilities of automation: "These attitudes [of curators] seem to reflect a kind of institutional parochialism and lack of vision with regard to the role of special collections as a national research tool and not just a local resource or private treasure. This parochialism combines in some cases with a competitiveness with other special collections, a general unwillingness to engage in cooperative projects, and a highly developed chauvinism about the importance of their own collections,"[1] according to Stephen Paul Davis. On their side, curators and archivists are frequently mystified by the inability of librarians to comprehend that rare books, manuscripts, and archives are not just peculiar entities whose descriptions should be distorted until they fit those of current monographs.

The major issues in the computerization of special collections are fundamentally a single issue, that of standardization. Although seen as a problem by some special collections curators, standardization via computer technology is, according to those curators and archivists cited in this paper, an excellent means of control of the unique materials found in special collections libraries. They share the view that a special collections library, like any organization, is a structure of interrelated systems with repeated functions that lend themselves to automation. Whether the function has to do with acquiring the items in the collection, establishing a form of inventory control, providing access to and information about the collection, administering the collection, or fitting the collection into its place in larger systems, certain aspects of each function lend themselves to the standardization that is fundamental to the computer. The machines and the software already exist in a bewildering plethora of choices, and new ones are being announced every day. The fundamental issue is that of perspective: approaching the special collections library from a systems point of view and then making appropriate decisions about which automated tools to choose in order to accomplish the objectives of the particular special collections library regarding its clientele and resources. The problem is the solution.


The recent history of the standardization of bibliographic control of rare books dates from the mid-1970s. According to John B. Thomas, III, librarian of the University of Texas at Austin and chair of the Standards Committee of the Rare Books and Manuscripts Section of the Association of College and Research Libraries, "descriptive cataloging codes for preparing machine-readable records for many types of materials (including rare books) began to be created soon after the first presentation by IFLA [International Federation of Library Associations and Institutions] of the International Standard Bibliographic Description for Monographic Publications or ISBD(M) in 1973. The impetus for a code for older materials was the attempted and unsatisfactory use of the MARC format in cataloging projects at the Bibliotheque Nationale, the Bodleian, and the National Library of Scotland in the early 1970s."[2]

The inadequacies of the ISBD(M) for bibliographic control of rare books stemmed from its design objective of control of current materials. It made no provision for exact transcription of title, for collation of every page, or for detailed information about facts of publication. The IFLA remedied these deficiencies in 1975 with the release of International Standard Bibliographic Description for Older Monographic Publications (Antiquarian) or ISBD(A). Also influential at this time in the development of a United States standard for rare book cataloging was the Library of Congress' Bibliographic Description of Rare Books, which incorporated provisions of the ISBD(A) into AACR2 cataloging rules. Additional fields for genre, provenance, physical aspects, printer, and details of publication were provided for the MARC format at the instigation in the United States of the Independent Research Libraries Association (IRLA) and carried out by the Standards Committee of the Rare Books and Manuscripts Section of the Association of College and Research Libraries.

The result of the above efforts is that for the first time there exists an internationally agreed-upon standard for bibliographic control of rare materials in machine-readable form, with its concomitant benefits of increased access by the international scholarly community, exchange of information among collections curators, and rationalized collection development. The key to success to date has been the willingness of rare book catalogers to embrace ISBD(A) and the MARC format, to work to enhance and change them where necessary, and to cooperate in developing national and international standards.

One approach to an internationally available database of rare book records in MARC format is the Incunable Short Title Catalogue (ISTC), a database of fifteenth-century books and other printed materials being compiled at the British Library. The copy-specific records come from American, British, and European union catalogs and represent "over three-quarters of the estimated total of incunable printing.... the largest incunable bibliography ever assembled."[3] The ISTC permits searching of each MARC field directly and in combination on the British Library's BLAISE-LINE mainframe subscription service; there are plans to make the database available on disk for use on microcomputers and on CD-ROM.

For special collections libraries that lack access to mainframe computers, the microcomputer has provided automated bibliographic control. One such installation, using the commercially available software Pro-Cite, is that in the Map Department at Texas A&M University. Less a map library than a special collection of maps, pamphlets, books, and serials on the topic of travel and tourism, the department explored automation as the solution to its problems of control of a collection of fragile, uncataloged, and unlisted materials that were heavily used. The choice was made to eschew full cataloging in favor of a short-record database that would be "easy to learn, possible to edit and update regularly, and provide a printout that would be easy for patrons to use."[4] Pro-Cite, from Personal Bibliographic Software for use on IBM PCs, was chosen because of favorable reviews, low price, and the fact that it was the only commercially available software with an option for maps. Pro-Cite uses the MARC format and provides twenty predefined workforms for completing all or selected fields. All fields and records are variable length. Records may be selected with Boolean operators, sorted, formatted according to a variety of bibliographic styles, written to another file, or printed. Indexes may be created for any field. The department assigned Library of Congress call numbers and subject headings to the items in the collection because these were familiar to their patrons. The data were entered by students working from entry forms designed by the staff. The department was satisfied with the choice of Pro-Cite to solve a delimited problem and did not consider integrating this system with the other systems of the department or sharing the data with other collections.
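The kind of short-record, field-indexed retrieval described above can be pictured with a brief sketch. The following Python fragment is an illustration only, not a description of Pro-Cite's own implementation; the records, field names, and values are invented for the example.

```python
# A minimal sketch of a short-record database with variable-length fields,
# Boolean (AND) selection, sorting, and a per-field index. The data and
# field names are invented; this is not Pro-Cite code.

records = [
    {"title": "Road Map of Texas", "type": "map",
     "subject": ["Travel", "Texas"], "year": 1936},
    {"title": "Tourism Pamphlet Series", "type": "pamphlet",
     "subject": ["Tourism", "Texas"], "year": 1954},
]

def select(recs, **criteria):
    """Return the records that satisfy every criterion (Boolean AND)."""
    def matches(rec, field, value):
        held = rec.get(field)
        return value in held if isinstance(held, list) else held == value
    return [r for r in recs if all(matches(r, f, v) for f, v in criteria.items())]

def index_on(recs, field):
    """Build a simple index (value -> list of records) for any field."""
    index = {}
    for rec in recs:
        values = rec[field] if isinstance(rec[field], list) else [rec[field]]
        for value in values:
            index.setdefault(value, []).append(rec)
    return index

hits = sorted(select(records, subject="Texas", type="map"), key=lambda r: r["year"])
subject_index = index_on(records, "subject")
print([h["title"] for h in hits])
```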

Stephen Davis offers a warning regarding the introduction of microcomputers into special collections: "While microcomputers may in the future hold out many benefits to special collections, they also seem to have the potential of returning us to the dark ages of purely local practice in terms of cataloging and automation standards. Use of the bibliographic utilities has gradually imposed a basic consistency and standardization upon catalog records--something they never had before in special collections. Given their history, it would not be surprising if some institutions leapt at the chance of doing cataloging directly on microcomputers in order to get some of the advantages of automation but still continue to catalog the way they did a hundred years ago.... Microcomputers should generally not be used for cataloging in place of a local or national system unless a mechanism is in place to communicate those holdings subsequently to a national database."[5]


The challenges of bibliographic control of rare books pale in comparison to those encountered in the management of two- and three-dimensional art objects. The museum community has approached automation as a possible answer to its need for control, access, and information exchange.

In the United States the effort to establish a national art and architecture database has come primarily from the Research Libraries Group (RLG), with art and architecture as one of the categories in which special membership may be granted to special and independent libraries. Early members of RLG were the Metropolitan Museum of Art; the Museum of Fine Arts, Boston; and the Art Institute of Chicago. The Art and Architecture Program Committee (AAPC) of the RLG was established in 1979.

The achievements of the AAPC are described by Nancy S. Allen of the library of the Museum of Fine Arts, Boston.[6] In 1983 the AAPC began a shared cataloging project for monographic series; in 1986 an exhibition catalog task force was created to study the problem of shared acquisitions and cataloging of these elusive publications; the RLG Art Conspectus was issued in 1981 and revised in 1987, providing data to enable member libraries to rationalize their collection development in line with policies at other libraries; and in 1984, a J. Paul Getty Trust grant funded a three-year retrospective conversion project. Special databases within the RLG database RLIN include the online Avery Index to Architectural Periodicals, comprising some 52,000 entries in more than 500 periodicals, and the Sales Catalog Index Project Input Online (SCIPIO), with almost 90,000 records entered since 1980.

In spite of the fact that RLIN could be considered "the 'de facto' national art library database" (p. 143), an AAPC questionnaire sent to member libraries in 1987 revealed that "cataloging varies between institutions and often between visual formats within one institution, thesauri and subject headings vary, and that relatively little of this material is in machine-readable form" (p. 150). Therefore, the AAPC's Program for Research Information Management (PRIMA) recommended that the following needs be addressed in the future: "1) an increase in the retrospective conversion of bibliographies and indexes to serial literature; 2) more complete online bibliographic control over museum bulletins, exhibition catalogs, artists' books, trade catalogs, artists catalogs, and art newspapers; 3) preservation efforts for the literature of art; 4) links between RLIN and bibliographic database[s] outside the United States; 5) automated access to archival records from repositories worldwide; 6) development and use of a standard MARC format for works of art; 7) automated control of architectural drawings; 8) MARC cataloging of important photograph collections worldwide; 9) access to iconolaries for art automation projects; 10) online access to images of works of art, monuments, architectural drawings, and photographs" (pp. 148-49).

Allen concludes that "the AAPC might be considered a model for cooperation among art libraries. Its success has been due largely to the strength of the Research Libraries Group itself" (p. 152).

One local project using RLIN is AVIADOR, the project of Columbia University's Avery Architectural and Fine Arts Library to provide "integrated access to the contents of the collection regardless of the format ... [so that] a user should be able to find in one spot an answer to a question such as 'What do you have on Frank Lloyd Wright?' and know that there are 156 books by him, 136 books about him, at least 178 periodical articles on him, and approximately 600 drawings by him."[7] The project, which has been funded by the Mellon Foundation, the NEH, and Eastman Kodak, has as its goal automated catalog records for some 45,000 architectural drawings, with future access via videodisk. Those involved in the planning for the project found that the addition of genre to the MARC record (added at the instigation of archivists), the notion (from librarians) of a uniform title, and standardized subject headings from the Getty Art and Architecture Thesaurus had benefitted their efforts, thus emphasizing the similarity of the indexing needs of archival and art materials.

Another approach to automating the cataloging of art objects is that of the Vancouver Maritime Museum in Vancouver, British Columbia.[8] The problem of standardization became paramount when the museum confronted the task of describing historic watercraft and other three-dimensional art objects in words. Although the curators decided early to follow the guidelines of the Data Dictionary of the Canadian Heritage Information Network (CHIN), Canada's national museum computer organization, when faced with artifacts "ranging from rubber life rafts still in their fiberglass canisters to the Thomas F. Bayard, a 19th century pilot schooner and sealing vessel, floating in the museum's harbour" (p. 257) and little existing standards work, they had to rethink museological terminology from the ground up.

In this case, the postindexing capabilities of the computer provided the solution. "The answer seems to lie in the power of the computer to hold a number of variant terms in relation, with the retrieved information depending on the search. That is, through a combination of entry guides for physical data including standards for descriptions and a sophisticated postcoordinated search system supported by a thesaurus and a network of 'see also' references, a museum can record vast quantities of data without normalizing it ahead of time (precoordinating), and still maintain the capability for effective searches" (p. 259). Still, the curators argue for "some kind of standing committee for authority work and thesaurus construction" (p. 259).
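As an illustration of the postcoordinated approach the curators describe, the following Python sketch holds variant terms in a small thesaurus and expands the query at search time rather than normalizing the records in advance. The object names, identifiers, and thesaurus entries are invented for the example; this is not the Vancouver Maritime Museum's system.

```python
# A minimal sketch of postcoordinated searching: variant terms are left as
# recorded, and a small thesaurus of "see also" links expands the query at
# search time. All data below are invented for illustration.

thesaurus = {
    "schooner": {"sailing vessel", "pilot schooner"},
    "life raft": {"raft", "survival craft"},
}

catalogue = [
    {"id": "VMM-001", "object_name": "pilot schooner", "material": "wood"},
    {"id": "VMM-002", "object_name": "survival craft", "material": "rubber"},
]

def expand(term):
    """Return the query term plus any variants linked in the thesaurus."""
    return {term} | thesaurus.get(term, set())

def search(term, field="object_name"):
    """Postcoordinate search: match any variant, with no pre-normalization."""
    variants = expand(term)
    return [rec for rec in catalogue if rec[field] in variants]

print(search("schooner"))   # finds VMM-001 without renaming the record
print(search("life raft"))  # finds VMM-002 via the 'survival craft' variant
```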

An example of an integrated system combining free-text software with MARC format records for a museum/library is the British Architectural Library's (BAL) database of books, periodicals, drawings, photographs, and realia, which will ultimately include images of the items on videodisk. This collection, the private library of the Royal Institute of British Architects, functions as the national architectural library of Great Britain as well as a museological collection. "The BAL could be described as a museum with traditional museum curatorial functions developed in parallel with the documentation functions of a traditional library service."[9] Although AACR2 standards had been implemented for the cataloging of the library's monographs and serials, a different in-house system with different subject headings had been developed for the nonprint materials.

Automation provided the opportunity to create a single, integrated system of control and access for all the items in both collections. It was not just the automated equipment and software that made the integration possible; it was the process of standardization and systematization that automation required. "It was understood that automation alone could not solve the problems, but that cohesive systems could be achieved only through adopting common standards" (p. 246).

The system chosen was a free-text software package, STATUS, developed by the United Kingdom Atomic Energy Authority and adapted for use on a Prime 2655 minicomputer. STATUS was attractive for the library's purposes because of its "powerful retrieval capabilities and ... its ability to cope with a variety of record formats" (p. 246). The research organization British Non-Ferrous Metals had developed several software packages for use with STATUS: a thesaurus, a text editor, and a data entry system. STATUS at the British Architectural Library "consists of a central integrated database (the IDB) with satellite databases linked on the inner circle of satellites through the release of common data to the IDB, and on the outer circle by being searchable on a word-by-word basis achieved through the free-text application of indexing each word ..." (p. 247). Other satellite databases include management databases for periodicals accessions and for object accessions, and a database of biographical information that serves as an authority file.
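The word-by-word free-text indexing described here corresponds in general terms to an inverted index. The following Python sketch illustrates that general technique only; it is not the STATUS package, and the sample records are invented.

```python
# A minimal sketch of word-by-word free-text indexing: every word of every
# record is posted to an inverted index, so any word can be searched alone
# or in combination. The sample records are invented.

from collections import defaultdict

documents = {
    "drawing-17": "competition drawing for a public library facade",
    "book-204": "essays on the English country house and its library",
}

inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        inverted_index[word].add(doc_id)

def search_all(*words):
    """Return the documents containing every query word (Boolean AND)."""
    postings = [inverted_index.get(w.lower(), set()) for w in words]
    return set.intersection(*postings) if postings else set()

print(search_all("library"))             # both records
print(search_all("library", "facade"))   # only the drawing
```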

Although the system functions as a stand-alone system, because of its MARC format it will be able to link with other compatible systems internationally. However, some MARC fields had to be adapted and others borrowed (from MARC AMC) because no standards for describing art objects existed at the time the system was being developed.

Access to a similar combination of print and object collections has been developed at the Smithsonian Institution via a rigorous in-house analysis of the nature of the collections in terms of systems.[10] In 1987 the seven Smithsonian museums with major art collections, including the National Museum of American Art, the National Portrait Gallery, the Hirshhorn, and the Freer, initiated a change from the individual department systems that had evolved in a haphazard fashion to a single, integrated system that would track objects and print materials throughout their life cycles as well as provide management control and public access. The result was a move from the magnetic tape, batch processing, and two-week turnaround for hard copy of the Smithsonian's SELGEM system of the early 1970s to Infodata's INQUIRE, a text-oriented database package running on an IBM 4381 mainframe computer.

The analytic process that led to this choice was conducted in two phases: in the first phase museum functions were analyzed and models built, and in the second phase the nature of museum data was analyzed. The Smithsonian needed a system that would support "the acquisition of objects, title transfer, shipping, object tracking, conservation, maintenance of collections documentation, and much more" (p. 222). The data analysis involved identifying and defining data elements, grouping them into logical data groups, and specifying the relationships between groups. A third phase consisted of merging the results of the functional analysis and the data analysis, creating matrices to demonstrate the relationships between the data and the functions.
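The matrices produced in the third phase can be pictured with a brief sketch. The following Python fragment builds a small function/data matrix of the general kind described; the functions listed come from the quotation above, but the data groups and the individual cell entries are invented for illustration.

```python
# An illustrative function/data matrix: rows are museum functions, columns
# are logical data groups, and each cell notes how the function touches that
# data. The data groups and cell entries are invented examples.

functions = ["acquisition", "title transfer", "shipping", "conservation"]
data_groups = ["object identity", "ownership", "location", "condition"]

usage = {
    ("acquisition", "object identity"): "creates",
    ("acquisition", "ownership"): "creates",
    ("title transfer", "ownership"): "updates",
    ("shipping", "location"): "updates",
    ("conservation", "condition"): "updates",
}

# Print the matrix as a simple table.
print("function".ljust(16) + "".join(g.ljust(18) for g in data_groups))
for f in functions:
    cells = "".join(usage.get((f, g), "-").ljust(18) for g in data_groups)
    print(f.ljust(16) + cells)
```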

One of the difficulties the Smithsonian team experienced was changing mental habits from thinking in cataloging terms to thinking in terms of data and structures. "The goal of data modeling is to develop systems that are data driven rather than process driven. Processes are subject to change while data tend to be constant" (p. 228).

A promising application of automation to art collections is the use of analog optical disks, alone or in conjunction with digital text databases. While waiting for digital video interactive (DVI) technology, which integrates text and pictures in a digitized environment, to become standardized and economical, some museum curators have chosen the relatively inexpensive route of videodisks as a storage medium for images of their collections.

Pamela N. Danziger, director of Information Research Services at the Franklin Mint, summarizes the nature of the choice: "For some applications where visual images need to be manipulated or high-quality detail must be maintained, such as pages of text, or technical images, such as architectural or engineering drawings, analog optical disk storage is clearly not suitable. But for many other applications, such as photo research files, picture reference, museums and art collections, where the only storage alternatives other than paper are slides or microimage, ... optical disk storage may be ideal, and eminently practical."[11] Danziger lists four advantages of videodisk storage: (1) compact storage--up to 55,000 images on a twelve-inch platter; (2) immediate retrieval, accomplished by entering the image's location via a keyboard or keypad; (3) an excellent presentation medium, allowing for both monitor display and projection; and (4) ease of use. The disadvantage is the requirement of two screens, one for the database and one for the images. Danziger recommends subjecting the collection to a needs analysis before making the choice for videodisks.

Examples of the application of videodisk technology to museum collections come from two major French museums:

The Louvre has chosen videodisks for three different applications. The first is the production of a videodisk presenting a general introduction to all the Louvre collections, which is used by visitors to the museum and is also for sale. According to Jean Galard, Chef du Service Culturel at the Louvre, "each volume contains, on the one hand, a series of animated sequences, each one lasting about a minute, presenting and commenting upon several masterpieces; the other part is a bank of fixed images. The first volume is concerned with 2,500 paintings and drawings, the second 1,000 sculptures and other objects, the third 1,000 works of antiquity. Each work is presented in an ensemble view, with a caption of identification, and in at least five detailed images."[12] Each volume is accessible via an index, accompanied by a detailed catalog, and available in French, English, and Japanese. To ensure international standardization, each is produced according to PAL (European television) standards and to NTSC (American and Japanese) standards.

The second videodisk application has been designed for visitors to use in the area of the Louvre devoted to ancient Greek art. The visitor at a workstation featuring two screens may choose to access the database and the images via several different themes, such as "The Sculptural Program of the Parthenon" or "Greek Civilization." Each program lasts from ten to twenty minutes. Of the 2,300 images on this videodisk, only 1,000 came from the Louvre collections; so the videodisk provides access to an even broader range of art than that available in the Louvre.

The third videodisk was designed as an aid for the Louvre docents. Projected onto large screens, the images orient visitors to the objects in the museum's collections by placing them within the geographical context of the objects' origins: "This [videodisk] contains geographical maps, plans, diagrams, and views of the sites or the monuments of which the Louvre conserves fragments, as well as images of works conserved [in their entirety]" (p. 25).

Another major French museum to apply videodisk technology is the Union Francaise des Arts du Costume, housed since 1986 in the new Musee des Arts de la Mode. The collection comprises 10,000 garments and 32,000 accessory items dating from the eighteenth century to the present. These materials are fragile and are housed in several different locations, thereby making the videodisk an ideal access medium. The museum has produced two videodisks, the first in 1987 containing 24,000 images from their twentieth-century collection (offered for sale under the title "Mode"), and the second in 1989 containing 5,000 images from their eighteenth- and nineteenth-century collections. The videodisks are accessed in tandem with a digitized database.

According to Marie-Helene Poix, the museum's documentaliste, the results achieved with the videodisk were very satisfactory: "Along with its great storage capacity (54,000 images to a side), its very rapid access time, its excellent level of interaction when it is coupled with a microcomputer, the videodisk quickly imposed itself as a necessity, a solution to our needs."[13] The videodisks are used both by the museum staff as a management tool and by scholars.

The combination of database and optical imaging technology is one of the most interesting developments in the application of automation to collections of visual images. The combination of Boolean searching of keywords and surrogate images not only provides much quicker access to the desired images in the collection but also contributes significantly to the preservation of fragile originals by reducing the amount of handling to which they are subjected.

One such combined application has been developed by the National Archives of Canada for the 20,000 editorial cartoons and caricatures in the Canadian Centre for Caricature. The initial planning stage, as described by Gerald Stone, chief of the Information Services Section, Documentary Art and Photography Division, and Philip Sylvain, optical disc advisor,[14] involved consideration of various means--from photography to microfilming to CD-ROM to analog optical disks--to attain their goal of access plus preservation.

Their initial solution was to combine a database management system with digital WORM optical disks for image retrieval, but they ultimately chose videodisks instead of optical disks in order to achieve a higher resolution for display and printing of the surrogate images. Their database management system is ZIM, a fourth-generation language product of Zanthe Information, Inc.

The database management system permits Boolean searching, variable-length descriptive records, and repeatable fields. Data may be entered directly or by file transfer from MINISIS, the minicomputer-based system used by the National Archives of Canada for descriptions of archival materials. Users may search by subject, artist, publication, place, date, and item number. They may then view the corresponding surrogate images with their descriptive records, zoom in on a selected image, and print newspaper-quality images via a laser printer. Plans are under way to extend this combined system, called Archi-VISTA, to the collections of photographs within the National Archives. A need for an interactive thesaurus is recognized.
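The pairing of a searchable descriptive record with the location of its surrogate image can be sketched briefly. The following Python fragment illustrates the general idea only, not the ZIM or Archi-VISTA software; the records, field names, and frame numbers are invented.

```python
# A minimal sketch: each descriptive record carries the videodisk frame
# number of its surrogate image, so a text search can drive image display
# on the second screen. All data below are invented for illustration.

cartoons = [
    {"item_no": "C-1187", "artist": "A. Cartoonist", "subject": ["elections"],
     "date": 1968, "frame": 10432},
    {"item_no": "C-2044", "artist": "B. Cartoonist", "subject": ["housing"],
     "date": 1971, "frame": 18877},
]

def find(artist=None, subject=None, year=None):
    """Boolean AND search over a few of the access points named above."""
    hits = cartoons
    if artist is not None:
        hits = [c for c in hits if c["artist"] == artist]
    if subject is not None:
        hits = [c for c in hits if subject in c["subject"]]
    if year is not None:
        hits = [c for c in hits if c["date"] == year]
    return hits

for hit in find(subject="elections"):
    # In a real installation the frame number would be sent to the videodisk
    # player; here it is simply reported.
    print(hit["item_no"], "-> display frame", hit["frame"])
```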

An example of integrated text and image software is IMAGEQUERY, the University of California at Berkeley's prototype, which was developed in-house in response to the needs of the curators of the university's several collections of images and objects scattered throughout the campus. The UC Berkeley Image Database Project is described by Howard Besser,[15] assistant professor of library and information science at the University of Pittsburgh, who while at Berkeley was involved with the project.

The challenge was to provide access to fragile material in the University Art Museum, the Architectural Slide Library, the Geography Department's Map Library, and the Lowie Museum of Anthropology's collection of photographs of its objects. Few of the collections had benefited from any kind of automated access and most were outside the jurisdiction of the university library. However, Berkeley was committed to the goal of remote access to all campus materials via workstations. As a result, the curators sat down with the staff of the university's computing center and together thought through the problem.

The result was the prototype of IMAGEQUERY, integrated software designed to work on a high-speed distributed computer network. The system may be accessed from a variety of microcomputers and bit-mapped workstations running X-Windows on the campus network. Searching is by Boolean operators to produce a hit file of surrogate images with shortened textual descriptions. The various collections maintain their identity, requiring the user to be familiar with each collection's set of indexing terms. However, it is anticipated that terms from specialized thesauri such as the Art and Architecture Thesaurus and the capability to search across collections will be available in the future.

A much more economical combination of text and image databases was devised by a microcomputer user. Clarita S. Anderson of the Department of Textiles and Consumer Economics at the University of Maryland first set out to implement a grant awarded for the creation of a database of her historic textiles collection. She chose the commercially available software dBASE III+ to meet her criteria of handling up to ten thousand records, performing complex searches with several variables, being easy to learn and enter data into, and being upgradable in a nonworkstation environment.

However, "it soon became apparent that the database would be a more useful research tool if it were enhanced with images."[16] The critical factor in this user's choice of image software was cost. She warns, "Problems with the software for capturing images, the image capture board, the equipment to capture the images, the cost, and the overall quality of the images were much more difficult to solve than the initial choosing of the data management software" (p. 598). Her choice of image software was PicturePower. However, the camera required turned out to be of higher quality and therefore more expensive than the software producer had indicated, and adjustments also had to be made to the software in order to produce an image of acceptable resolution.

The user's final adjustment was to modify dBASE so that the searcher would not have to leave dBASE and enter PicturePower in order to search for images. The resulting combined system, composed of commercially available software and hardware, met the needs of the user. Provision of broader access and compatibility with other like databases were not considerations.
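The general idea behind Anderson's adaptation, keeping the searcher in the record database while images are called up from it, can be sketched as follows. This Python fragment is not the actual dBASE/PicturePower modification; the records, file names, and the external viewer command are invented for illustration.

```python
# A minimal sketch: each textile record carries the name of its captured
# image file, so a record search can call up the image without the user
# switching programs. Data, file names, and the viewer command are invented.

import subprocess

textiles = [
    {"acc_no": "T-0412", "weave": "overshot", "date": "c. 1840",
     "image_file": "t0412.pcx"},
    {"acc_no": "T-0977", "weave": "double weave", "date": "c. 1820",
     "image_file": "t0977.pcx"},
]

def search(weave):
    """Return the records whose weave structure matches the query."""
    return [t for t in textiles if t["weave"] == weave]

def show_image(record, viewer="imageviewer"):
    """Hand the record's image file to an external viewer (assumed command)."""
    subprocess.run([viewer, record["image_file"]])

for rec in search("overshot"):
    print(rec["acc_no"], rec["date"], "->", rec["image_file"])
    # show_image(rec)  # uncomment if a suitable viewer program is installed
```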


The application of automation to special collections has greater benefits than merely automating existing functions. One important benefit lies in the fundamental standardization required by the computer. Although standardization has been historically a bugbear to curators of rare books and art objects, the existence of commonalities in this group of unique items suggests the possibility of improved control and access to them. As Stephen Davis says, "one of the lessons of the past few years has been that special collections in different areas, such as rare books, graphic materials, manuscripts, maps, music, archival motion pictures, even machine-readable data files, have a great deal in common in terms of specialized access requirements."[17]

Automation can achieve not only improved access but also a restructuring of all the functions of the special collections library. Typical of almost every successful automation project has been a functions analysis as a first step. Annette F. Waterman of the School of Information Studies at Syracuse University points out that "a manual system simply transferred to a computer does not change the structure of a system."[18] "The first step should be the formulation of a statement of purpose for the collection .... A systematic analysis of the collection and its internal workings should be the second step" (p. 61).

Every aspect of the special collections library should be included in a functions analysis, according to Richard M. Kesner, archivist, consultant, and manager of Office Systems and Services for the F. W. Faxon Company: "In developing a plan of action, word processing, financial modeling, and facility management must share the stage with indexing and information retrieval."[19] He identifies seven areas of activity to be considered: "collection development, physical control over collections, intellectual control over collections, reference services, general administration, grants administration, and publications production" (p. 24).

The final benefit of automation for special collections derives from the fact that automation in itself creates a special collection, a new way of looking at the disparate units in the collection by bringing them together in new and unexpected ways. A special collection is an organism whose totality is greater than the sum of its parts and whose contribution to research frequently arises from its juxtaposition of unique pieces of reality, not just from amassing them in one place. Automation is a very friendly means to this end.


[1.] Stephen Paul Davis, "Bibliographic Control of Special Collections: Issues and Trends," Library Trends 36 (Summer 1987): 112-13.
[2.] John B. Thomas III, "The Necessity of Standards in an Automated Environment," Library Trends 36 (Summer 1987): 130.
[3.] "ISTC," Papers of the Bibliographical Society of America 83 (Sept. 1989): 371.
[4.] Julia M. Rholes and Suzanne D. Gyeszly, "Pro-Cite as a Management Tool for a Travel and Tourism Collection," Collection Building 10, nos. 3/4 (1989): 48.
[5.] Davis, "Bibliographic Control," 119.
[6.] Nancy S. Allen, "The Art and Architecture Program of the Research Libraries Group," INSPEL 23 (1989): 141-54.
[7.] Angela Giral, "At the Confluence of Three Traditions: Architectural Drawings at the Avery Library," Library Trends 37 (Fall 1988): 233.
[8.] John E. Summers and Edward G. Summers, "The Computerized Cataloguing of Historic Watercraft: A Case Study in Information Retrieval in Museology," Journal of the American Society for Information Science 40 (May 1989): 253-61.
[9.] Jan Floris van der Wateren, "Achieving the Link Between Art Object and Documentation: Experiences in the British Architectural Library," Library Trends 37 (Fall 1988): 244.
[10.] Patricia Ann Reed and Jane Sledge, "Thinking About Museum Information," Library Trends 37 (Fall 1988): 220-31.
[11.] Pamela N. Danziger, "Picture Databases: A Practical Approach to Picture Retrieval," Database 13 (Aug. 1990): 13.
[12.] Jean Galard, "Le Videodisque au Louvre: nouveaux moyens, nouvelles pratiques," Art Libraries Journal 15, no. 2 (1990): 24; my translation.
[13.] Marie-Helene Poix, "L'Union Francaise des Arts du Costume, le Centre de Documentation, le videodisque," Art Libraries Journal 14, no. 4 (1989): 23; my translation.
[14.] Gerald Stone and Philip Sylvain, "Archi-VISTA: A New Horizon in Providing Access to Visual Records of the National Archives of Canada," Library Trends 38 (Spring 1990): 737-50.
[15.] Howard Besser, "Visual Access to Visual Images: The UC Berkeley Image Database Project," Library Trends 38 (Spring 1990): 787-98.
[16.] Clarita S. Anderson, "A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database," Journal of the American Society for Information Science 42, no. 8 (1991): 598.
[17.] Davis, "Bibliographic Control," 117.
[18.] Annette F. Waterman, "First Steps in Planning the Automation of a Slide Collection," Art Documentation 8 (Summer 1989): 62.
[19.] Richard M. Kesner, Automation for Archivists and Records Managers (Chicago: American Library Assn., 1984), 10.
