The diverse avenues to information that are rapidly emerging challenge the role and very survival of special libraries. Information-seekers who once turned to their corporate or agency library for help may now be prompted to use electronic or commercial document delivery services, to purchase individualized access rights in the form of database subscriptions or purchased books, or to accept abbreviated abstracts instead of retrieving full-text articles. Not only are libraries competing for customers within this changing information delivery marketplace, they are also re-examining their management, their manner of justifying budgets, and their very existence. To compete effectively and survive, special libraries may profit by using the managerial and marketing tools and approaches developed in business, such as total quality management (TQM). TQM emphasizes providing quality services as perceived from the customer's point of view, not the management point of view. Heavily used in Japan, TQM has been adopted in manufacturing and service industries in the United States over the last 20 years. Although quality considerations have long been a concern of information professionals, TQM has not been widely applied in libraries.
A major stumbling block to implementing TQM in special libraries is the lack of an adequate, transferable instrument for assessing service quality from a customer's point of view. Lyon has issued a call for standardized instruments oriented to specialized reference services. Most questionnaires are developed for a specific study with no attempt to devise a more generic instrument. It is especially important that such an instrument provide adequate feedback to allow libraries to determine the criteria that library users value in information services. Specific feedback allows libraries to modify services to meet the customers' criteria. Developing an instrument is costly and perhaps unnecessary if instruments already exist which are appropriate for, or can be adapted for, special libraries.
This paper surveys the marketing literature to identify models and instruments that have been used in service industries to measure service quality and assesses their applicability to special libraries. It marks the first stage of a project funded by the Special Libraries Association to develop an instrument for assessing service quality in special libraries. Research in a range of service industries has pointed to numerous common factors characterizing all types of service industries. As service organizations, special libraries and information centers can benefit from models and techniques developed and widely used in service industries. The instrument finally suggested in the project may provide a basis for comparing special library performance with the performance of other service industries and will help individual libraries to implement TQM and thus become more competitive.
Characteristics of Services
Services differ from goods in several ways that make judging service quality difficult. A good is a tangible object. A service is a performance or an act and thus is intangible. Within product lines, goods have great consistency and are often produced to meet certain standards or guidelines. Dependent on the interaction between client and service provider, services, even of the same type, are subject to greater variation than goods. With goods, production is separate from consumption. The customer is present only at the final stage. With services, the production and consumption stages are often inseparable. As a result, the client is often present throughout the service encounter. Services, then, are characterized by intangibility, heterogeneity, and inseparability of production and consumption.
Information services are perhaps among the most difficult to measure in terms of both customer satisfaction and service quality because of the perceptual overlap between information as a commodity and information as a process. Whitehall, for example, says his literature review is "about the quality of a service, not just the quality of information." In the course of performing services, providers in many service sectors often generate a tangible output. An accountant, for example, audits a firm's books and presents the results in a report. An information specialist searches a computerized database and generates a bibliography for a client. In the information related literature, the values assigned to the product are often confounded with the values assigned to the service.
Definitions of Service Quality
Service quality is a judgment about the ability of a service to fulfill its task. Orr defined quality as "how good is the service?" In some cases, the definition is an operational one designed to facilitate continued research. Parasuraman, Zeithaml, and Berry refer to it as "a form of attitude, related but not equivalent to satisfaction, [which] . . . results from a comparison of expectations with perceptions of performance."
The reference to "satisfaction" in the latter definition is important. The relationship between customer satisfaction and service quality is an ongoing question in service marketing. Researchers agree that customer satisfaction refers to a judgment made about a specific transaction. Service quality, on the other hand, is a more generalized, enduring judgment based in part on previous encounters which themselves resulted in satisfaction judgments. A client could thus have an occasional unsatisfactory encounter with an organization that he nonetheless continues to rate highly on service quality. This transaction/long-term judgment distinction is not always clear in the library literature.
Two approaches to measuring service quality have evolved in service marketing over the last 10 to 15 years. The dominant one, referred to as the P-E approach, views service quality as the gap between expectations (E) and performance (P). Critics have raised several questions about this approach, however, and measures based on performance alone have developed recently. Each approach will be addressed along with an instrument that has been developed for use with it.
Measuring Service Quality: Performance-Minus-Expectations Approach
A significant development influencing the study of service quality and the acceptance of the P-E approach is the "gaps model" formulated by Parasuraman, Zeithaml, and Berry in 1985. This model is grounded in disconfirmation theory, which is also a prevalent approach to studying customer satisfaction. Disconfirmation theory as applied in service quality posits that, before using a service, a client has certain expectations about it. After the service encounter, he compares those expectations with actual performance, and his perception is either confirmed (if they match), negatively disconfirmed (if the perception is lower than the expectation), or positively disconfirmed (if the perception is higher than the expectation). The essence of the theory is a comparison between expectations and performance.
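The disconfirmation comparison can be sketched as a simple rule. The function below is a hypothetical illustration, not part of the theory's formal statement; the scale and example values are invented:

```python
def disconfirmation(expectation: float, perception: float) -> str:
    """Classify a service encounter under disconfirmation theory.

    Both scores are assumed to lie on the same rating scale,
    e.g., a 7-point Likert scale.
    """
    if perception > expectation:
        return "positively disconfirmed"   # performance exceeded expectations
    elif perception < expectation:
        return "negatively disconfirmed"   # performance fell short
    else:
        return "confirmed"                 # performance matched expectations

# A client who expected a 6 but perceived a 4 is negatively disconfirmed.
print(disconfirmation(6, 4))
```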
The gaps model focuses on several service gaps that affect service quality: between customers' and management's perception of service expectations (Gap 1); between management's perception of customers' expectations and service-quality specifications (Gap 2); between service-quality specifications and actual service delivery (Gap 3); and between actual service delivery and what is communicated to customers about it (Gap 4). The quality gap (Gap 5) can be closed by reducing the four internal gaps found within the management of a service organization.(14) In measuring service quality and applying this model, however, the emphasis has been on the "expected service-perceived service gap" (P-E).
In 1988, to test the gaps model, Parasuraman, Zeithaml, and Berry devised the SERVQUAL instrument for measuring service quality. They revised it slightly in 1991. Since the gaps model was derived from studies in several different service industries, the authors intentionally designed a "generic instrument with good reliability and validity and broad applicability." They envisioned the instrument being used across different types of service institutions, modified slightly as needed. It has become the most widely used instrument for measuring service quality in settings such as banks, car service shops, accounting firms, dry cleaning firms, educational institutions, hospitals, hotels and restaurants, pest control firms, public recreation programs, and travel agencies. No other instrument for measuring service quality has been tested as stringently and comprehensively as SERVQUAL.
In SERVQUAL, the client responds to the same 22 questions twice: first, to establish his expectations of the ideal service; then, to note his perceptions of the actual service provided by a particular firm. Each response is scored on a seven-point Likert scale. Difference scores are computed by subtracting the expectations score from the perceptions score, so scores can range from -6 to +6. The higher the score, the higher the perception of quality.
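The scoring just described amounts to simple arithmetic over paired responses. The sketch below uses hypothetical item values (SERVQUAL itself specifies 22 paired items on a 7-point scale; only three invented items are shown):

```python
def servqual_gaps(expectations, perceptions):
    """Compute per-item SERVQUAL difference scores (P - E).

    Each response is on a 7-point Likert scale, so each gap
    falls in the range -6 (worst) to +6 (best).
    """
    return [p - e for e, p in zip(expectations, perceptions)]

# Hypothetical responses to three of the 22 items:
E = [7, 6, 5]   # expectations of an excellent service
P = [5, 6, 7]   # perceptions of the actual service
gaps = servqual_gaps(E, P)
print(gaps)                   # [-2, 0, 2]
print(sum(gaps) / len(gaps))  # mean gap; higher means higher perceived quality
```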
The 22 items elicit information about service quality in connection with the first five dimensions identified in Table 1 on page 39. The first three dimensions and the asterisked dimensions were considered in the original qualitative research to develop SERVQUAL. In developing the scale, overlaps in the factors or dimensions were eliminated, resulting in the first five dimensions. Courtesy, for example, is included in the Assurance dimension.
[TABULAR DATA OMITTED]

SERVQUAL is often used in conjunction with other questions which assess overall service quality or evidence of subsequent action, e.g., recommending the service to a friend or willingness to use the service again.
Several criticisms have arisen about the SERVQUAL scale as a result of its widespread use and close scrutiny by other researchers. Some are more important than others, and most have been rebutted or addressed in subsequent articles by Parasuraman and his colleagues. The criticisms have focused on: the scale's theoretical base, the comparison norms for "expectations," the number and generic nature of the dimensions, the instrument's length, the ease of administration and analysis of data, the need to use both perceptions and expectations data, the validity of difference scores as data, and the basis for inferring that higher scores always indicate higher quality.
In numerous studies, the researchers have reworded items, substituted or inserted new items, and removed items from the scale to make it more appropriate for the service industry being studied. Such modifications are not considered criticisms of SERVQUAL since this kind of use was anticipated and suggested by the original developers. As Parasuraman and his collaborators note, however, criticisms and findings questioning the number and nature of the dimensions may arise from modifying the scale so much that its integrity is undermined.
Researchers have rarely disputed the validity of the individual 22 items or statements used in the revised scale, considering them well-supported by the scale development and revision procedures and through use in subsequent studies. As a result, the actual SERVQUAL items serve as the basis for other instruments.
Measuring Service Quality: Performance-Based Approach
Several of the criticisms of SERVQUAL can be remedied without rejecting the perception of service quality as a gap between performance and expectations or the P-E approach. Brown and others, for example, tested an alternative to difference scores. Addressing definitional problems with the term "expectations," Parasuraman and his colleagues have since clarified expectations as "normative," not prescriptive. The expectations represent the qualities an excellent service organization would have, not what it should have. Word changes in the 1991 revision establish that orientation more clearly.
But other criticisms of SERVQUAL are interrelated and originate in its definition of service quality as a performance/expectations gap. Once this theoretical approach is accepted, and assuming the validity of the dimensions, the instrument must measure both expectations and performance through a range of items, resulting in a long instrument. Various researchers have discovered that performance scores alone have a greater predictive value for overall assessments of service quality and thus question the need for both measures. As a result, within the last few years, several authors have developed measures based on performance alone.
The movement to a performance-based measure is not strictly a pragmatic response to difficulties with the SERVQUAL instrument. Proponents of the performance-based methods contend that multi-attribute attitude theory, especially the "adequacy-importance" model, is more appropriate than the gaps model and disconfirmation theory if the intent is to predict actual behavior or behavioral intent. The basic premise of multi-attribute attitude theory is that clients form attitudes about service or product quality on the basis of service or product attributes. This theory better explains relationships between service quality, customer satisfaction, and purchase or use intentions.
SERVPERF is an instrument used to generate a performance-based measure of service quality. It was developed by Cronin and Taylor in 1992 in a study of four service sectors (banking, pest control, dry cleaning, and fast food). Operationally, SERVPERF in its final form omits the expectation items section of SERVQUAL. SERVPERF consists of the 22 items questioning customers' perceptions of service, worded exactly as in SERVQUAL. It may include questions to assess the importance of the items' dimensions and several questions about overall service quality, satisfaction, and purchase intention. As in SERVQUAL, the questions can be modified and additional items included. SERVPERF is shorter and does not require the use of difference scores for analysis.
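Operationally, the contrast with SERVQUAL is small enough to sketch: SERVPERF scores the perception items directly, optionally weighting them by stated importance. This is a hypothetical illustration; the item values and weights below are invented:

```python
def servperf_score(perceptions, importance=None):
    """Performance-only service-quality score.

    perceptions: 7-point Likert ratings for the perception items.
    importance: optional weights; if omitted, items count equally.
    """
    if importance is None:
        importance = [1.0] * len(perceptions)
    total = sum(w * p for w, p in zip(importance, perceptions))
    return total / sum(importance)

P = [5, 6, 7]            # hypothetical perception ratings
W = [0.5, 0.3, 0.2]      # hypothetical importance weights
print(servperf_score(P))     # unweighted mean: 6.0
print(servperf_score(P, W))  # weighted mean: 5.7
```

No expectations section is administered, which is why the instrument is roughly half the length of SERVQUAL and needs no difference scores.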
A literature review indicates no use of SERVPERF by other researchers. Its relatively recent appearance may mean that such use simply has not been reported.
Applicability of Service Marketing Scales to Special Libraries
Libraries and information services can benefit significantly by stressing their commonalities with, rather than their differences from, other segments of the service industry. Numerous typologies of services have been devised to identify similarities and differences across sectors of the service industry. Based on Lovelock's typology of services, special libraries and information centers offer intangible services directed at people's minds. They often have a membership relationship with their clients and usually provide services in discrete transactions. The services themselves are highly customized, and staff exercise considerable judgment in meeting individual needs. The extent of demand probably fluctuates only narrowly over time, and, in most cases, peak demand can usually be met without major delay. The nature of the interaction between the client and the special library differs: sometimes the client comes to the library; sometimes the library delivers services to the customer. Services are delivered in person but also through electronic communications or mail. Services may be provided at a single site or at multiple sites.
The services marketing research is conceptually and methodologically rich. In addition, by moving to that framework, library administrators will be adopting a perspective that is more prevalent within many of their parent institutions.
Aside from these general factors, several others were considered in determining the potential appropriateness of service quality instruments in the service marketing area to special libraries:
1. The complexity of the instrument.
2. The ease of administration and analysis of results.
3. Its orientation to overall performance quality or to quality of specific services.
4. Its usefulness for predicting overall variance.
5. Its usefulness for providing diagnostic information.
6. Its usefulness for providing a basis for comparisons across a range of types of libraries and other service organizations.
Of the two instruments described in this paper, SERVPERF is less complex, shorter, easier to administer, and better in predicting overall variance. SERVQUAL, on the other hand, is attractive because it is more comprehensive. It provides better diagnostic information, and, if desired, the performance data alone can be used to explain overall variance. Because it has been used more widely, SERVQUAL also allows for greater comparability with other service organizations.
Both instruments are oriented to overall performance quality, not to the quality of specific services; they are generic instruments. Both draw on the same items, whose wording would have to be modified slightly to fit library settings.
The dimensions covered by both are the same and seem appropriate for libraries. One medical library study of online services has already accepted the SERVQUAL dimensions. In a related study, Danuta Nitecki is testing the applicability of SERVQUAL to several services in an academic library. It seems reasonable to base the project instrument on one of these two. Their strengths outweigh their deficiencies, and the rigor with which at least SERVQUAL has been developed can be matched only with considerable effort. Both are flexible instruments and can be adapted as necessary for special libraries.
An important unknown which prevents outright adoption of one of these instruments aside from minor word changes is the extent to which the dimensions covered by SERVPERF and SERVQUAL adequately reflect the range of values library clients attach to information services. If they do not, some items may have to be added. A major task in the next stage of the project, preliminary to the addition of any new items, is a more thorough comparison of values presented in the library literature related to library service, values attached to services in the service marketing literature, and the dimensions covered in the development of SERVQUAL and SERVPERF. This analysis will be supplemented by focus group meetings with special library clients and a computer-mediated discussion with special librarians.
Acknowledgment: This research is funded by a research grant from the Special Libraries Association.
1 Deming, W. Edward. Out of the Crisis. Boston, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1986; Walton, Mary. The Deming Management Method. New York: Perigee Books, 1986; Lovelock, Christopher H., Services Marketing. 2d ed. Englewood Cliffs, NJ: Prentice Hall, 1991.
2 For evidence of long-standing concerns about quality, see, for example, Baker, Sharon L. and F. Wilfrid Lancaster. The Measurement and Evaluation of Library Services. 2d ed. Arlington, VA: Information Resources Press, 1991; Orr, R.H. "Measuring the Goodness of Library Services: A General Framework for Considering Quantitative Measures." Journal of Documentation 29(3): 315-332 (September 1973). For specific comments about TQM, see Johannsen, Carl Gustav. "The Use of Quality Control Principles and Methods in Library and Information Science Theory and Practice." Libri 42(4): 283-295 (October-December 1992); Mackey, Terry and Kitty Mackey. "Think Quality! The Deming Approach Does Work in Libraries." Library Journal 117(9): 57-61 (1992); Riggs, Donald. "Strategic Quality Management in Libraries," in Advances in Librarianship, v. 16, Irene P. Gooden, ed. New York: Academic Press, 1992. pp. 93-105; Shaughnessy, Thomas W. "Benchmarking, Total Quality Management, and Libraries." Library Administration and Management 7(1): 7-12 (Winter 1993).
3 Lyon, Elizabeth. "The Questionnaire - A Quality Control Method for Online Searching?" Health Libraries Review 6(1): 3-19 (1989).
4 Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry. "A Conceptual Model of Service Quality and Its Implications for Further Research." Journal of Marketing 49(4): 41-50 (Fall 1985).
5 Whitehall, Tom. "Quality in Library and Information Service: A Review." Library Management 13(5): 23-35 (1992).
6 Dalton, Gwenda M.E. "Quantitative Approach to User Satisfaction in Reference Service Evaluation." South African Journal of Library and Information Science 60(2): 89-103 (June 1992); Taylor, Robert S. Value-Added Processes in Information Systems. Norwood, NJ: Ablex Pub. Corp., 1986; Whitehall.
8 Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry. "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality." Journal of Retailing 64(1): 12-40 (Spring 1988).
9 Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry. "Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research." Journal of Marketing, 58(1): 111-124 (January 1994).
10 Bitner, Mary Jo. "Evaluating Service Encounters: The Effects of Physical Surroundings and Employee Responses." Journal of Marketing 54(2): 69-82 (April 1990); Bolton, Ruth N. and James H. Drew. "A Multistage Model of Customers' Assessments of Service Quality and Value." Journal of Consumer Research 17(4): 375-384 (March 1991); Swan, John. "Consumer Satisfaction Research and Theory: Current Status and Future Directions," in International Fare in Consumer Satisfaction and Complaining Behavior. Ralph L. Day and H. Keith Hunt, eds. Bloomington, IN: School of Business, Indiana University, 1983. pp. 124-129.
11 Dalton; Tessier, Judith A., Wayne W. Crouch, and Pauline Atherton. "New Measures of User Satisfaction with Computer-Based Literature Searches." Special Libraries 68(11): 383-389 (November 1977).
12 Parasuraman, Zeithaml, and Berry, 1985; Zeithaml, Valarie A., A. Parasuraman, and Leonard L. Berry. Delivering Quality Service: Balancing Customer Perceptions and Expectations. New York: The Free Press, 1990.
13 Bitner; Churchill, Gilbert A., Jr., and Carol Surprenant. "An Investigation into the Determinants of Customer Satisfaction." Journal of Marketing Research 19(4): 491-504 (November 1982); Oliver, Richard L. "A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions." Journal of Marketing Research 17(4): 460-469 (November 1980); Oliver, Richard L. "Measurement and Evaluation of Satisfaction Processes in Retail Settings." Journal of Retailing 57(1): 25-48 (Fall 1981). See Dalton for a library-related application.
14 Parasuraman, Zeithaml, and Berry, 1985.
15 Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry. "Refinement and Reassessment of the SERVQUAL Scale." Journal of Retailing 67(4): 420-450 (Winter 1991).
16 Cronin and Taylor, 1992.
17 Bouman, Marcel and Ton van der Wiele. "Measuring Service Quality in the Car Service Industry: Building and Testing an Instrument." International Journal of Service Industry Management 3(4): 4-16 (1992).
18 Bojanic, David C. "Quality Measurement in Professional Services Firms." Journal of Professional Services Marketing 7(2): 27-36 (1991).
19 Cronin and Taylor, 1992.
20 Rigotti, Stefano and Leyland Pitt. "SERVQUAL as a Measuring Instrument for Service Provider Gaps in Business Schools." Management Research News: MRN 15(3): 9-17 (1992).
21 Babakus, Emin and W. Glynn Mangold. "Adapting the SERVQUAL Scale to Hospital Services: An Empirical Investigation." Health Services Research 26(6): 767-786 (1992).
22 Saleh, Farouk, and Chris Ryan. "Analysing Service Quality in the Hospitality Industry Using the SERVQUAL Model." Service Industries Journal 11(3): 324-345 (1991).
23 Cronin and Taylor, 1992.
24 Crompton, John L. and Kelly J. Mackay. "Users' Perceptions of the Relative Importance of Service Quality Dimensions in Selected Public Recreation Programs." Leisure Sciences 11(4).
25 Fick, G.R. and J.R.B. Ritchie. "Measuring Service Quality in the Travel and Tourism Industry." Journal of Travel Research 30(2): 2-9 (1991).
26 Parasuraman, Zeithaml, and Berry, 1988; Parasuraman, Zeithaml, and Berry, 1991.
27 For reactions by the developers of SERVQUAL to the criticisms, see: Parasuraman, A., Leonard L. Berry, and Valarie A. Zeithaml. "More on Improving Service Quality Measurement." Journal of Retailing 69(1): 140-147 (Spring 1993); Parasuraman, Berry, and Zeithaml, 1991; Parasuraman, Zeithaml, and Berry, 1994. Responses by the critics are: Cronin, J. Joseph and Steven A. Taylor. "SERVPERF versus SERVQUAL: Reconciling Performance-Based and Perceptions Minus Expectations of Service Quality." Journal of Marketing 58(1): 125-131 (January 1994); Teas, R. Kenneth. "Expectations as a Comparison Standard in Measuring Service Quality: An Assessment of a Reassessment." Journal of Marketing 58(1): 132-139 (January 1994).
28 Cronin, J. Joseph, Jr., and Steven A. Taylor. "Measuring Service Quality: A Reexamination and Extension." Journal of Marketing 56(3): 55-68 (July 1992); Cronin and Taylor, 1994.
30 Carman, J.M. "Consumer Perceptions of Service Quality - An Assessment of the SERVQUAL Dimensions." Journal of Retailing 66(1): 33-55 (1990).
31 Cronin and Taylor, 1992.
32 Ibid.; Ennew, Christine T., Geoffrey V. Reed, and Martin R. Binks. "Importance-Performance Analysis and the Measurement of Service Quality." European Journal of Marketing 27(2): 59-70 (1993).
34 Babakus, Emin and Gregory W. Boller. "An Empirical Assessment of the SERVQUAL Scale." Journal of Business Research 24(3): 253-268 (May 1992); Brown, Tom J., Gilbert A. Churchill, Jr., and J. Paul Peter. "Improving the Measurement of Service Quality." Journal of Retailing 69(1): 127-139 (Spring 1993); Carman; Peter, J. Paul, Gilbert A. Churchill Jr., and Tom J. Brown. "Caution in the Use of Difference Scores in Consumer Research." Journal of Consumer Research 19(4): 655-662 (March 1993).
35 Teas, R. Kenneth. "Expectations, Performance Evaluation, and Consumers' Perceptions of Quality." Journal of Marketing 57(4): 18-34 (October 1993); Teas, 1994.
36 Boulding and others; Carman. New items are noted in the text of these articles.
37 Parasuraman, Berry, and Zeithaml, 1991, p. 445.
38 Brown, Churchill, and Peter. For a rebuttal to their concerns, see Parasuraman, Berry, and Zeithaml, 1993.
39 Parasuraman, Berry, and Zeithaml, 1991.
40 Babakus and Boller; Babakus and Mangold; Bolton and Drew; Churchill and Surprenant; Cronin and Taylor, 1992.
41 Cronin and Taylor, 1992.
43 SERVPERF is described here as it would be for subsequent studies. To test their argument, the authors actually used SERVQUAL, supplemented by questions to measure the importance of the SERVQUAL items and single-item scales to measure overall service quality, customer satisfaction, and purchase intentions. Ibid.
44 The importance weight questions were adapted from a similar section in the 1988 version of SERVQUAL, dropped in the 1991 version.
46 Humphries, Anne Wood and Gretchen V. Naisawald. "Developing a Quality Assurance Program for Online Services." Bulletin of the Medical Library Association 79(3): 263-270 (July 1991).
47 Nitecki, Danuta. "An Assessment of the Applicability of SERVQUAL Dimensions as Customer-Based Criteria for Evaluating Quality of Services in an Academic Library." (Dissertation research in progress)
Marilyn Domas White is Associate Professor and Eileen G. Abels is Assistant Professor at the College of Library and Information Services at the University of Maryland in College Park, MD.…