Logons and downloads offer a glimpse into user behavior, but they present only part of the picture. To create a fuller understanding, initiatives such as Project MESUR and the Eigenfactor, as well as user-oriented models and ROI studies, have emerged.
We've all seen signs like the one in figure 10: as access to Web-based resources has improved, libraries have broadcast to patrons that library-provided resources are available to them, 24/7, in the comfort of their home, office, or dorm room. Patrons seem to have gotten that message loud and clear: although academic and public libraries report that door counts have increased significantly from the dark days of the late 1990s, some statistics, such as reference requests, have never fully rebounded. As a result, libraries have shifted energy and financial resources toward realizing the potential of electronic access to increase and improve service to patrons, leading some to speculate that "electronic use is replacing physical use." (1) Researchers investigating remote library use frequently must make do with data about the number and duration of logons to specific databases. This approach to measuring use is arguably little more than the virtual equivalent of door counts and circulation statistics, and it usually does little to clarify our understanding of the role of the library and information sources in the life of the user.
Librarians recognize the need to create a deeper understanding of electronic resource usage but are hampered by the Three Billy Goats Gruff of librarianship: lack of time, lack of financial resources, and lack of technical capability. Few of the electronic resources librarians who responded to an informal survey (see chapter 4) reported that they assess electronic resource usage beyond reviewing COUNTER-generated statistics, although many expressed frustration at not being able to do so. The need for improved vendor support and skepticism about the accuracy of statistics--even in reports issued by COUNTER-compliant products--were frequently cited as impediments. In the words of one respondent, "we do keep track of sessions and searches, but have not gone further into the data than the basic numbers. Although there may be valuable information within that data, I do not have the time to mine it."
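The kind of light-touch mining the respondent describes need not be elaborate. As a purely illustrative sketch--the journal names, column layout, and counts below are invented, not drawn from any actual COUNTER report--a few lines of Python can turn raw monthly counts into simple derived signals, such as a per-title total and a crude usage trend:

```python
import csv
import io

# Hypothetical data in a COUNTER-like shape: one row per journal,
# one column per month of full-text downloads. All values are invented.
counter_csv = """journal,2009-01,2009-02,2009-03
Journal A,120,95,80
Journal B,15,22,31
"""

for row in csv.DictReader(io.StringIO(counter_csv)):
    months = [key for key in row if key != "journal"]
    counts = [int(row[m]) for m in months]
    total = sum(counts)
    # Crude trend signal: compare the last month against the first.
    trend = "rising" if counts[-1] > counts[0] else "falling or flat"
    print(f"{row['journal']}: total={total}, trend={trend}")
```

Even this toy example surfaces something the raw totals hide: a title whose overall count looks healthy may be in steady decline month over month, while a low-use title may be gaining readers.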
[FIGURE 10 OMITTED]
Although the LIS literature features regular assertions that there is much to be learned about patron use behavior from database statistics, little is reported on this topic beyond the number and nature of database logons and article downloads. Download-level statistical analysis remains the dominant approach, but several models in various stages of development offer a promising glimpse at the future of electronic resource evaluation; a number of these were discussed at a December 2009 workshop entitled "Scholarly Evaluation Metrics: Opportunities and Challenges," sponsored by the National Science Foundation (NSF). While speakers focused more specifically on alternatives to relying on citation as a gauge of scholarly research influence, several of the approaches discussed have implications for improving our understanding of library-provided electronic resources.
Alternatives to Download Statistics: Citation
Citation--the act of making reference to a journal, a particular work, or individual or collected works by a specific author--has traditionally been treated as a proxy for scholarly influence or importance. According to Wilson, "the main strategy for determining what information has actually been used over the past fifty years has been citation analysis." (2) Kurtz and his colleagues called citation "the primary bibliometric indicator of the usefulness of an academic article." (3) If we agree that an article or book that has been cited has been determined to be useful by the person making the citation, can we also assume that (a) the cited work's content has been used and (b) the citing author considers the cited work to be of high quality or importance? …