Library Technology Reports

Chapter 3: Digging into the Data: Exposing the Causes of Resolver Failure


Abstract

OpenURL link resolvers have become a core component of a library user's toolkit, yet a historical comparison suggests that they fail nearly a third of the time, and have not improved over the past six years (see table 3). This study dissects the evidence of failure types and causes for two resolver installations in order to identify and prioritize specific tasks that libraries can undertake to accomplish incremental improvements in their resolver's performance. In doing so, we hope to stimulate understanding, thinking, and action that will greatly improve the user experience for this vital tool.

**********

The preceding chapters of this report address the state of the art of OpenURL (chapter 1) and general improvements that libraries can make to their local link resolver implementations (chapter 2). This chapter reports the results of a detailed study carried out to determine link resolver accuracy rates and to tease out the causes of link resolver failure at the authors' institutions. (1) In addition to quantitative assessment of local resolver functionality, we gained valuable qualitative experience as extensive users of our own systems. The results of these two types of observation are then combined into a top ten list of tasks that should accomplish significant improvements in link resolver effectiveness at our libraries. The majority of these tasks are broadly applicable, and many can be applied individually to improve resolver effectiveness at any library.

Testing OpenURL Full Text Link Resolution Accuracy at Our Institutions

This study is based on the "real-life" approach of Wakimoto and others (2006) to allow a historical comparison with their 2004 SFX testing results. (2) Resolver results from likely keyword searches for a number of popular databases were tested from September 2009 through June 2010. Stratification by document type was added to increase exposure of non-journal resources. Each author tested seven databases, collecting results for journal articles (10), book chapters (5), books (5), dissertations (5), and newspaper articles (5) whenever citations to those document types were available in the source database (table 3). Citations that included native full text were avoided, as well as those from journals or books that had been tested previously.
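For readers unfamiliar with the mechanics being tested, the sketch below shows the general shape of an OpenURL 1.0 request for a journal article, expressed as key/encoded-value (KEV) pairs. It is purely illustrative: in the study the source URLs were generated by the tested databases themselves, and the resolver base URL, the citation values, and the rfr_id shown here are hypothetical placeholders.

# Minimal sketch, for illustration only: the study's source URLs were generated by
# the source databases, not built by hand. This shows the general shape of an
# OpenURL 1.0 key/encoded-value (KEV) request for a journal article. The resolver
# base URL, citation values, and rfr_id are hypothetical placeholders.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"   # hypothetical resolver address

citation = {
    "url_ver": "Z39.88-2004",                        # OpenURL 1.0 version identifier
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",   # journal article metadata format
    "rft.genre": "article",
    "rft.atitle": "Sample article title",            # placeholder citation data
    "rft.jtitle": "Sample Journal of Examples",
    "rft.issn": "0000-0000",
    "rft.volume": "12",
    "rft.issue": "3",
    "rft.spage": "45",
    "rft.date": "2009",
    "rfr_id": "info:sid/exampledb.example.com:sampledb",  # hypothetical source database
}

source_url = RESOLVER_BASE + "?" + urlencode(citation)
print(source_url)   # the kind of "source URL" whose resolver menu would then be inspected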

Overall, 351 source URLs were tested in this study. About half of the resulting resolver menus offered one or more online full text links (n = 169 [48%]; average number of full text links = 2.01). The other half of the menus indicated that no full text was available, offering links to search the catalog, populate an ILL request, and search Google Scholar instead (table 4). Every full text link was checked for access (n = 343), and Google Scholar and Google were searched for each result with no full text available (n = 182). The results were then coded into six categories, mirroring Wakimoto, Walker, and Dabbour's designations. (3) Their results are included for comparison (table 5).
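For context, a first-pass automated check of such links might resemble the sketch below. This was not the method used in the study, in which every full text link was checked by a person; an automated pass can only flag links that fail outright, not links that deliver the wrong item or land on a search page. The function name and User-Agent string are invented for the example.

# Illustrative sketch only -- not the study's method, in which each full text link
# was checked manually. An automated pass like this can only catch links that fail
# outright (HTTP errors, timeouts); judging whether the correct full text is
# actually delivered still requires a person.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def quick_link_check(url, timeout=15):
    """Return a rough status label for a resolver-supplied full text link."""
    try:
        req = Request(url, headers={"User-Agent": "resolver-audit-sketch/0.1"})
        with urlopen(req, timeout=timeout) as response:
            return "reachable (HTTP %d)" % response.status
    except HTTPError as err:
        return "HTTP error %d" % err.code        # e.g. 404 suggests a false positive link
    except URLError as err:
        return "unreachable (%s)" % err.reason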

Wakimoto and others (2006) reported that about 20 percent of their resolver results were erroneous. Roughly half of the errors incorrectly indicated availability (false positives), while the other half incorrectly failed to indicate availability (false negatives). Our result rates for these errors were similar. For this study, however, the category "Required search or browse for full text" was reassigned from the Correct group to the Error group to reflect reduced user willingness or ability to further navigate to the full text. When the target full text item or abstract with full text links is not presented on the target page, most users and even many librarians perceive the resolver as having failed. This category increases the total error rate by nearly 70 percent, averaged across both datasets. This results in total error rates of 35 percent for the Wakimoto and others dataset and 29 percent for our dataset (table 5). …
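To make the arithmetic of the reassignment explicit for one of the datasets, using the rounded figures quoted above for Wakimoto and others (an error rate of about 20 percent before the reassignment and 35 percent after), the relative increase is

\[
\frac{E_{\text{after}} - E_{\text{before}}}{E_{\text{before}}} \approx \frac{0.35 - 0.20}{0.20} = 0.75,
\]

or roughly three-quarters for that dataset; the somewhat smaller increase in our own dataset brings the average across the two to the "nearly 70 percent" noted above.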
