Anyone who follows the topics addressed by information industry conferences realizes that the issue of open access (OA) has pushed everything else aside. Rather than simply discuss the business model, however, the Association of Learned and Professional Society Publishers (ALPSP) put together an excellent program for its 21st International Learned Journals Seminar (held April 8), which addressed the topic of how scholarly publishing will operate and develop in an established Web environment.
When combined with other meeting agendas this year (not to mention my observations of listservs and blogs), this London conference leads me to think we are beginning to look seriously at ways that Internet technology can be adopted to bring new features and services to researchers. We're no longer simply reproducing the printed journal as a PDF.
Introduction and Keynote
Conference chair Alan Singleton introduced the meeting by providing a list of buzzwords for future technologies that he hoped would get some consideration from attendees. On the list were collaboratories, data mining, semantic Web, knowledge representation, ontologies, RDF (Resource Description Framework), and intelligent agents. The meeting was structured so that the scholars' and researchers' views (along with those of the various drivers for change) were presented in the morning, while the afternoon saw responses from representatives of the publishers' world.
Keynote speaker Simeon Warner shared his thoughts. Warner, currently with Cornell University, has worked with Paul Ginsparg at Los Alamos and Cornell. Although grounded in experience with Ginsparg's arXiv e-print repository, Warner's vision ranged much further into the future. He outlined the development of a scholarly communication process that is no longer a simple linear chain from author to publisher to reader, but one of constant refinement and improvement involving the author as a central player. A blurring of the boundaries between formal and informal communications is required, he suggested, in order to more closely mirror the research process itself. Today, this is typified by international collaboration, network-based working, and access to large data sets.
Warner mentioned a few areas where some of these concepts are beginning to be addressed. The Grid, for example, is a large-scale data storage and computing network used in research ranging from genomics to climate modeling. Presently, scholarly communication is separate from the Grid, but Warner wants to see interoperability between the two to allow data, code, and visualizations to become part of the scholarly record. This would be achieved by reference, not by actual inclusion of the data in the communication itself.
He also pointed to the semantic Web, in which machine understanding of context promises more accurate searching. It is the availability of such automated tools, coupled with better metadata, that Warner sees as the solution to the researcher's information-overload problem. The idea that good indexing is required for precision and recall was not new to this audience.
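The point about metadata and precision can be illustrated with a minimal sketch (my own hypothetical example, not from Warner's talk): when records carry structured, typed fields rather than free text alone, even a simple software agent can filter the literature precisely.

```python
# Hypothetical article records with structured metadata fields.
# The titles and subject tags here are illustrative only.
articles = [
    {"title": "Rethinking scholarly communication",
     "creator": "Van de Sompel, H.", "year": 2004,
     "subjects": ["scholarly communication", "repositories"]},
    {"title": "Climate model intercomparison on the Grid",
     "creator": "Doe, J.", "year": 2003,
     "subjects": ["climate modeling", "the Grid"]},
]

def find(records, subject, since):
    """Return titles tagged with `subject` and published in or after `since`.

    Matching against an explicit subject field gives high precision;
    a free-text search over titles alone would miss or over-match.
    """
    return [r["title"] for r in records
            if subject in r["subjects"] and r["year"] >= since]

print(find(articles, "scholarly communication", 2004))
# -> ['Rethinking scholarly communication']
```

Richer schemes (Dublin Core elements, RDF triples) generalize this idea, but the mechanism is the same: machine-readable fields that agents can query without human interpretation.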
Warner concluded by stating, "The challenge for publishers is to identify appropriate functions that really add value and to implement them in a truly networked fashion that will best serve the community." Much of his presentation was based on a paper published in the September 2004 issue of D-Lib Magazine ("Rethinking scholarly communication: Building the system that scholars deserve"; http://www.dlib.org/dlib/september04/vandesompel/09vandesompel.html).
The Impact of Usage Statistics
Moving from "what researchers want" to "what researchers are doing" with existing electronic resources, David Nicholas (Centre for Information Behaviour and the Evaluation of Research at University College London) described information-seeking behavior derived from usage data.
Describing himself as relatively new to the field of e-journal usage statistics, Nicholas said he was stunned to learn that some journal publishers felt threatened by usage studies. …