Relative Influence of Professional Counseling Journals

Article excerpt

In recent years, there has been a virtual explosion in the academic literature discussing methods for evaluating the relative quality of journals in various fields and supporting or opposing the use of specific measures, such as the journal impact factor (JIF) and various citation analyses. Several disciplines, professional organizations, and academic fields have struggled with techniques for ranking their professional journals (Sellers, Perry, Mathiesen, & Smith, 2004). Thus, journal ranking remains controversial across disciplines (Togia & Tsigilis, 2006), across programs, and even within programs. Although the professional counseling literature has been largely silent on the issue, there have been anecdotal reports (e.g., Barrio Minton, Fernando, & Ray, 2008) that counselor educators are being called on to defend the quality and rigor of the journals in which they publish. Such a trend is consistent with reports from other fields that the "quality of the journals in which a researcher's work appears is a make or break factor when the merits for promotion and tenure are concerned" (Straub & Anderson, 2010, p. iii). Indeed, Togia and Tsigilis (2006) documented an increased role of publication metrics as indicators for assessing faculty in education, and O'Connor (2010) described "the growing hegemony of publication outputs as a means to determine the scientific worth of an individual's, department's, or entire institution's true worth" as "surprising and alarming" (p. 141).

Understanding the significance of counseling journals and their relative influence on the dissemination of knowledge can be of value to counselor educators in a variety of ways: (a) as a contributory factor in personnel decisions involving faculty selection, compensation, promotion, and tenure (Sellers et al., 2004; Smaby & Crews, 1998); (b) as information for authors who must decide which journals are the best sources of informative, practical, and relevant literature and which are the best (most influential) channels for their research and practice results (Matocha & Hanks, 1993; Thompson, 1995); (c) as information for doctoral students and new entrants to the field who must gain insight into where the field has been and where it may be heading; (d) as information for individuals, departments, and libraries that must assign scarce resources to reading and/or subscribing to journals (Journal Citation Reports [JCR], 2008); and (e) as data for editors of journals to use in evaluating their own performance and the shape of their editorial agendas (McGowan, 1994; Thompson, 1995).

Two widely used and accepted methods for ranking journals have been reputation or opinion surveys and citation scores (Sellers et al., 2004; Straub & Anderson, 2010). Within the reputation or opinion survey approach, researchers develop journal rankings by surveying a panel of experts in the field (e.g., faculty, department heads, deans, journal editors, and authors) about their perceptions of the quality of particular journals. Although there is an advantage in drawing on the opinions of recognized professionals in the field, the primary limitation of this method is its subjectivity. Citation analyses, by contrast, measure a journal's visibility by noting the extent to which its articles are cited in other publications. A commonly used citation method in journal ranking is the JIF, which is calculated as the ratio of the number of citations received by articles published in a given journal to the number of articles that journal published over a specified time period (Lewis, 2008). This method is often used to evaluate a journal's significance compared with other publications listed by the Institute for Scientific Information (JCR, 2008).
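To make the arithmetic concrete, the short Python sketch below computes a two-year impact factor of the kind described above; the year, citation counts, and article counts are hypothetical values chosen only for illustration and are not drawn from any journal discussed in this article.

    # Hypothetical illustration of a two-year journal impact factor calculation.
    # None of these numbers describe a real journal.
    citations_in_2008_to_2006_2007_articles = 150  # citations received in 2008
    articles_published_2006_2007 = 100             # citable items from the two prior years

    jif_2008 = citations_in_2008_to_2006_2007_articles / articles_published_2006_2007
    print(f"Hypothetical 2008 impact factor: {jif_2008:.2f}")  # prints 1.50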

Although the JIF is considered by some to be an objective method for evaluating journal quality, a number of scholars have identified pitfalls in, and cautions against, using the JIF as a credible measure of the quality and impact of journals (e.g., …