Relative Influence of Professional Counseling Journals
Delini M. Fernando and Casey A. Barrio Minton, Journal of Counseling & Development (JCD)
In recent years, there has been a virtual explosion in the academic literature discussing methods for evaluating the relative quality of journals in various fields and supporting or opposing the use of specific measures, such as the journal impact factor (JIF) and various citation analyses. Several disciplines, professional organizations, and academic fields have struggled with techniques for ranking their professional journals (Sellers, Perry, Mathiesen, & Smith, 2004). Thus, journal ranking remains controversial across disciplines (Togia & Tsigilis, 2006), across programs, and even within programs. Although the professional counseling literature has been largely silent on the issue, there have been anecdotal reports (e.g., Barrio Minton, Fernando, & Ray, 2008) that counselor educators are being called on to defend the quality and rigor of the journals in which they publish. Such a trend is consistent with reports from other fields that the "quality of the journals in which a researcher's work appears is a make or break factor when the merits for promotion and tenure are concerned" (Straub & Anderson, 2010, p. iii). Indeed, Togia and Tsigilis (2006) documented an increased role of publication metrics as indicators for assessing faculty in education, and O'Connor (2010) described "the growing hegemony of publication outputs as a means to determine the scientific worth of an individual's, department's, or entire institution's true worth" as "surprising and alarming" (p. 141).
Understanding the significance of counseling journals and their relative influence on the dissemination of knowledge can be of value to counselor educators in a variety of ways: (a) as a contributory factor in personnel decisions involving faculty selection, compensation, promotion, and tenure (Sellers et al., 2004; Smaby & Crews, 1998); (b) as information for authors who must decide which journals are the best sources of informative, practical, and relevant literature and which are the best (most influential) channels for their research and practice results (Matocha & Hanks, 1993; Thompson, 1995); (c) as information for doctoral students and new entrants to the field who must gain insight into where the field has been and where it may be heading; (d) as information for individuals, departments, and libraries that must assign scarce resources to reading and/or subscribing to journals (Journal Citation Reports [JCR], 2008); and (e) as data for editors of journals to use in evaluating their own performance and the shape of their editorial agendas (McGowan, 1994; Thompson, 1995).
Two widely used and accepted methods of ranking journals have been reputation or opinion surveys and citation scores (Sellers et al., 2004; Straub & Anderson, 2010). In the reputation or opinion survey approach, researchers develop journal rankings by surveying a panel of experts in the field (e.g., faculty, department heads, deans, journal editors, and authors) about their perceptions of the quality of particular journals. Although there is an advantage in drawing on the opinions of reputable professionals in the field, the primary limitation of this method is its subjectivity. Citation analyses, on the other hand, measure a journal's visibility by noting the extent to which its articles are cited in other publications. A commonly used citation metric in journal ranking is the JIF, calculated as the ratio of the number of citations received by a journal's articles to the number of articles that journal published over a specified time period (Lewis, 2008). This metric is often used to evaluate a journal's significance relative to other publications listed by the Institute for Scientific Information (JCR, 2008).
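Lewis (2008) describes the calculation only in general terms; as conventionally implemented in the JCR, the two-year JIF divides citations received in a given year by citable items from the two preceding years. The sketch below illustrates that convention with hypothetical figures (the journal counts shown are not taken from this article):

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Two-year JIF for year Y: citations received in Y to items the
    journal published in Y-1 and Y-2, divided by the number of citable
    items the journal published in Y-1 and Y-2."""
    if citable_items_prior_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 61 citations in 2008 to its 100 articles from 2006-2007.
print(round(impact_factor(61, 100), 2))  # prints 0.61
```

Note that both the numerator (what counts as a citation) and the denominator (what counts as a citable item) are defined by the indexing service, which is one source of the manipulation concerns discussed below.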
Although the JIF is considered by some to be an objective method for evaluating journal quality, a number of scholars have identified pitfalls in using the JIF and cautioned against treating it as a credible measure of the quality and impact of journals (e.g., Johnstone, 2007; Straub & Anderson, 2010; Togia & Tsigilis, 2006). Of particular concern are the limited number of journals represented in the JCR (e.g., only 11% of peer-reviewed education journals are represented), the influence of extraneous factors (e.g., time to publication, type of discipline), and the degree to which JIFs are subject to manipulation (Togia & Tsigilis, 2006). Haslam and Laham (2010) found a quality-quantity trade-off in which a faculty member's quantity of publications, not their quality, was associated with impact factor, a further indication that the JIF is not a direct measure of quality. Still others have reported that the JIF is being used for faculty evaluation in a way that was expressly eschewed by its developer (O'Connor, 2010; Togia & Tsigilis, 2006).
The counseling profession and its professional journals are not exempt from the potential pitfalls of embracing the JIF as a measure of journal quality. Although Barrio Minton et al. (2008) found that counselor educators in programs accredited by the Council for Accreditation of Counseling and Related Educational Programs published more than 60% of their peer-reviewed articles in the 15 journals affiliated with the American Counseling Association (ACA) over a 10-year period, only five of these journals are indexed in the JCR, and only one (Journal of Counseling & Development [JCD]) emerged among the 10 most common publication venues reported by the authors. Although professional counseling journals may be central to the profession, the 2008 JIFs seem to suggest that even the five professional counseling journals indexed within the JCR have little to no impact (JCD = 0.61, Journal of Multicultural Counseling and Development [JMCD] = 0.46, Measurement and Evaluation in Counseling and Development [MECD] = 0.69, Journal of Employment Counseling [JEC] = 0.59, and The Career Development Quarterly [CDQ] = 1.13). Across all journals indexed at that time, 2008 JIFs ranged from 0.00 to 16.22 (JCR, 2008). Together, these data raise questions regarding the degree to which the JCR is representative of journal quality and influence for professional counseling journals, which must maintain a strong practitioner focus while building an empirical literature base for the profession.
To date, there has been no bibliometric inquiry regarding the relative influence of different journals within professional counseling. Indeed, some individuals may be reluctant to explore citation trends within professional counseling journals out of fear that results of such studies may be used in ways that are harmful to smaller, practitioner-oriented journals or to the individuals who publish in them. At the same time, counselor educators may be required to provide evidence that their publications in critical professional counseling journals are indeed rigorous and meaningful to the field. In the absence of a published, data-driven resource to support their claims, these …
Publication information: Article title: Relative Influence of Professional Counseling Journals. Contributors: Fernando, Delini M. (Author); Minton, Casey A. Barrio (Author). Journal title: Journal of Counseling and Development: JCD. Volume: 89. Issue: 4. Publication date: Fall 2011. Page number: 423+. © American Counseling Association. COPYRIGHT 2011 Gale Group.