Academic journal article Canadian Psychology

Perspectives of Internship Training Directors on the Use of Progress Monitoring Measures

Article excerpt

In the past decade, there has been an increased focus on the ongoing assessment of client change through the use of tools referred to as progress monitoring (PM) measures (e.g., Lambert, 2010). PM measures are designed to be completed by a client on a routine basis, providing clinicians with systematic feedback on client functioning that can be used to inform clinical decisions (see Overington & Ionita, 2012 for a review of measures). A growing body of research supports the use of various PM measures in practice as a means of improving treatment outcomes, particularly for clients who are not progressing as expected (e.g., Connolly Gibbons et al., 2015; Lambert & Shimokawa, 2011; Shimokawa, Lambert, & Smart, 2010; Wampold, 2015).

Unfortunately, there is a well-documented gap between research findings and practice in psychology (e.g., Cautin, 2011; Kazdin, 2008). Despite evidence supporting the benefits of continual client monitoring, surveys suggest that the majority of clinicians in North America have not incorporated this practice into their clinical work on a routine basis (Hatfield & Ogles, 2004; Ionita & Fitzpatrick, 2014; Phelps, Eisman, & Kohout, 1998). Among experienced clinicians, Hatfield and Ogles (2004) found that PM users had significantly more training in outcome assessment than nonusers, suggesting that graduate training in assessment may play a critical role in whether clinicians adopt these measures.

Unfortunately, a recent survey of clinical psychology programs accredited by the American Psychological Association (APA) found that only slightly more than half of these programs offered assessment courses that covered the monitoring and evaluation of treatment effectiveness (Ready & Veague, 2014). If trainees do not receive training in the use of PM measures in their doctoral programs, it is possible that they could receive it as part of their internship training. Predoctoral internship training represents the culmination of the doctoral training experience, and clinicians report that this training is essential preparation for the workforce (e.g., Stedman, Hatch, Schoenfeld, & Keilin, 2005). Mours, Campbell, Gathercoal, and Peterson (2009) surveyed training directors (TDs) from APA-accredited internship training programs. They found that 47% of internship programs were routinely incorporating some type of outcome measure for evaluating treatments. The most frequently selected reasons for using such a measure included tracking client progress, determining whether treatment changes are required, evaluating the program, assessing strengths and weaknesses of the client, and engaging in ethical practice. Conversely, barriers to use included a lack of resources, additional paperwork and time requirements, burden on clients, and the perception that such measures are unhelpful. Mours et al. also questioned nonusers about factors that might facilitate their use of outcome measures in the future. Training directors identified the following as facilitators: more time, increased funds and resources, evidence supporting the validity of a given measure, and additional training in outcome measures.

The Mours et al. (2009) findings were based on data collected in 2006. Since that time, there has been considerable research supporting PM measure usage. This new evidence may have significant impacts on several of the factors that Mours et al. found affected PM usage. For example, almost 40% of TDs not using measures at their sites reported that the implementation of measures would be facilitated by more evidence of their efficacy in practice. Studies in the past decade, including meta- and mega-analyses (Lambert & Shimokawa, 2011; Shimokawa et al., 2010), demonstrate strong support for some measures. As a result, limited empirical evidence may no longer represent a major barrier to usage. In addition, in their 2006 data, Mours et al. reported that almost 30% of TDs not using measures at their sites indicated that having available computerized scoring options would facilitate implementation of these tools. …
