Academic journal article American Journal of Pharmaceutical Education

Towards an Operational Definition of Clinical Competency in Pharmacy

Article excerpt

INTRODUCTION

Competency-based education focuses on integrating competency into all facets of training and assessment. (1) In experiential education, the objectives of competency-based education are to appraise student performance in the clinical setting and to determine whether the student is sufficiently competent to enter professional practice. (2) However, there is no widely accepted method for evaluating whether these assessment programs actually discriminate between competent and noncompetent students. These objectives are especially elusive for advanced pharmacy practice experiences (APPEs). The 2008 American Association of Colleges of Pharmacy (AACP) president and the American College of Clinical Pharmacy (ACCP) Educational Affairs Committee called for a standard APPE assessment instrument. (3,4) Despite these intentions, several issues must still be addressed before student performance during APPEs can be assessed validly and reliably across practice settings and preceptors.

First, in most pharmacy programs, volunteer preceptors assess student performance during APPEs and thus act as gatekeepers to practice. These preceptors are expected to assess students' readiness to enter practice by comparing observed behavior with professional performance standards (eg, the Center for the Advancement of Pharmaceutical Education (CAPE) Educational Outcomes). However, performance levels are not sufficiently operationalized in CAPE or any other educational performance gold standard to rate a student as competent or not competent. In the volunteer preceptor model, the preceptor's practice experience positively affects the quality of student performance ratings, and preceptors judged to be better clinicians are usually better at rating the job performance of others. (5) Conversely, preceptors with little experience or substandard clinical skills produce more idiosyncratic assessment scores, which increases score variation. (6) For example, student assessments differ between nursing educators and nonfaculty nursing clinical preceptors. (7,8) Educators, who often develop assessment instruments, may hold a different concept of competency than practicing clinicians do. In short, experts' assessments are likely to be better than nonexperts' assessments, even with standardized tools.

Next, the lack of standardization increases the variability of individual preceptors' assessments. Pharmacy preceptors are urged to assess students' competence by comparing observed behavior with professional performance standards. (9) However, individual performance standards are as varied as the instruments and preceptors themselves. Preceptors commonly use an intuitive decision-making framework to assess students; as one study observed, "... 'gut feeling' seems to represent their cognitive integration of those characteristics into a decision about the overall adequacy of performance." (10) Cross and Hicks concluded that preceptors commonly apply implicit criteria in the decision-making process; rather than using objective, clinically based measures, preceptors asked themselves whether they would hire the student. (11) In Alexander's heuristic model, preceptors assessed whether, in their opinion, the student's performance reflected the desirable characteristics of an entry-level practitioner. (12) However, these decision-making frameworks are shaped by impressions of previous students and, especially, by each preceptor's personal perception of what constitutes an entry-level practitioner.

Finally, pharmacy education and its accreditors (eg, the Accreditation Council for Pharmacy Education (ACPE)) provide little guidance on what constitutes acceptable preceptor inter-rater reliability, and acceptable thresholds may be as varied as the competency assessment instruments themselves. Inter-rater reliability refers to the degree to which scores from different raters are consistent and therefore provide information useful for differentiating among individual students. …
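As an illustrative aside (not part of the original article), one widely used index of inter-rater reliability is Cohen's kappa, which corrects the observed agreement between two raters for the agreement expected by chance. The minimal sketch below computes kappa for a dichotomous competent/not-competent judgment; the two preceptors and their ratings are hypothetical and invented purely for illustration.

```python
# Minimal sketch: Cohen's kappa for two raters on the same students.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
# and p_e is the agreement expected by chance, derived from each
# rater's marginal category proportions.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: proportion of students rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal proportion for that category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical competent/not-competent ratings of ten students
# by two preceptors.
preceptor_1 = ["competent", "competent", "not", "competent", "not",
               "competent", "not", "competent", "competent", "not"]
preceptor_2 = ["competent", "not", "not", "competent", "not",
               "competent", "competent", "competent", "competent", "not"]

print(f"kappa = {cohens_kappa(preceptor_1, preceptor_2):.2f}")  # ~0.58
```

In this sketch, a kappa near 1 indicates agreement well beyond chance, while a value near 0 indicates agreement no better than chance; how high a value should count as "acceptable" for preceptor ratings is exactly the kind of threshold the article notes is left unspecified.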
