Validating a Standardized Patient Assessment Tool Using Published Professional Standards
Ellen Costello, Margaret Plack, and Joyce Maring
Journal of Physical Therapy Education
Background and Purpose. Standardized patient (SP) encounters are used to teach and assess various professional skills. Although well established in medical education, the validity of assessing student performance during SP encounters has not been adequately addressed in physical therapist education. This paper presents a method for validating an SP assessment tool for use in a high-stakes exam within a Doctor of Physical Therapy program using published professional standards.
Method Description and Evaluation. Faculty in a physical therapist education program developed an assessment tool for the first of 4 SP encounters students must pass to progress in the curriculum. Criteria were matched to specific statements in the Guide to Physical Therapist Practice, A Normative Model of Physical Therapist Professional Education, and Minimum Required Skills of Physical Therapist Graduates at Entry-Level. A survey was developed listing each assessment criterion with its corresponding statements from the 3 professional documents. Respondents' levels of agreement were assessed using a 4-point Likert scale. The chi-square (χ²) goodness-of-fit test was used to determine the level of agreement between the assessment criteria and the practice expectation statements. Additional comments were analyzed using qualitative methods.
Outcomes. Thirty academic and clinical educators completed the survey. Ninety-six percent (22/23) of the SP assessment criteria reached statistically significant levels of agreement with matching statements from at least one of the professional documents. Respondents indicated that the SP assessment tool was generally more detailed and specific than the published professional documents, but less comprehensive.
Discussion. Faculty concluded that the SP assessment criteria were generally consistent with practice expectations and thus a valid form of measurement. Differences can be attributed to the differing purposes of the professional publications and the scope of the specific SP encounter.
Conclusion. SP encounters can help to ensure the safety of aspiring clinicians prior to clinical practice. The method presented uses published professional expectations to validate a high-stakes assessment of student clinical competence during SP encounters. Using published professional documents to validate classroom assessment tools has the added benefit of connecting classroom teaching and assessment to practice.
Key Words: Standardized patient, Assessment, Validity, Content validity (lecture, lab, and clinical).
BACKGROUND AND PURPOSE
Assessment of clinical performance is an integral part of a health care professional education program. Developing valid assessment tools based on established professional expectations is especially critical to the safe transition of aspiring clinicians from the classroom to the clinical environment. Standardized patients (SPs), or simulated patients, developed by Barrows1,2 in the 1960s, have been used as a method to teach and assess undergraduate medical students' clinical skills and performance. Since the early 1980s, SP methodology also has been used in high-stakes clinical examinations, which determine students' advancement in their undergraduate medical education or residency training programs. An SP examination involves the use of actors (ie, SPs) coached by faculty to accurately and reliably portray signs and symptoms associated with a particular diagnosis or dysfunction. Students examine the patient (SP) and may be required to diagnose and/or outline a plan of care based on their findings. The SP also has been used in objective structured clinical examinations (OSCEs), limited performance assessments with multiple timed stations (5-10 minutes) in which the student performs specific tasks,3-5 and in more comprehensive assessments (clinical practice examinations) in which the student interacts with the SP in a less structured environment.1,6
Trained by the faculty, the SP typically evaluates student performance with a checklist and also may provide qualitative feedback in the form of written comments or direct SP-student feedback. The validity and reliability of SP assessment as an educational outcome in medicine has been well established.7-12 Widespread acceptance of SP methodology occurred in the late 1980s and early 1990s as the Macy Foundation funded its use in a variety of medical schools to assess undergraduate medical student clinical competency.13 Additionally, adoption of SP methodology was facilitated by the American Association of Medical Colleges in 1992 when the proceedings from its Consensus Conference on the use of standardized patients in the teaching and evaluation of clinical skills were published in Academic Medicine.14 Ultimately, the United States Medical Licensing Examination adopted the use of SP methodology in 2004 to assess clinical skill performance, which has facilitated its use as a teaching and assessment tool in undergraduate medical school curricula.15
Although the health education literature refers to the use of SPs in medicine and other disciplines,16-19 its use in physical therapist (PT) education programs is in the early stages of development. Practical examinations have long been used in physical therapist professional (entry-level) education,20-23 and the patient is usually portrayed by a fellow physical therapist student or faculty member. However, the examinee's familiarity with the person playing the patient may affect performance. The examinee may be uncomfortable role playing with someone they know, particularly a faculty member. Conversely, a fellow student acting as the patient may unconsciously assist the examinee and unknowingly provide cues, making accurate assessment of the examinee's performance difficult.24 Thus, the validity of this assessment process is hard to measure. Based on its long-time use in medical education, the use of SPs to assess clinical competence for physical therapist students seems a viable alternative.
SPs have been used in physical therapist education programs to teach a variety of important professional skills, including communication skills and patient interviewing techniques,25,26 professional core values,3 and confidence and skills in performing a physical examination.26 Paparella-Pitzel et al27 reported that 30% of surveyed US and Canadian physical therapist education programs were using SPs in their curriculum. Advantages of using SPs rather than authentic patients include the ability to standardize the patient case, the ease of availability of "patients," and cost-effectiveness.24,28 Moreover, student feedback regarding the use of SPs in testing situations in physical therapist education programs has been positive. Black and Marcoux24 reported that students stated the experience was ". . . more real" and ". . . had far more impact than previous simulations with other students."24 Costs associated with the use of SP methodology vary regionally and depend on the degree of institutional support (eg, space and equipment). Generally, SPs are paid at rates ranging from $15 to $20 per hour.29
Despite the fact that the validity and reliability of SP assessment in medical education have been well established,7-12 few studies have addressed SP assessment in physical therapist education. Only 3 published studies were found describing the use of the SP in evaluating physical therapist student clinical performance30 or clinical competence.6,31 Ladyshewsky et al30 established the construct validity of an assessment tool by comparing student physical therapist performance to practicing physical therapist performance in their examination of an SP. The assessment tool successfully discriminated between student performance and practicing clinician performance, one method of assessing its validity.32 The investigators also noted that both students and physical therapists found the SP authentic. Panzarella and Manyon6 used an integrated standardized patient case to assess student physical therapist clinical competence. The assessment was performed by the SP, the students themselves, an expert evaluator, and a criterion evaluator (researcher) using 2 clinical cases. Interrater reliability between the expert evaluator and criterion evaluator was acceptable for dichotomous scales, but mean scores varied significantly between raters on items requiring a 4-point rating scale.
Although Panzarella and Manyon6 used multiple evaluators to triangulate the data, traditionally SPs alone have assessed student performance. Concerns have been noted regarding the use of checklists by the SP as the sole means of assessing student performance.33,34 Rose and Wilkerson33 suggested that the sole use of SP-generated checklists and numerical ratings to assess medical student performance may overlook material that could form the basis for the ongoing development of student clinical competence, such as qualitative SP feedback that addresses the evolution of the student's professional role. Additionally, Hodges et al34 reported that binary checklists may not capture students' increasing levels of expertise as they progress through an undergraduate medical curriculum. Given the high-stakes nature of many of these SP encounters, having valid assessment instruments is essential.
Content validity has been described as the degree to which an instrument measures a theoretical construct;32 in this case, clinical competency. One way to validate a clinical competency assessment tool is to compare its content to documented professional practice expectations.35 Published professional guidelines have been used by other health care educators as the conceptual framework for organizing competencies to evaluate student clinical performance.36 Established professional practice guidelines well known to physical therapist educators are the Guide to Physical Therapist Practice (the Guide),37 A Normative Model of Physical Therapist Professional Education: Version 2004 (Normative Model),38 and Minimum Required Skills of Physical Therapist Graduates at Entry-Level (Minimum Skills).39 Since these guidelines in part form the basis for many physical therapist education program curricula and theoretically should describe standards of current practice, they were chosen as the reference criteria to validate the SP tool. Stickley35 used this method to validate a student physical therapist clinical education performance tool, the Physical Therapist Manual for the Assessment of Clinical Skills (PT MACS), and found that the student behaviors described in the tool did indeed describe the behaviors needed for success in a clinical education experience. Therefore, the purpose of this paper is to present a model for validating SP assessment instruments for use in a high-stakes comprehensive exam within a Doctor of Physical Therapy (DPT) program using published standards within the profession.
METHOD/MODEL DESCRIPTION AND EVALUATION
Faculty members in the Program in Physical Therapy at The George Washington University use a comprehensive approach to assessing student clinical competence that includes SPs. Students are required to successfully meet the criteria associated with an SP examination in order to progress in their course of study in 4 of the 8 semesters. SP assessment tools, constructed and scored by program faculty, are used during the examinations. The SP examination at the conclusion of the first semester was selected as an appropriate starting point to validate the program expectations for student performance during the SP encounter. Students participate in a clinical immersion experience the following semester, thus the first SP examination is considered critical in assessing student readiness for the upcoming clinical experience.
Development of the SP assessment tool.
The faculty devised 2 simple patient cases in which SPs were trained to portray impairments and functional limitations associated with a particular diagnosis. The cases were simple post-fracture and immobilization scenarios that required the students to use their history-taking skills, conduct a systems review, conduct isolated manual muscle testing and range-of-motion assessments, and instruct the patient in gait training. The cases had been piloted in the previous academic cycle, and minor changes were made based on student performance and faculty recommendations.
A corresponding assessment tool was developed in which student performance and practice expectations were based on the didactic and laboratory content covered across the curriculum during the student's first semester of study in the DPT program. Program faculty met at least 3 times to discuss and agree upon the skills and content areas to be included in the assessment tool. Since the SP examination is competency based, a simple Likert scale was chosen to reflect student performance. Student performance was evaluated as follows: 2 = performed thoroughly and correctly; 1 = performed partially/needs improvement; and 0 = performed incorrectly or not at all. The SP assessment tool included the following skills and procedures: examination skills (history taking and systems review); performance of selected tests and measures (eg, manual muscle testing and goniometry); performance of selected intervention techniques (eg, gait and transfer training); communication skills (eg, patient interaction and professionalism); and session management (eg, managing resources and time). The program faculty graded student performance using the assessment tool while watching the student examine and treat the SP via video monitors at a nearby location. Although in undergraduate medical education the SP typically grades student performance, the physical therapy faculty elected to grade performance themselves due to the high-stakes nature of the exam and the level of detail required by the assessment tool.
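For programs recording a similar rubric electronically, the 3-point competency scale maps naturally onto a simple score sheet. The sketch below is illustrative only; the item names, scores, and representation are ours, not the program's actual 23-item tool.

```python
# Minimal sketch of an SP score sheet using the 0-2 competency scale
# described above; item names and scores are hypothetical examples.
SCALE = {
    2: "performed thoroughly and correctly",
    1: "performed partially/needs improvement",
    0: "performed incorrectly or not at all",
}

score_sheet = {
    "history taking": 2,
    "systems review": 1,
    "manual muscle testing": 2,
    "goniometry": 2,
    "gait training": 1,
    "patient interaction/professionalism": 2,
    "session management": 1,
}

total = sum(score_sheet.values())
print(f"total: {total}/{2 * len(score_sheet)}")
for item, score in score_sheet.items():
    print(f"  {item}: {score} ({SCALE[score]})")
```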
Development of the validation survey. Three physical therapy faculty members with well over 30 years of combined educational experience independently matched each of the 23 SP assessment tool items, developed by the PT program faculty, to specific practice expectation statements found in each of the published professional guidelines. A match was determined if at least 2 of the 3 faculty members identified the same practice expectation from the reference guides for a given SP assessment tool item. Dialogue ensued until consensus was reached among all 3 members. If no agreed-upon match between the assessment tool item and the professional practice guidelines existed, this was noted by the researchers. Once the agreed-upon matches were identified, the survey was developed reflecting these matches. An example of an agreed-upon match between the professional practice guidelines and one of the SP assessment tool items is shown in Figure 1. SP assessment tool items that had no match with professional practice guidelines were left blank in the survey (Appendix 1).
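The 2-of-3 majority rule the raters applied is straightforward to operationalize. A minimal sketch, with hypothetical statement identifiers standing in for the actual practice expectation statements:

```python
from collections import Counter

def consensus_match(picks):
    """Return the practice expectation statement selected by at least
    2 of the 3 raters, or None when no majority exists (the item is
    then recorded as having no match in that document)."""
    statement, votes = Counter(picks).most_common(1)[0]
    return statement if votes >= 2 else None

# Hypothetical picks by the 3 faculty raters for one tool item.
print(consensus_match(["Guide 4.2", "Guide 4.2", "Guide 4.1"]))  # Guide 4.2
print(consensus_match(["Guide 4.2", "Guide 4.1", "Model 3.3"]))  # None
```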
The survey asked respondents to indicate "how well the practice expectation/skill used to assess student performance corresponds with the practice expectation/skill taken from the professional literature." A 4-point Likert scale was used as follows: 4 = corresponds very well with the concept; 3 = corresponds well with the concept; 2 = does not correspond well with the concept; and 1 = very poor correspondence with the concept. Some assessment tool items were grouped together under more global subheadings for ease of analysis. For example, "examines posture and symmetry," "examines gross ROM," "examines gross muscle strength," and "examines height and weight" were included under the subheading of "Musculoskeletal Screen." The major assessment tool headings (patient examination, systems review, tests and measures, interventions, interaction and professionalism, and session management) included as few as 1 item and as many as 8.
Each survey item contained a comment section for qualitative remarks. A senior-level faculty member with extensive knowledge of survey development then reviewed the survey, and minor revisions were made to ensure clarity. The study was approved by The George Washington University's Institutional Review Board. Return of the survey indicated informed consent to participate in the study.
Participants. In the validation process, 2 methods were used to recruit a convenience sample of educators. Program directors from the 210 US accredited professional physical therapist programs listed on the American Physical Therapy Association (APTA) Web site in 2006 were recruited. To increase the sample size, invitations were also sent to individuals subscribing to APTA's educational listserv in the fall of 2006 (approximately 600 members). Although the listserv subscribers may have included all program directors, it was assumed that additional respondents from the listserv would have experience in physical therapist education or in mentoring student physical therapists in the practice environment and would be an appropriate source to assist in validating educational tools used to assess clinical competency. Additionally, as physical therapist educators, they would be familiar with the professional practice documents used in the validation process.
Data Collection. Program directors were contacted via postal mail and listserv subscribers via e-mail with an invitation letter to participate in the survey in the fall of 2006 (Appendix 2). A hard copy of the survey was included with the initial invitation to all program directors. Listserv respondents expressing interest were mailed a hard copy of the survey. Each individual was asked to complete the survey and return it in the self-addressed, stamped envelope within 3 weeks. Participants were instructed not to write their name or address on the survey or the envelope to maintain confidentiality. A follow-up reminder to complete the survey was sent via postal mail or e-mail 4 weeks after the initial recruitment letter.
Data analysis. Descriptive statistics were used to analyze the demographics of the subjects surveyed. The chi-square (χ²) goodness-of-fit test32 was used to determine the level of agreement between the SP assessment tool practice expectations and published professional practice guidelines as determined by the subjects. Observed frequencies were compared to frequencies expected by chance.32 The number of χ² tests varied for each major heading depending on the number of items grouped in that category; tests ranged from 1 to 8. The categories or headings reflected practice expectations relating to patient/client management described by the Guide.37 The family-wise alpha level was set at P < .05 and was adjusted for each category or heading with Bonferroni's correction to protect against the increased likelihood of a Type I error associated with multiple tests on the same data. Standardized residuals were used to examine the contribution of each response selection to the overall value of χ². Absolute values > 2.00 were considered significant contributors to the overall value of χ². Positive standardized residual values indicated that the proportion of observed frequencies was greater than expected by chance; negative values indicated that the proportion was less than expected by chance. Faculty examined the results to determine whether the response categories indicating agreement between the matching statements had residual values > 2.00 for the significant χ² goodness-of-fit items, thereby supporting the inclusion of those items in the assessment tool.
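To make the chain of calculations concrete for a single tool item, the sketch below runs a χ² goodness-of-fit test against uniform chance frequencies, applies a Bonferroni-adjusted alpha, and computes standardized residuals. The response counts and the number of tests per category are hypothetical, not the study's data.

```python
from scipy.stats import chisquare

# Hypothetical counts of survey ratings 1-4 for one assessment tool item
# (1 = very poor correspondence ... 4 = corresponds very well); n = 30.
observed = [1, 2, 9, 18]
n = sum(observed)
expected = [n / 4] * 4  # uniform frequencies expected by chance

chi2, p = chisquare(f_obs=observed, f_exp=expected)

# Bonferroni correction: family-wise alpha of .05 divided by the number
# of tests under the same category heading (here assumed to be 4).
alpha = 0.05 / 4

# Standardized residuals: (observed - expected) / sqrt(expected).
# |residual| > 2.00 flags a cell as a significant contributor; a positive
# residual in the "corresponds well/very well" cells supports the item.
residuals = [(o - e) / e ** 0.5 for o, e in zip(observed, expected)]

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, significant: {p < alpha}")
print("standardized residuals:", [round(r, 2) for r in residuals])
```

With these illustrative counts, χ² ≈ 24.67 (P < .001) and the residual for the top rating cell is about 3.83, the pattern that would support retaining the item.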
Additional comments were analyzed using qualitative methods. Statements were reviewed for patterns and clusters of meaning. Codes were developed by consensus of the 3 researchers. Two of the researchers independently coded the data using the defined codes, and kappa statistics were calculated to ensure reliability.40
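Chance-corrected interrater agreement of this kind is commonly computed as Cohen's kappa. A minimal sketch, with hypothetical codes standing in for the study's actual coding scheme:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned independently by 2 researchers to the
# same 6 comments; the code labels are illustrative placeholders.
rater1 = ["less_comprehensive", "more_specific", "more_specific",
          "other", "less_comprehensive", "more_specific"]
rater2 = ["less_comprehensive", "more_specific", "more_specific",
          "other", "more_specific", "more_specific"]

kappa = cohen_kappa_score(rater1, rater2)
print(f"kappa = {kappa:.3f}")  # values above ~0.80 are typically read as excellent
```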
OUTCOMES

Twenty-one program directors responded to the initial postal mail request to participate in the survey, a 10% postal mail return rate. Of the 47 individuals from the education listserv who requested a hard copy of the survey, 9 responded. Because the survey was de-identified and did not request information on current employment, we were unable to ascertain whether the 9 listserv respondents were program directors, program faculty, or clinical instructors. However, only one respondent cited 0 years of teaching in professional physical therapist education, so it appeared that 29 of the 30 respondents were program directors or taught in PT education programs. As the listserv also included program directors, a listserv response rate was not calculated.
The average age of the respondents was 46.8 years (±8.7), and the mean number of years respondents were licensed as physical therapists was 22.7 (±8.6). Sixty-three percent of respondents earned a Bachelor of Science degree in physical therapy, and all respondents earned a graduate degree prior to their participation in this survey. The average number of years teaching in a PT education program was 12.1 (±6.2), and the average number of years of experience as a clinical instructor was 7.4 (±6.5). Table 1 and Table 2 summarize the experience and degrees of the survey respondents.
No matching criteria statements were initially identified by the researchers in the Guide for 6 of the 23 SP assessment tool items: obtaining consent to treat; hand washing prior to patient contact; optimizing patient safety; use of proper body mechanics; interaction/professionalism; and effective session management. Additionally, there was no identified match in Minimum Skills related to effective session management.
For the remaining statements with identified matches, the median score was calculated for all survey responses for each practice expectation on the SP scoring instrument and the matching criteria described in the professional practice documents (Table 3).
Using the chi-square (χ²) goodness-of-fit test, 96% (22/23) of the skills listed on the SP assessment tool reached a statistically significant level of agreement with the matching criteria from at least one of the selected professional practice documents (Table 3). Sixty-one percent (14/23) of the skills listed on the assessment tool were agreed on at a statistically significant level when compared to statements in the Normative Model; 65% (15/23) in the Guide; and 83% (19/23) in Minimum Skills. Thirty-nine percent (9/23) of the items on the SP assessment tool matched corresponding statements in all 3 selected documents; 35% (8/23) matched 2 selected documents; and 22% (5/23) matched one selected document. The standardized residuals indicate that all the categories or items with a statistically significant level of agreement on the matching expectations taken from the selected documents had more than the expected number (positive residual value > 2.00) of responses in the "corresponds very well with the concept" or "corresponds well with the concept" cells.
Participants offered 75 brief qualitative comments. Most comments were related to the Normative Model (44/75), followed by the Guide (17/75) and Minimum Skills (14/75). Interrater reliability of the coding schema was assessed with a kappa statistic of 90.6%, indicating excellent agreement. Approximately 15 codes were identified, which merged into 4 categories and resulted in 2 major themes: (1) the SP criteria were less comprehensive than the expectations established in the professional literature, and (2) the SP criteria were more specific than outlined in the established professional literature (Figure 2).
Exemplary comments suggesting that the SP assessment tool was less comprehensive than the professional documents include: "[In the Normative Model] consent is only one aspect of legal practice standards"; "[In the Normative Model] hand washing is only one of many steps in infection control"; and "The [manual muscle test] item above doesn't indicate all items as listed [in the Guide]." Conversely, participants noted that some criteria on the SP assessment tool were more specific than outlined in the professional documents, as summarized by this participant's comment: "Often times, the SP expected behaviors are much more precise and detailed than those described in the specific professional documents." More specific examples included: "The [Normative Model] does not define the parameters of screen completely"; "The [Normative Model] is not very specific"; "The [Normative Model] item is broader than MMT"; "The [Guide] is broad versus specific"; and "The [Normative Model] is much less specific than your SP behaviors."
DISCUSSION

The use of SPs in physical therapist education is advocated as an effective learning tool to facilitate the transition from the classroom to the clinic.24 However, in order to consider the routine use of the SP encounter to assess a student's ability to deliver safe and effective care to prospective patients, it is important that faculty develop assessment instruments that are valid and based on established professional expectations.
The model presented used established documents within the profession to determine the content validity of an assessment tool developed by faculty specifically to measure student competence in performing required clinical skills during an SP encounter. A matching statement was identified by the researchers for each of the SP tool items in at least one of the 3 professional documents; however, 6 SP tool items had no corresponding statements in 1 or 2 of the professional guidelines. This may be explained in part by the overall purpose of these documents. The Guide describes physical therapist practice and the physical therapist's role in health care, standardizes terminology, describes tests and measures and interventions, and delineates the preferred practice patterns.37 Specific safety precautions such as hand washing and proper body mechanics are not prescriptively identified in the Guide; rather, these skills are implicit in providing effective patient care. Similarly, "session management" is addressed in an administrative section on overall management and organizational operations rather than described in specific practice patterns.37 Likewise, Minimum Skills describes the "essential skill[s] that every physical therapist graduate should be competent in performing on patients."39 Effective session management is an integrated set of tasks required of all clinicians rather than a single skill and thus is not addressed in the document.
All but one expectation (session management) matched a corresponding expectation in at least one professional document at a statistically significant level; therefore, the researchers concluded that the criteria used to assess student competence were generally consistent with practice expectations and thus a valid form of measurement. In comparing the criteria on the SP assessment tool to statements in the professional documents, Minimum Skills had the fewest statements that did not match SP skills at a statistically significant level, followed by the Guide, and finally the Normative Model, with 9 statements that did not reach statistical significance.
The 9 criteria on the SP assessment tool that did not match statements in the Normative Model at a statistically significant level pertained to communication, screening examination, tests and measures, and session management. This lack of agreement may be a function of the document's purpose. The Normative Model was developed as a consensus document to guide educators in professional PT education. Although the practice expectations are identified, the content areas are only broadly defined using sample behavioral objectives. In addition, these objectives are stated as terminal objectives, or expectations upon graduation. Further, it may not be surprising that the Normative Model had the fewest statistically significant matches, as future revisions are currently under consideration (oral communication with J. Gandy, PT, PhD, September 2010).
Overall, the Minimum Skills document was better aligned with the specific skills required on this SP assessment tool, with a total of 83% of items matching at a statistically significant level. The format of this document lends itself to lists of examination techniques and interventions that are easily identified and reflect the instruction and learning that occurred during the student's first semester in our professional program.
Participants also observed these differences between criteria on the SP assessment tool and statements in our professional documentation as noted in the qualitative comments. They indicated that a number of the criteria on the assessment tool were more specific than noted in the professional documents. Stickley35 made a similar observation when validating a clinical education assessment tool. As an expectation of professional practice, this type of global statement or integrated set of expectations is acceptable; however, more specific and objective statements are essential in both teaching and reliably assessing student performance. Conversely, at times, participants noted that some of the criteria on the SP assessment tool were rather limited and less comprehensive than established expectations in the profession. This may be a function of the placement of this particular SP encounter (ie, end of the first semester of the DPT curriculum). Early in the curriculum students are expected to have a limited repertoire of skills, which will continue to develop across the curriculum. In addition, no one SP encounter can be expected to address all expectations of professional practice.
Assessment of student performance during an SP encounter has been used during high-stakes clinical competency examinations, particularly in medical education. As we consider the use of SPs in physical therapist education, to ensure that decisions are fair and that the required skills reflect accepted norms for professional practice, steps should be taken to validate the assessment tools used by comparing the required skills to established professional guidelines. This paper presents a method of using published professional practice expectations to validate an instrument used to assess student clinical competence using SPs. Using published professional documents to validate classroom assessment tools has the added benefit of connecting classroom teaching and assessment to practice; however, some degree of professional judgment on the part of the faculty is essential. Faculty must consider the objectives of the assessment as well as the level of the students' professional development in designing effective SP encounters and valid assessment tools. Additional published professional documents, such as the Standards of Practice for Physical Therapy41 and APTA's Guide for Professional Conduct,42 may be considered in the validation procedure depending on the level of student learning.
One limitation of this model is its use of a small sample of convenience and a single instrument unique to one DPT program; however, the methods presented here can be generalized to the validation of other SP assessment tools as well as a wide variety of assessment instruments used in professional curricula. Validation of other assessment tools would involve identification of relevant, corresponding professional guidelines to serve as the foundation for the assessment criteria in addition to faculty input regarding the specific objectives for a particular competency examination. Although the validation process is time consuming, information shared between physical therapist education programs regarding SP assessment tools could streamline the process and ultimately improve standardization of student assessment within the educational community. Future studies should incorporate this method and others to further validate instruments used in physical therapist education curricula to ensure that students are fully prepared to engage in practice before progressing to the clinical environment.
CONCLUSION

Valid tools are critical to assessment in all educational curricula; however, they are particularly critical to ensure that aspiring clinicians are safe to engage with patients before entering the clinical environment. Using valid tools to assess student clinical performance during standardized patient encounters is one means of ensuring safe practice. This paper presents a method of using published expectations of the profession to validate an instrument used to assess an SP encounter within the curriculum. Using published professional documents to validate classroom assessment tools has the added benefit of connecting classroom teaching and assessment to practice.
REFERENCES

1. Barrows HS, Williams RG, Moy HM. A comprehensive performance-based assessment of fourth-year students' clinical skills. J Med Educ. 1987;62:805-809.
2. Ferrell BG. A critical elements approach to developing checklists for a clinical performance examination. Med Educ Online. 1996;1:5. http://journals.sfu.ca/coaction/index.php/meo/article/viewFile/4286/4477. Accessed September 11, 2010.
3. Harden R, Gleeson F. Assessment of clinical competence using an objective structured clinical examination. Med Educ. 1979;13:41-54.
4. Wessel J, Williams R, Finch E, Gemus M. Reliability and validity of an objective structured clinical examination for physical therapy students. J Allied Health. 2003;32(4):266-269.
5. Regehr G, MacRae H, Reznick RK, Szalay D. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73:993-997.
6. Panzarella KJ, Manyon AT. Using the integrated standardized patient examination to assess clinical competence in physical therapist students. J Phys Ther Educ. 2008;22(3):24-32.
7. Bardes CL, Colliver JA, Alonso DR, Swartz MH. Validity of standardized-patient examination scores as an indication of faculty observed ratings. Acad Med. 1996;71(1):S82-S83.
8. Badger LW, DeGruy F, Hartman J, et al. Stability of standardized patients' performance in a study of clinical decision making. Fam Med. 1995;27:126-131.
9. Rutala PJ, Fulginiti JV, McGeah AM, Leko EO, Koff NA, Witzke DV. Predictive validity of a required multidisciplinary standardized-patient examination. Acad Med. 1992;64:S60-S62.
10. Swanson DB, Stillman PL. Use of standardized patients for teaching and assessing clinical skills. Eval Health Prof. 1990;13(1):79-103.
11. Tamblyn RM, Klass DJ, Schnabl GK, Kopelow ML. Sources of unreliability and bias in standardized-patient rating. Teach Learn Med. 1991;3(2):74-85.
12. Tamblyn RM, Klass DK, Schnabl GK, Kopelow ML. Can standardized patients predict real-patient satisfaction with the doctor-patient relationship? Teach Learn Med. 1994;6(1):36-44.
13. Wallace P. Following the threads of an innovation: the history of standardized patients in medical education. Caduceus. 1997;13:5-28.
14. Caelleigh AS, Mast TA. Proceedings of the AAMC's consensus conference on the use of standardized patients in the teaching and evaluation of clinical skills. Acad Med. 1993;68:437-483.
15. United States Medical Licensing Examination. 2010 USMLE Bulletin: Examination content. http://www.usmle.org/General_Information/bulletin/2010/content.html. Accessed September 11, 2010.
16. Robinson-Smith G, Bradley PK, Meakim C. Evaluating the use of standardized patients in undergraduate psychiatric nursing experiences. Clin Simul Nursing. 2009;5(6):E203-E211.
17. Shawler C. Standardized patients: a creative teaching strategy for psychiatric-mental health nurse practitioner students. J Nurs Educ. 2008;47(11):528-532.
18. Schoonheim-Klein M, Muijtjens A, Habets L, Manogue M, van der Vleuten C, van der Velden U. Who will pass the dental OSCE? Comparison of the Angoff and the borderline regression standard setting methods. Eur J Dent Educ. 2009;13(3):162-171.
19. Henry BW, Duellman MC, Smith TJ. Nutrition-based standardized patient sessions increased counseling awareness and confidence among dietetic interns. Top Clin Nutr. 2009;24(1):25-34.
20. Ford G, Mazzone M, Taylor K. Effect of computer-assisted instruction versus traditional modes of instruction on student learning of musculoskeletal special tests. J Phys Ther Educ. 2005;19(2):22-30.
21. Balogun JA. Predictors of academic and clinical performance in a baccalaureate physical therapy program. Phys Ther. 1988;68(2):238-242.
22. Kelly D, Brown D, Perritt L, Gardner D. A descriptive study comparing achievement of clinical education objectives and clinical performance between students participating in traditional and mock clinics. J Phys Ther Educ. 1996;10(1):26-31.
23. Kloth L, Morrison M. Supervised versus independent student laboratories. Phys Ther. 1983;63(2):225-228.
24. Black B, Marcoux BC. Feasibility of using standardized patients in a physical therapist education program: a pilot study. J Phys Ther Educ. 2002;16(2):49-56.
25. Ladyshewsky R, Gotjamanos E. Communication skill development in health professional education: the use of standardized patients in combination with a peer assessment strategy. J Allied Health. 1997;26(4):177-186.
26. Hale LS, Lewis K, Eckert RM, Wilson CM, Smith B. Standardized patients and multidisciplinary classroom instruction for physical therapist students to improve interviewing skills and attitudes about diabetes. J Phys Ther Educ. 2006;20(1):22-27.
27. Paparella-Pitzel S, Edmond S, De Caro C. The use of standardized patients in physical therapist education programs. J Phys Ther Educ. 2009;23(2):15-21.
28. Howley LD, Martindale J. The efficacy of standardized patient feedback in clinical teaching: a mixed method analysis. Med Educ Online. 2004;9:18. http://lib-journals3.lib.sfu.ca:8104/index.php/meo/article/viewFile/4356/4538. Accessed September 11, 2010.
29. Klamen DL, Yudkowsky R. Using standardized patients for formative feedback in an introduction to psychotherapy course. Acad Psych. 2002;26:168-172.
30. Ladyshewsky R, Jones M, Baker R, Nelson L. Evaluating clinical performance in physical therapy with simulated patients. J Phys Ther Educ. 2000;14(1):31-37.
31. Panzarella KJ, Manyon AT. A model for integrated assessment of clinical competence. J Allied Health. 2007;36(3):157-164.
32. Portney L, Watkins M, eds. Foundations of Clinical Research: Applications to Practice. 3rd ed. Upper Saddle River, NJ: Pearson Education Inc; 2009.
33. Rose M, Wilkerson L. Widening the lens on standardized patient assessment: what the encounter can reveal about the development of clinical competence. Acad Med. 2001;76(8):856-859.
34. Hodges B, Regehr G, McNaughton N, Tiberius R. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74(10):1129-1134.
35. Stickley LA. Content validity of a clinical education performance tool: the physical therapist manual for the assessment of clinical skills. J Allied Health. 2005;34:24-30.
36. Kaiser KL, Rudolph EJ. Achieving clarity in evaluation of community/public health nurse generalist competencies through development of a clinical performance evaluation tool. Public Health Nurs. 2003;20:216-227.
37. American Physical Therapy Association. Guide to Physical Therapist Practice: Second Edition. Alexandria, VA: American Physical Therapy Association; 2003.
38. American Physical Therapy Association. A Normative Model of Physical Therapist Professional Education: Version 2004. Alexandria, VA: American Physical Therapy Association; 2004.
39. American Physical Therapy Association. Minimum required skills of physical therapist graduates at entry-level. http://www.apta.org/AM/Template.cfm?Section=Home&CONTENTID=40622&TEMPLATE=/CM/ContentDisplay.cfm. Accessed September 11, 2010.
40. Creswell JW. Qualitative Inquiry and Research Design: Choosing Among Five Traditions. Thousand Oaks, CA: Sage Publications; 1998.
41. American Physical Therapy Association. Standards of Practice for Physical Therapy (HOD S06-03-09-10). http://www.apta.org/AM/Template.cfm?Section=Policies_and_Bylaws2&TEMPLATE=/CM/ContentDisplay.cfm&CONTENTID=25517. Accessed September 11, 2010.
42. American Physical Therapy Association. APTA Guide for Professional Conduct. http://www.apta.org/AM/Template.cfm?Section=Home&Template=/CM/HTMLDisplay.cfm&ContentID=24781. Accessed September 11, 2010.
Ellen Costello, PT, PhD, Margaret Plack, PT, EdD, and Joyce Maring, PT, EdD
Ellen Costello is an assistant professor in the Program in Physical Therapy in the School of Medicine and Health Sciences at The George Washington University, 900 23rd Street, NW, Washington, DC 20037 (email@example.com). Please address all correspondence to Ellen Costello.
Margaret Plack is an associate professor and the senior associate dean of the Health Sciences Programs at The George Washington University.
Joyce Maring is an associate professor and program director in the Program in Physical Therapy at The George Washington University.
This study was approved by The George Washington University's Institutional Review Board.
Received February 26, 2010, and accepted December 5, 2010.
Appendix 2. Sample letter of invitation
Dear Physical Therapy Educator,
A simulated patient or standardized patient (SP) is widely used in medical and health-related professional programs as a formative and/or summative assessment of the student's clinical performance. Its use in physical therapy educational programs is not well documented in the literature. An SP assessment involves the use of actors coached by faculty as patients/clients. The actor is able to accurately and reliably portray signs and symptoms associated with a particular diagnosis. The physical therapy student is asked to evaluate the "patient" using the patient/client management model.
The faculty at The George Washington University are in the process of validating a tool used to assess the student's clinical performance following the first semester of an entry-level physical therapy doctoral program. The practice expectations and skills chosen for the rubric were based on the didactic and laboratory content covered during the student's first semester of study. The goal is to create a valid and reliable SP assessment tool that can be shared with and used by those teaching in entry-level educational programs.
In order to validate the SP assessment tool, the faculty at The George Washington University is asking for your assistance as a physical therapy educator. The survey will take approximately 20 minutes of your time. If you are interested in participating in the survey, please respond to this email inquiry and a hard copy of the survey will be mailed to you directly. We will ask that you return the survey within three weeks of its receipt. Thank you in advance for your willingness to participate in this important project.…