By Terry, Ken
Medical Economics, Vol. 73, No. 13
Surveys are becoming more thorough and sophisticated. We asked experts and group administrators for tips on using them.
Group practices are measuring patient satisfaction as never before. Competition and pressure from health plans and employers are the two main reasons. Also, employers are demanding data on "quality," and patient satisfaction is the most easily accessible quality measure.
In addition, groups are doing large-scale, sophisticated surveys to get data they can use themselves for quality improvement. Such surveys are usually doctor-specific. Not only can physicians be called on the carpet if they score low in certain areas, but their compensation may also be tied to the survey results (see page 106).
Managed-care organizations conduct patient-satisfaction surveys, too. But if the MCOs share any of the results with medical groups, it's generally the group-wide data only. And that information, say observers, is often gathered from such small samples that it's meaningless on the individual-physician level. Some observers also say that the surveys are conducted so long after the last visit that patients are unlikely to recall significant details of their encounters.
Nevertheless, the National Committee for Quality Assurance requires HMOs to deliver patient-satisfaction data as a condition for accreditation. So while the NCQA hasn't yet endorsed a particular survey (it's working on that now), the plans must either conduct surveys or have the practices themselves do it. Since no practice wants to be judged on the basis of a flawed survey, the NCQA mandate gives groups another powerful reason to canvass their own patients.
What are the best ways to design and conduct a patient-satisfaction survey? Here's advice from experts, including group practice administrators.
Should you design your own survey?
If you're unwilling to take on this task, there are hundreds of vendors who will. Typically, marketing firms do the mailing or calling, whereas consultants design surveys and sampling protocols and tabulate data from the responses.
Some of these companies have also built national databases that can be used to compare a group's patient-satisfaction ratings with those of similar practices. But comparisons require use of the same questionnaire and survey method (see page 108).
Parkside Associates, a Chicago-based consulting firm, has conducted patient-satisfaction surveys for more than 200 practices, ranging from very small ones to large groups such as Houston's Kelsey-Seybold Clinic and Detroit's Henry Ford Medical Group, part of the Henry Ford Health System. Jerry Seibert, Parkside's president, is skeptical of the surveys that practices design themselves, noting that most practices of under 50 physicians that measure patient satisfaction choose to go this route. Frequently, he says, homegrown surveys are no more than cards left out on counter tops for patients to pick up, check off a few items, and scribble a comment.
This isn't true in all cases. For instance, the Fallon Clinic in Worcester, Mass., has canvassed patients since 1984. Initially, Fallon used focus groups to decide which questions to ask. Over time, the clinic has fine-tuned its 10-question mail survey to delve into particular areas of concern, such as waiting times for appointments, thoroughness of exams, and courtesy of office staff. Group administrator Lee Beaudoin feels the surveys have generated valuable data.
Many other groups employ a standard nine-item survey developed by outcomes researchers John Ware and Ron Hays. Known as the Medical Outcomes Study Visit-Specific Questionnaire, or VSQ for short, this instrument can give practices a good overall idea of how they're doing, though it's short on specifics, notes Allyson Ross Davies, a Boston-based survey consultant. "The survey provides one or two questions per key area and simply asks, 'Are we doing okay?'"
Variants of this in-office survey are being used by the Health Outcomes Institute in Bloomington, Minn. …