As groups increasingly rely on patients' opinions, physicians are beginning to question the accuracy of the ratings.
Like it or not (and many doctors don't), patient-satisfaction surveys are increasingly common among large and small groups alike. Demanded by HMOs and employers, ratings of physicians' "people skills" are an essential marketing tool for groups seeking to win managed-care contracts.
But while group administrators and marketing managers embrace patient-satisfaction surveys, doctors are beginning to question the accuracy and fairness of the ratings. After all, they're the ones with their reputations on the line.
Not surprisingly, those who score poorly in the surveys often dismiss the results as mere opinion, and therefore invalid. Others resent being rated by what they consider a "popularity contest" that ignores their clinical skills. But as one group administrator (who prefers anonymity) comments: "Physicians who object most strenuously are usually the less enlightened ones who still believe that doctors have patients. They don't recognize that it's the patients who have doctors."
The gripe that patient surveys paint an incomplete picture of physicians' skills is not off-base. "You have to combine patient-satisfaction ratings with data on clinical outcomes to evaluate a doctor," says Donald Fisher, CEO of the American Medical Group Association, whose survey questionnaire is used by 25 big group practices with 2,000 physicians. "In fact, if I had to choose a doctor based on a single measure, I'd pick the one with the best outcomes, even if his people skills weren't that great. But that's not to say that patient communication isn't vitally important."
The trouble is that many groups are in such a hurry to get patient-satisfaction results, they use slapdash and flawed surveys. "Doctors should be nervous about these surveys," says Jerry Seibert, whose firm, Parkside Associates, based in Itasca, Ill., has more than 200 client groups. "Most aren't conducted scientifically. The group just comes up with a few questions, lets a summer intern conduct the survey, and uses the results to rate the doctors."
Nor do groups conduct the surveys frequently enough to ensure legitimate responses. "It's important to look at these ratings over time," says the AMGA's Fisher, "and to compare them with data from other groups of the same size around the country. If you survey your patients once a year, it's just a one-time snapshot. It's better than doing nothing at all, but it's not a reliable way to measure physicians."
Bette Waddington, a Medical Group Management Association consultant who helps groups design and conduct patient-satisfaction surveys, agrees that doing them only once or twice a year isn't good enough. "If you really want to improve the quality of care," she says, "surveys should be done quarterly or even more often. Ideally, you should survey patients every day for a couple of months, then analyze the results and discuss them with the physicians to give them a chance to change their behavior. Then you should do the surveys again a few months later to measure the results of any changes."
Ratings vary from group to group, doctor to doctor

There is a tendency to think patient-satisfaction surveys provide a common denominator on which to judge physicians, but they don't. "It's not fair to compare all the doctors in a multispecialty group to each other," says Seibert. "The ratings can differ significantly from specialty to specialty." Surgeons, for example, see most of their patients only for pre- and post-op visits, and rarely establish the solid relationships that lead to high ratings. Oncologists, however, have more opportunities to build rapport with patients.
And some doctors get dinged (or praised) for circumstances beyond their control. Cardiologists generally receive higher scores than FPs because they don't take many walk-ins. FPs often do, which increases the waiting time (and resentment) among their patients, thus lowering their scores. …