Are Patient-Satisfaction Surveys Fair to Doctors?
By Berkeley Rice, Medical Economics
As groups increasingly rely on patients' opinions, physicians are beginning to question the accuracy of the ratings.
Like it or not - and many doctors don't - patient-satisfaction surveys are increasingly common among large and small groups alike. Demanded by HMOs and employers, ratings of physicians' "people skills" have become an essential marketing tool for groups seeking managed-care contracts.
But while group administrators and marketing managers embrace patient-satisfaction surveys, doctors are beginning to question the accuracy and fairness of the ratings. After all, they're the ones with their reputations on the line.
Not surprisingly, those who score poorly in the surveys often dismiss the results as mere opinion, and therefore invalid. Others resent being rated by what they consider a "popularity contest" that ignores their clinical skills. But as one group administrator (who prefers anonymity) comments: "Physicians who object most strenuously are usually the less enlightened ones who still believe that doctors have patients. They don't recognize that it's the patients who have doctors."
The gripe that patient surveys paint an incomplete picture of physicians' skills is not off-base. "You have to combine patient-satisfaction ratings with data on clinical outcomes to evaluate a doctor," says Donald Fisher, CEO of the American Medical Group Association, whose survey questionnaire is used by 25 big group practices with 2,000 physicians. "In fact, if I had to choose a doctor based on a single measure, I'd pick the one with the best outcomes, even if his people skills weren't that great. But that's not to say that patient communication isn't vitally important."
The trouble is that many groups are in such a hurry to get patient-satisfaction results, they use slapdash and flawed surveys. "Doctors should be nervous about these surveys," says Jerry Seibert, whose firm, Parkside Associates, based in Itasca, Ill., has more than 200 client groups. "Most aren't conducted scientifically. The group just comes up with a few questions, lets a summer intern conduct the survey, and uses the results to rate the doctors."
Nor do groups conduct the surveys frequently enough to ensure legitimate responses. "It's important to look at these ratings over time," says the AMGA's Fisher, "and to compare them with data from other groups of the same size around the country. If you survey your patients once a year, it's just a one-time snapshot. It's better than doing nothing at all, but it's not a reliable way to measure physicians."
Bette Waddington, a Medical Group Management Association consultant who helps groups design and conduct patient-satisfaction surveys, agrees that doing them only once or twice a year isn't good enough. "If you really want to improve the quality of care," she says, "surveys should be done quarterly or even more often. Ideally, you should survey patients every day for a couple of months, then analyze the results and discuss them with the physicians to give them a chance to change their behavior. Then you should do the surveys again a few months later to measure the results of any changes."
Ratings vary from group to group, doctor to doctor
There is a tendency to think patient-satisfaction surveys provide a common denominator on which to judge physicians, but they don't. "It's not fair to compare all the doctors in a multispecialty group to each other," says Seibert. "The ratings can differ significantly from specialty to specialty." Surgeons, for example, see most of their patients only for pre- and post-op visits, and rarely establish the solid relationships that lead to high ratings. Oncologists, however, have more opportunities to build rapport with patients.
And some doctors get dinged - or praised - for circumstances beyond their control. Cardiologists generally receive higher scores than FPs because they don't take many walk-ins. FPs often do, which increases the waiting time - and resentment - among their patients, thus lowering their scores.
A well-designed survey, however, separates the doctor-patient relationship from possible irritants like billing, scheduling, and waiting time. "Unsophisticated surveys overemphasize front-office factors, which affects physician ratings," says Seibert. "As a result, they don't give a 'clean' measure of the doctor-patient relationship. That's a serious problem, particularly if the survey results are going to affect the doctor's bonus."
Group size and setting can also affect satisfaction ratings. Typically, says Seibert, the larger the group the lower the ratings. "That may be due to bureaucracy or to a loss of personal contact in the big groups," he suggests. "Both make the patient feel he's just an ID number rather than a person." And because patients in urban and suburban groups are more transient, they have less time to form a satisfying bond with their physician. Doctors in rural areas may see the same families for generations, and thus build more trusting relationships.
Is it fair to compare doctors who see lots of patients - and who therefore have less time for each one - with physicians who aren't as busy? "More important than how much time a doctor spends with a patient is what occurs during that time," says Seibert. "An effective doctor can establish good contact and communication with a patient in only a few minutes. Another doctor may spend 15 minutes, but if he doesn't listen, or treats the patient rudely, he's not going to score as well in the survey."
[Photo caption: A typical satisfaction survey form queries patients on their office visit.]
The severity of a patient's illness may also affect the ratings. Doctors whose specialties attract the sickest patients may be unfairly penalized in the surveys if those patients blame them for their illness. Again, a good survey can adjust for such variables.
Even more unpopular and problematic than an in-house survey is one conducted by an HMO. The HMO usually polls only the patients enrolled in its plan-which may be a small and unrepresentative sample of the group's total practice. And if those patients hate their HMO because of its poor coverage or payment problems, when they're surveyed they may take out their anger on their doctor because he's the only one they have contact with.
That's all the more reason a group should perfect its own survey, according to Seibert. "If the group relies on the HMO's survey and the doctors get poor ratings, the HMO will drop the group," he says. But if the group can counter with better ratings from its own survey, the doctors have grounds for questioning the accuracy of the HMO's patient poll.
It's also important for groups to compare their satisfaction results to those of other groups with a similar volume of managed care. "Our research shows that managed-care patients are consistently less satisfied than others," says Seibert. "If one group has 60 percent of its patients in plans, its survey results are likely to be worse than those of a group with less managed care."
But for all the pitfalls of patient-satisfaction surveys, they do provide a revealing look at the strength of the patient-doctor relationship. Patients tend to overlook long waits for appointments or other problems if they really like their doctor. On the other hand, doctors with the weakest personal relationships will bear the brunt of patients' irritation and resentment-and have the lowest scores. So yes, bonding with patients is important, say doctors. But they argue that a quickie questionnaire can't really measure the quality of the subtle and subjective doctor-patient relationship. Besides, some patients just won't like a certain doctor no matter what he does, and it's unrealistic to expect every physician to get along with every patient.
Get over it, say consultants. Patient-satisfaction surveys are a popularity contest, but that isn't necessarily bad, according to Seibert. "In today's competitive marketplace, you can't get away with poor people skills," he says. "If you don't have good relationships with your patients, they may switch to other physicians, and you won't get the opportunity to show what good clinical skills you have."
Patients' evaluation of their medical care may be subjective, but so what? asks Susan Cejka, president of Cejka & Co., a physician recruitment and compensation firm based in St. Louis. "Their perception of that care is what matters," she says.
So how can doctors curry patients' favor and adjust to their many personality types? Seibert thinks it's a skill doctors must learn. "Some patients just want you to tell them what to do," he says. "Others want you to explain in detail how you reached your conclusion. A good physician needs to know when as well as how to explain something. Those who can are the ones who get the outstanding ratings. They've learned how to adapt their communication style to the needs of different patients."
Groups report satisfaction with patient surveys
Besides using survey data to win HMO contracts, groups are also studying the results internally to improve doctor-patient relationships. The medical director may sit down with low-scoring physicians and discuss specific ways to improve. Some groups offer training programs and CME courses in patient relations. Others take a hands-off approach, letting physicians figure out how to better their scores.
But the quickest way to get the physicians to be more patient-friendly is to base a portion of their annual bonuses on patient-satisfaction results, according to Cejka. "You get what you pay for," she says.
These three groups have experience with patient-satisfaction surveys. Here's how they make use of the data:
The Bonaventure Medical Group
A primary-care group with eight offices in the Chicago area, Bonaventure conducts continual patient-satisfaction surveys for each of its 35 physicians. "We use a standard national survey firm," says vice president for operations Joan Gilhooly, "so we have plenty of comparative data by specialty and geographic area. That takes a lot of the subjectivity out of the ratings."
Bonaventure's patient survey contains 40 to 50 questions on the entire practice, with seven items specifically related to physician-patient contact. "We mail the questionnaires out every week to avoid the chance of a one-week aberration," Gilhooly explains. "A doctor who's very busy during flu season, for example, may have lower ratings those weeks. We select 15 patients per doctor per week, and send the forms out one week after their visits. Two weeks later we send all those patients reminder cards. We average returns of about 40 percent."
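A back-of-the-envelope calculation shows what that schedule yields per doctor. The sketch below uses only the figures Gilhooly cites (15 questionnaires per doctor per week, roughly 40 percent returned) plus an assumed 13-week quarter, since the article doesn't state the exact quarter length:

```python
# Rough survey-yield arithmetic for Bonaventure's mailing schedule.
# Figures from the article: 15 questionnaires per doctor per week,
# about 40 percent returned. The 13-week quarter is an assumption.
MAILED_PER_WEEK = 15
RESPONSE_RATE = 0.40
WEEKS_PER_QUARTER = 13  # assumed

completed_per_week = MAILED_PER_WEEK * RESPONSE_RATE
completed_per_quarter = completed_per_week * WEEKS_PER_QUARTER

print(f"~{completed_per_week:.0f} completed forms per doctor per week")
print(f"~{completed_per_quarter:.0f} completed forms per doctor per quarter")
```

Under those assumptions, each physician's quarterly report rests on roughly 78 completed questionnaires - a reasonably solid sample compared with a once-a-year snapshot.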
Every quarter, the data are collected and analyzed, providing the basis for a report on the entire group and on each physician. "The survey's a major investment in time and money," says Gilhooly, "but we think it's worth it. Those ratings are a critical indicator of the quality of our service, and we use the results in our marketing efforts and for physician compensation."
Based on the survey ratings, Bonaventure's physicians are eligible for quarterly bonuses that average $1,000. Doctors with above-average scores can earn up to $2,000 per quarter; those with poor ratings earn less than $1,000. While that's not a huge sum, it does provide an incentive to score well on the surveys.
Most physicians appreciate getting feedback from patients, according to Gilhooly. "But some feel their patients have an ax to grind," she explains. "Others complain that the survey measures only their relations with their patients and not the quality of their medical care. We admit that's probably true, but most of our physicians agree that the patients' perception of that care-whether it's accurate or not-is really important."
Bonaventure physicians who score poorly in the survey receive counseling from the group's medical director. They may also be urged-but not forced-to attend CME courses on patient relations. "Only two doctors have consistently received low scores since we started doing the surveys," says Gilhooly. "One of them is no longer with us."
Palo Alto Medical Clinic
This 160-doctor California clinic has conducted patient-satisfaction surveys for the past three years. Once a year, they call 40 patients per physician and ask eight questions about the doctor-patient relationship. Those with low scores aren't automatically set down as poor communicators, however. Each doctor's case is reviewed for mitigating factors. For example, if a doctor is a relatively new member of the group, her patient base may not have stabilized yet. Or if a physician didn't grow up in the U.S., he might have linguistic or cultural difficulty in communicating with patients.
Patient-satisfaction ratings are combined with separate measures for teamwork and cost-effective care in determining the amount of the physician's annual bonus. While most doctors get a basic bonus - roughly 5 percent of total compensation - top performers get a little more.
At Palo Alto, some of the biggest critics of the patient-satisfaction ratings are the primary-care doctors. Diane Stewart, the clinic's director of quality management, concedes they have a point.
"Much of our primary-care practiceincluding 60 percent of our pediatric visitsoperates on a same-day demand," she says. "So patients are seeing a variety of doctors instead of bonding with one primary-care physician. We're not sure, but we think that may be driving down their patient-satisfaction scores."
Less valid is the complaint that physicians with fewer patients can afford to spend more time with them, giving them an unfair advantage in establishing trust and communication.
"We've studied that factor," says Stewart, "and we found the average number of patient visits per day doesn't seem to make much difference in the satisfaction ratings. In fact, some of the top performers are also the busiest. Even with only five minutes, they still manage to make their patients feel they're getting full attention."
Palo Alto's department heads meet with low-scoring physicians and offer them various options to improve their people skills, including CME courses and informal meetings with the clinic's top performers. "We provide these channels for self-improvement," says Stewart, "and we hope they take advantage of them. But we don't force them to; it's all voluntary. After all, these are pretty smart people. Once you make it clear that there's a problem, they're usually willing to try to improve."
Dean Medical Center
At this giant practice based in Madison, Wis., patient-satisfaction surveys play an important role in a competitive market. "In the Madison area, 60 percent of our patients are enrolled in managed-care plans," says Sheryl Thies, Dean's vice president for marketing. "We have plenty of competition from other groups for HMO contracts, and we have an active employers' coalition. Both are very interested in our survey results."
Dean conducts an annual survey on its 400 doctors, located at 35 sites around the state. The 13-question forms are handed out to patients when they come in for a visit and collected when they leave. The ratings are based on 50 completed forms per physician.
"To be fair, it's important to put these scores into proper context," says Thies. "We compare individual physicians to their peers in their own departments and sites," Thies explains, "and to their peers in the entire Center and throughout the country. We do find differences between our sites: Some draw older or sicker patients, and their ratings tend to be lower-but not always.
"Some specialties, like urology, seem to score lower across the country. Oncologists and cardiologists do well on surveys because of the `halo effect' " she explains.
"They do their magic on patients with life-threatening illnesses, and when it works, they snatch them from death's door. If things don't work out, the patients may not be around to complain on the next survey."
Dean only recently began polling patients, so the results won't affect physician compensation until the survey's validity is established. But the group is using the patient-satisfaction data to improve doctor-patient relations.
"We sit down with each department at every site and go over the ratings," says Thies. "Each physician gets his or her own scores, plus the average scores for the department and the entire Center. Since this process is still new, there's a certain amount of trepidation among doctors. Those who do well are naturally pleased; those who don't wonder why. At this point, we leave it up to them and their department heads to figure out how to improve those scores."
Sometimes the solution is relatively painless. One Dean doctor had high scores except for one category: the amount of time he spent with each patient. It turned out his nurse would break in and cut off any visit whenever the doctor was running more than five minutes behind schedule. This naturally left his patients feeling short-changed, which they remembered when they were surveyed. A brief discussion with the nurse and the department head fixed the problem.
"We use the ratings not just to attract new patients," Thies explains, "but also to keep the ones we've already got. It's our way of making sure they feel they're getting good value."
What to ask before your group surveys your patients
Often physicians are presented with patient-satisfaction data without an explanation of how the survey was conducted. Consequently, they have no way to judge the survey's validity or reliability.
"Patient-satisfaction surveys aren't rocket science," says consultant Jerry Seibert, "but there are good ways and sloppy ways to do them. If physicians' compensation is linked to customer satisfaction, they should make sure the survey is asking the right questions and producing accurate information."
Here are a few basic questions to ask:
Who's doing the survey? Is it being conducted in-house, by an outside consultant, or by an HMO? Each may have different goals, which can influence the survey's results and interpretation.
Is the questionnaire designed for your group? If not, are the questions relevant to your own practice? Are they specific enough to provide meaningful guidance for improvement?
How many patients are surveyed for each physician and how many respond? Is the sample size large enough to produce accurate ratings?
Who's being surveyed: patients you've seen recently, or just anyone on your list? Are they a representative sample of your practice?
How are the patients being surveyed? Do they fill out cards in the waiting room before seeing the physician or after? How long after an office visit are patients asked to participate in a mail or phone survey? Is the mail-in survey anonymous?
Do the questions really focus on the doctor-patient relationship? Or do they cover front-office procedures and practice conditions over which doctors have little control?
Are the ratings being compared with those for physicians in the same specialty? In similar groups? In similar geographic settings?
If the survey results affect compensation, are they combined with other measures of productivity or clinical effectiveness?
Publication information: "Are Patient-Satisfaction Surveys Fair to Doctors?" by Berkeley Rice. Medical Economics, Vol. 73, Issue 23, December 9, 1996, p. 55+. © Advanstar Communications, Inc.