Robert Wachter is widely regarded as a leading figure in the modern
patient safety
movement. Together with Dr. Lee Goldman, he coined the
term "hospitalist" in an influential 1996 essay in The New England
Journal of Medicine. His most recent book, Understanding Patient
Safety (McGraw-Hill, 2008), examines the factors that have contributed
to what is often described as "an epidemic" facing American hospitals.
His posts appear semi-regularly on THCB and on his own blog "Wachter’s World."
So Zagat will now be rating doctors, using the methods it perfected helping you find the best sushi in Brooklyn Heights. What’s next, Consumer Reports rating grad schools? Fodor’s rating auto mechanics?
Whatever you think of Zagat’s cross-dressing, it again demonstrates
the bottomless market for doctor rankings. HealthGrades, the Colorado
company that breathlessly delivers its “200,000 Americans died from
medical errors in 200X!” pronouncements every year (grabbing a bunch of headlines, even though the report is built on measures that were never intended for this purpose and don’t really measure deaths from errors), appears to be doing quite well,
thank you, largely fueled by its doctor ratings. And every metropolis’s
city magazine has its “[Your City’s Name Goes Here]’s Best Doctors”
issue, based almost entirely on peer surveys. Most docs scoff at these
ratings (particularly docs like me who haven’t made their city’s list),
but they clearly move magazines. [I’ll discuss hospital rankings,
especially US News & World Report’s Best Hospitals list, in a future posting.]
Clearly, real people want to know who is a good doctor. But how should we be approaching this task?
I have the privilege of serving on the board of the American Board of Internal Medicine. ABIM, and the other specialty boards, have generally taken their charge to be to determine “competence” (with board pass rates generally above 90%, a pretty low bar) and then not to differentiate further. A doc is either board certified or she’s not. End of report.
ABIM’s
recently finalized strategic plan includes a commitment to make public
more information about diplomates if and when it feels such
distinctions are scientifically valid and dissemination would promote
high quality care. Dr. Kevin Weiss, the new president of the American Board of Medical Specialties
(the umbrella organization for all the specialty boards), might go even
further, faster. He recently went on record as favoring having the
Boards enter the doctor ranking business – not just determining
competence, but differentiating excellence from not-so-much. In a
recent talk to the ABIM board in Dallas, Dr. Weiss held a copy of Dallas Magazine’s
Best Doctors issue and dramatically observed that if the Boards don’t
get into this game, others – with far less allegiance to scientific and
psychometric Truth – will. Needless to say, his remarks generated a wee
bit of controversy.
I’m also on Google’s Healthcare Advisory Board. [Note that my
comments about ABIM and Google represent my own opinions, not those of
these fine organizations, and do not divulge any trade secrets. You
decide whether to buy more Google stock on your own.] Anyhooo, it
wouldn’t surprise you to learn that Google is also thinking
about what contribution it can make to the doctor rating “space.” But
how to balance consumer rankings (a la Zagat), which will invariably
tilt toward bedside manner and office amenities (not unimportant
things, but ones that may be quite different from clinical acumen),
with more meaningful assessments of clinical competence? And, as I discussed earlier this month,
even when you add standard process and outcome measures to the brew,
we’re still stuck scratching our heads about how to factor in clinical
knowledge and decision making, things that today’s quality measures
completely whiff on.
The stakes are immense, and a balanced approach is more likely to
bear fruit than any single peephole.
Ultimately, if I’m choosing a doc for me or a loved one, I’d like to
know it all: bedside manner (4 stars from Zagat), structural measures
(is the doctor’s office computerized?), process measures (are diabetics
getting statins appropriately?), surrogate outcomes (what’s the average
hemoglobin A1c?), and hard outcomes (what are the risk-adjusted
mortality or hospitalization rates?). And then I’d like the appropriate
specialty board (ABIM, American Board of Surgery, etc.) to tell me
whether the physician is meaningfully engaged in quality improvement
activities, and how well he or she did on the certifying exam – the
best measure we have of knowledge and clinical judgment. Yes, you heard
me right: I’d like the Board to tell me whether the doc was in the 5th
percentile on the certifying exam or the 87th. It doesn’t pass the
smell test to say that we consider both these board certified docs to
be undifferentiate-able. In this new era of transparency, if we
physicians would want that information before choosing a doc for
ourselves (and I sure would), then I believe that patients should have
access to it as well.
And then I’d like Google or somebody else
to put all of this together into an attractive, user-friendly page that
pops up when I type “Best Doctor Diabetes San Francisco” into a search
engine, along with directions to the office, a link to his appointment
calendar… and a parking spot.
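To make that wish concrete: here is a minimal sketch of how such a composite profile might roll these dimensions into one score. Every field name, weight, and 0-100 scale below is a hypothetical illustration of the idea, not anyone’s actual rating methodology.

```python
# A minimal sketch of a composite physician profile built from the
# dimensions above. All field names, weights, and scales are made-up
# illustrations, not an actual rating methodology.

HYPOTHETICAL_WEIGHTS = {
    "bedside_manner": 0.15,      # consumer ratings, Zagat-style
    "structural": 0.10,          # e.g., is the office computerized?
    "process": 0.20,             # e.g., statins for diabetics
    "surrogate_outcomes": 0.20,  # e.g., average hemoglobin A1c
    "hard_outcomes": 0.20,       # risk-adjusted mortality/hospitalization
    "exam_percentile": 0.15,     # certifying-exam percentile
}

def composite_score(measures: dict) -> float:
    """Weighted average of measures already normalized to 0-100."""
    weight = sum(HYPOTHETICAL_WEIGHTS[k] for k in measures)
    return sum(HYPOTHETICAL_WEIGHTS[k] * v for k, v in measures.items()) / weight

# An invented example: 4 Zagat stars mapped to 80/100, an 87th-percentile
# exam score, and middling process/outcome numbers.
doc = {
    "bedside_manner": 80.0,
    "structural": 100.0,
    "process": 75.0,
    "surrogate_outcomes": 70.0,
    "hard_outcomes": 85.0,
    "exam_percentile": 87.0,
}
print(f"Composite: {composite_score(doc):.1f}/100")  # -> about 81/100
```

The hard part, of course, isn’t the arithmetic; it’s agreeing on the weights and on risk adjustment for the outcome measures, which is exactly where the scientific and psychometric fights will be.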
Coming soon? The people have spoken, and the people have an uncanny way of getting what they want.
Bob Wachter
I prefer J.D. Power ratings to Consumer Reports. The former are based on consumer experience, the latter on expert opinion. I trust consumers a lot more than I trust experts.
I am a little surprised that someone as smart as Dr. Wachter (as well as many debaters) wants to enhance consumerism in health care.
There are (and always will be) patients who are more demanding and make greater efforts to find the doctor they think will be best (for the specialty, insurance, and area in question). Some patients use reasonable criteria (e.g., recommendations by physicians and/or intelligent friends with firsthand experience); others probably do not (for instance, one patient told me that my MPH was a major reason to come see me).
The criteria listed by Dr. Wachter sound good at first look but, on closer inspection, are rather doubtful (board scores: I think everyone knows that there are good and bad test takers, especially on multiple-choice questions – I happen to be a rather good one, but I know excellent physicians who are just average). Convenience is a factor, but it may matter more for your PCP and not so much for your surgical consultant … same, to some degree, for bedside manner (of course with minimum requirements).
Physicians go through an extensive process of selection and certification, by multiple institutions and repeatedly (specialty boards are just one step). Everyone who is certified should be competent … if Dr. Wachter is making the point that this is not always the case, I would tend to agree, but I think the solution lies in adjusting the certification process, not in some rather arbitrary ranking that will almost certainly be misunderstood and misused, and that certainly cannot be individualized. (I practiced in a community with two specialty physicians, both competent; one was a beloved patient hugger and an aggressive prescriber, the other more distant, more conservative, with expertise in one subspecialty – what would you do with any kind of ranking here?) Are there doctors who do not communicate well? Sure. Make communication part of the specialty board exam – have a candidate explain a complex medical situation to a hypothetical Hmong immigrant or to a patient actor playing a construction worker. If they fail only that part, let them practice, have them take communication classes, and have them come back in a year.
As mentioned above, there are patients who have the ability and will to choose good doctors, and others who don’t. Both kinds of patients merit protection by a set of reasonable standards.
Now, one could argue that rankings based on patient satisfaction and/or convenience do not hurt and give the patient useful information … I would disagree. Any kind of convenience rating will be misread by a sizable minority as a quality ranking.
Re: patient satisfaction … I think this would have to be handled with great caution. Patients are often inappropriately dissatisfied (“doctor did not prescribe my analgesics that I take daily for headaches”, “doctor did not listen closely to the exact description of symptom 42 on my list”, “doctor did not comply with my disability request”, etc.). (Side note: just in case someone argues that all this could be handled to the patient’s satisfaction with a better communication style, I would doubt that this is coming from someone practicing and seeing a significant number of difficult patients.)
Satisfaction ratings and reactions to complaints may give the patient undue leverage … of course, we have to live with that to some degree (since satisfaction is a good and important issue), but we do not have to enhance it and unreasonably tip the balance in the consumer’s favor. What do I mean by that in practical terms? My 700+ physician regional multispecialty group does satisfaction surveys; about 90-95% of my ratings (overall and by specific issue) are 8-10 out of 10; the remainder tends to come from disgruntled patients who very often have one of the above issues (I can see this because I can read transcriptions of their free-text comments). This considerably lowers my average satisfaction scores. Should I prescribe more opioids or accommodate inappropriate requests for disability? I don’t think so.
“and effectively informs patients about the risks, advantages, and alternatives for a broad range of surgical procedures,”
Is the patient competent enough to make those decisions – or was this written by lawyers for lawyers?
Care to share the five key factors?
Intriguing. Our company, which runs two insurance companies (one for pediatricians, one for ob/gyns), created an “informed consent” program for ob/gyns. (Incidentally, the program saves our ob/gyns an average of 300 hours a year – worth about $120,000 in billings – and effectively informs patients about the risks, advantages, and alternatives for a broad range of surgical procedures, while simultaneously providing the physician with a comprehensive report on the patient’s listening and reactions.) At the end of our informed consent programs, we ask patients to “rate” their doctors for five key factors. We are collecting (literally) tens of thousands of reactions by patients to their physicians, in tremendous detail, about those elements which matter the most to patients. If any of you are interested, contact me at erosov@mdmc.us for more information. Best, Gene Rosov, Pres.
Let’s not forget that individuals have to deal with third parties – healthcare companies. What role should they play?
Along those lines, if anyone can help support the project at payorwiki.com (a MediaWiki), please do contribute and refer others to do the same.
I’m hopeful that, with enough community support, we can make a difference.
Thank you
J
I wouldn’t dismiss patient satisfaction ratings as a passing fad. This has been done for 20 years in healthcare. Hospitals have collected and used patient satisfaction results for a number of important strategic purposes including service offerings, building decisions, and marketing messages.
A couple of interesting ideas on how to make patient satisfaction ratings impactful besides CMS reporting:
– Tying this information into online provider directories and health search results for individual docs.
– Using this data, along with some other pieces, to structure tiered provider networks instead of relying on strict utilization numbers sprinkled with a few quality ratings.
– Expanding upon initial findings that higher satisfaction ratings may lead to lower malpractice risk, and taking that to malpractice insurers. Get malpractice insurers to use this as a piece of the criteria for setting physicians’ malpractice rates (they already lower rates by a few percentage points if physicians at a practice take a risk-avoidance course specified by the malpractice insurer).
– Using patient satisfaction ratings as a way for a practice to divvy up bonus money to its docs at the end of the year (a minimal sketch follows below).
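As a minimal sketch of that last idea (the bonus pool, the names, the scores, and the proportional rule are all hypothetical):

```python
# A minimal sketch of the bonus-divvying idea above: divide a practice's
# year-end bonus pool in proportion to each doctor's satisfaction score.
# The pool size, names, and scores are all hypothetical.

BONUS_POOL = 100_000.00  # hypothetical year-end pool, in dollars

satisfaction = {"Dr. Smith": 9.1, "Dr. Jones": 8.4, "Dr. Lee": 7.5}

total = sum(satisfaction.values())  # 25.0
for doc, score in satisfaction.items():
    print(f"{doc}: ${BONUS_POOL * score / total:,.2f}")
# -> $36,400.00, $33,600.00, $30,000.00
```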
“Why don’t we get nurses to rate doctors, that would be information from the trenches.”
This is already happening at NursesRecommendDoctors.com, one of at least a dozen companies popping up in the patient satisfaction ratings field for physicians.
What a refreshing discussion, how to rate physicians not whether to rate. That’s a big change.
When physicians adopted the medical business model, they were warned by Arnold Relman where this train was headed. In a business model, good or fair customer service is a reasonable expectation. I don’t believe physicians have much to fear, because Americans have adapted to poor and bad customer service in a variety of service industries. I feel certain we will adjust to bad customer service in healthcare even when we can document in “science” that it is bad service.
Patients/customers are smarter than we (editorial, not royal) give them credit for. Poor folks without access to healthcare will endure most anything for care. See Sara Corbett’s “Patients Without Borders,” NYTimes Magazine, Nov. 18, 2007.
I’m waiting for the source that will combine price and quality information and give me a value rating so I will know how to spend my HSA dollars when I can find an insurance plan that will sell me one.
Why don’t we get nurses to rate doctors, that would be information from the trenches.
I agree with you 100% on this one, Peter. I would like a rating system that speaks to a doctor’s clinical ability, communication skills, and the cost-effectiveness of his or her practice pattern. Good bedside manner and not keeping patients waiting too long for appointments are desirable, but I would trade a lower score on those two measures for a higher score on the first three. Input from nurses, including whether or not they would go to a given doctor themselves or recommend him/her to a family member, would be extremely useful.
I agree with Matt: satisfaction ratings/surveys are a minefield, and once established they change those being rated, who learn to play to the survey. First, you are asking non-medical, non-technical patients, many with little education and limited communication skills, to rate their experience. Their experience may only be understood by them as, “he was soooo kind, soooo polite, and surely I felt better when I left the office.” Sorry, but that’s not what I want to know. I think we’ve all seen and done satisfaction surveys; does anyone think the questions make any sense or really allow you to communicate YOUR experience?
“Strongly Agree”, “Agree”, “Disagree”, “Strongly Disagree” – give me a break. As usual, the questions are vague and intended to lead you in a certain direction.
And what do you conclude from 50% satisfaction – use the doc or not? Maybe someone could give me some sample questions so I’d have a better understanding of how this might work. Why don’t we get nurses to rate doctors, that would be information from the trenches.
This conversation misses the point about patient satisfaction ratings. The play in patient satisfaction ratings for physicians is what CMS is pondering. CMS has already mandated that hospitals begin collecting patient satisfaction ratings as of July 2007 through a standardized instrument (the HCAHPS survey). If hospitals fail to meet this requirement, they will be subject to a fiscal penalty of 2 percent of their Medicare payments for fiscal year 2008. Plus, this data will be made public starting in March 2008 on the CMS website.
This wasn’t a big adjustment for hospitals, since they have already been using independent vendors such as Press Ganey, NRC, or PRC to administer and collect patient satisfaction data for the past 10-15 years. Now everyone just uses the HCAHPS survey and adds questions as appropriate for feedback on hospital operations and strategic planning needs.
Where the CAHPS stuff is relevant for physicians is that there is an extremely strong likelihood that CMS will mandate, in the near future, that physicians report patient satisfaction ratings using some form of the CAHPS Clinician and Group Survey instrument. Basically, the CAHPS Clinician and Group Survey is a vetted and psychometrically sound survey instrument that has already been field-tested in numerous physician practices across the U.S. It would allow for standardized comparisons of individual physicians or physician practices based upon patient satisfaction scores.
There are a couple of issues here, though. The first is that CMS would likely take some heat for disclosing patient satisfaction scores at the individual physician level on its website. The other main issue is that this would potentially impose a noticeable cost burden on individual physicians. AHRQ recommends a minimum of 45 completed patient satisfaction surveys per physician. Some physician practices may try to do this internally, but I doubt it. I would imagine that they would go to independent vendors of different varieties and flavors (note: a nice business opportunity).
Back to the response rate – assuming you even get a 50% response rate to a patient satisfaction survey (which is being incredibly generous), an independent vendor is going to need to contact 90 individual Medicare patients per physician.
If the independent vendor is using a traditional survey method (e.g., paper), you might be talking about a cost of probably $5-$7 a survey, charged to the physician by the vendor to handle the survey process: 90 × $5-$7 = $450-$630 a year. Individual physicians will, rightly, be pretty upset by this.
Independent vendors could go with an alternative approach using the Internet (which is probably 90-95% cheaper than paper administration), but then you run into the issue of response bias with Medicare patients. Other approaches include a kiosk in the office or a Tablet PC, though some of these alternatives may be affected by what CMS states as its policy on acceptable survey administration.
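For what it’s worth, that back-of-the-envelope math is easy to parameterize. A minimal sketch using only the figures assumed in this comment – the 45-survey minimum, a 50% response rate, $5-$7 per paper survey, and a 90-95% online discount:

```python
# Back-of-the-envelope survey-cost model using only the figures in this
# comment (45-survey minimum, 50% response rate, $5-$7 per paper survey,
# a 90-95% online discount) -- none of these are official CMS/AHRQ fees.

MIN_COMPLETED_SURVEYS = 45  # the AHRQ-recommended minimum per physician

def annual_survey_cost(response_rate: float, cost_per_survey: float) -> float:
    """Cost to field enough surveys to reach the completed-survey minimum."""
    contacts_needed = MIN_COMPLETED_SURVEYS / response_rate  # 45 / 0.5 = 90
    return contacts_needed * cost_per_survey

# Paper administration at the (generous) 50% response rate:
for per_survey in (5.00, 7.00):
    print(f"Paper at ${per_survey:.2f}/survey: "
          f"${annual_survey_cost(0.50, per_survey):,.0f} per year")
# -> $450 and $630, matching the estimate above.

# Internet administration, assumed 90-95% cheaper (response bias aside):
for discount in (0.90, 0.95):
    print(f"Online ({discount:.0%} cheaper): "
          f"${annual_survey_cost(0.50, 5.00 * (1 - discount)):,.0f} per year")
```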
Basically, if you add it up, patient satisfaction ratings for physicians are here to stay, and they are likely to get a dramatic boost when CMS decides to mandate reporting of this data for physicians. The real issue, though, is just how valid patient satisfaction is as an indicator of clinical quality. The literature appears pretty mixed, although there are some positive findings around the issue of adherence.
In this new era of transparency, if we physicians would want that information before choosing a doc for ourselves (and I sure would), then I believe that patients should have access to it as well.
What a concept! And a breath of fresh air.
Yes! I would love to see Zagat in the physician ranking business.
Outcomes are great – we all want quality care – but I am looking for measures that determine whether a doctor acts as if they CARE about their patients.
I am all for rankings that help improve behavior – and as consumers of healthcare we go to physicians to guide us in becoming well. If all my doctor can do is write a prescription – not give helpful suggestions such as foods to eat or stay away from, or recommend I exercise – they are not helping me become well. If another physician can write the same prescription, add some tips, and say “I hope you feel better,” give them more points!
(I say this after a disappointing visit to a PCP for a stomach ache – I was given a prescription and had to pry any additional/smart advice out of him – my mom gives better stomach health advice!)
Vijay raises a very good point. The “Best Doctor Diabetes San Francisco” is unlikely to be the best for all types of diabetics or the best for all kinds of complications. Matching patients to the best providers for their exact situation is the ultimate goal, with the obvious exception of primary care, where continuity may trump “bestness” for a new problem outside the PCP’s expertise.
But I really question the focus on individual providers. Where organizations of multiple providers exist, some higher organizational unit should be the focus of quality measurement (practice, medical group, hospital, etc.). A ratings system can never capture all the nuances a peer coworker sees, and if you punish the whole organization with a low rating, coworkers may have the best chance of figuring out how to shed or improve their low-performing partners. They’ll also know whether the reason one doctor in their clinic looks bad on, say, hemoglobin A1c is that he or she has developed expertise in treating a particularly difficult or complex patient population (say, the patients who were doing especially badly under other doctors’ care). Getting rid of this doc wouldn’t fix anything … the “poor performance” associated with his or her patients would just distribute among the remaining docs … and each patient would do even worse. Worse yet, we’d all have to pity the difficult or complex diabetic who misses this doctor in an internet search.
Rating individual doctors through a search engine will always be a very blunt instrument (not to mention a nice generator of lawsuits…you can close a bad restaurant and open another, but you can’t “close” yourself as a doctor even if you improve). Farm out the individual-level quality incentives to the organizations. Where there isn’t an applicable organization, individual-level measures will have to do…though I suspect that docs will seek cover from capricious and unstable quality measurements. This will do quite a bit to help medicine transition from a cottage industry to one that is more organized.
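The case-mix point deserves a toy example: the doctor who attracts the most complex diabetics can have the worst raw average A1c yet the best performance once each patient is compared against a complexity-specific expectation. All numbers below are invented for illustration.

```python
# Toy illustration of the case-mix point above. Dr. B takes the complex
# referrals and has the worse raw average A1c, but the better result once
# each patient is compared to an expected A1c for his or her complexity.
# All numbers are invented for illustration.

# Each patient: (observed_a1c, expected_a1c_given_complexity)
panels = {
    "Dr. A (routine patients)":  [(6.8, 7.0), (7.1, 7.0), (6.9, 7.0)],
    "Dr. B (complex referrals)": [(8.4, 9.0), (8.9, 9.0), (8.1, 9.0)],
}

for name, patients in panels.items():
    raw = sum(obs for obs, _ in patients) / len(patients)
    # Observed minus expected: negative means better than the case mix predicts.
    o_minus_e = sum(obs - exp for obs, exp in patients) / len(patients)
    print(f"{name}: raw A1c {raw:.2f}, observed-minus-expected {o_minus_e:+.2f}")

# Dr. B: worse raw average (8.47 vs. 6.93) but better adjusted performance
# (-0.53 vs. -0.07). Dropping Dr. B would help no one.
```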
Bob,
While I think your approach to developing further rankings may drive some traffic, I’m not sure that the further standardization of medicine driven by peer ranking or automated metric ranking is what patients truly desire.
Having done quite a bit of work on consumerism in healthcare while at McKinsey, I came away with the thought that consumers want healthcare to be more convenient, more responsive to their needs, and reasonably priced per visit. As I’ve looked at retail clinics, it has become clear to me that consumers are willing to pay quite a bit more for better service.
Rather than comparing doctors against each other, why can’t we help patients find the provider that works best for their needs?
We’re taking that approach at Health Shoppr, and I look forward to unveiling it to the group in mid-2008.