
At the heart of the matter

Posted Oct 21 2008 12:51am
Liz Cooney, on White Coat Notes, offers a summary of a new article in the New England Journal of Medicine by Doctors Thomas Lee, David F. Torchiana, and James E. Lock, titled "Is Zero the Ideal Death Rate?" Here are some excerpts from her report:

[Dr. Lee] is concerned that public reporting of mortality rates for individual cardiac surgeons carries unintended, perverse consequences. He fears that surgeons might hesitate to operate on high-risk patients if they are seeking a perfect performance record, he and two colleagues write in tomorrow's issue of the journal.

"If you are being ranked, you may walk away from a patient who's very sick, even though that patient may be at high risk for surgery but even higher risk with medicine" as treatment, he said in an interview. "When so few patients can swing things for you being ranked, we're worried about that effect on the decision-making process."

[The authors say that] reporting on cardiac surgery by institution makes sense, with individual reports available only to those hospitals. Massachusetts recently joined New York, New Jersey and Pennsylvania in publicly reporting death rates for individual cardiac surgeons.

Two elements make individual reports undesirable, they said. The first problem is that risk-adjustment methods intended to account for how sick a patient is do not include variables such as socioeconomic status. The second problem is the small sample size. If the average death rate after coronary artery bypass surgery is 2 percent, one or two deaths among the 200 operations a surgeon performs can make a large difference in that surgeon's ranking, the authors say. Lee said a better way to report performance would be the measures the federal government chose when it rated hospitals recently: better than expected, as expected, and worse than expected.

"I worry about having a patient with diabetes who's doing very poorly. They may have a 20 percent mortality rate with surgery but an 80 percent mortality rate without surgery," he said. "I don't want to have to beg surgeons to operate."
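The small-sample argument in the excerpt above is easy to see with a quick calculation. The 2 percent baseline rate and the 200-case volume come from the article summary; everything else here (the number of simulated surgeons, the random seed) is my own assumption for the sketch:

```python
import random

# Illustrative only: the 2% baseline death rate and 200-case volume come
# from the excerpt above; the simulation parameters are assumptions.
BASELINE_RATE = 0.02   # average death rate after bypass surgery
CASES = 200            # operations per surgeon in a reporting period

# At this volume, each additional death moves the observed rate by half
# a percentage point -- a quarter of the entire baseline rate:
for deaths in range(5):
    print(f"{deaths} deaths in {CASES} cases -> observed rate {deaths / CASES:.1%}")

# Simulate many identical surgeons, all with the same true 2% risk, and
# look at the spread of their observed rates purely from chance.
random.seed(0)
observed = []
for _ in range(10_000):
    deaths = sum(random.random() < BASELINE_RATE for _ in range(CASES))
    observed.append(deaths / CASES)

print(f"min observed rate: {min(observed):.1%}")
print(f"max observed rate: {max(observed):.1%}")
```

Even though every simulated surgeon has the identical true risk, their observed rates scatter widely around 2 percent, which is exactly why a ranking built on 200 cases can reshuffle on one or two deaths.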

I am not quoting from the actual NEJM article, because Liz's summary is what members of the public are more likely to see. So I recognize that some of the subtleties in the article may not be fully presented. To my mind, it raises tons of questions.

First, is the premise correct, that doctors will stop taking high-degree-of-difficulty patients if their clinical results are made public? I am not sure how to test that statistically, but when I have raised the issue at BIDMC, the response was, "If you are a good enough surgeon to take those kinds of cases, you will still take them. If you are not -- or if you are so afraid of your 'numbers' -- you shouldn't be taking them anyway."

Second, if we can't make the results of individual doctors public, what basis is there for referring doctors and patients to choose among surgeons? We fall back on anecdotal or reputational methods -- the methods used today -- which have no statistically valid quantitative basis and are therefore subject to errors of a different type.

Third, a hospital-wide rate doesn't help me choose a surgeon. It helps me choose a hospital, for sure, but it doesn't tell me which surgeon in that hospital offers me the best record of success.

Fourth, if we do want to use hospital-wide rates, there is currently a system in place that moves along the path suggested by the authors. Back on April 6, I posted a column entitled Surgical Gag Order. Here's the pertinent excerpt:

The American College of Surgeons, the preeminent surgical organization in the country, has developed a superb program to measure the relative quality of surgical outcomes in hospital programs. It is called NSQIP (National Surgical Quality Improvement Program) and is described in this Congressional testimony by F. Dean Griffen, MD, FACS, Chair of the ACS Patient Safety and Professional Liability Committee.

What makes this program so rigorous and thoughtful is that it is a "prospective, peer-controlled, validated database to quantify 30-day risk-adjusted surgical outcomes, allowing valid comparison of outcomes among the hospitals now in the program." In English, what this means is that it produces an accurate calculation of a hospital's expected versus actual surgical outcomes. So, if your hospital has an index of "1" for, say, vascular surgery, it means that you are getting results that would be expected for your particular mix of patients. If your NSQIP number is greater or less than "1", it means you are doing worse or better than expected, respectively.
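The observed-versus-expected index described above can be sketched in a few lines. To be clear, the patients and their risk estimates below are invented for illustration; NSQIP's actual risk adjustment uses validated statistical models over many clinical variables:

```python
# Hypothetical sketch of an observed/expected (O/E) outcome index like
# the one described above. The patient data are invented; real NSQIP
# risk adjustment is far more elaborate.

# Each entry: (model-predicted 30-day mortality risk, patient died?)
patients = [
    (0.01, False),
    (0.05, False),
    (0.20, True),
    (0.02, False),
    (0.10, False),
    (0.30, True),
]

observed = sum(died for _, died in patients)   # actual deaths
expected = sum(risk for risk, _ in patients)   # risk-model-predicted deaths

oe_index = observed / expected
print(f"observed={observed}, expected={expected:.2f}, O/E index={oe_index:.2f}")

# Index of 1.0: outcomes as expected for this case mix.
# Greater than 1.0: worse than expected; less than 1.0: better.
```

The point of dividing by the model's prediction is that a hospital taking sicker patients is not penalized for a higher raw death count: the denominator rises with the case mix.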

I am inferring from Liz's article that this is the kind of ranking recommended by the authors in the NEJM article. Here's the catch. The American College of Surgeons will not permit the results to be made public.

So here's our Catch-22: No reporting method is statistically good enough to be made public. But if a method is statistically good enough, we won't allow it to be made public.

The medical profession simply has to get better at this issue. If they don't trust the public to understand these numbers, how about just giving them to referring primary care doctors? Certainly, they can trust their colleagues in medicine to have enough judgment to use them wisely and correctly.

We hear a lot about insurance companies wanting to support higher quality care. When is an insurance company going to demand that the hospitals in its network provide these data to referring doctors in its network? How about this for an idea? If a hospital doesn't choose to provide the data, it can still stay in the network, but the patient's co-pay would be increased by a factor of ten if he or she chooses that hospital.

I have been in many industries before arriving in health care, but I am hard-pressed to remember one that is so intent on preserving the "priesthood" of the profession. The medical community is expert at many things, but particularly at raising stumbling blocks and objections to methods to inform the public and be held accountable. Meanwhile, they are quick to engage in protectionist behavior to keep others out of their field. The insurers, fearful of introducing products that require real-time clinical data from dominant providers in their network, stand by and are complicit.

And then they wonder why state legislatures pass laws about reporting and accountability.