Massachusetts seems to be leading the way in public reporting of hospital quality measures. A couple of days ago, Health Care For All and the Massachusetts Coalition for the Prevention of Medical Errors held a conference on the Hospital Standardized Mortality Ratio (HSMR), a measure purported to risk-adjust accurately based on Medicare data.
OK, let's get it out of the way: "but my patients are different..."
The whole idea of publishing the HSMR for the purpose of making hospital comparisons can become sticky very quickly. First, there is the assumption that it is reasonable to put imperfect data in front of consumers in a context of asymmetric information. The data is technical, and most consumers are not equipped to understand its strengths and weaknesses.
The simplest and most important technical issue with mortality rates is that patients are not the same from hospital to hospital. It is difficult for the lay public to understand the adjustments that go into the HSMR and their implications for the quality ratings they will read. But it is certainly not impossible.
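To make the adjustment concrete, here is a toy sketch of how an HSMR is built. The standard definition is observed deaths divided by expected deaths, times 100, where "expected" sums each patient's predicted risk of death from a risk-adjustment model. The patients and risk probabilities below are entirely hypothetical, and real models use far more risk factors:

```python
def hsmr(patients):
    """Hospital Standardized Mortality Ratio: observed / expected deaths x 100."""
    observed = sum(p["died"] for p in patients)
    # Each patient's predicted risk would come from a risk-adjustment model
    # (age, diagnosis, comorbidities, ...); hard-coded here for illustration.
    expected = sum(p["predicted_risk"] for p in patients)
    return 100.0 * observed / expected

# Two hypothetical hospitals with identical raw death counts (2 of 3)
# but very different case mixes:
hospital_a = [{"died": 1, "predicted_risk": 0.9},   # very sick patients
              {"died": 0, "predicted_risk": 0.8},
              {"died": 1, "predicted_risk": 0.8}]
hospital_b = [{"died": 1, "predicted_risk": 0.2},   # relatively healthy patients
              {"died": 0, "predicted_risk": 0.2},
              {"died": 1, "predicted_risk": 0.1}]

print(round(hsmr(hospital_a)))  # → 80  (fewer deaths than expected)
print(round(hsmr(hospital_b)))  # → 400 (far more deaths than expected)
```

Same raw mortality rate, opposite verdicts once case mix is accounted for, which is exactly why an unadjusted comparison misleads.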
The advantage of the HSMR is that it relies on mortality, not adverse events, morbidity, costs, or anything else. Mortality is a solid end-point: it is well understood and not easily manipulated. Mortality data is also complete and can be cross-checked against other sources of death records.
On the other hand, it appears to be about 80% accurate. What does that mean for the naysayers? Is it accurate enough to support comparisons between hospitals? The incentive is for a hospital with a minor advantage to be very loud about how much better it is than the cross-town competitor. How easy is it to explain the concept of a "significant difference" to the public, or to a marketing executive for that matter? There is plenty of ammunition for the doubters. What I remember from my research into Medicare risk adjustment a few years ago is that no system could predict more than 50% of the variance in Medicare spending for the subsequent year. Try explaining that one to the general public.
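The "significant difference" problem is easy to illustrate. A common way to put an uncertainty band on an HSMR is to treat the observed death count as a Poisson count and use Byar's approximation for its confidence limits. The death counts below are hypothetical, but the pattern is typical: a ten-point gap between two hospitals whose intervals overlap so heavily that the "better" hospital's bragging rights evaporate.

```python
import math

def hsmr_ci(observed, expected, z=1.96):
    """Approximate 95% confidence interval for an HSMR (observed/expected x 100),
    using Byar's approximation to the Poisson limits on the observed count."""
    low = observed * (1 - 1 / (9 * observed) - z / (3 * math.sqrt(observed))) ** 3
    o1 = observed + 1
    high = o1 * (1 - 1 / (9 * o1) + z / (3 * math.sqrt(o1))) ** 3
    return (100 * low / expected, 100 * high / expected)

# Hypothetical: hospital A reports 90 deaths vs. 100 expected (HSMR 90),
# hospital B reports 100 deaths vs. 100 expected (HSMR 100).
print(hsmr_ci(90, 100))   # wide interval around 90
print(hsmr_ci(100, 100))  # wide interval around 100 — overlaps A's heavily
```

With counts this size, neither hospital can honestly claim to be distinguishable from the other, which is precisely the nuance a marketing department is unlikely to volunteer.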
Of course there is a difference between a solid end-point like mortality and one as fluid as Medicare expenditures, but the gap would appear to me to be enough for hospitals and large provider groups to resist.
The reality of most rating systems is that almost everyone lands in the middle. The real purpose is to identify problems at one end of the bell curve and the truly outstanding at the other. The major advantage of the HSMR is that "what gets measured gets done": there will undoubtedly be positive movement in the performance of every hospital that is measured.
I fall in with Paul Levy on the side of publishing the HSMR, but the devil is in the details. The explanation of the ratings and the public education effort will be the most important elements of any attempt to make complicated data safely interpretable by patients without harming the reputation of hospitals, the majority of which make a good-faith effort to provide quality care.