Educating patients (a.k.a. "consumers") to make the "best" health care choices has been a fundamental principle in some schools of health reform, including those advocating for high-deductible health plans. While this concept makes sense in economic theory, it also requires a belief that patients can and will make good use of the information available to them, particularly when they are ill.
Another fundamental requirement for consumer-directed health care to improve quality and lower costs is that the information provided to people be meaningful and accurate. A study published in the November/December 2008 issue of Health Affairs illustrates the complexity of providing accurate information.
This study was based upon a very simple question: how easy would it be for a patient in the Boston area to find the "best" hospital by using different quality rating services? The results were fascinating: five different rating systems designed to provide the public with quality information about individual hospitals didn't agree on which were the best hospitals overall, or even for specific conditions - even when the measure was death from a specific condition. For example, the article notes, "Neither the observed mortality rates nor the observed/predicted rates were consistent across [rating systems for Acute Myocardial Infarction]. The hospital ranked first by HealthGrades had the second highest observed mortality; it ranked seventh according to Mass QC [a state-government-run information system]. Conversely, Health Grades' seventh-ranked hospital (the only hospital ranked statistically worse than average) was ranked first by Mass QC. This same hospital was ranked fifth in the nation by U.S. News and World Report for cardiology."
Similarly, the rating systems didn't agree on the quality of care for heart bypass surgery.
Conclusions: The study's authors don't simply throw up their hands and declare rating systems complete failures. Rather, they note that transparent quality reporting can stimulate quality improvement activities within individual institutions, and they make three recommendations:
Hospitals should embrace quality reporting and ensure that the data collected and the design of the analyses truly reflect the quality of care
Patient experiences must be a meaningful part of quality of care assessments
Quality rating systems must improve how they account for differences in the severity of patients' illnesses (i.e., risk adjustment of the data) and for random variations
They conclude that greater standardization of data collection methods, analysis, and reporting may help improve the value of quality information and comparisons. These would be positive steps toward giving individual patients better information they could use to compare their local hospitals, rather than merely rating individual hospitals against national or regional averages.