A Grumpy View of Comparative Effectiveness Research: What You Won't Read in the New England Journal of Medicine

Posted May 07 2009 8:30pm
The Disease Management Care Blog found three predictable articles on comparative effectiveness research (CER) in the latest [May 17 Volume 360 (19)] New England Journal of Medicine. You’ll need a subscription to read them. Don’t have one? Don’t have the time? Well, thanks to the DMCB, that’s not a problem: here is a brief one-paragraph summary of each:

The first is written by Alan Garber, from the Center for Health Policy at Stanford University, and Sean Tunis of the Center for Medical Technology Policy. They address the criticism ‘of some’ that CER will be unable to account for individual patient differences, stymieing the kind of personalized medicine (PM) that uses (for example) age, co-existing conditions, individual preferences and other variables to tailor therapy. Using genomic testing as one arena where CER and PM might clash, they reassure readers that trials on the use of genetic testing have helped patients and physicians better understand when such tests do and do not help guide therapy with a high level of granularity. More trials built on this kind of success should help better inform these kinds of issues, especially if they are paired with the insights possible from large observational databases.

The second is by Jerry Avorn of Harvard. Funding CER at a level of 1/20th of 1% of our annual healthcare spend doesn’t seem unreasonable to him. What is unreasonable are opponents of CER who cloak themselves in moral and policy ‘nonfacts,’ falsely warning in Orwellian tones about government intrusions into the doctor-patient relationship, cookbook medicine and rationing. Thank goodness Congress resisted those nutcases. He asks our just citizenry of righteous persuasion to remain focused on the task and not be distracted by continued disinformation.

Aanand Naik and Laura Petersen of Baylor ask if CER will lead to better care. They are optimistic it will, because there are some examples of the rapid translation of clinical research into mainstream practice. One is the reduction in the time it takes heart attack victims to go from the emergency room door to percutaneous coronary intervention (PCI). They note this success was based on an emerging ‘implementation science’ that included the dissemination of tool kits. They advocate that CER include the creation of ‘connections’ between researchers and practitioners that extend from the science of recommendation to the science of implementation.

The curmudgeonly DMCB agrees with all of these guys. It is not a Grumpy nutcase and it is not being defensive. Nonetheless, some reasonable skepticism about CER may be warranted:

No one has done CER on CER: While the door-to-PCI anecdote is a good example of how well things can work, the DMCB recalls the tool kits were ultimately disseminated by the American College of Cardiology without the help of the Feds. Given the ‘unknown unknowns’ about the execution of CER as currently envisioned and its impact on day-to-day practice, the DMCB wonders if the oppositionally minded Troglodyte opponents of CER really need to worry all that much.

Minimizes other research by industry: While 1/20th of 1% is comparatively small, it is comparatively huge versus the budget previously devoted to clinical research by other entities, including employers and disease management organizations. As a result, other worthwhile science may wither on the vine in the rush to get a piece of this new Federal largesse, or be pushed aside even if the concepts or findings are useful. As was pointed out in The Age of the Unthinkable, real breakthroughs come thanks to mavericks working outside the funding mainstream. CER could crush them, and it certainly won’t help them.

There are the means and then there are the tails: Critics of research that declares a winner based on a better average outcome in the intervention group than in the comparison group have a point. Clinical research depends on the clustering of detectable patient outcomes around the mean, while smaller numbers of patients in both groups may do far better or far worse. While the authors in the New England Journal assure us that subgroup analyses will help spot the outliers, the DMCB isn’t so sure. For example, what is the optimum level of blood glucose control in a Type 1 diabetic night-time truck driver with post-traumatic stress disorder who won’t take his blood pressure pills?

Will never offer a high level of certainty, only high probability: Comparative effectiveness research implies a degree of confident scientific certainty that has served medicine’s branding for decades. The fact is, however, that CER is only as good as the last published study. It represents the most recent estimate of the likelihood that one treatment is better than another and, like the promise to fix the SGR, is subject to change at any time.

Rapid, efficient or relevant? Hardly. It takes months to even plan large multi-site clinical trials, years to recruit patients and collect the data and even more time to analyze it. By the time we begin to see any good results, there will not only be new untested treatment options but the Republicans will be in charge.

Information monopoly: Care to guess who will be the greatest beneficiaries of CER? Academics at our largest and bestest medical institutions will build on their pattern of getting the lion’s share of the funding, designing the research and releasing the data in the same old hidebound ways. That style will continue to inform policy at the Federal level, which will lead to more programs that continue to feed the tertiary academic medical-industrial complex. And as an aside, it will perpetuate the New England Journal’s business model. No wonder they neglected to include a 4th article presenting a contrary point of view.