Here's what the DMCB found out when it lifted the abstract gown and examined the patient-manuscript itself: The authors reported the outcomes of a single clinic with a "stable workforce, strong leadership and history of successful quality improvement" that cared for about 9,200 patients and was selected to pilot a "prototype" medical home. This clinic's results were compared to those of 19 other Group Health primary care clinics without a medical home approach to patient care. Investing in the medical home was not cheap: the prototype clinic had to hire additional physicians, medical assistants, licensed practical nurses, physician assistants, nurse practitioners, registered nurses and pharmacists. It also appears that the clinic "downsized" its primary care patient panels to reduce physician workloads.
Using rigorous statistical methods to neutralize baseline differences, the authors compared over 200,000 usual-care patients to the medical home's 7,000 continuously enrolled patients. The medical home was associated with fewer emergency room visits and inpatient admissions, along with greater numbers of specialty physician visits. When the costs were added up, this is what the authors found: "When costs are totaled across all types of care and adjusted for case-mix and baseline costs, we estimate a total savings of approximately $10.30 per member per month, a result that approaches statistical significance, p=.08, meaning that the difference could still be due to chance." (bolding from DMCB)
In other words, the savings failed to meet the conventional threshold for statistical significance.
Despite the negative finding, the authors forged on, added up the cost of the program and compared it to the savings: "...we can estimate return on investment associated with the prototype at 21 months at 1.5:1." In addition to the failure to achieve statistical significance (which went unmentioned in the abstract), there are several possible weaknesses with the study that went unmentioned in the "Lessons Learned" and "Policy Implications" sections of Reid et al's Health Affairs publication: 1) the authors chose their strongest clinic (making its generalizability to other Group Health clinics suspect), 2) it's not clear which patients were dropped from the panel prior to institution of the PCMH, 3) despite statistical attempts to neutralize any known sources of bias, this was a non-randomized study and could have been influenced by unknown or unreported factors, and 4) what works at Group Health - an integrated delivery system in the Northwest - isn't necessarily going to work in any primary care clinic in Dade County, Florida, McAllen, Texas or Mobile, Alabama.
The DMCB wonders why its friends in policy circles and academia continue to give the PCMH a pass. Based on these data, the PCMH is still not ready for prime time and should be confined to pilot testing. The DMCB is not the only curmudgeon who feels that way: check out this thorough review of the possible sources of bias and the testy response of the study's lead author.
Speaking of pilots, check out what Rhode Island Blue Cross Blue Shield is up to. The insurer is directly paying the salaries of nurses who are being dropped into network primary care physicians' offices. While the Group Health article preaches about investing in primary care, clinical leadership, change management, electronic records, transformation, educational reform and the like, the folks in the Ocean State have come up with an approach that is a quadruple threat: it's 1) generalizable (could work in multiple settings), 2) adaptive (care management nurses are a supremely flexible species), 3) probably cheaper than "redesigning" primary care sites and 4) preserves the core value of non-physician coaches dedicated to engaging patients in their own care. That used to be called disease management, but it's really become a hybrid "2.0" version of the medical home that, if it works, may also deserve the attention of Health Affairs.