Recommended Reading: Recent Legal Scholarship on Decision-Making about Health and Healthcare
Posted Jul 21 2011 2:30am
In Health Choices: Regulatory Design and Processing Modes of Health Decisions, Orly Lobel and On Amir briefly summarize a fascinating series of experiments they have conducted with the support of the Robert Wood Johnson Foundation that test individuals’ ability to make decisions about their health in the face of cognitive depletion or overload. A longer version will be available in September, but the summary is well worth reading. Lobel and Amir begin by reviewing a line of research demonstrating, perhaps unsurprisingly, that “psychological depletion caused by a prior task” (in one study, eating radishes while resisting cookies) leads to a reduced ability to exercise “executive control” and “persist in demanding cognitive activities.” As the authors note, applying this research to the context of health-related decision-making is important because patients and providers alike are frequently asked to process information about relative risk and make reasoned, reasonable decisions under conditions of cognitive depletion or overload.
To test their hypothesis that “absent sufficient resources for executive functions individuals will take more risk in their [health-related] decisions,” Lobel and Amir conducted a lab experiment with approximately 700 participants and a web-based one with over 3000 participants, including 300 medical doctors. The findings from their studies support the conclusion that depletion affects people’s ability to process risk, albeit not in entirely intuitive or predictable ways. For example, when parents are cognitively depleted, they become more risk averse regarding vaccinating their children, but when policymakers are cognitively depleted, they become less risk averse regarding population-wide vaccination. When consumers are in a state of attention and focus, a long list of potential side effects will deter them from using a new drug. When cognitively overloaded, though, participants paid “less attention to warning lists the longer they were.” The implications of Lobel and Amir’s work are many, varied, and vast; I am looking forward to reading the full paper when it comes out in September.
I also highly recommend Christopher Tarver Robertson’s Biased Advice, which was published in the Emory Law Journal earlier this year. Robertson conducted a series of experiments that built on the groundbreaking 2005 study by Daylian M. Cain, George Loewenstein & Don A. Moore, which evaluated how a conflict of interest, and disclosure of that conflict, affected both the quality of advisors’ advice about the number of coins in a jar and the accuracy of advisees’ estimates of that number. Cain and his colleagues’ most surprising finding was that when a conflict existed, disclosing it caused the accuracy of advisees’ estimates to decline, in part because advisors gave more biased advice when their conflicts were disclosed than when they were not.
Among other questions, Robertson examined whether a more concrete disclosure about an advisor’s bias–that is, that “prior research has shown that advisors paid in this way tend to give advice that is $7.68 higher on average than the advice of advisors who are paid based on accuracy”–would aid advisees. It did not, because advisees did not do what “one would hope and expect” and simply subtract $7.68 from the advisor’s estimate. Rather, it appears that they “used the bias disclosure not as a mechanism of calibrating their reliance more precisely, but rather as a strengthened warning suggesting that the advice is altogether worthless.”
On the other hand, disclosing to advisees that an advisor was paid based on the accuracy of the advisee’s estimate–i.e., that the advisor’s financial interest aligned with that of the advisee–led advisees to rely more heavily on the expert’s advice and, as a result, to estimate the number of coins in the jar more accurately. Disclosure of a conflict of interest is also likely to be valuable where advisees can seek out another advisor; a second, unconflicted opinion dramatically increased the accuracy of advisees’ estimates.
Robertson’s research is important and interesting. As he observes, one of the reasons that it is so difficult to rein in health care costs is that “[t]he health care industry is characterized by radically distributed decision making, with each patient deciding upon her own course of treatment within the range of treatments offered by providers and covered by public and private insurers.” Improving individual decisions may be key to bending the cost curve. Robertson’s research suggests that a disclosure mandate could help under certain circumstances, where, for example, there is “epistemic charlatanism” and a physician’s disclosure of a financial conflict of interest would lead a patient to reject the physician’s not-so-expert recommendations. Robertson emphasizes, however, that disclosure does not improve layperson decision-making nearly as much as unbiased advice does.