Test, Learn, Adapt - High-Quality, Low-Cost Evaluation and Rapid, Iterative Experimentation for HIT

Posted Aug 25 2013 6:24pm
In a recent New York Times Economic View article, Richard H. Thaler, author of the best-selling book "Nudge," weaves a path from behavioral science straight to a range of initiatives in the US and UK governments that may hold clues to better means of developing and implementing complex software systems for our health care system.

Thaler is a professor of economics and behavioral science at the Booth School of Business at the University of Chicago. In an August 25, 2013 piece entitled "Public Policies, Made to Fit People," Thaler cites three initiatives by the White House Office of Science and Technology Policy, including its work in forming a cross-disciplinary group called the Social and Behavioral Science Team. He says this is part of a larger effort to "use evidence and innovation to promote government performance and efficiency." He mentions zero co-payments for prescription medications as one example.

But the lessons are contained in his July 7th Times article entitled "Watching Behavior Before Writing the Rules." In this piece, Thaler asks us to envision a "Council of Behavioral Scientist Advisers" who, like the Council of Economic Advisers, would provide counsel to the president. He asks, "what might emerge from such a group?" He goes on to describe the work of the U.K.'s Behavioral Insights Team.

Thaler writes that the White House Office of Science and Technology Policy has established a small office to pursue similar activities. Many of the general activities are listed in a memorandum issued on July 26 whose subject is "Next Steps in the Evidence and Innovation Agenda." This 14-page memorandum is a "must read." It describes means by which data can be harnessed and suggests high-quality, low-cost evaluations and rapid, iterative experimentation.

Let's look at the UK Behavioral Insights Team's work more closely. Thaler lists two self-described "Team Mantras":
  • If you want to encourage some activity, make it easy.
  • You can’t make evidence-based policy decisions without evidence.
The central document describing their approach is entitled "Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials."

Here are some notes and quotations I recorded and found useful. I urge interested parties to read the whole paper:
  • Testing an intervention means ensuring that you have put in place robust measures that enable you to evaluate the effectiveness or otherwise of the intervention. 
  • Learning is about analyzing the outcome of the intervention, so that you can identify 'what works' and whether or not the effect size is great enough to offer good value for money.
  • Adapting means using this learning to modify the intervention (if necessary), so that we are continually refining the way in which the policy is designed and implemented. 
The full steps in their "test, learn, and adapt" approach are described on page 5 of the paper (a minimal code sketch of the randomization and assignment steps follows the list).
  • Identify two or more policy interventions to compare (e.g. old vs new policy; different variations of a policy). 
  • Determine the outcome that the policy is intended to influence and how it will be measured in the trial. 
  • Decide on the randomisation unit: whether to randomise to intervention and control groups at the level of individuals, institutions (e.g. schools), or geographical areas (e.g. local authorities). 
  • Determine how many units (people, institutions, or areas) are required for robust results. 
  • Assign each unit to one of the policy interventions, using a robust randomisation method. 
  • Introduce the policy interventions to the assigned groups. 
  • Measure the results and determine the impact of the policy interventions. 
  • Adapt your policy intervention to reflect your findings. 
  • Return to Step 1 to continually improve your understanding of what works.
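
To make the randomization and assignment steps concrete, here is a minimal Python sketch of assigning units to intervention arms. The unit names, arm labels, and counts are illustrative assumptions, not details from the report; a real trial would also need the sample-size and outcome-measurement steps above.

```python
import random
from collections import defaultdict

def assign_units(units, interventions, seed=42):
    """Randomly assign each unit (person, school, or area) to one
    intervention arm, shuffling once so the arms come out balanced."""
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible and auditable
    arms = defaultdict(list)
    shuffled = list(units)
    rng.shuffle(shuffled)
    for i, unit in enumerate(shuffled):
        arms[interventions[i % len(interventions)]].append(unit)
    return arms

# Hypothetical example: randomize 3,000 individuals between an old
# and a new policy (names and counts are illustrative).
units = [f"person_{i}" for i in range(3000)]
arms = assign_units(units, ["old_policy", "new_policy"])
for arm, members in sorted(arms.items()):
    print(arm, len(members))  # 1,500 per arm
```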

The report has a great figure demonstrating the marked discrepancy between the number of randomized clinical trials in health (the Cochrane Library) and the far smaller number of trials in social welfare, education, crime, and justice.
The report gives examples of how their small-trial approach works. They showed, for example, that text messaging increased payment of fines before a bailiff visits the home. Baseline payment (no bailiff, no text) was 5%. A larger sample (N=3,633) showed that a standardized text including the individual's name increased the response rate to 33%.
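To see how the "learn" step might quantify such a difference, here is a rough two-proportion z-test in Python. The 5% and 33% rates come from the trial as summarized above, but the per-arm sample sizes are assumptions chosen for illustration.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions,
    using the pooled standard error and a normal approximation."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return p_a, p_b, z, p_value

# Assumed arm sizes of 1,000 each; only the 5% and 33% rates are from the post.
p_a, p_b, z, p = two_proportion_ztest(50, 1000, 330, 1000)
print(f"baseline {p_a:.0%}, named text {p_b:.0%}, z = {z:.1f}, p = {p:.1g}")
```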
They also demonstrate when programs don't work. In a back-to-work program, "similar trials failed to show a positive impact if a number of more costly interventions were applied." In fact, they argue that the requirement to re-enroll every two weeks actually reduces the time needed to find work; less resource-intensive methods were consistent with the principle that lowering the cost of checking in is correlated with longer unemployment intervals.

Another section is fairly consistent with the arguments made in the U.S. "Next Steps in the Evidence and Innovation Agenda" memorandum. The UK authors state: "The growing interest in the use of RCTs as an important tool of policymaking and practice resonates with broader currents of thinking. When money is short it is essential to make sure that it is being spent on approaches that work, and even small marginal improvements in cost effectiveness are precious. RCTs are an extremely powerful tool to pinpoint cost-effectiveness – and flush out low-value spend."
The authors give many examples of how such small trials are conducted at Amazon, eBay, and Netflix, in Wikipedia fund-raising, and in the design of Delta Airlines' web site. (The report includes citations.)
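
As a flavor of how such web experiments are typically wired up, here is a minimal sketch of deterministic variant bucketing in Python. The variant names and the hashing scheme are generic assumptions, not details from any of the cited companies.

```python
import hashlib

VARIANTS = ["current_design", "new_design"]  # hypothetical page variants

def variant_for(user_id: str) -> str:
    """Bucket a user into a variant by hashing their ID, so repeat
    visits always see the same page while traffic splits evenly."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Tally how 10,000 hypothetical visitors would be bucketed.
counts = {v: 0 for v in VARIANTS}
for i in range(10000):
    counts[variant_for(f"user_{i}")] += 1
print(counts)  # roughly a 50/50 split
```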

A strong case is also made for rapid iteration. Pointing to systems theory, they state that "Many leading thinkers have concluded that in complex systems, from biological ecosystems to modern economies, much progress – if not most – occurs through a process of trial and error. Economies and ecosystems that become too dominated by too narrow a range of practices, species or companies are more vulnerable to failure than more diverse systems." Small randomized trials, they suggest, may play a vital role in policy evolution, particularly if they are coupled with a culture of open data and transparency.
Each of the examples provided in the UK report reinforces the belief in the power of small, randomized tests to develop more useful and effective health information technologies. This spirit is seen in the more recent "Next Steps in the Evidence and Innovation Agenda" White House memorandum.
As we move from the deployment of EHRs to making EHRs maximally effective and efficient, those of us who have the fortune of developing our own EHRs can accelerate and formalize our current culture of rapid innovation cycles. Why shouldn't every new human-computer interface project be thought of as a series of randomized experiments in which, at every stage, subtle changes are made to foster decisions that lead to an evolutionary increase in organizational effectiveness? This is not a "big bang" approach but rather one that envisions a general path and, through a progressive series of small but formal experiments, takes us to a more effective care delivery and financing environment.
In an upcoming chapter on "The Future of Biomedical Informatics" in the Shortliffe/Cimino textbook of biomedical informatics, my colleagues and I argue that every decision in patient care should be fully informed and that every patient interaction should contribute to the corpus of biological knowledge. We aren't sure how to get there, but the spirit of the work described herein may offer a clue. For example, the OSTP-sponsored December 2010 PCAST Report on health information technology was considered too radical and too much a "big bang" to be palatable to industry or to the clinical community. Perhaps the PCAST vision can be redefined as a series of incremental steps aligned with the "Next Steps in the Evidence and Innovation Agenda" memorandum. In fact, that seems to be what is happening - but more through committee than through evidence.
