An e-mail recently sent to members of OpenMinds.com raised three questions about the implementation of evidence-based practice (EBP) in the field of psychotherapy.
As a clinical psychologist and health IT inventor, I've been focused on this issue for the past 25 years. In the mid-to-late 1990s, there was heated debate about psychotherapy outcomes research and the science of EBP. I argued that such research and EBP implementation are essential if our field is to become increasingly cost-effective (i.e., deliver ever higher value to the patient/consumer), but I was condemned by those who believed that psychotherapy is more of "an art than a science," as well as by those who rejected the clinical use of computers.
Following is my reply to the three questions.
1. If change at the provider organization is key, how do we increase systematic approaches to EBP implementation?
First you need buy-in from practitioners and organizational leaders. Based on my past experience, this won't be easy. Next, collaboration between clinicians, researchers, patient/client representatives, and health IT technicians is important. The focus of such collaboration should be on defining meaningful measures of treatment efficacy and convenient ways to collect data in everyday clinical practice (as well as in controlled research studies).
Consider that in an APA Monitor (10/93) article, Robert Perloff, a past president of the American Psychological Association, described his "dream of a clearinghouse where clinicians can report observations and hypotheses and give ideas to researchers, who design clinical trials the practitioners can use. Electronic communications with computers and telephones could be used, and the whole project would be in the public interest." Sixteen years later, this vision remains unrealized!
2. What provider systems are 'best in show' for effective implementation of EBPs?
I claim that it is too early to select 'best in show' systems since objective #1 has not been achieved. Nevertheless, what we need are low-cost, convenient, secure, and useful health IT systems that collect and analyze comprehensive biopsychosocial data (data that take into account one's physiology, psychology, and mind-body connection). This data collection and analysis ought to be done at the beginning and end of treatment to assess outcomes for different patient/client cohorts (groups), as well as periodically during treatment to assess care processes. The information on each patient resulting from these analyses should be presented to practitioners and patients in a manner that fosters effective and efficient treatment planning and delivery. In addition, predetermined data sets should be de-identified (to protect patient privacy) and shipped to research organizations, where they are aggregated, studied, transformed into evidence-based practice guidelines, and disseminated to practitioners and patients in understandable language. And there should be ample opportunities to study novel therapeutic approaches.
I know how to do this technologically, but technology alone cannot create and evolve the guidelines.
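To make the data flow above concrete, here is a minimal sketch of the two steps any such system would need: computing a pre/post outcome change score, and stripping direct identifiers before a record is shipped to researchers. All field names, the scoring scheme, and the hashing choice are illustrative assumptions on my part, not a description of any real system:

```python
import hashlib

# Hypothetical patient record: intake and discharge scores on some
# standardized symptom-severity measure (lower = fewer symptoms).
record = {
    "patient_id": "P-1047",
    "name": "Jane Doe",
    "intake_score": 28,
    "discharge_score": 11,
    "diagnosis_code": "F41.1",
}

def outcome_change(rec):
    """Pre/post change score: positive means symptom reduction."""
    return rec["intake_score"] - rec["discharge_score"]

def de_identify(rec, salt="research-export"):
    """Drop direct identifiers and replace the patient ID with a salted
    one-way hash, so researchers can link a patient's records across
    exports without learning who the patient is."""
    hashed = hashlib.sha256((salt + rec["patient_id"]).encode()).hexdigest()[:12]
    return {
        "record_key": hashed,
        "intake_score": rec["intake_score"],
        "discharge_score": rec["discharge_score"],
        "change": outcome_change(rec),
        "diagnosis_code": rec["diagnosis_code"],
    }

export = de_identify(record)
print(export["change"])                 # 17
print("name" in export)                 # False
```

The point of the salted hash is the design choice described above: the research organization receives comparable, linkable records without ever holding names or raw IDs.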
Consider that in an APA Monitor (5/94) article entitled "Outcomes Measurement is Debated by Profession," Michael Lambert, Ph.D., professor of psychology at Brigham Young University, found that "in 348 outcomes studies done over a period of five years, researchers used 1,430 distinct outcome measures, 840 of the measures only once." This, the article went on to say, makes it difficult to compare studies, creates "an atmosphere of 'chaos,'" and evidences a great need to "find the most sensitive way of measuring change." In the same article, Larry Beutler, Ph.D., professor of education and psychology at the University of California, Santa Barbara, underlined the importance of "developing consensus on what to measure and what criteria tests should meet to ensure compatibility of [research] results."
Yet, some 15 years later, we're back to having the same discussion!
3. In an era of resource constraints, how do we change reimbursement to support the implementation and continued use of EBP?
For one thing, practitioners and organizations ought to be compensated for collecting the data and using the information they yield to guide clinical decisions. Second, practitioners and organizations delivering high-value services (cost-effective care: higher quality at lower cost) ought to be paid more than those providing lower-value care.
This is easier said than done because we simply don't know what treatment approaches are of greatest value to particular patients. The situation hasn't changed much in the past 15 years. Consider the following:
The issue of treatment decision-making was addressed in an APA Monitor (10/95) article entitled "What Treatments Have Proven Effective," in which David Barlow, Ph.D., head of the Phobia and Anxiety Clinic at the State University of New York at Albany, said, "We are far from the notion of specific treatment for specific problems."
In a Consumer Reports Magazine (11/95) article entitled "Mental Health: Does Therapy Work?," a survey of four thousand readers yielded mostly favorable results. These findings, however, were marred by the fact that the survey could not answer the question, "When a person needs psychotherapy, how much do they need?" According to the report, "That has become a critical question both for clinicians and for the insurers that pay for therapy. And it's a hard one to answer. While brief therapy often helps, there's no way to tell whether 30 or 40 sessions, or even more, would be even more effective."
Meaningful change doesn't come easily. There are many things that can be done to continuously improve psychotherapy's value, but widespread resistance is to be expected, and a good deal of work is required to hammer out the details. In the long run, practitioners and organizations who embrace the effort will come out on top, and so will their patients.
But is the mental health field up to the challenge? I really don't know!