In the blog and Twitter worlds this past week, there has been quite a kerfuffle over an article written by Sharon Begley in Newsweek titled "Why Psychologists Reject Science." In her editorial, described by opponents as "inflammatory," Begley writes that
many clinicians fail to "use the interventions for which there is the strongest evidence of efficacy" and "give more weight to their personal experiences than to science." As a result, patients have no assurance that their "treatment will be informed by science." Walter Mischel of Columbia University, who wrote an accompanying editorial, is even more scathing. "The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment," he told me, and there is a "widening gulf between clinical practice and science."
If that isn't throwing down the gauntlet (or at least a few couch cushions), I don't know what is.
Dr. Katherine Nordal of the American Psychological Association responded to Ms. Begley in a blog post titled "Taking Issue With Newsweek":
As psychologists, we do embrace our science and research base, but we also understand the importance of the therapeutic relationship to healing and growth. We care about helping our patients improve the overall quality of their lives, and we are not narrowly focused on eliminating one particular symptom (even though getting rid of a symptom is part of improving quality of life.) We combine our understanding of the research with how to best understand the patients who come into our offices with their complicated problems. We work collaboratively to achieve the goals that are important to them.
I'm guessing most of you already know how I feel about the issue: namely, that Begley has a point, and a very good one at that. After all, by-the-book cognitive behavioral therapy can seem very sterile and clinical. What do worksheets have to do with life? Tell me why I'm like this! What about my feelings?
Of course, CBT isn't necessarily sterile, nor is any other evidence-based treatment (EBT). But "evidence-based treatment" is becoming quite the buzzword, and although it's good that people are beginning to recognize the importance of using treatments that we know work, there's no guarantee that therapists who say they use EBT actually use EBT. I've met with a therapist who said she did CBT, and all too soon, it was "tell me about your mother" and "let's work on the real issues," while I continued to exercise for X hours a day and went batty. I suppose I could (and perhaps should) have reminded her that I wasn't lacking insight, I was lacking skills, but I was exhausted and I just stopped going. I'm not against helping educate treatment providers, but if I'm going to be doing that much work, she should be paying ME.
(I get that this sounds tremendously arrogant, but I had been desperately searching for help after I moved to DC and found no good providers who had evening/weekend hours, and therapist X was near my office, so I went. By that point I was so frustrated that my patience was essentially gone.)
Dr. Nordal does have some good points, though. It's not easy to translate research into clinical practice. Research studies are tightly prescribed: there are typically limitations on who can participate, and adapting the therapies to best fit the individual client is pretty much out of the question. A good therapist should be able to tweak evidence-based treatments to best help their clients get better.
But the touchy-feely part of being a therapist seems to be getting in the way of some therapists implementing these evidence-based approaches.
Can people get better without EBTs? Sure. People's symptoms improved when they took snake oil at the beginning of the last century, but we now know that symptoms often wax and wane over time. Their improvement had nothing to do with the snake oil and everything to do with the body's immune system kicking in. Is there evidence for psychotherapy? Yes. Is the evidence base as thorough as it is for other therapies? Not exactly.
The argument that science is limited because it does not tell us about each individual is frustrating for multiple reasons. First of all, nobody is saying that it does tell us about all individuals. It tells us, on average, which treatments produce the best effects for a particular diagnosis. Some individuals will fit the norm, others will not. Backers of empirically supported treatments do not argue that everyone will respond the same way to the same treatment. They instead argue that, when making a treatment decision, we should start with the treatment with the most empirical support, regularly assess progress, and adjust our treatment choice as needed if the client does not respond as expected.

Certainly, people vary in their values, desired outcomes, personalities, and many other variables that could potentially influence the outcome of treatment. The problem is, we do not have any systematic way of determining who those people are ahead of time, so if we just use our judgment to determine who is unlikely to respond, we are in fact simply guessing and will, on average, provide less effective care, even if we guess correctly on a couple of occasions in which empirical data would have led us astray. Allowing judgment to overrule empirical data is likely to lead to clinicians simply overruling any data that contradict their beliefs while trumpeting data that support their cause.
Students are required to complete multiple research methods and statistics courses and to conduct empirical thesis and dissertation research projects, among other grounding courses and experiences in the science of psychology.
But knowing science isn't the same as understanding the value of science. I could teach someone how to use the biostatistics programs I used in grad school, and they could, in theory, "do" science. They would know which tests to run, how to enter the data, and how to set up the analyses. But the numbers are largely meaningless unless you understand how to interpret them. I've had plenty of therapists who probably know plenty about statistics and research methods, and although that's very useful, it doesn't always translate into better clinical practice.
There is a moral imperative to turn the craft of psychology — in danger of falling, Freud-like, out of fashion — into a robust and valued science informed by the best available research and economic evidence.
(I owe so many people thanks for providing these links that I don't even know where to start. So if I interact with you on Twitter, you have my eternal gratitude!)