
National Public Radio on Prozac: Education or Spin Control?

Posted Jun 15 2009 6:29am

Jonathan Leo, Ph.D.


On March 26, 2008, the NPR-sponsored radio show The Infinite Mind presented “Prozac Nation: Revisited.” The program focused on the efficacy and safety of antidepressants. Fred Goodwin, host of the show, is an esteemed psychiatrist who once served as Director of the National Institute of Mental Health (NIMH). The guests were: Andrew Leuchter, Director of the Laboratory of Brain, Behavior, and Pharmacology at the University of California at Los Angeles (UCLA); Peter Pitts, a former FDA official; and Nada Stotland, President of the American Psychiatric Association (APA).

Following the show, several concerns were aired about conflicts of interest. Some were surprised to learn that The Infinite Mind is partially funded by Eli Lilly, manufacturer of Prozac, Cymbalta, and Zyprexa. There was also concern regarding the guests, particularly Dr. Pitts, who left the FDA to become a paid consultant for the pharmaceutical industry. While there were several media reports of these conflicts of interest, there has been limited discussion of the actual content of the show. The following essay argues that much of the important information presented on the show was problematic. In some cases, facts were presented that are simply incongruent with the peer-reviewed scientific literature; in others, crucially important information appears to have been either omitted or ‘spun.’ If the following analysis is valid, one might reasonably consider whether the existing conflicts of interest influenced the public presentation of this material.

The overriding theme of the show was to challenge the FDA’s recent black box warning about a potential link between antidepressants and suicide. In addition, the show’s host and guests addressed recent studies suggesting that the efficacy of the SSRIs is limited.

When the FDA examines the data from clinical trials, its job is to weigh the benefits against the side effects. Every drug has adverse effects; the hope is that the benefits outweigh them. In the case of the SSRIs, both the case reports and the controlled clinical trials suggest that the drugs have contributed to increased suicidality. When one combines the possibility of increased suicidality with limited efficacy, many would question whether the developing brains of children and adolescents should be exposed to these medications. The question is: were listeners of The Infinite Mind given an accurate presentation of the data surrounding these issues?

So that readers can judge for themselves, the discussion below presents, alongside the most problematic statements made on the show, the figures and tables from the scientific literature that contradict those statements. Each quote is preceded by a notation of the time in the show when the statement was made. Much of the information presented here was previously forwarded to NPR.


Goodwin’s first interview was with Andrew Leuchter. Both critique the FDA black box warning, with Goodwin’s remarks somewhat stronger than Leuchter’s. According to Goodwin the FDA warning is misguided, while Leuchter sees problematic case studies regarding suicidality but no real evidence behind the warning. They do not discuss the concerns of regulatory agencies outside of the U.S.; for instance, the German equivalent of the FDA was concerned that Prozac could cause suicidal behavior as early as 1986.

For both Goodwin and Leuchter, the overriding problem with the FDA’s reasoning is that no one committed suicide while in the trials. In their words:

Andrew Leuchter discussing the FDA Black Box Warning (1:17): In those studies where this increase[d] thoughts about suicide and agitation were noted, actually, there were no suicides - that people thought about it more but they didn’t act on it. It’s something that can be overblown.

Throughout the show the host and guests go back and forth between data relating to children and data relating to adults. In his statement above, Leuchter should have qualified that there were no suicides in the pediatric studies, as there were suicides in the adult studies. (As his statement stands, it is misleading to anyone unfamiliar with the psychiatric literature.) The fact that there were no pediatric suicides in the FDA’s database may be seen as comforting, but it needs to be examined in context. Nineteen-year-old Traci Johnson committed suicide in a trial of Cymbalta for urinary incontinence. While she was legally an adult, some would say that this still sends a signal regarding the potential impact of antidepressants on the teenage brain.

Neither Goodwin nor Leuchter mention the fact that suicidal patients are routinely excluded from SSRI trials, which could affect the suicide rates found in these trials; in real-world psychiatric practice, such patients cannot be excluded. Also, children in SSRI trials are monitored closely, which may help prevent completed suicide, while in the real world, monitoring will vary. Given these confounds, and the fact that these trials are generally small and underpowered, other evidence might be considered when assessing the iatrogenic risk of SSRI medications in the real world. For instance, are SSRIs associated with suicidality in the adult population?

Goodwin makes reference to the adult data later in the show:

Goodwin (44:27) In fact, the FDA database there was zero – no suicide at all in any of the antidepressant trials – 35,000 patients.

This is simply wrong. This 2003 article puts the number of suicides in SSRI trials at 77, with 0.59% of SSRI trial participants committing suicide. Rates vary depending on which group of studies is examined, but there have certainly been suicides in FDA trials. For instance, see this table from the 1991 Paxil Safety Review, taken from Dr. Joseph Glenmullen’s report:

FDA 1991 Paxil Safety Review
There is no doubt that, at least according to the FDA and GSK, some people in the trials committed suicide. Note that the table documents five completed suicides in the Paxil group. The debate is not about whether there were any suicides, but about the significance of five in the Paxil group compared to two in the placebo group. And indeed it is complicated because, as we are learning now, some of the suicides listed under “placebo” apparently did not occur during the actual trial but before or after it. In fact, the Senate is investigating GlaxoSmithKline regarding allegations that the company has concealed evidence of a Paxil-suicidality link since 1989 by committing research fraud.

Rather than question Leuchter about his statements, Goodwin backs him up:

Goodwin (2:38): There is no credible scientific evidence linking antidepressants to violence or suicide.

This is a somewhat flippant and off-hand statement about a significantly large body of data, and, again, it is at odds with the FDA, as presumably the FDA’s decision to place a black box warning on the SSRIs was based on an analysis of the scientific evidence. Even GSK acknowledges the risk of suicidality in certain populations.

Consider this table from a 2003 paper by David Healy. All of these studies, performed by the companies and submitted to the FDA, started with a group of depressed patients and divided them in half: half were given an antidepressant and half a placebo. According to the table, there were more suicides and suicidal events in the drug group than in the placebo group. Since both groups were diagnosed with depression to begin with, this suggests that the problem is the drug and not the depression. Furthermore, this is not an isolated finding; it holds for almost every single antidepressant studied, in adults no less (Healy, 2003). While the pharmaceutical companies have argued for years that the problem is the disease, not the drug, this table indicates otherwise. Data such as these provoked the FDA to issue the black box warning for SSRI antidepressants.


Later in the show, Goodwin interviews Peter Pitts, who once worked for the FDA. At the time of the interview, Pitts was employed by a PR firm that represents Eli Lilly, GlaxoSmithKline, and Pfizer; in other words, the major antidepressant manufacturers. Unfortunately, this was never disclosed, and he was presented to listeners simply as a ‘former FDA official.’ Pitts was present at the FDA hearings in 2004 that resulted in the Black Box Warning. Much of the discussion between Pitts and Goodwin centers on a paper published in the fall of 2007 claiming that the Black Box Warning has led to a drop in antidepressant use and a subsequent increase in the suicide rate.

Pitts (42:12): When I was at the FDA I was one of the staff members that sat down with two dozen moms whose children had committed suicide. And these children had all been on antidepressants. It took me about five seconds to understand that these moms did not need from me a fifty minute dissertation on underpowered studies what they needed to do was to tell me their stories. And I listened and it moved me profoundly. And what I really understood was that these moms needed to understand is what medicines can do and what they can’t do…

One could argue that these moms understand exactly what these medications can do: They can provoke suicidal behavior in youth. Some of these mothers actually understand the technical issues quite well.

Pitts: And the result was that a lot of doctors stopped prescribing the medications because they are afraid of liability. And a lot of parents stopped giving their children medicines that had been prescribed. And not surprisingly one or two years later very large studies came out to show that in fact teen suicidality had shown an increase – so there was really a direct correlation. And I think the media bears a lot of the responsibility for that…

According to Dr. Pitts, the only reason doctors would stop prescribing these medicines is fear of lawsuits. He never mentions the possibility that a doctor might withhold these medications based on a rational scientific decision-making process or principles of evidence-based medicine.

Goodwin (44:45): And in fact what happened, as you said, in the year or two after that the CDC confirmed that the actual rate of deaths by suicide went up 17% that paralleled precisely the 17% drop in prescriptions for these drugs, apparently in reaction to this label.” Earlier in the program (11:48) he also said “The CDC report that in the same year following that black box warning there would have been a substantial increase in real suicides - not in suicidality but in actual deaths.

Since both Goodwin and Pitts base so much of their beliefs on the CDC data, it is worthwhile to look at the study in a little detail. The study in question was authored by Robert Gibbons and John Mann, published in the American Journal of Psychiatry, and partially funded by Pfizer, the manufacturer of Zoloft. The study looked at two variables, SSRI prescription rates and suicide rates, and compared them across various age groups. By placing graphs of the two variables side by side, the authors believe the data suggest that a drop-off in prescribing caused by the black box warning led to increased suicide rates.

The initial media reports seemed to side with the authors’ claims. Yet several days after the publication of the study, problems became evident, with some commentators even questioning whether the American Journal of Psychiatry should withdraw the paper. In September of 2007 (six months before “Prozac Nation: Revisited” aired), The New York Times published an article entitled “Experts Question Study of Youth Suicide Rates,” citing concerns from other experts. The Boston Globe also published an article highlighting the problems with the study, which was eventually followed by an op-ed piece entitled “Suicide Rates as a Public Relations Tool.”

When Drs. Pitts and Goodwin claim a causal connection between the black box warning and increased suicide, their claims are not supported by the science. The New York Times reported that according to lead author Robert Gibbons, “the data from the United States that he and his colleagues analyzed did not support a causal link between prescription rates and suicide in 2004.” Why? Looking at the graph above, the increase in suicides in 2004 is clearly visible. However, the black box warning was not issued until October of 2004. Claiming that a warning issued in October 2004 raised suicide rates for all of 2004 simply doesn’t make sense. When the data for 2005 were examined (the first full calendar year following the warning), suicide rates actually fell slightly. Thus, Drs. Pitts and Goodwin are making claims that directly contradict the data. Other commentators have argued that they simply aren’t telling the truth.


The issue of whether the SSRIs have caused some people to take their own lives is complicated, to say the least. A major hurdle to making an educated decision about a potential link is the current nature of the medical publishing process: some of the information that would aid interested parties has never been published; some has only seen the light of day because of court proceedings; and some is available only to regulators, not the general public. The goal of this essay has not been to provide a definitive answer to the question of whether the SSRIs are linked to suicidality. Rather, it is about the presentation of the issue by a supposedly unbiased major media outlet, and whether that outlet presented a full and open discussion of the topic. No doubt, for all the data presented above there are controversies and substantial room for debate (see Antonuccio, 2008; March et al., 2004), but it does not appear that listeners of The Infinite Mind were given a fair flavor of that debate.

One of the strengths of shows broadcast by NPR is that on most topics a variety of perspectives are advanced. Usually individuals with markedly different points of view (ideology, political affiliation, etc.) discuss the different ways of looking at an issue. This is what NPR is known for and why it is a preferred source of news for so many people: it serves the public interest rather than corporate interests. By this standard, the show failed terribly. Not only was inaccurate information disseminated, but critically important information was omitted. For instance, throughout the entire show, not a single sentence of criticism was directed at the pharmaceutical industry. This is despite ongoing concern about research fraud in SSRI trials (specifically, that suicidality was miscoded, resulting in a lower reported suicide rate), that ghost-written articles have downplayed signs of suicidal behavior in children and adolescents prescribed antidepressants, and that pharmaceutical companies have been accused of withholding data that might have shed light on these exact issues. Whether coincidental or not, the views expressed on The Infinite Mind were almost universally congruent with the interests of the pharmaceutical companies that manufacture antidepressants.

Following the broadcast of the show, and the subsequent controversy about the conflicts of interest, the NPR ombudsman weighed in on the issue. For the most part she was responding to an article on Slate by Jeanne Lenzer and Shannon Brownlee, who pointed out that all four experts on the show had taken money from pharmaceutical companies. The Slate authors suggested that for such a complex issue the show should have included guests with different points of view. But in reply to their suggestion, Bill Lichtenstein, the producer of The Infinite Mind, stated (as quoted by the ombudsman):

“There are no scientific, clinical or epidemiological research or studies that indicate a direct link [between antidepressants and suicide/violence].”

In one brief sentence, Lichtenstein exemplifies the problem with the attitude of those involved with the show; “smug” might be the best term for it. His cursory statement that there is simply no other side to these issues could only come from someone unfamiliar with the data. Contrast Lichtenstein’s statement with that of Thomas Laughren, the director of the Division of Psychiatry Products at the FDA. At the FDA hearing in 2004, in reference to the use of antidepressants in children, Laughren stated: “The data in aggregate indicate an increased risk of suicidality in pediatric patients.”


Both Goodwin and Leuchter take great exception to the latest meta-analysis of the trials designed to get the medications approved by the FDA, which found that the SSRIs barely beat placebo. Although they do not mention the study’s authors by name, presumably they are referring to the studies with Irving Kirsch as lead author. Kirsch and his colleagues have published several influential meta-analyses, the most recent in 2008. Kirsch has made a name for himself by pooling the clinical trial data submitted to the FDA as part of the approval process for the antidepressants and analyzing it as a single data set. What elevates his studies above the standard meta-analysis is that, in addition to the published trials, he and his colleagues have used the Freedom of Information Act to gain access to the unpublished trials. It is common for pharmaceutical companies to withhold studies that find their products ineffective, so it isn’t possible to reach accurate conclusions by examining only the published data. All of Kirsch’s studies have shown that in company-sponsored trials the antidepressants performed only marginally better than placebo: a performance, or lack thereof, that most people would probably be unaware of were it not for Kirsch. For most of the antidepressants studied there is only about a two-point difference on the Hamilton depression rating scale between the drug and placebo groups. Other psychiatric researchers have found similar results.

In contrast to critics of the clinical trial process, who claim that the companies plan their studies to give their drug the best possible chance of coming out ahead, Goodwin and Leuchter have an unusual take on the trials. Their take is basically analogous to an Olympic long jumper who wins the gold but declares that on his last jump he could have jumped even farther but wanted to save his energy.

Goodwin (17:27): Now there has also been stuff in the press recently that sort of implies that maybe antidepressants don’t work. There is [sic] a couple of review articles. What is your take on these studies that seem to imply that maybe these drugs don’t really work at all?

Leuchter: Yeah, once every few years somebody puts out one of those articles that looks at a bunch of studies and says antidepressants don’t work. My take is the same as yours: That those studies really are a gross misinterpretation of the scientific literature.

Goodwin: And it’s a cycle. (Laughing)

Leuchter: Yeah. It is. Exactly. (Also laughing)

Goodwin (18:24): Could you explain to our listeners - and these are complex statistical concepts here- but could you explain to the listeners briefly how they are grossly misleading and how they are wrongly interpreted.

Leuchter: Sure. I’d be happy to. Most of these studies – called these meta analytical studies – take a bunch of studies and pool them together. They are looking at studies that are done by pharmaceutical companies to prove that their drugs actually work. In order to get a medication to market, one of the companies has to do a study and say “Okay” we are going to look at a number of depressed patients and show that taking our drug is better than taking a placebo. So it’s a relatively low bar. All you have to prove is that people get somewhat better. So the studies are designed very conservatively just to show that the medication has an effect. They are not designed to show that people get well. They are not designed to show long term improvement in function. They are not designed to show that people have less suicidality. They are just designed to show that the medication does work.

Goodwin: That it is not placebo…

Goodwin and Leuchter seem to be implying that the companies were convinced their drugs would beat placebo even before starting the trials. They must have been so confident of the drugs’ efficacy that, because speed was more important than determining the drugs’ true efficacy, they set the bar low so they could get the study out the door and get their drug to market. With so much money riding on the FDA approval process, if the companies had been unsure of the efficacy, presumably they would have been more careful with their experimental design.

It seems odd that Goodwin and Leuchter believe that their characterization of the trials is a defense of the process, when it surely creates problems for those involved in the trials. Some might see Leuchter’s defense of the trials as really more of a confession. At the very least, the university professors who authored these published papers, who are not employed by SSRI manufacturers, and who probably felt they were involved in a true scientific enterprise, might take exception to a portrayal of the studies as largely commercial enterprises.

While Leuchter believes the Kirsch meta-analysis was misleading, it is important to point out that Kirsch never claimed to do anything other than pool the FDA database. In a sense, Kirsch was just following the FDA’s lead. What is apparently flawed, at least according to Leuchter, is the public’s understanding of the science behind clinical trials: most people probably assumed that, because they were called scientific studies, they really were scientific studies – that they involved curious scientists in search of an answer to a question. According to Leuchter, in the case of antidepressants this was not so. Instead, the studies were done by companies whose primary goal was to get the drug approved in the simplest way possible. For Leuchter this is apparently common knowledge among clinical trial researchers. I am not sure the general public knew it – until now.

But is Leuchter’s view of the trials really an accurate portrayal? Consider the experimental design of one of the two studies used to get Prozac approved for children. Like the other clinical trials of antidepressants in children, it had a placebo run-in phase, and any children who responded robustly to placebo were removed from the study. The study also had a run-in phase to pre-select for drug responders (Emslie, Heiligenstein et al., 2002; Dubitsky, 2004). All the Prozac-treated children were given 10 mg for the first week, and children who did not respond, or who had negative responses, could then be dropped from the study (p. 1206). At the start of week two, the dose was increased to 20 mg, and the subsequent statistical analysis used only children who had had at least one week of treatment at 20 mg (p. 1208). Thus, before the study even started, there was a mechanism in place to maximize any difference between the drug and placebo groups: the placebo group was pre-selected for non-responders, while the drug group was pre-selected for responders.

Yet even with this advantage, on the prospectively defined primary outcome measure, 65 percent of the children on Prozac had a beneficial response compared to 53 percent of the placebo patients, a difference that was not statistically significant. It was only by looking at other measures that statistical significance was found: on the patient- and parent-rated scales there was no advantage to Prozac, but on one of the clinician-rated scales there was a slight advantage. In other words, if the opinions of the children and their parents were considered, Prozac did not work; on rating scales administered by researchers funded by the makers of Prozac, it had a small advantage.
Although Russell Katz of the FDA wrote, “one could argue that this post hoc choice of primary outcome is inappropriate,” in the end the FDA accepted the post hoc change and approved Prozac for children in January 2003 (Center for Drug Evaluation and Research, 2002, p. 13).
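To see why the 65-versus-53-percent result could fall short of statistical significance, a standard two-proportion z-test can be sketched. The arm sizes below (100 children per group) are an assumption for illustration only, not figures from the trial:

```python
from math import sqrt, erfc

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two response proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled response rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))                      # two-sided p-value
    return z, p_value

# Assumed arm sizes of 100 per group: 65% responders on drug vs. 53% on placebo.
z, p = two_proportion_z_test(65, 100, 53, 100)
print(round(z, 2), round(p, 2))  # z ≈ 1.73, p ≈ 0.08 — above the conventional 0.05 threshold
```

With arms of roughly this size, a 12-point gap in response rates sits just above p = 0.05, which is consistent with the essay’s point that the primary outcome failed to reach significance.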

Prozac is the only antidepressant the FDA has ever approved for use in childhood depression. If the bar had been set any higher (and it was not set high), Prozac would never have been approved for children. As it was, Prozac barely made it. Does this sound like a trial designed to give Prozac every possible advantage? Or does it sound like a clinical trial done by researchers so confident in Prozac’s value that they simply wanted a quick study to get it approved?

Or take the clinical trials of Paxil for childhood depression. A major study, referred to as “Study 329” and apparently ghostwritten (secretly written by paid employees of the makers of Paxil), was published in The Journal of the American Academy of Child and Adolescent Psychiatry. In the published paper the authors stated that Paxil was safe and effective. Yet after internal company documents surfaced suggesting a problem (Paxil had apparently induced suicidal ideation, agitation, and hallucinations in mildly depressed children), the FDA demanded access to the primary data, and upon further examination decided that the benefits did not outweigh the risks. When the shenanigans surrounding the clinical trials of SSRIs in children were eventually made public, the subsequent editorial in The Lancet referred to the research on the use of SSRIs in children as “confusion, manipulation, and institutional failure…”

Leuchter (19:22): Exactly. So then to take those studies, which are simply designed to show that a drug is better than a placebo, and they do that, the studies routinely do that, and to take those studies and say “Ah” these drugs are barely better than placebo is kind of a silly approach. Really. Because that’s all the study was designed to do. If you take a look at the better studies, like the STAR-D study, which is the largest study of depression ever done in this country

Goodwin: The NIMH one

Leuchter: Exactly. This was done at fourteen sites around the country. I was the director for the Los Angeles center of the study. What we showed was that these medications unquestionably improve depressive symptoms, decrease anxiety, get people back to work, improve their function, improve their social relations, decrease disabilities. The benefits just go on and on and on. And it’s those kind of long term – year long studies - that show these medication works. Those are the one we need to look at. Not these very brief low dose antidepressant studies, which only showed that drug is better than placebo. If I only looked at those studies I would say “Why would I take those?”

Leuchter’s last statement above is somewhat confusing, because in saying that based on the short-term trials he wouldn’t use the drugs, he is essentially agreeing with Kirsch – something I am not sure he means to do. His statement muddies the waters: if he agrees with Kirsch’s findings, why did he summarily dismiss the study in his earlier discussion? Does he disagree with publicizing the results? The problem with saying the short-term trials do not provide enough evidence to justify the use of SSRIs is that these are the same trials that were used to justify FDA approval, pharmaceutical company marketing campaigns, and the medical profession’s use of the drugs. (Certainly, when the short-term trials that Kirsch examined were the only ones in existence, the psychiatric profession was still extremely enthusiastic about the effectiveness of SSRIs, based on the exact studies that Leuchter writes off.)

If Prozac and the other SSRIs were approved based on flawed studies, what does this say about the entire drug approval process in our country? If this is regular practice for the FDA, it could explain the recent rash of drug problems. Take Vioxx as an example: the short-term trials on which the FDA based its approval were not confirmed by more in-depth studies. A more cautious approach by the FDA and the medical community might have led to a sounder decision to begin with, and fewer casualties. Yet for the SSRIs, according to Leuchter, there is a happy ending, because recent studies have shown that the antidepressants work.

But are we really so sure that the recent studies he refers to actually support the use of the SSRIs? Leuchter cites the STAR-D study, a major multisite study sponsored by NIMH. If a patient in STAR-D did not improve on one antidepressant, they were switched to another. However, there is a major problem with using STAR-D to justify the use of the SSRIs: it was not a placebo-controlled trial. The researchers could have compared antidepressant treatment to psychotherapy, placebo, exercise, self-help, stress reduction, or no treatment at all, which would have yielded useful data on how SSRIs compare to other approaches. Since they did not, they simply cannot use their data to argue that SSRIs are efficacious. Many of the patients did get better, but the experiment was not designed to isolate the efficacy of SSRIs.

Leuchter’s claim that the conclusions of Kirsch et al. “really are a gross misinterpretation of the scientific literature” needs to be strongly questioned. What rigorous scientific literature contradicts the idea that SSRIs hold only a minor advantage over placebo? Consider that one long-term comparative study found that patients prescribed Zoloft had worse long-term outcomes than those who simply exercised. A discussion of SSRI efficacy grounded in science would need to address data such as these.

As it was, The Infinite Mind did not have a guest with a more critical view who could have pointed out the limitations of the study. The existing critiques of STAR-D seem devastating to the arguments made by Leuchter. While there are not many long-term studies of the antidepressants, in those that exist the drugs do not fare as well as they do in short-term studies. For instance, as Moncrieff and Kirsch have pointed out, in longitudinal studies medicated patients treated for depression have had poorer outcomes than non-medicated patients.

Yet all these discussions about efficacy focus on a point that has already been conceded. Even within mainstream psychiatry, it is now well acknowledged that the difference between the placebo effect and the antidepressant drug effect is minimal. When Kirsch’s 2002 meta-analysis was published in Prevention and Treatment, the journal also published numerous commentaries, some of which took issue with Kirsch. But even the authors who disagreed with him did not disagree with his main findings.

Kirsch et al.’s reply stated: “We are very heartened by the thoughtful response to our article. Unlike some of the responses to a previous meta-analysis of antidepressant drug effects, there is now unanimous agreement among commentators that the mean difference between response to antidepressant drugs and response to inert placebo is very small.” For this point to be overturned, data (not declarations) are required.

The current state of the SSRI debate in the mental health literature is as follows: some say that for every ten people taking an antidepressant, the drug helps one person; at the other end of the spectrum, some argue that it helps three. The debate comes down to the significance of these low efficacy numbers; put another way, are SSRIs effective for 10% or 33% of the patients who receive them? The public health argument goes something like this: helping one out of every ten does not sound very good, but if you give the medications to 10 million people then you are helping one million. (This may be of little consolation to the nine million people being exposed to potential side effects.) Even the pharmaceutical companies acknowledge that the difference between drug and placebo is minimal.
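The arithmetic behind this public-health argument is a number-needed-to-treat (NNT) calculation. A minimal sketch, where the NNT values of 10 and 3 simply restate the “one in ten” and “one in three” figures from the debate above, and the 10-million population is the hypothetical already used in the text:

```python
def patients_helped(population, nnt):
    """People helped beyond placebo, given a number needed to treat (NNT)."""
    return population // nnt

population = 10_000_000  # hypothetical number of people prescribed an antidepressant

# Skeptical end of the debate: the drug helps 1 in 10 (NNT = 10).
helped_low = patients_helped(population, 10)   # 1,000,000 helped
not_helped = population - helped_low           # 9,000,000 exposed to side effects with no added benefit

# Optimistic end of the debate: the drug helps 1 in 3 (NNT = 3).
helped_high = patients_helped(population, 3)   # 3,333,333 helped

print(helped_low, not_helped, helped_high)
```

Either way the calculation is run, the large majority of those prescribed the drug receive no benefit beyond placebo, which is the crux of the debate described above.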

Take this advertisement for Wellbutrin from 2001. This was not generated by a critic of antidepressants; it is an advertisement intended to sell Wellbutrin, produced by the manufacturer and placed on its website. Obviously, the difference between Wellbutrin, Zoloft, and placebo is quite small.


In the second part of the interview, Goodwin interviews Nada Stotland, the President of the American Psychiatric Association (APA). The interview consists mostly of Stotland repeatedly stating that the press is biased. She presents no evidence, nor does she cite any research. However, at one point (35:45) she makes the startling declaration, in direct contrast to the FDA's warning, that: "There was no good reason for the black box warning."

For a moment, put yourself in the shoes of a patient who has just been prescribed an SSRI and whose doctor has carefully explained the black box warning. What would this patient think upon hearing the President of the American Psychiatric Association declare on NPR that there was no good reason for the warning? And if the patient went back to the prescribing doctor and asked why the APA and the FDA are at odds, what would the doctor say?

In defense of having Stotland on the show, Bill Lichtenstein, the show's producer, stated:

“It would be patently ridiculous, for example, to presume that Dr. Stotland, speaking for all American psychiatrists as president of the APA, would somehow distort the truth because of some past connection to an industry speakers’ bureau.”

While Lichtenstein sees no reason to suspect that someone from the APA might have underlying conflicts, others are more skeptical. A recent New York Times article titled "Psychiatric Group Faces Scrutiny Over Drug Industry Ties" reports that the Senate is now interested in the APA's finances, triggered in part by revelations surrounding the incoming APA president's large stock holdings in Corcept Therapeutics, a pharmaceutical company. (It was initially reported that he had not disclosed that he held somewhere in the range of $4 million in stock, but Stanford has said that it knew about the stock and felt it was not a conflict.) According to The Times, in 2006, 30 percent of the APA's $62 million budget came from pharmaceutical companies.

In a sense, the question of suicidality served as a distraction for The Infinite Mind, as it kept them from discussing all the other issues that have recently come to light regarding the antidepressants. If NPR had really wanted to take a more in-depth view, it might have informed listeners of the following: 1) for many patients it is extremely difficult to stop taking these medications because of withdrawal effects; 2) the once-presumed efficacy of the SSRIs is not what many people think; 3) there is little direct evidence in support of the Chemical Imbalance Theory of Depression, which has formed the basis for the companies' marketing programs; 4) many of the SSRI trials have never been published; and 5) some trials have been ghostwritten (meaning that writers paid by pharmaceutical companies secretly authored research articles on SSRIs, although academic psychiatrists are listed as the authors).

If readers of this essay decide that NPR did not give its listeners an accurate presentation of the issues surrounding the antidepressants, then a second question arises, which should be addressed by others, including possibly the NPR ombudsman: What made NPR an easy mark for those searching for an outlet to engage in spin control? The answer, based on NPR's responses so far, appears to be that those producing The Infinite Mind were simply unaware of the issues. Apparently no one involved with the show's production was familiar enough with the critical literature on antidepressants to step back and question the one-sided message they were putting on the air.

Perhaps the most troubling part of all this is that there is a very easy solution: NPR should simply air another show on the same topic and invite the same guests, but this time include some professionals who take a more critical view of these issues and base their comments on data rather than assertions. This proposal seems hard to argue with, and certainly many interested listeners would tune in to hear such a discussion.

Open Invitation: Any of the professionals mentioned in this article are invited to respond. Please send any corrections, responses, or rebuttals to . We will be happy to post (or link to) your response.

Readers are also invited to submit their comments to

Recommended Reading

David Healy: Let Them Eat Prozac
Charles Medawar: Social Audit
