Publication bias “is the tendency of researchers, editors, and pharmaceutical companies to handle the reporting of experimental results that are positive (i.e. showing a significant finding) differently from results that are negative (i.e. supporting the null hypothesis) or inconclusive, leading to bias in the overall published literature.”
A recent study looked at previous publications (and one unpublished study) and claims that there is publication bias involved in reporting on SSRIs and repetitive behaviors in autism:
The goal of this study was to examine the efficacy of serotonin reuptake inhibitors (SRIs) for the treatment of repetitive behaviors in autism spectrum disorders (ASD).

METHODS: Two reviewers searched PubMed and ClinicalTrials.gov for randomized, double-blind, placebo-controlled trials evaluating the efficacy of SRIs for repetitive behaviors in ASD. Our primary outcome was mean improvement on rating scales of repetitive behavior. Publication bias was assessed by using a funnel plot, Egger's test, and a meta-regression of sample size and effect size.

RESULTS: Our search identified 5 published and 5 unpublished but completed trials eligible for meta-analysis. Meta-analysis of 5 published and 1 unpublished trial (which provided data) demonstrated a small but significant effect of SRI for the treatment of repetitive behaviors in ASD (standardized mean difference: 0.22 [95% confidence interval: 0.07-0.37], z score = 2.87, P < .005). There was significant evidence of publication bias in all analyses. When Duval and Tweedie's trim and fill method was used to adjust for the effect of publication bias, there was no longer a significant benefit of SRI for the treatment of repetitive behaviors in ASD (standardized mean difference: 0.12 [95% confidence interval: -0.02 to 0.27]). Secondary analyses demonstrated no significant effect of type of medication, patient age, method of analysis, trial design, or trial duration on reported SRI efficacy.

CONCLUSIONS:
Meta-analysis of the published literature suggests a small but significant effect of SRI in the treatment of repetitive behaviors in ASD. This effect may be attributable to selective publication of trial results. Without timely, transparent, and complete disclosure of trial results, it remains difficult to determine the efficacy of available medications.
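To make the abstract's machinery a bit more concrete, here is a minimal sketch of an inverse-variance (fixed-effect) pooled standardized mean difference and a simple version of Egger's regression test. The trial-level numbers below are entirely hypothetical, chosen only to illustrate the mechanics (smaller trials given bigger effects, the classic funnel asymmetry); they are NOT the data from the study discussed above.

```python
import math

# Hypothetical (SMD, standard error) pairs -- NOT the study's real data.
# Smaller trials (larger se) are given larger effects on purpose, to mimic
# the small-study asymmetry that publication bias produces.
trials = [
    (0.45, 0.20),
    (0.35, 0.18),
    (0.30, 0.15),
    (0.25, 0.22),
    (0.10, 0.10),
    (0.05, 0.08),
]

# Fixed-effect (inverse-variance) pooled estimate: weight each trial by 1/se^2.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
z = pooled / se_pooled
print(f"pooled SMD = {pooled:.3f}, 95% CI "
      f"[{pooled - 1.96 * se_pooled:.3f}, {pooled + 1.96 * se_pooled:.3f}], "
      f"z = {z:.2f}")

# Egger's test: regress the standardized effect (d/se) on precision (1/se).
# If the funnel plot is symmetric, the intercept should be near zero; an
# intercept far from zero indicates small-study asymmetry.
xs = [1 / se for _, se in trials]
ys = [d / se for d, se in trials]
n = len(trials)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(f"Egger intercept = {intercept:.3f} (far from 0 => asymmetry)")
```

With data skewed this way, the intercept comes out well above zero, which is the kind of signal that would prompt a trim-and-fill adjustment like the one the authors report.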
Hello friends -
Totally nasty observation that doesn't leave a lot of good options. I ran into a discussion somewhere recently (can't remember where; I thought In the Pipeline, but can't find it now!) about a similar topic: the problem of funding and/or academic position depending on publication in 'high impact' journals, as measured by number of papers rather than the quality of the work. One upshot of this, in some cases, was researchers going looking for something, doing enough optimistic filtering until a result turned up, and then writing the paper around that.
Anyway, one of the suggestions I saw offered was to treat research papers more like clinical trials: you have to state up front what you are looking for. It would turn the existing model somewhat upside down; you'd have to tell the journals you'd like to submit to beforehand what your expectations of the research are. The idea was that if you found weird stuff you weren't expecting, you could discuss that in the paper, but it would be apparent that this wasn't the focus of the initial research.
Secondly, I think this kind of thing speaks to another potential problem: research being driven by the entities capable of deriving profits from a treatment for autism. Put another way, an argument I used to make was that the barriers to entry of a well-designed, gold-standard study are so significant that there is a tendency not to study a common, cheap, unpatentable supplement. That's bad for everyone: we don't learn about things that might help, and we also don't learn about other stuff said supplementation might be doing; and right or wrong, there is a lot of supplementation going on out there.
@Sullivan - Thanks for posting this. Very nice.
Hey passionlessDrone: the idea of pre-registering clinical trials is one I've been pushing recently. You might have been reading something I wrote but maybe not because I'm not the only one. Anyway here's my thoughts on it: http://neuroskeptic.blogspot.co.uk/2011/05/how-to-fix-science.html
Regarding the paper, it's a great example of why registration of clinical trials is so important. It's the only way anyone knows about the 5 unpublished studies. Of the 5, 2 of them are known to have been negative - see Michelle Dawson's comment on my post http://neuroskeptic.blogspot.co.uk/2012/04/bias-in-studies-of-antidepressants-in.html.
The other 3 presumably were negative as well; they'd almost certainly have been published had they been positive.
From a sheer monetary/taxpayer point of view, federally funded research needs to be published; we've already paid for the information.
From a human perspective, subjects have given themselves for the research effort. It doesn't honor their contribution to withhold the results from the public.
@Neuroskeptic: It was your site that I was reading! Nicely done and good ideas thrown around.
@Anyone: Was this a case of publication bias, or perhaps, 'lack of submittal for publication' bias; i.e., the study was done, results determined, and the paper was not subsequently submitted to a journal because the findings were potentially inconvenient?