If you're like me, you consider hard science to be immutable; after all, that's why it is called hard. You have a well-designed experiment; the results show x, y and z; therefore x, y and z will always be true. And from there you can build on those results.
But a fascinating New Yorker article by Jonah Lehrer turns the notion of hard science into something squishier, more malleable and a lot less permanent.
Lehrer documents the growing realization among some scientists and researchers that many scientific findings, when given a closer look, begin to fall apart: the effects the studies once documented steadily shrink, sometimes to nothing. "It's as if our facts were losing their truth," Lehrer writes.
One example Lehrer cites is the effects of atypical, or second-generation, antipsychotics (sold under brand names such as Abilify, Seroquel and Zyprexa). Early studies showed the drugs worked remarkably well. But the positive therapeutic results of later studies kept shrinking, until they all but disappeared.
What is going on? Lehrer asks. Why is the truth fading? And how, then, can we trust any finding? To find out, he turned to researchers who are studying this mysterious phenomenon.
When a catchy new theory or a new way of thinking emerges, the publishing process tilts toward positive results and against results that show no effect. This bias usually wears off after a few years, once the exciting new idea becomes entrenched. Only then do studies showing no effect, or a much smaller effect, start getting published.
Another contributing factor is the "selective reporting" of results, or the data that scientists choose to document in the first place. This is not scientific fraud, says one of the researchers who has studied this, but instead "subtle omissions and unconscious misperceptions as researchers struggle to make sense of their results."
Sometimes this is caused by a cultural bias. The testing of acupuncture is a case in point. In Asian countries where acupuncture is widely accepted, many of the studies that this researcher examined concluded that acupuncture is effective. In Western countries, studies found it to be effective about half as often. "Our beliefs are a form of blindness," writes Lehrer.
Researchers also design studies hoping for findings that will have statistical significance, because few studies with "null" results get published. (You probably won't see this headline in a journal: "Bunions NOT associated with brain tumors or heart disease.")
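That publication filter is enough, on its own, to make early published effects look inflated and later, larger studies look like a "decline." A minimal simulation sketches the idea; the true effect size, the per-study sample size, and the significance cutoff here are all illustrative assumptions, not numbers from any study Lehrer discusses:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # assumed true effect size (standardized units)
N = 20              # assumed small sample size of early studies
STUDIES = 5000      # number of simulated studies

all_effects = []    # effect estimates from every study run
published = []      # only the "statistically significant" ones

for _ in range(STUDIES):
    # One small study: N noisy observations around the true effect.
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    effect = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    all_effects.append(effect)
    # Crude significance filter: effect more than 2 standard errors
    # above zero, roughly p < 0.05 one-sided.
    if effect / se > 2:
        published.append(effect)

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean across all studies:  {statistics.mean(all_effects):.2f}")
print(f"mean across 'published':  {statistics.mean(published):.2f}")
```

The average across all simulated studies hovers near the true effect, but the average across the "published" subset is far larger, because small studies only clear the significance bar when noise happens to exaggerate the effect. When later, bigger studies come along, the measured effect regresses back toward the truth, and the finding appears to be fading.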
And finally, there is the role that "sheer randomness" plays in this declining effect. Some of that apparent randomness comes from variables that scientists fail to control for, either because they can't or because they never think to.
The consequences of these flawed studies are lost time and effort, as researchers struggle and fail to replicate and build upon the findings. Sometimes bad medical practices result from them, too (think hormone replacement therapy for menopausal women).
So next time you read about an exciting new finding, take it with a grain of salt. The hard science it is built upon may just turn to quicksand after a few years.