
Posted Oct 27 2012 9:08am


Prozac pregnancy alert: Mothers-to-be on anti-depressants are putting babies at risk, warn scientists

One cannot evaluate these claims without virtually redoing their literature survey, but I would be surprised if the evidence was anything other than epidemiological. People who are already in poor health are probably more likely to be depressed, which would account for the results.

Thousands of women who take anti-depressants during pregnancy are endangering their unborn babies, researchers have warned.

The widely prescribed pills have been found drastically to raise the odds of miscarriages, premature birth, autism and life-threatening high blood pressure, they say.

Harvard researchers believe far too many women are taking the drugs during pregnancy because their GPs are not aware of the dangers.

They also suspect that drug companies are trying to play down the risks because anti-depressants are so lucrative to them.

They focused on the complications linked to a group of drugs called selective serotonin reuptake inhibitors (SSRIs), which include Prozac and Seroxat. Between 2 and 3 per cent of pregnant women in the UK are thought to be on these drugs – up to 19,500 every year.

But the researchers have found that they increase the risk of a miscarriage by 17 per cent and more than double the likelihood of pre-eclampsia – high blood pressure during pregnancy – which can be fatal.

They also double the chances of the baby being born prematurely or developing autism. In addition, the researchers say, the babies are more likely to suffer from heart defects and problems with their bowels.
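The relative figures above are easy to misread: a "17 per cent" increase multiplies the baseline risk by 1.17, it does not add 17 percentage points. A minimal sketch of the arithmetic, using an assumed illustrative baseline that is not a figure from the study:

```python
# Relative vs. absolute risk: a "17 per cent increase" multiplies the
# baseline risk by 1.17; it does not add 17 percentage points.
# The baseline below is an assumption for illustration, not from the study.
baseline_miscarriage_risk = 0.15   # assumed baseline risk, illustrative only
relative_increase = 0.17           # "increase the risk of a miscarriage by 17 per cent"

risk_on_ssri = baseline_miscarriage_risk * (1 + relative_increase)
absolute_increase = risk_on_ssri - baseline_miscarriage_risk

print(f"risk on SSRIs: {risk_on_ssri:.4f}")
print(f"absolute increase: {absolute_increase * 100:.2f} percentage points")
```

On these assumed numbers the "17 per cent" rise works out to roughly two and a half percentage points of absolute risk, which is why reporting relative figures alone can make an effect sound larger than it is.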

SSRIs treat depression by boosting the level of the ‘happy hormone’ serotonin in the brain. But the researchers believe that serotonin is also getting into the womb and harming the development of the foetus’s brain, lungs, heart and digestive system.

Dr Adam Urato, assistant professor of obstetrics and gynaecology at Tufts University School of Medicine, in Boston, who was involved in the study, said: ‘I am absolutely concerned – very concerned.

‘We are witnessing a large-scale human experiment. Never before have we chemically altered human foetal development on such a large scale. And my concern is why I am trying to get the word out to patients, health care providers, and the public.’

Dr Alice Domar, assistant professor in obstetrics, gynaecology and reproductive biology at Harvard Medical School, said there was little evidence the pills effectively treated depression.

She said GPs were handing out prescriptions for the drugs even though depression could be far better treated through exercise, talking therapies and even yoga.

‘These are probably not particularly safe medicines to take during pregnancy,’ she said. ‘We’re not saying that every pregnant woman should go off her medication. Obviously you don’t want a pregnant woman to attempt suicide.’

The researchers, who presented their findings to the annual conference of the American Society for Reproductive Medicine in San Diego, California, have analysed more than 100 existing studies looking at the risks of SSRIs.

Their findings are due to be published next week in the respected journal Human Reproduction.

The researchers say that if women take the pills when they are trying for a baby but come off as soon as they find out they are pregnant, it may be too late.

Dr Urato added: ‘Many of the experts in this area receive funding from the anti-depressant majors. These experts continue to downplay the risks of these agents and to promote the benefits of their use in pregnancy.’

A spokesman for the Association of the British Pharmaceutical Industry said: ‘Clinical decisions about the treatment of depression are complex and must be made by clinicians in consultation with individual patients, regardless of whether or not they are pregnant.’

SOURCE





Medical studies with striking results often prove false

A statistical analysis finds that study results showing a 'very large effect' rarely hold up when other researchers try to replicate them

Given the excruciatingly bad methodology and childish logic of so many published studies, this should be no surprise at all.


If a medical study seems too good to be true, it probably is, according to a new analysis.

In a statistical analysis of nearly 230,000 trials compiled from a variety of disciplines, study results that claimed a "very large effect" rarely held up when other research teams tried to replicate them, researchers reported in Wednesday's edition of the Journal of the American Medical Assn.

"The effects largely go away; they become much smaller," said Dr. John Ioannidis, the Stanford University researcher who was the report's senior author. "It's likely that most interventions that are effective have modest effects."

Ioannidis and his colleagues came to this conclusion after examining 228,220 trials grouped into more than 85,000 "topics" — collections of studies that paired a single medical intervention (such as taking a non-steroidal anti-inflammatory drug for postoperative pain) with a single outcome (such as experiencing 50% relief over six hours). In 16% of those topics, at least one study in the group claimed that the intervention made patients at least five times more likely to either benefit or suffer compared with control patients who did not receive the treatment.

In at least 90% of those cases, the team found, including data from subsequent trials reduced those odds.

The analysis revealed several reasons to question the significance of the very-large-effect studies, Ioannidis said.

Studies that reported striking results were more likely to be small, with fewer than 100 subjects who experienced fewer than 20 medical events. With such small sample sizes, Ioannidis said, large effects are more likely to be the result of chance.

"Trials need to be of a magnitude that can give useful information," he said.
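Ioannidis's point about small samples can be illustrated with a quick simulation: even when the true effect of a treatment is modest, small trials with few events will occasionally show a "very large" odds ratio purely by chance. A sketch under assumed trial sizes and event rates (all parameters here are illustrative, not taken from the JAMA analysis):

```python
import random

def simulate_small_trials(n_trials=5000, n_per_arm=40,
                          p_control=0.10, p_treat=0.15, seed=1):
    """Simulate many small two-arm trials with a modest true effect and
    count how often the observed odds ratio looks 'very large' (>= 5)."""
    rng = random.Random(seed)
    big_effects = 0
    for _ in range(n_trials):
        a = sum(rng.random() < p_treat for _ in range(n_per_arm))    # events, treatment arm
        c = sum(rng.random() < p_control for _ in range(n_per_arm))  # events, control arm
        # odds ratio with a 0.5 continuity correction to avoid division by zero
        odds_ratio = ((a + 0.5) / (n_per_arm - a + 0.5)) / \
                     ((c + 0.5) / (n_per_arm - c + 0.5))
        if odds_ratio >= 5:
            big_effects += 1
    return big_effects / n_trials

true_or = (0.15 / 0.85) / (0.10 / 0.90)  # the true odds ratio, about 1.6
frac = simulate_small_trials()
print(f"true odds ratio: {true_or:.2f}")
print(f"share of small trials showing OR >= 5 by chance: {frac:.1%}")
```

With only 40 subjects per arm and a handful of events, a nontrivial fraction of simulated trials report an odds ratio of five or more even though the true effect is modest, which is exactly the pattern the analysis describes: the striking result shrinks once larger follow-up trials are pooled in.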

What's more, the studies that claimed a very large effect tended to measure intermediate effects — for example, whether patients who took a statin drug reduced their levels of bad cholesterol in their blood — rather than incidence of disease or death itself, outcomes that are more meaningful in assessing medical treatments.

The analysis did not examine individual study characteristics, such as whether the experimental methods were flawed.

The report should remind patients, physicians and policymakers not to give too much credence to small, early studies that show huge treatment effects, Ioannidis said.

One such example: the cancer drug Avastin. Clinical trials suggested the drug might double the time breast cancer patients could live with their disease without getting worse. But follow-up studies found no improvements in progression-free survival, overall survival or patients' quality of life. As a result, the U.S. Food and Drug Administration in 2011 withdrew its approval to use the drug to treat breast cancer, though it is still approved to treat several other types of cancer.

With early glowing reports, Ioannidis said, "one should be cautious and wait for a better trial."

Dr. Rita Redberg, a cardiologist at UC San Francisco who was not involved in the study, said devices and drugs frequently get accelerated approval on the basis of small studies that use intermediate end points. "Perhaps we don't need to be in such a rush to approve them," she said.

The notion that dramatic results don't hold up under closer scrutiny isn't new. Ioannidis, a well-known critic of the methods used in medical research, has written for years about the ways studies published in peer-reviewed journals fall short. (He's perhaps best known for a 2005 essay in the journal PLoS Medicine titled, "Why Most Published Research Findings Are False.")

But the scope of the JAMA analysis sets it apart from Ioannidis' earlier efforts, said Dr. Gordon Guyatt, a clinical epidemiologist at McMaster University in Hamilton, Canada, who was not involved in the work. "They looked through a lot of stuff," he said.

Despite widespread recognition that big effects are likely to disappear upon further scrutiny, people still "get excited, and misguidedly so" when presented with home-run results, Guyatt said.

He emphasized that modest effects could benefit patients and were often "very important" on a cumulative basis.

SOURCE

