InSight+ Issue 11 / 26 March 2012

CONCERN that published research does not provide doctors with the full evidence base about medications is increasing, with an analysis of antipsychotic trials finding several examples of publication bias.

The research, published in PLoS Medicine, analysed 24 trials of eight second-generation antipsychotics that had been registered with the US Food and Drug Administration. (1)

The study found that four of the trials were never published. Of these, three failed to show that the study drug was significantly better than placebo, and one showed the drug was statistically inferior to the active comparator.

It is the latest example of researchers using data from published and unpublished research in their analysis — and finding discrepancies between the two sources.

An analysis of antidepressant trials by the same researchers in 2008 found that publication bias nearly doubled the apparent proportion of positive trials and increased the apparent effect size of antidepressants by one third. (2)

A recent reanalysis of neuraminidase inhibitors for influenza using primary trial data was conducted after researchers found that 60% of patient data from oseltamivir trials had never been published. The reanalysis found that oseltamivir did not seem to reduce hospitalisations, contrary to the findings of published reports. (3)

One of the authors of the oseltamivir review was Professor Chris Del Mar, professor of public health at Queensland’s Bond University. Professor Del Mar told MJA InSight that the latest research on antipsychotics showed that if doctors only read published research, they would get a biased view of these medications.

“This is another indication that, at the moment, our system for providing information to clinicians about the efficacy of commercially sensitive products is broken. More and more, it seems to look as if medical journals are simply becoming the marketing arm of commercial interests such as the pharmaceutical industry”, he said.

While the association between trial outcome and publication status in the latest research did not reach statistical significance — probably because of the small number of relevant trials — Professor Del Mar said the numbers spoke for themselves.

He said although the publication bias in the latest analysis was not as strong as found for antidepressants or neuraminidase inhibitors, it was another example of the problem.

“We’re beginning to see a pattern that what goes through the regulators is not what we see in journals”, he said.

Professor Gordon Parker, Scientia professor of psychiatry at the University of NSW, said the latest analysis of antipsychotics showed that “if we are to rely on the evidence base, then we do need to examine the data from unpublished studies as well as published”.

However, he said the bigger issue was the real-world effectiveness of psychopharmacological drugs.

He cited the 2005 Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study, which unexpectedly found that a first-generation antipsychotic, perphenazine, performed generally as well as newer atypical antipsychotics, with fewer side effects. (4)

“I think the more important question is the one raised by the CATIE study — if short-term efficacy doesn’t really differentiate the old typical and new atypical drugs, then what are the meaningful drug profile differences (eg, rates of tardive dyskinesia, metabolic syndrome) over the years that they are used?”, Professor Parker said.

He said pharma-sponsored trials gave little useful information because trial patients and real-world clinical patients were as different as apples and oranges, so the real-world effectiveness and true side-effect profile of these drugs often took years of naturalistic observation by clinicians to identify.

“If you walk around psych wards, just look at the size of patients — their BMIs are around 30 to 40. There’s an accruing database that shows some of these drugs are linked to metabolic syndrome, diabetes, hypertension.”

Professor Parker called for more clinical effectiveness studies in psychiatry.

“Real-world research would tell a more coherent, informative and discriminating story”, he said.

– Sophie McNamara

1. PLoS Med 2012; online 20 March
2. N Engl J Med 2008; 358: 252-260
3. Cochrane Database Syst Rev 2012; (1): CD008965
4. N Engl J Med 2005; 353: 1209-1223

12 thoughts on “Publication bias concerns grow”

  1. AdrianB says:

    Looking at Figure 1 in the actual article, there was full compliance for 6 compounds. Only for 2 compounds, aripiprazole and ziprasidone, were there two studies missing each. In other words, another way MJA and the authors could have presented the research findings would be to say that 75% of the medicines had full publication, but that questions have to be raised about the 2 remaining medicines.

  2. Sue says:

    The purpose of “peer review” for journal publication is more about suitability for publication than the validity of the findings. That’s why we learn about critical appraisal of the literature – the real “peer review” should be the interpretation by informed readers. We can’t assume that everything that is published in peer-reviewed journals makes valid findings or recommendations – we can only assume that the papers are of sufficient quality to be offered to the reading public. It is then our responsibility to interpret the findings. Knowing that this is onerous and time-consuming, perhaps we need some sort of academic institution or college or AMA tool that appraises the key opinion-leading papers.

  3. Kaunas says:

    Actually, I am wondering about the scientific community in general, and the medical research community in particular: outside the window it is the 21st century, but in the way research is published it is still the 20th, with many obstacles and criteria standing between a study and publication. Instead of keeping these, I think free publication on the internet would be very useful in lowering the barriers to publication and decreasing this kind of bias. Additionally, the use of the number of publications as a proxy variable for scientific merit (which in turn leads to bigger money as a consequence) makes the whole process simply publication-driven medicine and research. And money-driven too. With all the consequences we now face.

  4. Anonymous says:

    I agree with the comments regarding “Publish or Perish”. It may help if: (1) instead of having so many researchers trying to obtain funding (which will be inadequate) for their projects, we had fewer researchers obtaining larger grants, with more researchers working on a smaller number of highly relevant studies, which would help minimise errors through a decreased workload and more checking of data and statistics; and (2) detailed protocol publication were made essential, along with publication of the results of all published protocols. This would help reduce publication bias and mistakes in the literature.

  5. JD says:

    I don’t agree with RayT that this phenomenon is solely due to underfunding (which is a problem particularly in Australia). Having worked in very well-funded overseas institutions, I saw that there was enormous pressure to publish, but (as Dr Wigeson pointed out) little understanding, in many cases, of scientific method, statistics and statistical analysis. Add to this underfunding and you get what I call “Nike research”: “Just do it” and don’t worry too much about the quality of the methodology or resulting data …

  6. Sue says:

    This underlines the importance of going to Journal Clubs or at least keeping up with critical review skills. I have just spent two days at an update on appraisal of research, immersed in positive and negative predictive values, NNT, likelihood ratios and confidence intervals. The detail is not easy to acquire and retain in a busy clinical life. Perhaps the MJA could have a “critical appraisal” series where an expert in research in their field goes through a few key papers each month. This might help to counter the “headline” messages that often make the media, without context or validity.

  7. Thais says:

    Gav raises an important point. As a Public Health Physician and a GP I am finding it increasingly difficult to publish my research, far more so than when I started my career. It seems to be much easier if you are part of a large, well-funded study with well-known names on the panel, but if you are trying to publish moderately interesting research about a small local area on a shoestring budget (as I am) it is very difficult. I am aware of a lot of people like me who are doing really interesting research who can’t get published. If more of us could get published, it might improve the representativeness of the evidence.

  8. Dr Peter Parry says:

    As a psychiatrist I was once shocked to realise I couldn’t fully trust the published literature; now (according to your poll today) I’m one of the majority of readers of this article who are no longer surprised.

    Internal industry documents were released from court cases in the USA, mainly involving atypical antipsychotics, and are available on the internet. A US colleague and I read through more than 400 of these and published our findings: http://www.springerlink.com/content/b674622731k4850q/?p=780854c9fdb64f98

    As the former chief editor of the BMJ advocated, full disclosure of raw data is necessary – http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020138

  9. Gav says:

    Journal editors are partly to blame here. I don’t believe they are in the pocket of Big Pharma, but they prefer to publish sexy positive trials over boring negative trials. Since a 20% acceptance rate is common with clinical journals, this is almost inevitable.

  10. John Stokes says:

    Yes, dabigatran is the drug that most concerns me at the moment. It is not being prescribed with true informed consent. How many patients know that there is no reversal agent when they have their fall or bleed?

  11. Ray T says:

    This is an inevitable result of underfunding universities and independent research, and of the “Publish or Perish” syndrome driving researchers to seek funding where they can. I note that “acceptable levels” of cholesterol in published charts have come down since so many lipid-lowering drugs have become available.

  12. Dr M Wigeson says:

    Years ago when I was doing the DPH diploma, the class was asked to each randomly select 10 journal articles and analyse them for accuracy of their statistics, and whether their stated conclusions matched their data. In 9 out of 10 cases (about 280 articles between us) there was a statistical error of some kind – some of them glaring, and some of them seemingly deliberate. This was the published literature that had presumably made it through peer review.
    When you add the bias from selectively not publishing anything that contradicts the point the researcher wishes to prove …
    Our reliance on “evidence-based medicine” makes us vulnerable to these statistical and other biases because many of us do not have the time or the tools to detect these errors, and unfortunately this may mean that patients are not getting the best treatment based on published research. It gets even worse when these biased trials are cited as the basis of support for future investigations – a real house of cards which could come tumbling down if anyone ever challenged the foundation.
