SYSTEMATIC reviews are often described as the gold standard when it comes to levels of evidence: policymakers rely on them in making decisions about issues as diverse as tobacco control and the management of prison populations.
But not everybody agrees they are a good foundation for policy making, as a recent paper by researchers from the University of California makes clear.
The researchers analysed 59 articles that expressed an opinion about the use of systematic reviews for policy making. Criticisms of systematic reviews ranged from bias in the selection of trials and dissimilarities in study design to the inclusion of small, under-powered studies, general methodological flaws, and the risk that reviews may not adequately account for publication bias. I have written previously about that last point.
If we’re going to use reviews as a basis for policy making (or clinical decision making, for that matter), it’s important to take heed of such criticisms.
It’s undoubtedly true that not all systematic reviews are created equal — the maxim “garbage in, garbage out” applies here as much as anywhere else.
But what is fascinating about the Californian paper is the researchers’ suggested link between journal articles criticising systematic reviews and the authors’ ties to industry.
Just over a third of articles that supported systematic reviews were found to have an author with a disclosed or undisclosed tie to the pharmaceutical, tobacco or insurance industries, compared with 80% of those that were critical of them.
What might be going on here?
Paradoxically, these authors suggest it may be the high quality of the best systematic reviews that leads industry to attack the whole lot of them on methodological grounds.
Well-designed reviews — those including unpublished as well as published data — may contradict the results of industry-funded trials, potentially undermining claims about benefits or harms.
“Systematic reviews that are conducted according to strict predefined protocols leave industry with less ability to direct or criticize the review findings”, the authors write. “This may leave industry with less control over the discourse …”
Indeed.
About a third of the critical articles studied were comments, editorials or letters, pieces that are not generally subject to peer review.
Despite that, these articles often went on to be cited in other papers as though they constituted evidence, rather than opinion.
An editorial published in 1999 in the New England Journal of Medicine criticising a systematic review on coronary effects of passive smoking has, for example, been cited 30 times as evidence of the flaws of systematic reviews.
These opinion pieces are also far less likely than other types of articles to include conflict-of-interest disclosures, the Californian researchers write, calling on journals to require more consistent and complete disclosures for all published articles.
Although the authors attempted to identify undisclosed conflicts as part of their research, they acknowledge that some slipped through the net.
For example, they classified one critical article, a contribution by the late and often controversial Professor Hans Eysenck to the debate about passive smoking, as free of industry ties.
The paper included no disclosure and the researchers’ own efforts did not turn up any undisclosed conflicts, but they cite an article in The Independent alleging Professor Eysenck had received £800 000 in research funding from the tobacco industry.
The conduct of systematic reviews is far from a perfect science, as these authors acknowledge.
There are many criticisms we might make, many improvements we could seek, but those should be inspired by evidence — not opinion, and especially not opinion that may be influenced by commercial interests.
Jane McCredie is a Sydney-based science and medicine writer.