InSight+ Issue 39 / 14 October 2013

HOW a piece of research changes clinical practice, rather than how many times it is cited or which journal it appears in, is the best way to judge its merit, say Australian experts.

Professor Chris Del Mar, professor of public health at Bond University, told MJA InSight that the issue of how research is judged was of pressing concern to all academics.

“Everyone’s trying to find a way of deciding what’s good research and what isn’t”, he said. “Perhaps it’s time to judge a piece of research on its societal impact rather than its scientific impact.”

Dr Virginia Barbour, former chief editor of PLOS Medicine and now its medical editorial director, agreed, saying there needed to be multiple ways of assessing papers both pre- and post-publication.

“None are perfect but all may have some value if it is possible to know exactly what is being assessed”, she told MJA InSight.

Professor Del Mar and Dr Barbour were responding to research published in PLOS Biology that found three methods of assessing the merit of a scientific paper — subjective post-publication peer review, number of citations gained, and the impact factor (IF) of the publishing journal — were “poor, error-prone, biased, and expensive”. (1)

The authors compiled data from two sources — 716 papers from the Wellcome Trust dataset, each scored by two assessors, and 5811 papers from the F1000 database, 1328 of which had been assessed by more than one assessor. All papers were published in 2005.

They compared assessor scores between the two datasets, as well as the correlation between assessor scores and IF scores, and the correlation between the assessor scores and the number of citations. They found that “scientists are poor at judging scientific merit and the likely impact of a paper, and that their judgement is strongly influenced by the journal in which the paper is published”.

“The number of citations a paper accumulates is a poor measure of merit and we argue that although it is likely to be poor, the impact factor, of the journal in which a paper is published, may be the best measure of scientific merit currently available”, the authors concluded.
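For readers interested in the mechanics, the comparisons the study describes are essentially rank correlations. The sketch below, in Python, shows one way such correlations between assessor scores, journal impact factor and citation counts might be computed; the file name and column names are hypothetical, and this is not the study authors’ code.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical dataset: one row per paper published in 2005
papers = pd.read_csv("papers_2005.csv")

# Agreement between two independent assessors of the same paper
rho_assessors, p1 = spearmanr(papers["assessor1_score"], papers["assessor2_score"])

# Assessor score versus impact factor of the publishing journal
rho_if, p2 = spearmanr(papers["assessor1_score"], papers["journal_impact_factor"])

# Assessor score versus citations accumulated by the paper
rho_cites, p3 = spearmanr(papers["assessor1_score"], papers["citations"])

print(f"assessor vs assessor:      rho={rho_assessors:.2f} (p={p1:.3g})")
print(f"assessor vs impact factor: rho={rho_if:.2f} (p={p2:.3g})")
print(f"assessor vs citations:     rho={rho_cites:.2f} (p={p3:.3g})")

Weak correlations between assessors, and between assessor scores and later citations, would be consistent with the conclusions the authors report above.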

Professor Del Mar said there were two kinds of journal — researcher-to-researcher, and researcher-to-clinician.

“Researcher-to-researcher papers are cited a lot”, he said. “But researcher-to-clinician papers that actually change clinical practice don’t get cited much because clinicians just go and do it.”

Dr Barbour felt the PLOS Biology paper’s conclusion about the IF was “concerning”.

“I am concerned about their conclusion that the IF is the least bad, especially as they could not really control for the effects of IF on other assessments such as the F1000 reviews”, she said.

“More importantly, I think this paper misses the point about what the scientific literature is about and what is needed in its assessment.

“Any journal-level metric will always be problematic for understanding the ‘value’ of an individual paper. That is why article-level metrics, which are transparent and which assess a range of measures, are so important and represent a substantial step forward in assessment.”

She cited a recent editorial she wrote for PLOS Medicine in which she documented research papers they had published which had considerable societal impact. (2)

“Among these are papers on: why lethal injection is not a humane method of killing, as it probably asphyxiates prisoners, which led to lethal injection being ruled unconstitutional in the state of Tennessee; a method of measuring how bad a war is, which led to a change in NATO procedures in Southern Afghanistan; and a forensic dissection of articles we helped bring to light, through a legal intervention, on how companies manipulate doctors through ghostwritten journal articles”, she wrote.

“The paper on lethal injection has had just eight academic citations to date; this single measure of impact — of either the article or the journal — misses the paper’s demonstrable effect on public policy.”

 

1. PLOS Biology 2013; Online 8 October
2. PLOS Medicine 2013; Online 24 September


Poll

How do you judge the importance of medical research?
  • Relevance to practice (77%, 77 Votes)
  • Journal it appears in (16%, 16 Votes)
  • Number of citations (7%, 7 Votes)

Total Voters: 100


7 thoughts on “Judge research on clinical impact”

  1. Dr John B. Myers says:

    Interesting. I agree. Editors are to blame for the delay from basic knowledge to clinical application: for not publishing letters critical of “views” or articles about “current understanding”, and for publishing double-blind studies that, though done according to scientific method, are not done in a clinical context, e.g. not controlling for sodium intake in drug studies where the efficacy of the drugs compared is sodium-intake dependent (such as renin-angiotensin pathway inhibitors or blockers), or long-term cardiovascular outcome studies that compare race without taking sodium intake into account. In either case this is a waste of resources and of little practical relevance, yet such studies are published in prestigious clinically relevant journals because of the study design or the prestige, or deemed prestige, of the authors and reviewers (which perhaps cannot be helped), while the constructive, less well known but informed critic lacks that prestige. In other cases, ours included, editors of Proceedings seemed to have a vested interest in not publishing work similar to their own awaited findings.

  2. Ulf Steinvorth says:

    Regardless of how we measure the impact of research, it would probably be a good start to make sure that all trials are actually published in the first place, to allow accurate assessment of current medical evidence – regardless of whether their results help or hinder the sales of the investigated products or procedures. For more information visit http://www.alltrials.net

  3. Communicable Disease Control Directorate says:

    I have some concerns regarding the disappointment with the citation rate for the lethal injection paper.

    1. They reported that “lethal injection probably asphyxiates prisoners”. This appears to be opinion rather than fact, given the use of the word “probably”. In addition, were there qualifiers, such as whether a large dose of benzodiazepine administered simultaneously ensured the person was unconscious when asphyxiated?

    2. The audience to whom the article was directed is quite small when compared with, say, an article on “hypoxia in association with asthma”, an illness that affects many millions of people.

    3. Dr Barbour and Prof Del Mar were responding to an article in PLOS Biology. This is one of more than 8000 open access journals, most of which purport to be peer reviewed but the majority are not. As noted recently in Science (Bohannon J. Who’s afraid of peer review? Science 2013; 342: 60-65), when a fictitious scientific paper with serious, fatal flaws deliberately incorporated was submitted to different open access journals, 157 accepted it and 98 rejected it. Of the 106 journals that appeared to perform any review, 70% ultimately accepted the paper.

    There is no single effective measure for judging the merit of research. However, when assessing research, the quality of the journals in which the publications appear (broadly assessed by impact factor), how often they were editorialised, whether the author is first on the paper (indicating they did most of the work) or the corresponding author (indicating they supervised the research and had the “ideas”), and the citation index all point to the quality of the research and the influence it might have.

  4. Department of Health Victoria Clinicians Health Channel says:

    Complex issue; however, actual or potential clinical impact on change of medical practice in the discipline concerned would be an important way of measuring relevance, particularly for clinical researchers who are hospital based. It will be different for researchers involved in basic research, which may take years to translate into clinical benefit. Therefore, the ideal assessment parameter would vary with the type of research, basic or clinical, etc.

  5. Dr Kevin ORR says:

    Too often we hear of breakthroughs in the management of nasty diseases and, at the end of the article, we are told the treatment will not be available for many years. Is this so the investigators can maximise the kudos they get for its discovery? Ipilimumab is reported as giving advanced melanoma patients an extra 10 years. I know safety and efficacy are important, but so is the life of the patient. Significant advances in management should generally be available much earlier.

  6. Dr Ehud Zamir says:

    Biomedical research is heavily funded by the public through tax money. The public expects, and somewhat naively believes, that this money is well spent on worthwhile research that will have a positive impact on medical practice. The public therefore needs to insist on putting the right incentive systems in place. The current incentive system of publish or perish, impact factor autocracy and excellence in publication, rather than excellence in clinically meaningful research, defeats this purpose. It is not patient focused enough. Medical research should be rated and rewarded to a great extent on its impact on practice. However, just as installing automatic speed cameras is easier and more profitable than eradicating dangerous driving, counting citations is easier than testing clinical impact. In this sense the academic system is looking for the coin under the lamp post.

    There are potential ways to measure the clinical usefulness of a publication. One would be citations in clinical textbooks. Computerised methods can be developed to obtain feedback from Medline users about the usefulness of publications they read online.

    Not easy, but certainly possible. The internet will make it happen.

     

  7. Karen Price says:

    It’s good to question merit, as there is ALWAYS some sort of bias operating. My only concern is that business models are sneaking into scientific endeavour; that is, that we need to get a bang for our research buck. Implementation is nevertheless a worthy goal, but not always immediately achievable, depending on the research paper. Judging a single paper this way does not take into account collaborative and reproducible results, which have been the cornerstone of scientific discovery (Rosalind Franklin). Using this methodology, the Higgs boson paper from 1964 would have been deemed unworthy and useless. So let’s not throw the baby out with the bathwater by trying to metricise and apply a rating scale to human endeavour, especially the wonders of science at its creative best. The fetish for measuring output has its place but is not universally applicable; see Robert McNamara and economics. Science is not necessarily a production line, nor a ladder with some at the “top” and some at the “bottom”. The impact of Andrew Wakefield’s paper, for instance, is astonishing based on the way it changed behaviour. The focus on a number is too narrow a view.

     
