This commentary was first published as a Perspective in the Medical Journal of Australia on 6 February 2017. It is reprinted here with permission.
In its 2014–2015 budget, the Australian Government announced the establishment of the $20 billion Medical Research Future Fund (MRFF). The MRFF aims to support health and medical research in Australia to drive innovation, improve delivery of health care, enhance the efficiency and effectiveness of the health system, and contribute to economic growth.
In April 2016, the government announced the creation of the Australian Medical Research Advisory Board to determine the medical research strategy and priorities to guide the funding allocated through the MRFF. Although its mission is clear, the board faces the challenging task of identifying research priorities and allocating the available budget across topics and programs competing for funding.
The criteria for identifying research priorities to guide government decision making on program-level funding, as set out in the MRFF legislation, focus on the ability of research programs to deliver the greatest value for as many Australians as possible. However, there is little mention of how the value of research programs is to be objectively, transparently and practically assessed to inform research prioritisation and ensure efficient use of the MRFF budget.
Common approaches to research prioritisation in Australia
Priority areas for medical research in Australia are typically identified through consultations with major stakeholders, such as research funding organisations (eg, the National Health and Medical Research Council [NHMRC] and the Australian Research Council [ARC]) and researchers, and through direct consultation with patients and their representatives. Measures of the burden of disease are often considered during this process, based on the notion that focusing research on diseases with a high population burden and cost will deliver high societal value. However, there is often a chasm between national priority areas and the bottom-up approach, whereby individual researchers submit grant applications on topics of their own interest and compete with other researchers for funding from a limited budget.
Decisions on which specific research programs (eg, clinical trials) to fund are usually based on assessments of the merits of the submitted research proposals (eg, scientific rigour, strength of the research team), according to the opinions and judgments of experts sitting on funding panels. However, this approach rests on panel members’ inherently subjective views of the potential value of a piece of research, with little or no reference to explicit estimates of the incremental costs and benefits of the proposed programs. In addition, there is potential for research duplication owing to the lack of coordination across panels in the various funding organisations. Duplication can also occur when funding is granted to projects to generate evidence that could instead be sourced from relevant international research; in such cases, resources may be better deployed on other studies or activities, such as dissemination and implementation of findings.
To maximise the benefits from research budgets, funding decisions should be based on each research proposal’s ability to provide the best value for money, judged on explicit evidence of the proposal’s costs and potential benefits. Even when a research project targets a disease with a high burden, it may not be worthwhile if the expected costs of conducting the study exceed its expected benefits. Similar assessments of benefits and costs are already the standard for guiding funding decisions on other health care investments in Australia (eg, pharmaceuticals and health services). There is, therefore, no reason why research funding should not be subjected to the same scrutiny to achieve efficiency in spending public funds.
Analytical approaches for assessing the value of research
A number of analytical approaches have been proposed to quantify the value of research programs, particularly research intended to evaluate health care interventions (eg, clinical trials and observational studies). These approaches estimate the expected benefit of research in improving health care, expressed either as improved health outcomes (eg, survival) or as a monetary benefit, using a willingness-to-pay value for an additional unit of health outcome (eg, $50 000 per life year gained).
The underlying principle of such approaches is that the overall value of a specific research program can be assessed by comparing its cost against its expected benefits. Research costs include the direct costs of setting up a study and recruiting participants, and the opportunity costs for the population who will not benefit from the research until its results are implemented. The proposals with the highest expected net benefit would constitute good candidates for research funding. Two key analytical approaches are the prospective payback of research (PPoR) and the value of information (VoI).
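In stylised notation (the symbols and decomposition below are illustrative, not drawn from any of the cited frameworks), this principle can be written as

\[
\mathrm{ENB}_{\text{research}} = N \times b \;-\; \left(C_{\text{direct}} + C_{\text{opportunity}}\right),
\]

where \(N\) is the population expected to benefit from the findings, \(b\) is the expected per-person benefit of the research (health gains valued at a willingness-to-pay threshold), \(C_{\text{direct}}\) covers study set-up and recruitment, and \(C_{\text{opportunity}}\) reflects the forgone benefit to patients treated before the results are implemented.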
A number of models following the principles of the PPoR approach have been put forward over the past 30 years. Under this approach, the value of a research study is typically inferred from its ability to bring about a beneficial change in clinical practice. In essence, the expected value of research depends on its possible findings, the expected extent of the change in practice triggered by those findings, and the size of the population expected to benefit from that change. The approach is based on well established principles of economic impact analysis, is relatively straightforward, and can be undertaken within narrow time frames. However, it has been argued that changes in clinical practice can be achieved by other means, and that undertaking research may not be the most cost-effective of these. Moreover, because of the way PPoR estimates the value of research, the approach may favour prioritising research in areas where there is great scope for change in clinical practice over areas where there is a strong need for information but a smaller opportunity for improvements in practice. The approach also does not assess whether a given research program is needed by considering the level of uncertainty in the available evidence (ie, how much evidence already exists), which may lead to funding unnecessary research.
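As a rough sketch of this logic (all figures and variable names below are hypothetical assumptions for illustration, not values from the PPoR literature), the expected payback of a trial might be computed as follows:

```python
# Stylised prospective-payback calculation. All figures are hypothetical
# assumptions for illustration, not values from the PPoR literature.

p_positive = 0.4              # probability the study favours the new practice
uptake_shift = 0.25           # expected increase in adoption given positive findings
benefit_per_patient = 2_000   # value of health gain per patient who switches (AUD)
patients_per_year = 50_000    # patients who could benefit each year
horizon_years = 5             # period over which findings influence practice
research_cost = 3_000_000     # direct cost of running the study (AUD)

# Benefits accrue only if the findings are positive and practice actually changes.
expected_benefit = (p_positive * uptake_shift * benefit_per_patient
                    * patients_per_year * horizon_years)
expected_net_payback = expected_benefit - research_cost

print(f"Expected benefit:     ${expected_benefit:,.0f}")
print(f"Expected net payback: ${expected_net_payback:,.0f}")
```

Note how every term scales the payback by the scope for practice change, which is exactly why the approach can overlook areas of high information need but limited room for change.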
VoI is an alternative quantitative approach to research prioritisation that has received increasing attention. The method has firm foundations in statistical decision theory and provides a systematic approach to estimating the expected value of acquiring new evidence to inform a decision problem. It considers the uncertainty in the relevant available evidence, the consequences of this uncertainty (ie, the cost of making a wrong decision), the population that would benefit from the results of the intended research, and the expected cost of the research.
Thus, VoI considers both the burden of disease and the uncertainty in the existing evidence to advise whether additional research is potentially worthwhile. This is essential for reducing research duplication and waste, by directing research funds to worthy programs, and for enhancing equity, by improving the chances of funding for programs studying rare diseases, where the population is small but the information need is high. Moreover, value of research estimates obtained using VoI can be adjusted for the expected level of implementation, to reflect the impact of research findings on real-world practice.
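To make this concrete, the sketch below shows a minimal Monte Carlo calculation of the expected value of perfect information (EVPI) for a stylised two-option decision; all distributions and figures are hypothetical assumptions, not outputs of any study cited here.

```python
import numpy as np

# Minimal Monte Carlo sketch of EVPI for a stylised two-option decision
# (current care vs a new intervention). All distributions and figures are
# hypothetical assumptions for illustration only.

rng = np.random.default_rng(0)
n_sim = 100_000
wtp = 50_000.0  # willingness to pay per life year gained (AUD)

# Uncertain inputs: incremental effectiveness (life years) and incremental cost.
inc_effect = rng.normal(0.05, 0.04, n_sim)
inc_cost = rng.normal(1_500.0, 500.0, n_sim)

# Incremental net monetary benefit of the new intervention per patient,
# with current care as the zero baseline.
inmb = wtp * inc_effect - inc_cost
nb = np.column_stack([np.zeros(n_sim), inmb])

# EVPI = E[max over options] - max over options of E[net benefit].
evpi_per_patient = nb.max(axis=1).mean() - nb.mean(axis=0).max()

# Scale to the population expected to benefit from the decision.
population = 20_000
print(f"Per-patient EVPI:  ${evpi_per_patient:,.0f}")
print(f"Population EVPI:   ${evpi_per_patient * population:,.0f}")
```

Because EVPI assumes all decision uncertainty could be eliminated, it is an upper bound on the value of further research: a proposed study costing more than the population EVPI cannot be worth funding on these grounds.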
VoI analysis is typically conducted alongside economic evaluations of new technologies and health services to inform funding decisions, mainly using decision-analytic models and computer simulation. However, considerable advances have been made in simplifying VoI computation. For instance, the Agency for Healthcare Research and Quality in the United States has issued a working paper on research prioritisation using VoI with minimal modelling. Moreover, Claxton and colleagues demonstrated how VoI analysis can be used to estimate the value of additional research directly from systematic reviews and meta-analyses.
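As an illustration of this "minimal modelling" idea, the sketch below applies the standard normal-approximation formula for per-person EVPI, assuming the pooled incremental net benefit from a meta-analysis is approximately normally distributed; the figures are hypothetical and the calculation is a simplified stand-in for the methods cited above, not a reproduction of them.

```python
from statistics import NormalDist

# Sketch of a 'minimal modelling' VoI calculation: per-person EVPI computed
# directly from a pooled estimate, assuming the incremental net benefit (INB)
# from a meta-analysis is approximately normal. Figures are hypothetical.

inb_mean = 800.0    # pooled INB per patient (AUD); positive favours the new option
inb_se = 1_200.0    # standard error of the pooled INB

z = abs(inb_mean) / inb_se
norm = NormalDist()
# Unit normal loss integral: expected loss from deciding on current evidence
# when the true INB may in fact favour the other option.
unit_loss = norm.pdf(z) - z * (1.0 - norm.cdf(z))
evpi_per_person = inb_se * unit_loss

print(f"Per-person EVPI: ${evpi_per_person:,.2f}")
# Multiplying by the beneficiary population gives the population EVPI.
```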
A number of research prioritisation initiatives have tested VoI worldwide. The first application was in 2004, through two pilot projects in the UK: one for the National Coordinating Centre for Health Technology Assessment and another for the National Institute for Health and Care Excellence. In Australia, we have applied VoI analysis to a range of research projects under an NHMRC-funded centre for research excellence, demonstrating the value and practicality of the approach in prioritising research and optimising trial design. In the US, Carlson and colleagues reported the outcomes of incorporating VoI analysis into a stakeholder-driven research prioritisation process within a program to establish comparative effectiveness research in cancer genomics. In addition, Bennette and colleagues developed and applied an efficient, customised VoI-based process to prioritise cancer clinical trials within the Southwest Oncology Group.
The way forward
The MRFF Advisory Board needs to develop innovative and flexible frameworks within which research priorities can be set. A preferred framework would combine quantitative and qualitative considerations to ensure that research funding is efficient, sustainable and equitable, while remaining responsive to clinical needs for high quality, innovative medical research.
One option would be to use consultations with major stakeholders, together with burden of disease considerations, to identify broad areas of research funding priority, and then to use VoI analysis to assess the value of research programs within each priority topic. To further reduce the analytical burden, VoI analysis could be reserved for the most costly research proposals. Committee discussions could then refine decisions, allowing for additional attributes such as capacity building or targeting disadvantaged groups.
Dr Haitham Tuffaha is from the Centre for Applied Health Economics at Griffith University.
Dr Lazaros Andronis is a lecturer in Health Economics at the Health Economics Unit of the University of Birmingham in the UK.
Professor Paul Scuffham is director of the Centre for Applied Health Economics at Griffith University, and deputy director of the Menzies Health Institute Queensland.