All over Australia, acute, primary and aged care services undergo assessments by external surveyors from a variety of accreditation schemes against pre-approved standards.
Accreditation has been required for all hospitals since the recent introduction of the National Safety and Quality Health Service Standards, and has been a prerequisite for federal funding for aged care facilities since the Aged Care Act 1997.
For primary care, accreditation is voluntary and measured against standards developed by the Royal Australian College of General Practitioners. Accredited practices can qualify for a variety of incentives under the Practice Incentives Program (PIP).
Despite widespread implementation in Australia and overseas, our review of the literature found no comprehensive body of studies measuring whether the benefits of accreditation outweigh the costs.
Although Australia has led the way in developing cost-effectiveness techniques for assessing drugs and other medical technologies, less research has been done on how to measure the cost-effectiveness of a complex intervention such as accreditation that involves so many different parts of the health system.
We believe this may be partly due to the challenge of identifying the benefits of accreditation, but also to confusion over whether the primary role of accreditation is to provide an external audit ensuring that health services comply with minimum safety requirements, or to serve as a continuous quality improvement tool.
Our literature search did reveal several studies analysing the costs of accreditation. Most of these were self-reported case studies from single hospitals, which identified the problem of isolating the costs of accreditation from the ongoing costs of complying with state and national policies and procedures relating to safety and quality.
The estimated costs ranged from 0.2% to 1.7% of total expenses when averaged over the accreditation cycle (typically 3 years in the US versus 4 years in Australia). These hospital-based figures can be compared with a 2003 Productivity Commission report, which estimated the incremental costs of accreditation in primary care at 1.1% of total practice costs.
As with the costs studies, most of the benefit studies we reviewed were from the US. Outcomes were measured across a range of financial metrics, organisational culture, and clinical indicators.
This wide range of outcomes mirrors the changing priorities within the standards over time and also reflects the complexity of accreditation given the extensive range of topics covered by the standards.
One of the problems in measuring accreditation is that it is often implemented nationally, or is so widely implemented that developing a randomised controlled trial is not feasible. This creates problems in study design: although some of the studies revealed an association between accreditation and better clinical or organisational outcomes, the lack of a proper control group makes it difficult to determine causality.
With significant investment indicated by the cost studies and the current debate about the role of accreditation in health care, we believe a framework to identify and measure the costs and benefits of accreditation is needed. This would create a more informed debate and provide a point of reference for measuring and monitoring any reforms to the accreditation process.
A more transparent framework would also help ensure that the aims of accreditation were clearly identified and explained to all stakeholders, including patients and consumer bodies, health care workers, and health care system funders and regulators.
Finding one indicator that accurately measures accreditation may not be possible, but the incremental findings from our review can be used to construct the case for having a comprehensive and nationally implemented system to compare hospitals and encourage improvements in patient safety and quality of care.
Dr Virginia Mumford is a visiting fellow and Professor Jeffrey Braithwaite is the foundation director of the Australian Institute of Health Innovation, Faculty of Medicine, University of NSW.
Isaac Brajtman’s comment confirms the point that I made, ie that the organisations that benefit most from accreditation are those that use the process to review their operations. The provision of dishonest information, false reports, concealment of known problems and similar activities may appear smart, but in the end it is the organisation and its patients or residents that lose. Surveyors can only prepare a report on the basis of the information they have, and cannot be expected to find all the problems in a complex organisation in the time available. Similarly, ignoring findings and not taking sensible action on recommendations has led to serious patient care incidents.
Furthermore, although accreditation may be compulsory, active participation in the process of review, honest dialogue with surveyors and sensible responses to findings and recommendations cannot be mandated, just as good management cannot be ensured by fiat. This, to some extent, adds weight to the argument against compulsory accreditation. However, I am convinced that the benefits remain, and what is needed is a sensible regimen of accreditation that encourages full participation, is not too onerous and is conducted by well informed professionals from the health care field. The alternative is compulsory bureaucratic “audit”, which will be meaningless and counterproductive.
Accreditation, the way it is done in ACFs, is an absolute waste of time and money.
Advising that you are coming creates a situation I have seen in some ACFs, where forms are quickly filled in, results of urine and other charts are cooked to fit, etc.
You only need to walk into an ACF unannounced to see what’s really going on.
If you really want to assess a place you need a nursing sister, a social worker, and a child of a resident to go in unannounced together, see the residents, and speak to some. See what their food is like and how it is given (and taken away later). Then somebody can check the “books” to see what’s been entered. Then check some of the doctors’ notes too, to see how often residents are seen and why the doctors are called outside their routine visits.
There is really no point in wasting the sister in charge’s time going through reams of notes that really have no bearing on the residents’ true management.
There is no evidence anywhere that shows accreditation makes any difference to outcomes. It is a bureaucratic tool which measures means, not ends. The cost is borne by busy practices. The benefit is for accreditors and agencies which might otherwise have no work to do.
Thank you. There can be no cost benefit for misguided bureaucratic non-clinical audit. As said before, not only what can be measured but what needs to be measured determines credit worthiness. Patient happiness is the goal of medical treatment. “Quality and Safety” are bureaucratic devices to ensure they, though non-medically trained, can have a say in the clinical decision making process. Financial consideration is another. Accreditation that is focussed on “Quality and Safety” is suited for products, such as toys and cars and services. Clinical methods and outcomes as the focus of human based accreditation must be focussed on patient Rights and clinical responsibility – neither of which can be interfered with by bureaucratic involvement unless we submit to them, a situation that is not to the medical professional’s nor patient’s advantage.
David raises some important points. While it may be difficult to measure the impact (cost/benefit) of accreditation directly, the benefits of process measures that are implemented as a result of accreditation (the example above was discharge planning) are very measurable.
Although important, health outcomes are extremely difficult to measure (for all we know, the pricing arrangements at the local supermarket may be more influential on patient outcomes than the health service), and are not suitable to be used for accreditation. After all, we give driving licences to those who can follow the rules and respond appropriately to traffic, not according to the number of crashes they might have in the future.
An alternative to measuring the costs and benefits of accreditation (which also has costs involved) as suggested in this article may be ensuring that accreditation is based on evidence-based guidelines, and further, that health services that do not ‘tick all the boxes’ are not unfairly penalised for not meeting guidelines that were designed for the ‘ideal services.’
Thanks for the article, which provokes thought, particularly when it is read with the article on audit of patient outcomes in the same issue.
Accreditation looks at a number of general indicators of quality, but one would ask: what is the most important indicator of quality? Probably not the knowledge of what to do at the time of a fire. A practitioner, as well as the community the practitioner works in, would most probably choose the outcomes of patients treated. Is that part of the accreditation process? Is that part of the health data collected by the state and federal agencies?
It is natural that hospitals would question the cost and value of accreditation, but measuring either is likely to be difficult considering the organisational complexity of most hospitals, the time contributed by management and clinicians, and the difficulty in measuring the effects of any change resulting from accreditation. Accreditation has improved hospital practice and has prompted or encouraged the widespread introduction of processes such as discharge planning, proper consent and correct site procedures, and formal incident review, as well as shifting the focus of quality assurance away from “we employ the best people so we must be doing well”. Additionally, surveys have uncovered serious and sometimes dangerous issues that had been ignored or not appreciated.
Organisations that get the greatest value from accreditation are those that use the process to conduct their own review of their operations and introduce improvements. Audit is not the answer because, although the concept is appealing, hospitals do not have the data to measure clinical performance over large areas of their operations.
In an area where quality and safety do matter, and outcomes depend on multiple contributions to complex systems, the health system spends little on ensuring the quality and safety of its operations, and probably a lot less than manufacturers of cars or mobile phones. Systems of accreditation may not be perfect, but they do stimulate improvement, and just because we can’t measure the cost and benefit, it does not mean that the benefit does not exist. Intuitively, abandoning accreditation would appear to be a very retrograde step.
As far as I am aware, there is no published data to suggest that any form of enforced program of continuing education (eg MOPS for the RACP) has any demonstrable benefit on patient care/outcomes.
Would the authors suggest scrapping these programs?