THE hospital standardised mortality ratio (HSMR) is a flawed measure of hospital quality, and could be misleading if used in publicly available hospital comparisons, according to an article in the latest MJA.
In a Viewpoint article, the authors argue that although the HSMR has been endorsed by Australian Health Ministers for use as a quality indicator, most quality problems do not cause death and, conversely, most deaths are not due to poor-quality care. (1)
The HSMR compares the number of deaths in a hospital with the number expected after adjusting for patient characteristics such as age and comorbidities, allowing hospitals to be compared both within and across jurisdictions (state or national).
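In broad terms, the HSMR is the ratio of a hospital's observed deaths to the deaths expected under the risk-adjustment model, multiplied by 100 (HSMR = observed deaths ÷ expected deaths × 100). For example, a hospital with 120 observed deaths against 100 expected would score 120; values above 100 indicate more deaths than the model predicts, and values below 100 fewer.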
The authors argued that this risk-adjustment process was problematic because of inconsistencies in the way comorbidities were recorded in hospital statistics. They pointed to evidence that the HSMR varied substantially depending on the risk-adjustment model used.
“In a recent US study… 12 out of 28 hospitals with higher than expected hospital-wide mortality as classified by one method had lower than expected mortality when classified by one or more of the other methods”, they wrote.
The authors also suggested that problems with coding and classifying palliative care could limit the value of the HSMR.
The HSMR is one of a core set of national indicators endorsed by Australian Health Ministers for reporting on hospital quality and safety, and is already used in the UK and Canada to compare performance between hospitals.
However, the MJA authors said there was insufficient evidence that reporting the HSMR improved quality of care or patient outcomes, and that if HSMR comparisons were published on sites such as the MyHospitals website, they might mislead the public.
“Unfavourable HSMRs based on incorrect data or analyses can trigger external inquiries that stigmatise individual hospitals, [and] lower morale and public confidence …”, they wrote.
“The HSMR is currently not ‘fit for purpose’ as a screening tool for detecting low-quality hospitals and should not be used in making interhospital comparisons.”
The authors argued that the HSMR might be better suited for monitoring changes in mortality within individual hospitals over time, particularly if complemented by disease-specific HSMRs.
In an accompanying editorial, Professor David Ben-Tovim, from Flinders University, said that the Australian Commission on Safety and Quality in Health Care was undertaking a rigorous development process for several indicators, including the HSMR. (2)
Professor Ben-Tovim said mortality indicators were a “prompt for hard questioning”, but acknowledged that the challenge was that these questions might focus on the indicators themselves rather than on the underlying processes of care.
He said that, although public reporting posed risks to institutional reputation and morale, this had to be balanced against the need for accountability and informed public choice.
“One of the most important virtues of public reporting is that it makes it harder for vested interests, whether they are in government or institutions, to suppress unwelcome information,” he said.
– Sophie McNamara
1. MJA 2011; 194: 645-648
2. MJA 2011; 194: 623-624
Posted 20 June 2011