
Rapid response systems

Rapid response systems (RRSs) have become a routine part of the way patients are managed in general wards of acute care hospitals (Box 1).1 They are used in most hospitals in Australasia, North America and the United Kingdom and are increasingly being used in other parts of the world. They operate across the whole hospital and aim for early identification of seriously ill patients, at-risk patients and patients whose condition is deteriorating, using abnormal observations and vital signs (calling criteria). If any of these criteria are breached, the bedside nurse or doctor triggers a rapid response by clinicians who have the expert skills, knowledge and experience to initiate a coordinated response to any hospital medical emergency.

Traditionally, the most junior doctor and the bedside nurse were the first-line management team caring for patients in acute care hospitals. Interns were expected to assess and manage patients with deteriorating conditions, with little experience in the complexities involved in caring for more seriously ill patients. In the early 1990s, it started to become evident that many potentially preventable deaths and serious adverse events were occurring in acute care hospitals.2,3

Errors were ascribed to the system in which clinicians operated, rather than to individual incompetence.4 Hospitals operated in silos, where patients were admitted under a specialist in one area of medicine. This has certain strengths, such as the admitting doctor being ultimately responsible for the patient’s care. It was a successful mode of operation for some time, but several changes have made it less effective.5 The population of patients in hospitals has shifted from relatively young patients with a single diagnosis to increasingly older patients with multiple comorbidities who undergo more complicated diagnostic procedures and treatment regimens.6 The needs of these older patients cannot necessarily be met by a specialist with limited experience outside his or her own area of expertise.7 The consultant, even if immediately available, may not have the appropriate skills to recognise and manage seriously ill patients requiring critical care interventions. Similarly, the consultant’s team of trainees, although more immediately available, may not have had the training necessary to manage seriously ill patients.

Patients with deteriorating conditions were not being recognised. More than 80% of those who suffered a cardiac arrest in hospital had documented deterioration in vital signs in the 8 hours before the arrest.8 More than 50% of those who died in hospital without a do-not-resuscitate order had severe antecedent derangements in vital signs.9 About 70% of patients admitted to an intensive care unit (ICU) from the general wards had vital sign abnormalities for at least 8 hours before being admitted to the ICU.10 Patients with deteriorating conditions were also managed suboptimally. A recent report from the United Kingdom found that the three most common reasons for potentially avoidable mortality in UK hospitals were mismanagement of deterioration (35%), failure of prevention (26%) and deficient checking and oversight (10%).11 RRSs have the potential to overcome all these problems.

Establishing a system around patient needs

It is common in hospitals for clinicians in one specialty to seek the opinion of clinicians in another specialty by requesting a consultation. Typically, this is not time critical. However, when a patient’s condition is deteriorating, the consultation must be as prompt as possible. For this to occur, an agreed way of defining at-risk patients is needed. This underpins the need for a standardised and objective set of calling criteria superseding the usual consultation process. Early intervention is more effective than waiting until a patient is so seriously ill that he or she requires expensive and invasive management in an ICU or, even worse, waiting until he or she suffers a preventable cardiac arrest or dies a preventable death.1

Vital signs

Before the widespread implementation of RRSs, there was little research into one of the most common interventions in acute care hospitals — the measurement of vital signs. Vital signs have been routinely measured and charted since Florence Nightingale used them for hospitalised patients in the Crimean War. The largest study on RRS effectiveness found that almost 50% of patients who died, had a cardiac arrest or were admitted to an ICU did not have vital signs measured before the event.12 Respiratory rate, the most accurate predictor of serious illness, is often not measured and, when it is measured, the measurement is often inaccurate.12,13 These findings have focused attention on the appropriate frequency for vital sign measurement, especially because hospitalised patients in general wards are at high risk of clinical deterioration.11 Deterioration in a patient’s condition can conceivably occur in the period during which vital signs are not usually measured.

Calling criteria

Vital sign abnormalities include: low systolic blood pressure (usually < 90 mmHg); high or low respiratory rate (eg, < 4 breaths/min or > 30 breaths/min); and abnormal pulse rate (eg, < 40 beats/min or > 140 beats/min). Potentially life-threatening observational abnormalities include seizures, airway obstruction and sudden decrease in level of consciousness. Staff concern is also an important criterion, empowering bedside nurses or doctors to seek timely assistance if they are worried about a patient who does not fit any other criterion. In mature systems, staff concern is a common reason for urgent assistance.14 Oxygen saturation abnormality, when available, is also a useful criterion.

Australian hospitals usually employ an RRS in which one calling criterion triggers a response. Hospitals in other countries may use scores, by adding vital sign abnormalities to trigger different levels of response.15 This could add a level of complexity and inaccuracy, and might encourage clinicians to focus on numbers rather than observation of the patient. It also excludes staff concern as a reason for seeking urgent attention.

Some centres are exploring the concept of encouraging family and visitors to trigger an urgent response. It is early days but, so far, there does not appear to be misuse of the system.16 Use of pathology results to identify patients at an even earlier stage in illness has also been explored.17 Although objective calling criteria are important, awareness of the RRS in itself can change an organisation’s culture, moving it from a traditional hierarchical and silo-based one to one with universal awareness that there are at-risk patients in a hospital and timely assistance is available.

The response

As with calling criteria, there is much variation in how organisations provide an urgent response. Some hospitals maintain separate cardiac arrest and rapid response teams. In the UK, it is common to have an outreach system, where nurses pre-emptively identify and manage at-risk patients across the hospital.18 A two-tiered system, where a member of the admitting team may be called for less serious abnormalities, is used in many organisations.19

Based on the evidence that hospitals suboptimally recognise and manage seriously ill patients,1,4,8–10 it is important to involve clinicians who have the appropriate training when caring for these patients, who often have complex needs. It is not surprising, therefore, that many rapid response teams use ICU staff.20 However, depending on the hospital setting, the urgent response could be provided by a doctor, nurse or paramedic, or by staff from any department in the hospital, as long as they have the appropriate skills, knowledge and experience.21

Other factors

Implementing an organisation-wide system such as an RRS involves challenging the way clinicians interact, bypassing entrenched hierarchies and constructing a system centred on patient needs. This requires more than standardised calling criteria and a rapid response (Box 2). All clinicians in the hospital must be aware of the system and support it. Similarly, senior administrators need to endorse and resource the system. An organisation-wide education program is required to teach staff how the system works and to empower people to call for assistance when needed.

It is also important to continually monitor the system and close the loop by making outcome indicators available to people at all levels of the organisation, especially to those responsible for and participating in the system.22 Some outcome indicators include cardiac arrest rates (which usually range between 0.5 and 6.0 cardiac arrests per 1000 admissions) and crude mortality rates. To make mortality rates more meaningful, patients with do-not-resuscitate orders are excluded. Data on cardiac arrests and deaths can be further refined by examining whether there were calling criteria that were not responded to appropriately in the 24 hours before the event. This gives insight into potential preventability. Delays in the rapid response can also be a useful indicator of the system’s effectiveness.21 Another important outcome indicator is the number of calls per 1000 admissions — an increase in the rate of calls is associated with reductions in mortality and cardiac arrest rates.23
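The outcome indicators above are simple rates. As a concrete illustration, a rate per 1000 admissions with a normal-approximation 95% confidence interval can be computed as follows (a minimal sketch; the helper name and example counts are illustrative, not drawn from any hospital dataset):

```python
from math import sqrt

def rate_per_1000(events, admissions, z=1.96):
    """Rate per 1000 admissions with a normal-approximation 95% CI.

    Illustrative only: real audits often use exact binomial intervals.
    """
    p = events / admissions
    se = sqrt(p * (1 - p) / admissions)
    lo, hi = max(p - z * se, 0.0), p + z * se
    return p * 1000, (lo * 1000, hi * 1000)

# Hypothetical example: 40 cardiac arrests in 20 000 admissions gives
# 2.0 per 1000, inside the 0.5-6.0 range quoted above.
rate, (lo, hi) = rate_per_1000(40, 20_000)
```

Tracking such rates over time, alongside the call rate per 1000 admissions, is what allows an organisation to "close the loop" described above.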

Do rapid response systems work?

ICUs and RRSs are both systems for managing seriously ill and at-risk patients, but little robust research has been done to show the effectiveness of either. The general intuitive principle with such systems is matching the right people — with the right skills and knowledge — with the right patients at the right time.

It has been established that ICUs and RRSs identify and treat patients with a similar level of mortality risk.24 In other words, the boundaries between patients in general wards and patients in ICUs and high-dependency units (HDUs) are becoming blurred. One of the functions of an RRS is to act as a triage system, to identify sick ward patients who require ICU or HDU therapy. A 200-bed hospital with a 20-bed ICU will probably have less need for an RRS than an 800-bed hospital with a 10-bed ICU, as more of its at-risk patients will already be in a highly monitored environment. In each hospital, RRS use also depends on patient casemix, average level of comorbidity and types of interventions undertaken.

Nevertheless, RRSs have been subject to evaluation. Not surprisingly, single-centre, before-and-after studies have almost universally shown significant reductions in outcome indicators such as mortality and cardiac arrest rates.25,26 Many of these studies were conducted by one or two “champions” who provided clinical leadership. The largest cluster randomised trial was underpowered and produced inconclusive results,12 possibly due to the contamination of control hospitals, less than satisfactory implementation and adherence, and variability in the effectiveness of implementation. Nevertheless, in a post-hoc analysis, the study did show a significant reduction in mortality in adult intervention hospitals.23 The largest meta-analysis on RRSs has shown 21% and 38% reductions in mortality and cardiac arrest rates, respectively, in paediatric hospitals, and a 34% reduction in cardiac arrest rates in adult hospitals.20 However, it is impossible to randomly assign patients to a group that receives early intervention by a rapid response team and a group that does not. Similarly, because of the almost universal uptake of RRSs in many countries,1 it is difficult to randomly assign hospitals. Other research methods must, therefore, be used. A recent study has shown a strong correlation between uptake of RRSs in New South Wales hospitals and reductions in cardiac arrest and cardiac arrest-related mortality rates, both of which decreased by about 50% over an 8-year period.27

Research now needs to shift to determining the most effective response teams, evaluating the sensitivity and specificity of calling criteria, assessing the cost-effectiveness of implementing RRSs, and defining the most effective RRS implementation strategies. Moreover, possible negative effects of RRS implementation — such as de-skilling of staff and putting excessive pressure on existing resources — need to be evaluated.

In the coming months, the Journal will publish a series of articles that explore how RRSs have changed our approach to patient safety, how RRSs have influenced end-of-life care in acute care hospitals, and how the use of cardiopulmonary resuscitation and cardiac arrest teams is changing.

1 Features of a rapid response system

  • Defines seriously ill patients, at-risk patients and patients whose condition is deteriorating using abnormal observations and vital signs (calling criteria)
  • Provides rapid response to seriously ill patients and those whose condition is deteriorating
  • Operates across the whole organisation
  • Is designed around patient needs
  • De-emphasises the usual hierarchies and interprofessional barriers
  • Provides rapid consultation by experts in critical illness

2 Strategies for maximising the impact of a hospital rapid response system

  • Engage the support of all doctors and nurses
  • Ensure that there is leadership and support from senior hospital executives
  • Implement strategies that promote hospital-wide awareness of the system
  • Ensure an urgent response to any staff concern, whether life-threatening or not
  • Ensure a 24/7 response by staff with appropriate skills, knowledge and experience
  • Build outcome indicators into the system and ensure targeted feedback of data
  • Conduct regular multidisciplinary meetings to discuss individual cases and outcome indicators

Incidents resulting from staff leaving normal duties to attend medical emergency team calls

Clinical emergency response systems such as medical emergency teams (METs),1 rapid response teams,2 patient-at-risk teams3 and critical care outreach teams4 are now used in hospitals worldwide to manage patients who have unexpected clinical deterioration. Currently, the optimal staffing structure for these systems remains unknown.5,6

At our hospital, MET personnel are not rostered solely for staffing the MET. Instead, MET staff have normal hospital duties to perform and, when a MET call is activated, they temporarily forgo their normal duties to attend.

This study was instigated after reports of potential adverse events, such as delayed medication dispensing, occurring as a result of staff leaving normal duties to attend MET calls. Our review found no publications in this area. The primary objective was to determine the rate of adverse events and incidents occurring as a result of hospital staff leaving normal duties to attend MET calls.

Methods

This single-centre, structured interview- and questionnaire-based study was conducted over an 18-week period between 29 July and 15 December 2013. The study was conducted in a 650-bed university teaching hospital in Sydney, New South Wales. Participants were all hospital staff who were recorded as attending a MET call on the hospital campus during the study period.

The primary outcome measure was the rate of adverse events and incidents directly related to MET staff leaving normal duties to attend MET calls. Secondary outcome measures were the rates of such events according to staff occupation.

Our hospital used a two-tiered MET system.7,8 The first tier recommended early clinical review by ward teams, and the second tier activated the MET. The MET, led by a medical registrar, included an intensive care registrar, an anaesthetic registrar, three residents or interns, a clinical nurse consultant, and nursing staff from the cardiology department. Security and environmental services staff attended MET calls outside of hospital buildings.

All MET staff had normal hospital duties to perform, and would forgo those duties to attend MET calls. Attendance at MET calls was mandatory for MET staff.

In 2013, cardiac arrests comprised 4.1% of MET calls at the hospital.

All staff attending and providing assistance at MET calls had their details recorded on attendance logs. On weekdays after each MET call, trained interviewers would contact the staff listed. Staff who consented were interviewed using a structured interview form. The following data were collected: number of days since the MET call; staff designation; issues resulting from leaving normal duties to attend the MET call; mechanism of reporting, such as line manager or computerised incident reporting system; and self-reported estimated time spent at the MET call.

To avoid intruding on staff when they were not at work, interviewers were instructed to make reasonable attempts to contact staff either in person, or using their hospital pager or phone, during working hours only. Staff who could not be contacted were sent a written questionnaire version of the structured interview. Completion and return of the questionnaire was considered as consent to participate.

Ethics approval was obtained from the hospital’s Human Research and Ethics Committee (CH62/6/2013-030).

Study definitions

The lack of standardised definitions for adverse events was problematic. The National Health and Medical Research Council (NHMRC) did not have definitions for adverse events that were unrelated to pharmaceutical products or medical devices.9

The original study definition of adverse event was an “anticipated or unanticipated event that causes, or requires an intervention to prevent, an unfavourable change in a person’s condition”.10,11 Institutional approval for the study to proceed, however, was conditional on altering the definition to that used by NSW Health.12 An adverse event was therefore defined as “an unintended patient injury or complication from treatment that results in disability, death or prolonged hospital stay, and is caused by health care management”.12 An incident was defined as “any unplanned event resulting in, or with the potential for, injury, damage or other loss”.12

Daytime was defined as 08:00–15:59, evening as 16:00–23:59, and night-time as midnight to 07:59. The response rate was defined as the number of completed interviews divided by the number of eligible reporting units.13

All incidents were classified according to severity assessment codes12 (Appendix 1) by the hospital manager for clinical quality and risk. Incidents were coded as minimum, minor, moderate, major or serious. Incidents that caused no injury or increased level of patient care, which required no additional review, and led to no financial or service losses were coded as minimum. All incidents were reviewed by an independent safety monitor, and managed using normal hospital procedures.

Statistical analysis

The MET call rate preceding this study was 17.2 MET calls per 1000 admissions. Assuming our study proceeded similarly to a previous study that ran for 131 days and had a response rate of 64.1%,14 we predicted that 312 MET calls would occur and that 1630 interviews would be completed. This would provide a 95% confidence interval of ± 9.7% if the primary outcome measure rate was 200 per 1000 MET participant attendances.
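The quoted ± 9.7% is consistent with reading it as the half-width of a normal-approximation binomial confidence interval expressed relative to the assumed rate of 200 per 1000 (this reading is an assumption; the article does not state the formula used):

```python
from math import sqrt

# Predicted completed interviews and assumed event proportion (200 per 1000)
n, p = 1630, 0.2

# Half-width of the 95% binomial CI, normal approximation
half_width = 1.96 * sqrt(p * (1 - p) / n)

# Expressed relative to the rate itself, this is ~9.7%
relative = half_width / p
```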

Statistical analysis was performed by an independent statistician, using Interactive Statistical Calculation Pages (John C Pezzullo; http://statpages.org/confint.html#Binomial), and SPSS, version 22 (IBM Corporation). Rates were calculated with binomial 95% confidence intervals, and subgroups were compared using the Pearson χ2 test, where appropriate.
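The subgroup comparisons use the Pearson χ2 test. A minimal sketch for a 2 × 2 table is below; the counts are hypothetical, and the formula shown omits the continuity correction (an assumption, since statistics packages report several variants):

```python
def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. incident vs no-incident counts
    in two subgroups. Illustrative sketch only."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

# Hypothetical counts: [incidents, no incidents] for two staff groups
chi2 = pearson_chi2_2x2(10, 90, 20, 80)
```

The statistic would then be compared against the χ2 distribution with one degree of freedom to obtain the P value.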

Results

The hospital admitted 17 445 patients in the study period, during which there were 332 MET calls (a mean of 2.6 MET calls per day). The MET call rate was 19.0 MET calls per 1000 admissions (95% CI, 17.1–21.2).

There were 2663 MET call participant attendances recorded. A mean of eight staff members were recorded at each MET call.

Interviews or questionnaires were completed for 1769 staff, a response rate of 66.4%. Interviewers completed 1490 interviews, and 279 written questionnaires were returned (84.2% and 15.8% of total response, respectively). The median time from MET call to interview and MET call to questionnaire completion was 5 days and 21 days, respectively.

Of staff members participating at MET calls, where staff designation was recorded (n = 2392), 2087 were MET staff (87.2%), 289 were ward staff (12.1%), and 16 were bystanders (0.7%). Of participating staff, where profession was recorded (n = 2405), 1545 were medical staff (64.2%) and 832 were nursing staff (34.6%).

There were no adverse events recorded. There were 378 recorded incidents. The incident rate was 213.7 incidents per 1000 MET participant attendances (95% CI, 194.8–233.5), and 1.1 incidents per MET call.
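The headline rates can be reproduced from the counts reported above (a simple arithmetic check; variable names are illustrative):

```python
# Counts reported in the Results
admissions, met_calls = 17_445, 332
incidents, attendances_surveyed = 378, 1_769

met_call_rate = met_calls / admissions * 1000            # ~19.0 per 1000 admissions
incident_rate = incidents / attendances_surveyed * 1000  # ~213.7 per 1000 attendances
incidents_per_call = incidents / met_calls               # ~1.1 incidents per MET call
```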

Using the severity assessment code, there were two incidents (0.5%) classified as minor, and 376 incidents (99.5%) classified as minimum. There were no incidents classified as serious, major or moderate. Three incidents (0.8%) were reported on institutional incident reporting systems. The types of incidents and the proportions of each are shown in Box 1.

Of the two incidents classified as minor, in the first, a patient absconded from the ward and was subsequently found. In the second, a patient sustained a fall without injury. Both incidents occurred while the patient’s nurse had left the ward to attend a MET call.

The incident rate for completed interviews and written questionnaires was 222.1 and 168.5 per 1000 MET participant attendances, respectively (P = 0.045).

Medical staff and nursing staff reported 243.0 and 156.8 incidents per 1000 MET participant attendances, respectively (P < 0.001). The types of incidents and the proportions of each are shown by role (medical or nursing) in Box 2, and overall proportions by staff type in Box 3. Most incidents (127; 38.3%) occurred during daytime hours, 113 in the evening (34.0%) and 92 during night-time (27.7%) (Appendix 2).

The median time spent by staff at MET calls was 20 minutes. The proportion of staff who spent 30 minutes or less at a MET call was 74.9%. Staff who spent 60 minutes or longer at the MET call reported significantly more incidents (Appendix 3).

There were 21 occasions (6.3% or about once every 6 days) where two MET calls occurred within 30 minutes, and two occasions (0.6% or about once every 2 months) where three MET calls occurred within 30 minutes.
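Taking the stated 18-week window at face value, and assuming the percentage is expressed against all 332 MET calls, the concurrency figures check out (illustrative arithmetic):

```python
study_days = 18 * 7    # stated 18-week study period = 126 days
total_calls = 332

double_calls = 21      # occasions with two MET calls within 30 minutes
share = double_calls / total_calls * 100    # ~6.3% of calls
interval_days = study_days / double_calls   # ~once every 6 days
```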

Discussion

This study demonstrated three key findings about when MET staff temporarily left normal duties to attend MET calls. First, no major patient harm occurred. Second, MET calls caused significant disruption to normal hospital routines and inconvenience to staff. This occurred despite most staff spending 30 minutes or less at MET calls. Third, problems that did occur were significantly underreported using normal hospital reporting systems.

The observation that medical staff reported more incidents than nursing staff is consistent with work arrangements. Ward nursing staff provide cover when fellow staff members are indisposed. Medical staff and specialist nursing staff are less likely to have cover because of the specialised nature of their work. Improving cover if MET duty is predicted to affect activities such as procedures, clinics, ward rounds or meal breaks may reduce disruption.

Reducing disruption could also be achieved by reducing the number of junior MET staff and adding a further tier to the MET system, where a smaller MET attends middle-tier MET calls. This would work best in hospitals where the cardiac arrest rates are low. Superfluous staff should also be dismissed to normal duties as soon as practical.

Absolving MET staff of normal duties may reduce disruption; however, a standalone MET at our institution was previously not deemed justifiable because of the low MET call rate.

Whether our results can be extrapolated to other hospitals is uncertain. Our MET call rate appears to be low. Other Australian studies document MET call rates of 8.7–71.3 calls per 1000 admissions.15–20 Hospitals with different MET call rates or MET configurations are likely to have different incident rates and patterns.

The very low formal incident reporting rate is not unexpected, as conventional reporting systems are not designed to detect the problems that this study examined.

The main strength of our study was the large number of respondents. The response rate was reasonable, given our intention not to intrude on staff recreational time, and difficulties interviewing staff working outside of business hours or part-time.

There did not appear to be a reporting bias with the use of the written questionnaires, as more incidents were reported from direct interviews. However, recall bias may have occurred in participants surveyed using written questionnaires because of time delay.

This is the first study to quantify the problems resulting from staff leaving normal duties to attend MET calls. However, our results cannot be generalised to other institutions due to differences in patient care and MET systems. Future studies are needed to quantify these problems in different MET systems, and also to identify which method of staffing the MET results in the least amount of disruption, while ensuring appropriate patient care and maximising efficiency.

1 Types and proportions of incidents reported as a result of staff leaving normal duties to attend medical emergency team (MET) calls

2 Types and proportions of incidents reported as a result of medical and nursing staff leaving normal duties to attend medical emergency team (MET) calls

3 Proportion of incidents, with 95% confidence intervals, reported as a result of staff leaving normal duties to attend medical emergency team calls, by staff type

Lung transplant recipients receiving voriconazole and skin squamous cell carcinoma risk in Australia

Clinical record

In April 2002, 3 months after her second bilateral lung transplantation, a 45-year-old female patient commenced treatment for necrotising Aspergillus tracheobronchitis with liposomal amphotericin B followed by voriconazole 200 mg twice daily for 13.2 months. In January 2003, mild erythema was noted on her forehead and cheeks, accompanied by dryness and scaling of her forearms and dorsa of her hands. A photosensitive drug reaction was suspected. Two months later, several skin squamous cell carcinoma (SCC) lesions and solar keratoses were noted on the dorsa of her hands. In May 2003, voriconazole administration was ceased because the fungal infection had resolved.

In November 2004, the patient commenced a second course of voriconazole 200 mg twice daily — again, to treat tracheobronchitis — which was continued for 10.6 months. Three months after she started treatment, a photosensitive rash was noted on her lower legs and forearms. In September 2005, six skin SCC lesions were excised from her forehead and left hand, and actinic keratosis lesions were noted on her right forehead, the backs of her hands and on her chest and ankles. In January 2006, a third course of voriconazole 200 mg twice daily was administered for 7.2 months to manage Aspergillus airways colonisation. Two months after initiation of voriconazole prophylaxis, three small unspecified skin lesions were observed on her right temple, nose and anterior chest.

The patient underwent a third bilateral lung transplantation in September 2006 and received posaconazole prophylaxis for nearly 2 years. Multiple actinic keratosis lesions were again observed on her forehead, chest and hands. Twelve months later, she was diagnosed with a parotid gland metastasis from an SCC. She also developed subcutaneous nodules on the forehead, consistent with in-transit metastatic SCC. Radiotherapy was initiated to treat locally invasive skin SCC. In January 2008, a dermal lesion just to the left of the central forehead scar appeared as a new dermal metastasis. Skin SCC lesions were also noted on both of her lips, her left index and right little finger, and there were multiple actinic keratosis lesions on her hands, chest and neck.

In April 2008, the patient underwent right parotidectomy. The histopathology report showed residual microscopic skin SCC deep in the intratemporal fossa region, indicating a high risk of tumour recurrence. Oral capecitabine was given in conjunction with radiotherapy. In October 2008 (2 months after treatment with posaconazole was ceased), she again developed several actinic keratosis lesions scattered on her hand and forehead. She started a second course of posaconazole in December 2008. The patient experienced ongoing facial pain with recurrent tumour in and around the trigeminal nerve. She received stereotactic radiotherapy for meningioma involving the right middle cranial fossa in March 2009. The patient died, owing to metastatic SCC, in October 2009.

Emerging evidence for causal associations between prolonged voriconazole exposure and skin SCC1–6 is of concern, given the frequent use of voriconazole prophylaxis, administered for months in patients after lung transplantation (LTx).7 In our institution, 13.7% of patients (14/102) receiving voriconazole after LTx between July 2003 and June 2010 had at least one episode of skin SCC. Drug-related photosensitivity is the most common cutaneous reaction that has been reported with voriconazole use.8 It has been postulated that long-term voriconazole therapy results in chronic phototoxicity, which, in turn, contributes to the development of skin SCC in transplant recipients.1

A study in the United States (2003–2008)3 reported that a high cumulative voriconazole dose was not an independent risk factor for skin SCC in patients who have undergone LTx, in contrast to other findings.4 However, this same study reported that the occurrence of skin SCC among LTx patients was related to the duration of voriconazole therapy,3 which is supported by more recent findings.5,6 The inconsistencies between studies are likely to be related to differences in the methods employed to evaluate predictors and in study design, which may not have adequately controlled for potential confounding factors, such as patient sex, age, sun exposure, history of chronic obstructive pulmonary disease (which could be a proxy variable for smoking status) and level of immunosuppression. Longer and more intensive immunosuppressive regimens have been associated with a higher risk of developing skin SCC.9 Indeed, prolonged duration of voriconazole prophylaxis may have been a surrogate marker for a more compromised immune system or opportunistic infections that could, in turn, influence a patient’s risk of developing skin SCC.

Residing in geographical locations with high levels of sun exposure has also been identified as an independent risk factor for LTx patients developing skin SCC.3 Higher rates of skin SCC have been reported in areas of significant sun exposure, with increased exposure to ultraviolet (UV) radiation being a significant risk factor.3 Importantly, UV radiation is a known distinct mutagen of keratinocytes and induces an immunosuppressed condition that prevents tumour rejection.10 In our patient, multiple skin SCC lesions were noted on photoexposed areas, as has been reported in other patients given prolonged voriconazole therapy.1,2 Our patient reported a history of extensive sun exposure and sunburn and was aged 45 years when her first skin SCC was diagnosed. This is consistent with evidence of a higher incidence of skin SCC in older populations,5 which could be explained by accumulation of high-dose UV radiation over a prolonged period of time.

It is worth noting that aggressive skin SCC lesions and the parotid gland metastasis in this patient occurred during posaconazole prophylaxis (after discontinuation of voriconazole). While studies have reported regression or fading of skin SCC after switching from voriconazole to posaconazole or itraconazole,2 this was not observed in our patient. Her skin SCC could have continued to develop spontaneously despite cessation of voriconazole or could have been due to prolonged posaconazole exposure. Clearly, we need to institute surveillance programs to ascertain whether posaconazole confers a similar level of risk to voriconazole with respect to the development of skin SCC.

In summary, long-term administration of voriconazole demands caution, especially in patients with a risk of high-level sun exposure. Routine prospective dermatological examination should be performed, particularly in patients at high risk as defined by the intensity and duration of immunosuppressive therapy, history of prior skin SCC and geographical location.

Lessons from practice

  • Before commencing voriconazole therapy, obtain the patient’s history of sun exposure and conduct a baseline skin review, with particular vigilance for patients with light skin, patients who have a high level of sun exposure and those with a previous history of skin SCC.
  • Give the patient appropriate advice regarding sun protection (eg, wear sunscreen, hats and protective clothing).
  • Monitor the patient for a photosensitivity reaction while they are receiving voriconazole therapy. If a reaction is noted, consider switching to an appropriate alternative.
  • After voriconazole has been discontinued, conduct dermatological examinations at 3- to 6-month intervals, particularly for patients with previous skin SCC.
  • Consider the overall risks and benefits of continued voriconazole use in patients with previous skin SCC and in those whose SCC lesions recurred or worsened after voriconazole administration.

Tossing a Snowball at the tip of the iceberg

Too many licensing authorities, not enough accountability or power to enforce standards

Fourteen health professions come under the jurisdiction of the Australian Health Practitioner Regulation Agency (AHPRA). Having one national licensing authority for the professions instead of separate authorities in each state and territory makes good sense. For doctors, if AHPRA had stopped at one registration fee, and left the paraphernalia to the states and territories, it would have been a modest body and not enmeshed with a national medical board and a national medical council. This awkward arrangement is demonstrated in the recommendations of a recent parliamentary report.1

Peter Drucker, the well known management guru, is quoted as saying “Much of what we call management consists of making it difficult for people to work”.2 The article quoting Drucker talks about a “mass of clutter — from bulging inboxes to endless meetings and long lists of objectives to box-tick”. When there are three national medical bodies with interlocking functions, it is unsurprising they are prey to the dysfunctions of bureaucracy.

These are resonant tones for those who have known simpler times. Then, you signed the register in front of a group of venerable peers who called you by your surname and said “I knew your father”. You were invited for a cup of tea, paid 10 quid a year for medical indemnity and were licensed to practise medicine for life.

Kim Snowball, a well respected former head of the Western Australian Health Service, faces a daunting task as the independent reviewer of the National Registration and Accreditation Scheme. He has authored a wide-ranging consultation paper.3 In it, he says that the Board’s “role is to protect the public from risks posed by health professionals”.3 From a medical perspective, anecdotally, the Medical Board of Australia has been criticised as being out of touch with its constituents, too slow to act on complaints, and unable to trace its registered membership when they change location. Urban and rural myths develop about outliers in the profession who are too impaired to ply their profession competently but continue to practise undetected. In a submission to the AHPRA inquiry, a rural practitioner wrote: “It appears that there is no supervision of the adherence to these restrictions and supervision requirements. Indeed AHPRA has acknowledged that it has no way of ensuring their restrictions are being adhered to”.4 This statement encapsulates much of the discontent.

The licensing of professional practice is tied to the maintenance of standards, although licensing indicates only the attainment of minimal requirements. In the 1990s, there was a perceived shortage of doctors in Australia. Community pressure from one-doctor towns wishing to recruit doctors was considerable. It was more than an anecdote that a community that was prepared to dump farm produce in front of the Western Australian parliament in protest against rising costs and falling returns5 went to water when its doctor threatened to leave. The mindset that any doctor will do is not conducive to maintaining high standards.

In answering the call for doctors, importing doctors from overseas was a ready-made solution — cheap and no waiting for them to graduate. The Australian Medical Council examination provided a way to assess overseas graduates. However, in a world of minimal standards, it was not long before corporate practice arrived in rural and outer urban settings where “area of workforce need” can be manipulated and throughput can readily become the only indicator of successful practice. Often, the workforce here consists entirely of overseas graduates. Given the numbers being supervised, frequently by one doctor, one would be forgiven for thinking that the supervisor is emulating Robert Towns6 rather than William Osler.

Traditionally, medical licensing is based on the apprenticeship model, whereas the growth of corporatisation is based on the indentured labourer model. Supervision under the indentured labourer model is a cursory glance over the cane fields of professional practice and, if the doctors are seen to be working and reaching their quotas, then the benchmark has been achieved.

This is the challenge for any registration body — what level of supervision should the licensing authority demand and what ability does it have to police it? In one known case, the supervisor approved by the Medical Board of Australia was 600 kilometres away, and owned the practice in which the supposedly supervised doctor worked.

In rural areas, most small settlements have hospitals. Their boards increasingly demand that the doctors be credentialled and privileged for a given scope of practice before being appointed. In the best cases this process is rigorously undertaken. Such a system can identify deficiencies, both in competence and attitude. Owners of medical practices should not escape similar scrutiny. A formal link between the national expectation and local reality should be considered.

It is important that the link between licensing and credentialling and privileging of medical practitioners is strengthened. No community is well served by corporatised medicine, uninterested in building up skills and the intellectual capital in that community.

There is a task for Snowball. The consultation paper is an excellent start, but it should attend more to supervision of new doctors. Is Snowball reluctant because there are so many layers of regulation without regard to enforceability?

Waiting for complaints is far from the best way to supervise a medical profession where, unwittingly, the regulatory agencies have opened Australia to an indentured labourer model.

A personal reflection on staff experiences after critical incidents

The effects of adverse iatrogenic events extend beyond patients and families to health care staff and organisations

Errors are common during the delivery of complex care in the Australian health care system.1 Adverse iatrogenic events (critical incidents) resulting in patient harm or death may be the most distressing for all involved. Many of these errors are preventable, but investments in programs to prevent health care-related adverse events have had varying success.2,3

After critical incidents occur, emphasis is rightly placed primarily on the immediate, interim and long-term care of the patient and family. At the same time, health care organisations must also manage the staff involved in the incident and ensure appropriate responses to reduce the risk of future events.

This is a personal account of how individual health care workers and organisations may respond, and then recover, after a devastating critical incident. Possible ideal responses after a critical incident and preventive workplace cultures are also considered.

Personal reflection

During my intensive care medicine training, I was involved in a team failure that resulted in the injury and subsequent death of a young family man due to a medical intervention. While I was not directly responsible, I was part of the team responsible for this lethal injury. My team members and I learned many profound and lasting lessons from that terrible day. A culmination of interhospital and intrahospital system problems and failures in team planning and communication contributed to this man’s death. Over subsequent years, these factors were explored and acted on at personal, departmental, hospital and coronial levels. I met the man’s family on the last day of the inquest. I promised them that I would incorporate what I had learned into my practice and share it with others to try to prevent similar tragedies. This short insight into my journey through the aftermath is part of that promise.

Immediately after the critical incident, I experienced many personal, emotional and professional challenges.4 When the patient subsequently died from his treatment injuries, these feelings were exacerbated by the knowledge of the devastating financial and social outcomes for his family. I believe my team members underwent similar experiences at different times after the event, before the inquest and beyond. Due to different shifts and roster rotations, ongoing clinical workload and our varied coping strategies, I did not have many opportunities to discuss these experiences with other members of the immediately involved team.

Sharing my experiences of the event and its aftermath with others in the hospital enabled me to make sense of the situation and to identify meaningful action for my own personal and professional recovery.5 Many of my experiences matched those described by Scott and colleagues in their six-stage adverse event recovery process: 1) chaos and accident response; 2) intrusive reflections phase; 3) restoration of personal integrity; 4) enduring the “inquisition”; 5) obtaining emotional first aid; and 6) moving on.6 My recovery began with the realisation that this must be a fully grasped, lifelong, patient-centred learning and quality improvement opportunity.

I felt many of the expected emotional responses, in fluctuating intensity, including self-critical thoughts, loneliness, shame, guilt, sleep disturbance, and profound and hurtful feelings of professional insecurity. I witnessed other staff members experiencing varying levels and periods of functional impairment in the workplace. I managed to avoid serious workplace difficulties7 by using institutional, peer and family supports — by seeking company, participating in open disclosure discussions and quality review sessions, and accessing mentors. As a result, I did not require professional treatment or sick leave.

I received feedback about my personal and professional performance by discussing, listening and reflecting during and in between these opportunities — often with solemnity, sheepishness, anger, frustration, trepidation, grief and dismay. I had periods of deep and painful reflection. I also had opportunities for open and non-judgemental discussion in both private and workplace settings with my own family and friends and with colleagues from different professions and disciplines. At the same time, ongoing risk management, quality improvement and medicolegal processes enabled further clarification of “what happened”. In retrospect, I am grateful for the structured processes that supported my psychological work in understanding the incident.

Others in my team had different needs, and some appeared to experience differing levels of support from the health care organisation. Many relied on the informal support of family, friends and peers when the structured organisational support did not meet their needs.

Organisational response

The organisational response to this critical incident was multifaceted and prolonged. Initially, within the first 24 hours, the staff involved completed written statements of their personal understanding of events. This was followed by a team debriefing led by senior staff. Meetings involving the wider departmental staff, mentors and administrators took place over the ensuing days, weeks and months, with later formal departmental presentations. Further recapping with the legal team around the inquest hearing was highly valuable.

The patient’s family were engaged in open disclosure processes from the time of the event. They were regularly informed of the patient’s progress before his death and the ongoing hospital responses. Departmental responses included a root-cause analysis to determine all contributory factors. “Human factors” were considered predominant reasons.

The departmental nursing and medical leaders and clinical governance and medicolegal teams conducted a detailed review and improvement of equipment, policies, guidelines, processes and procedures. Case presentations and reviews at quality and safety sessions and hospital grand rounds disseminated knowledge gained and lessons learned from the incident. Reports were also prepared for the insurers and medicolegal department. Staff orientation, induction and training processes were changed to include multidisciplinary crisis resource management, to improve the staff’s technical and non-technical skills.

My active and passive involvement in these processes, and associated formal and informal dialogue, assisted with my understanding of the event.

“Moving on” to future prevention

Emerging literature about the emotional and professional burdens carried by health care staff after critical incidents describes the patient and family as “first victims” and the staff involved as “second victims”.8 These terms seem pejorative, negative and unhelpful, yet I cannot find suitable alternatives. This terminology derives from the perceived gap between the support provided to staff by the employing health care organisation and the support that staff actually require, particularly when compared with the support (rightly) offered to the aggrieved patient and family.

A thematic analysis of interviews with Scandinavian multidisciplinary health care staff after adverse events explored their responses in detail.7 The range, depth and variability of emotional responses were confirmed, along with significant self-reported changes in professional performance and self-confidence. Variability in each individual’s post-event personal and professional needs was also noted. The authors recommended coordinated, structured, transparent and systematic organisational responses for patient and family support, coupled with personal and professional support for staff.

After this critical incident, my personal recovery continued through the interaction of individual and organisational responses. I pursued external learning to acquire the knowledge and skills required to prevent further such incidents in my own practice. As such, I do not believe I am a second victim. Rather, I am a member of a responsible team. We have learned and helped others by conceptually placing the bereaved family at the centre of our own recovery. This long and challenging process demonstrates the power and importance of patient-centred quality and safety initiatives. After critical incidents occur, structured immediate, interim and long-term care of patients, families and staff is needed to enable enlightened improvement. Health care staff may already be carrying a disproportionate and under-recognised mental health burden and may need more attention than is often given.5,9

Developing health care organisations to be high-reliability organisations (HROs) may help to reduce second-victim scenarios.10-12 HROs are characterised by their ability to manage complex, time-pressured and demanding sociotechnical tasks while avoiding catastrophic failure. Their organisational performance is often matched by an ability to expand capacity in a crisis. This is achieved through planning for variability in human performance by accepting the possibility of failure. HROs have evolved multiple redundant preventive and adaptive systems that integrate safety, quality and workplace learning. Any error is reported and proactively examined, and prevention strategies are subsequently developed and integrated into the workplace systems. These active system responses are said to cause the “dynamic non-event” of critical incident prevention.10

However, even with the best preventive systems, critical incidents will still occur. After a critical incident, it is essential for health care staff to seek help for themselves, in addition to the support provided to the patient and family. Constructive and supportive incident responses for patients, families and staff must be activated and maintained over months to years. Lessons learned must be integrated into workplace systems. I implore readers to become proactive agents of personal and institutional change for building resilience and reliability, in honour of your patients. Remove the need for anyone to be labelled a second victim.

The cost-effectiveness of primary care for Indigenous Australians with diabetes living in remote Northern Territory communities

In reply: I acknowledge that data from community controlled health services were not included in our study.1 The high mobility of this population is well recognised and is most common between related communities.2 The bulk of primary care services in remote Northern Territory communities are provided through the 54 government clinics, and we have captured the movement between those services in our dataset. The lesser degree of movement between government and community controlled clinics3 would not have substantively affected our results or our conclusions.

We used propensity score matching4 to improve comparability of the low, medium or high primary care use groups. As shown in the Box, we adjusted for key confounders (age, sex, number of chronic diseases) and found no statistically significant differences between the groups after matching. All communities in this study were geographically classified as remote or very remote5 and were similar in terms of their SEIFA (Socio-Economic Indexes for Areas) score.6 Other factors raised by Whyatt and colleagues, including social acceptability and the behaviour of health care providers, may well have significant influence on decisions to use primary care services and, in part, explain the poorer outcomes among the low primary care users.
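The matching procedure described above can be sketched in code. The following is an illustrative toy only, not the study's actual method: the propensity model (logistic regression fitted by gradient descent), the greedy 1:1 nearest-neighbour matching rule and the data are all hypothetical assumptions for demonstration.

```python
import math

def fit_propensity(X, t, lr=0.5, iters=2000):
    """Logistic regression by full-batch gradient descent:
    estimates each unit's probability of group membership
    (the propensity score) from its covariates."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(iters):
        gw, gb = [0.0] * d, 0.0
        for xi, ti in zip(X, t):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                gw[j] += (p - ti) * xi[j]
            gb += p - ti
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for xi in X]

def match_pairs(scores, treated):
    """Greedy 1:1 nearest-neighbour matching on the propensity score:
    each treated unit is paired with the unused control whose score
    is closest to its own."""
    pool = [i for i, ti in enumerate(treated) if ti == 0]
    pairs = []
    for i, ti in enumerate(treated):
        if ti == 1 and pool:
            j = min(pool, key=lambda k: abs(scores[k] - scores[i]))
            pairs.append((i, j))
            pool.remove(j)
    return pairs

# Hypothetical covariates: [age decade, sex (1 = female), chronic disease count]
X = [[2, 1, 0], [3, 0, 1], [4, 1, 2], [5, 1, 3], [3, 1, 1], [6, 0, 4]]
t = [0, 0, 1, 1, 0, 1]  # hypothetical indicator, eg, 1 = high primary care use
scores = fit_propensity(X, t)
pairs = match_pairs(scores, t)
```

After matching, covariate balance between the groups would be rechecked (as in the Box), since matching on the score is only as good as the balance it achieves.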

We are confident that the evidence generated by this study is of use to policymakers and health planners in their efforts to strengthen primary care in remote areas of Australia.

Proportion of patients in each primary care use group before and after propensity score matching, by age, sex and number of chronic diseases

 

                             Low-use (n = 6987)    Medium-use (n = 5926)    High-use (n = 1271)    χ2 significance (P)
                             Before     After      Before     After         Before     After       Before       After

Age (years)
  15–29                      48%        20%        47%        19%           20%        20%         523.3*       2.04†
  30–39                      24%        23%        25%        25%           23%        23%
  40–49                      14%        26%        15%        27%           27%        27%
  50–59                      7%         18%        8%         17%           17%        17%
  60–69                      7%         13%        5%         12%           13%        13%

Sex
  Male                       50%        35%        39%        35%           33%        33%         523.3*       2.07†
  Female                     50%        65%        61%        65%           67%        67%

Number of chronic diseases
  0                          63%        10%        43%        10%           10%        10%         2004.8*      11.12†
  1                          17%        16%        22%        16%           16%        16%
  2                          9%         22%        17%        23%           23%        23%
  3                          7%         28%        13%        30%           31%        31%
  4                          4%         20%        5%         17%           16%        16%
  5                          1%         4%         1%         5%            5%         5%

* P < 0.01. † P > 0.05.
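The χ2 significance values in the table are Pearson chi-squared statistics comparing the distribution of each covariate across the three use groups, before and after matching. A minimal sketch of the calculation follows; the counts used are hypothetical, since only percentages and test statistics are reported here.

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table:
    sum over cells of (observed - expected)^2 / expected, where the
    expected count is (row total * column total) / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical sex-by-group counts (rows: male, female; columns: low,
# medium, high use). Degrees of freedom = (rows - 1) * (cols - 1) = 2.
observed = [[3494, 2311, 419], [3493, 3615, 852]]
stat = chi_squared(observed)
```

The statistic is then compared against the chi-squared distribution with the appropriate degrees of freedom to obtain the P value (significant before matching, non-significant after, as marked in the table).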

Goals of care: a clinical framework for limitation of medical treatment

The development of clear, effective and consistent clinical processes for decision making relating to limitations of medical treatment and documentation of the decisions is an ongoing challenge for all health care systems.

We propose a clinical framework called “goals of care” (GOC). This approach has been introduced and audited in two Australian health services (Royal Hobart Hospital, Tasmanian Health Organisation — South, and Northern Health, Melbourne, Victoria) and is being considered elsewhere. It is influenced by the Physician Orders for Life Sustaining Treatment approach (http://www.polst.org), which is widely used in the United States, coupled with the innovation of assigning each patient episode to one of three treatment categories based on the overall medical treatment goals for that patient at that time.

The three-phase model

Medical decision making is based on determining the GOC for the patient. The patient’s situation is assigned to one of three phases of care according to a realistic assessment of the probable outcomes of medical treatment. These phases are clinically defined intentional categories that take heed of, but are quite distinct from, personal goals expressed by patients. Patients can move from one category to another during their illness trajectory. The phases are curative or restorative, palliative, and terminal;1 they are based on phases that were first described in 1990.2 The distinguishing features of each phase are shown in the Box.

The patient assessment is shared with the patient or substitute decision-maker (SDM) and, if agreed, a GOC plan form is completed and placed in the alerts section of the patient’s medical record. A GOC plan is a medical order that clarifies limitations of medical treatment for a present condition; it is not the same as an advance directive, which is usually made by a person, in his or her own “voice”, to inform medical decision making for future episodes of impaired capacity. Goals are revised in the light of changes in medical condition, and appropriate limitations are then documented on a new form. A GOC plan replaces institutional or community-based not-for-resuscitation (NFR) orders.

We documented GOC plans using an original form (Appendix 1), which has been used at Royal Hobart Hospital for the past 3 years. A second, revised form (Appendix 2) is now being introduced more widely in Tasmania, after extensive experience and feedback from clinicians, medical records staff and others. It is simpler and has been modified for use in all settings, including homes and nursing homes.

The original developmental work was done in Hobart after the Royal Hobart Hospital completed a Respecting Patient Choices pilot site project in 2008. This project put a sharp focus on decision making at the end of life across the whole hospital community.

In 2010, a project officer position was created to enable the development of GOC as part of a statewide Healthy Dying Initiative. Based on the principles of health-promoting palliative care, this initiative aimed to empower the whole community, including the health sector, to deal with death in a more direct, open and therefore “healthy” way. Clinical decision making at the end of life was identified as a priority for policy and procedural reform. There were three initial components of the Healthy Dying Initiative: GOC, advance directive redesign and promotion, and encouragement of health-promoting activities relating to death and dying.

The project officer, a non-clinician with extensive experience in community development, helped design the GOC form, develop the policy protocol for its implementation and use, launch the new form, and facilitate initial training in individual hospital units. GOC education was then done jointly with the advance directive work in the wider community in collaboration with a designated officer in the Office of the Chief Health Officer, Department of Health and Human Services, Tasmania.

Audit results

On 1 March 2011, the GOC form and protocol came into effect at Royal Hobart Hospital, replacing the NFR procedure and form, which were withdrawn from that date.

A retrospective audit of admissions to the Assessment and Planning Unit during August 2011 was undertaken. It showed that GOC forms had been completed for 75% of admitted patients (135/181). A retrospective audit of admissions to the same unit during August 2009, before the introduction of GOC, showed that NFR forms had been completed for 34% of admitted patients (55/162). (These data were compiled on 26 September 2011 and 28 September 2009, respectively.)

On 6 September 2012, a 1-day point prevalence audit of GOC form completion was undertaken throughout Royal Hobart Hospital, excluding paediatric and day-stay patients. Patient records were reviewed for the presence of a GOC form and/or other relevant documents, such as an advance directive. GOC forms had been completed for 52% of inpatients (148/283) and for 85% of medical inpatients (124/146) who had been admitted that day. For non-medical admissions, a GOC form was completed for 21% of patients (24/112). All 18 patients who subsequently died had dying recognised (GOC category D), and half of them received input from the palliative care service.

A GOC form was implemented at Northern Health on 12 August 2013. It was adapted from the version used at Royal Hobart Hospital, using input from Northern Health clinicians. It was mandated for all adult medical inpatients and for selected surgical patients. A 1-day point prevalence audit of medical patients on 17 November 2013 showed that treatment goals were completed for 81% of patients (82/101).7

Discussion

The purpose of GOC is to ensure that patients who are unlikely to benefit from medical treatment aimed at cure receive care appropriate to their condition and are not subjected to burdensome or futile treatments, particularly cardiopulmonary resuscitation and medical emergency team calls, especially when these are, or may be, contrary to their wishes.

One of the aims of GOC is to change the culture of medical decision making. GOC takes on the challenges of “prognostic paralysis” and the “no-surprises approach”,4 diagnosing dying,8 and prognostic uncertainty.9 There is evidence that many decisions to limit treatment occur in crisis situations, particularly during medical emergency team calls.10 Difficult decisions therefore tend to be made after hours, in the heat of the moment, by clinicians who do not know the patient and without patient or SDM input. GOC prompts treating teams to proactively determine treatment goals at a time when the assessment is likely to be of higher quality and discussions with the patient and family are easier to arrange.

Screening all patients on admission helps identify those who wish to decline treatments that might otherwise be given to them (particularly relevant for treatments that involve blood products). Those who are fit and otherwise well can be screened with the question “are there any treatments that you do not wish to have?”. Others, in light of their past history and current presentation, will require a more in-depth conversation that balances their hopes and expectations with what is medically achievable.

The default position for all patients is the curative or restorative phase, and all appropriate life-prolonging treatment should be deployed as indicated until it is clear that the clinical situation has changed. In other words, the default always favours preservation of life. It has become evident that there is an important subpopulation of patients for whom the goal is cure or restoration but specific limitations of medical treatment apply because of patient wishes or beliefs, and this is specifically articulated in GOC category B on the new Tasmanian form (Appendix 2).

GOC relies on high-quality clinical assessment and good communication skills. Most importantly, it requires clinicians to make a decision. While challenging and contested, differentiation between the palliative and terminal phases is essential. There is a large difference in the medical management and care of a person who has a potential prognosis of a year or two (eg, a patient who has incurable bone metastases due to prostate or breast cancer) and that for a person who may not survive a week.

There are many pertinent observations that can be used to diagnose dying, which can be divided into four principal domains: (i) disease activity; (ii) general functioning; (iii) specific clinical parameters; and (iv) evidence of “death talk” by patients and families. In combination, these observations can help to show whether death is anticipated within the next few days and allow a change of GOC to the terminal phase. Most of the evidence so far suggests that simple non-medical general function parameters are most predictive of impending death.11 For patients in the terminal phase, deployment of tools based on the Liverpool Care Pathway for the Dying Patient (LCP) may be considered. There has been positive experience of an LCP-type tool in Australia,12 despite some negative experiences associated with use of the LCP in the United Kingdom, for which the LCP has, perhaps unfairly, been blamed.13,14

If the diagnosis of dying is made too early and a patient’s condition unexpectedly stabilises, he or she will live on provided that the care implemented is proportionate and matched to symptoms, according to principles presented, for example, to the Senate of Canada by the Chief Coroner of Ontario in 1997.15 There are often oscillations in patient condition as the terminal phase approaches, but, once patients are deemed to be in the terminal phase, it is unusual for them to sustainably “upgrade” back to the earlier palliative phase.

The GOC process has proved to be safe, effective and widely acceptable for addressing the limitation of medical treatment in two Australian health services that encompass large acute tertiary hospitals, with aged care and related subacute services. Feedback from clinical staff has been positive, and compliance is variable but rising. So far, there have been no reported major incidents or complaints in which GOC has been causally implicated in an adverse outcome. Comparison with the NFR era is difficult as the population denominator now consists of all admitted patients, not just those deemed unsuitable for resuscitation.

Regular review at each patient encounter is important, with changes to GOC phase and/or treatment limitations as warranted by patient wishes or condition. A clear need was identified at an early stage of the initial GOC project to ensure that limitations determined and documented during an acute admission could be continued during ambulance transfers and within homes, nursing homes and other facilities. An arbitrary 90-day endorsement validity limit was initially stipulated, but this has been removed as it was found to be unnecessary and confusing. General practitioners and community nurses were also keen to see GOC initiated in the community setting, especially for palliative care clients, and this has informed the design of the new Tasmanian form (Appendix 2).

In a recent report, the Australian Commission on Safety and Quality in Health Care acknowledged that it is necessary to attempt to reverse acute clinical deterioration but also to recognise dying and deploy appropriate palliative and terminal care.16

There were extensive discussions about patients or SDMs being required to sign the GOC form to confirm adequate consultation and agreement. The developers have resisted this, arguing that it is a medical form to direct care, and not a patient directive. The emphasis should be on a process of medical assessment and communication that ideally results in clear patient agreement, and/or consensus with the SDM and those who care for and about the person concerned, regarding any limitations of medical treatment.

A requirement for SDMs to sign a GOC form might engender guilt by conveying a false concern about the locus of responsibility for causing death. It should, however, be clear that the doctor signing the form (on behalf of the medical specialist in charge) is taking responsibility for the clinical decision and all appropriate consultation with patients or their agents, as required by ethics and law.17 Ultimately, the decision about signature requirements will lie with individual institutions and/or jurisdictions that start using GOC. Similarly, the distinction between consent and receipt of information will need to be made clear by individual institutions.

We recommend that all health care providers consider replacing their NFR procedures with the GOC approach. GOC is a solid framework for limiting medical treatment that meets the challenge for medical leadership to address the culture of death avoidance in medical decision making.17,18 It also has the potential to help address widespread professional and public concerns about bad dying. Rigorous ongoing “postmarketing” surveillance, auditing and research are, of course, necessary to ensure patient safety and transparency of process.

The three-phase model of goals of care (GOC)

1. Curative or restorative phase ("beating it"): the default position for all patients; all appropriate life-prolonging treatment will be deployed as indicated (Categories A and B in our forms)

2. Palliative phase ("living with disease, anticipating death"): the disease is deemed to be incurable and progressive (Category C in our forms)

3. Terminal phase ("dying very soon"): death is believed to be imminent (ie, within a few days); implementation of a terminal care pathway, where available, is indicated (Category D in our forms)

Aim

  • Curative or restorative: GOC are directed towards cure, prolonged disease remission and/or restoration to the pre-episode health status for those with chronic diseases, especially in the aged care context
  • Palliative: GOC are modified in favour of comfort, quality of life and dignity; period of survival is no longer the sole determinant of treatment choice; life prolongation is a secondary objective of medical treatment, although palliative care might confer modest survival benefits, as shown in two lung cancer studies3
  • Terminal: comfort, quality of life and dignity are the only considerations

Prognosis

  • Curative or restorative: life expectancy is probably indefinite (ie, normal) because the present health episode is unlikely to affect longevity; a key question could be "is there a reasonable chance of the patient leaving hospital and living the same life span as might have been expected before the episode?"; a key question in aged care and chronic disease settings (where the goals might be restorative) could be "is there a reasonable chance of the patient leaving hospital and/or returning to his or her previous level of functioning?"
  • Palliative: life expectancy is usually months, but sometimes years (if the latter is the case, "supportive care" might be a more appropriate term than "palliative care", and patients might choose to have active treatment of disease until disease response ceases); a key question could be "would I/we be surprised if this patient died in the next 12 months?"4
  • Terminal: life expectancy is hours or days; a key question could be "would I/we be surprised if this patient died this week?"

Level of adverse effects

  • Curative or restorative: a high level of adverse effects and even a significant chance of treatment-related mortality might be accepted for curative treatment (eg, brain aneurysm surgery, bone marrow transplant); while pain and symptom control should always be addressed, comfort may be a secondary consideration if it conflicts with curative treatment
  • Palliative: active treatment of the underlying disease may be undertaken for specific symptoms (eg, radiotherapy or chemotherapy for a palliative end point in cancer treatment) and/or short-term life expectancy gains; treatment-related adverse effects should be proportionate to the goals and acceptable to the patient
  • Terminal: active treatment of the underlying disease should stop; no treatment-related toxicity is acceptable (this applies to all medical, nursing and allied health interventions [eg, turns in bed if these are distressing])

Life-sustaining treatments

  • Curative or restorative: given as needed
  • Palliative: life-sustaining treatments for other chronic medical conditions are usually continued (eg, treatment with insulin or anticonvulsants) where cessation would result in premature death or preventable unpleasant symptoms such as hyperglycaemia and seizures (ie, symptoms unrelated to the main disease that is anticipated to cause death), or where quality of life would be adversely affected5
  • Terminal: life-sustaining treatments for other chronic medical conditions are usually stopped (eg, treatment with steroids, insulin or anticonvulsants), unless doing so would cause suffering

Medical provision of hydration and alimentation

  • Curative or restorative: given as needed
  • Palliative: given if indicated and desired (eg, percutaneous endoscopic gastrostomy feeding for head and neck cancer patients with obstructed swallowing)
  • Terminal: usually ceased and replaced with feeding on request and rigorous mouth care

Cardiopulmonary resuscitation

  • Curative or restorative: given as needed
  • Palliative: usually not recommended but should be discussed with the patient, if competent; if death and dying have already been explicitly discussed with the patient or person responsible, specific discussion of cardiopulmonary resuscitation might not be warranted6
  • Terminal: contraindicated

Equivalence of outcomes for rural and metropolitan patients with metastatic colorectal cancer in South Australia

Metastatic colorectal cancer (mCRC) is the fourth most common cause of cancer death in Australia.1 The past 15 years have seen improved outcomes in patients with mCRC, largely due to increased chemotherapeutic and biological treatment options and widespread adoption of liver resection for liver-limited mCRC.2 These improvements have led to an increase in reported median survival from 12 to 24 months since 1995. Despite these advances, patients with unresectable mCRC usually die from the disease, with 5-year overall survival of about 15%.2 Initial treatment for mCRC involves combination chemotherapy or single-agent therapy. Survival is improved in patients who ultimately receive all three active chemotherapy drugs (oxaliplatin, irinotecan and a fluoropyrimidine)3 and have access to biological agents, such as bevacizumab.2

Australia’s geographical challenges (large land area and low population density) contribute to difficulties in service provision and disparity of cancer outcomes.4 Some authors have suggested the observed higher death rate among Australia’s rural population is the result of a double disadvantage: higher exposure to health hazards and poorer access to health services.5,6 There is a complex interplay between remoteness of residence and other known causes of poor cancer outcome, including unequal exposure to environmental risk factors,5 less participation in cancer screening programs,7-9 delayed diagnosis,10 socioeconomic disadvantage,4,11 and higher proportions of disadvantaged groups such as Indigenous Australians.12 Even after accounting for such factors, an Australian study of patients with rectal cancer found that increasing distance between place of residence and a radiotherapy centre was independently associated with inferior survival.6 A recent analysis of cancer outcomes using population mortality data found that reductions in the cancer death rate between 2001 and 2010 were largely confined to the metropolitan population, with an estimated 8878 excess cancer deaths in regional and remote Australia, including 750 CRC deaths.13

Remoteness poses practical difficulties that may lead patients with cancer and their clinicians to make choices based on the need for travel, or because of perceived toxicity risks of different regimens. Population studies have shown that rural patients have reduced rates of radical surgery,9 less adjuvant radiotherapy,14 delays in commencing adjuvant chemotherapy15 and reduced clinical trial participation.16 Rural cancer patients can also face a significant financial and travel burden.17

Rural patients in South Australia have historically had limited access to regional oncology services, as population numbers outside metropolitan Adelaide are insufficient to support onsite oncologists. Until recently, this has meant that most chemotherapy is delivered in Adelaide, reflecting a more centralised service than in Australia’s eastern states. An effort is currently being made to shift to more rural chemotherapy delivery and an expanded visiting oncology service.18

In this study, we used the South Australian Clinical Registry for Metastatic Colorectal Cancer (SA mCRC registry) to investigate disparity in outcomes and treatment delivery for rural patients with mCRC compared with their metropolitan counterparts.

Methods

The SA mCRC registry is a state-wide population-based database of all patients diagnosed with synchronous or metachronous mCRC since February 2006. Previous registry-based analyses have led to the description of important associations between patient subgroups and outcomes.19-21 Core data include age, sex, demographics, tumour site, histological type, differentiation and metastatic sites. Treatment data consist of surgical procedures, chemotherapy (including targeted therapy), radiotherapy, radiofrequency ablation, and selective internal radiation therapy. The date and cause of death for each patient in the registry are obtained through medical records review and electronic linkage with state death records. Approval for this study was granted by the SA Health Human Research Ethics Committee.

For this study, we included data collected between 2 February 2006 and 28 May 2012. We compared the oncological and surgical management (primarily metastasectomy) and survival of metropolitan versus rural patients. Based on the accepted registry definitions, patients residing in metropolitan Adelaide (postcodes 5000–5174) were designated the “city” cohort, with all other patients (postcodes 5201–5799) in the “rural” cohort. Patient characteristics, use of chemotherapy across first, second and third lines of treatment, choice of first-line chemotherapy, hepatic resection rates and survival were analysed and compared between the city and rural patient cohorts.
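The cohort assignment above is a simple postcode rule; purely as an illustration (the helper below is ours, not registry code), it can be sketched as:

```python
def cohort(postcode: int) -> str:
    """Assign a registry cohort from a South Australian residential postcode.

    Illustrative only: per the registry definitions quoted above, metropolitan
    Adelaide postcodes 5000-5174 form the "city" cohort and postcodes
    5201-5799 the "rural" cohort.
    """
    if 5000 <= postcode <= 5174:
        return "city"
    if 5201 <= postcode <= 5799:
        return "rural"
    # Postcodes outside both stated ranges are not covered by the definitions
    raise ValueError(f"postcode {postcode} outside the registry definitions")
```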

All analyses were undertaken using Stata version 11 (StataCorp). Overall survival (OS) analysis was done using conventional Kaplan–Meier methods. Survival was calculated from the date of diagnosis of stage IV disease to the date of death, with a final censoring date of 28 May 2012. The log-rank test of equality was used for comparisons. OS was used as the end point because this outcome measure was available in the registry data and to avoid misclassification of cause of death in disease-specific survival.
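The survival analyses were run in Stata; purely to illustrate the Kaplan–Meier method described above, a minimal product-limit estimator can be sketched in plain Python (the function and the sample follow-up times are illustrative, not registry data):

```python
def kaplan_meier(observations):
    """Product-limit (Kaplan-Meier) survival estimate.

    observations: list of (time, event) pairs, where event = 1 is a death and
    event = 0 is censoring (eg, alive at the final censoring date).
    Returns a step curve as [(event_time, survival_probability), ...].
    """
    death_times = sorted({t for t, e in observations if e == 1})
    survival = 1.0
    curve = []
    for t in death_times:
        at_risk = sum(1 for ti, _ in observations if ti >= t)  # still under observation at t
        deaths = sum(1 for ti, e in observations if ti == t and e == 1)
        survival *= (at_risk - deaths) / at_risk  # multiply conditional survival at each death time
        curve.append((t, survival))
    return curve

# Illustrative follow-up times in months (not registry data)
sample = [(2, 1), (3, 1), (3, 0), (5, 1), (8, 0)]
curve = kaplan_meier(sample)
```

Median OS is then read off the curve as the first time at which estimated survival falls to 0.5 or below; if the curve never reaches 0.5, the median is "not reached", as for the rural metastasectomy group in Box 4.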

Results

Patient characteristics

Data from 2289 patients, including 624 rural patients (27.3%), were available for analysis (Box 1). There was a higher proportion of male patients in the rural than the city cohort (62.7% v 53.6%; P < 0.001). The colon was the primary site of malignancy in a higher proportion of city than rural patients (75.7% v 71.5%; P = 0.04). Kirsten rat sarcoma viral oncogene homolog (KRAS) mutation testing was performed in around 14% of patients in both cohorts, and the proportion of KRAS exon 2 wild-type tumours was not significantly different between rural and city cohorts (59.8% v 59.7%; P = 0.96). Clinical trial participation did not differ significantly between the cohorts (7.1% v 9.2%; P = 0.10).
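The group comparisons above use χ² tests of proportions (see the Box 1 footnote). As a sketch in plain Python (not the authors' Stata code), the reported sex difference can be reproduced from the Box 1 counts:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for a 2x2 contingency table laid out as [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    stat = 0.0
    for i, observed in enumerate((a, b, c, d)):
        expected = rows[i // 2] * cols[i % 2] / n  # expected count under independence
        stat += (observed - expected) ** 2 / expected
    return stat

# City: 893 male / 772 female; rural: 391 male / 233 female (Box 1)
stat = chi_square_2x2(893, 772, 391, 233)
# stat is about 15, above 10.83 (the 1-df critical value for P < 0.001),
# consistent with the reported P < 0.001
```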

Treatment

Chemotherapy

First-line chemotherapy was administered in 58.3% of rural patients, compared with 56.0% of city patients (P = 0.32) (Box 2). As a percentage of patients who received any chemotherapy, rates of second-line (22.5% v 23.3%; P = 0.78) and third-line (9.3% v 10.1%; P = 0.69) chemotherapy administration were also similar between rural and city cohorts. There were differences between the cohorts in the type of first-line treatment: rural patients had less use of combination chemotherapy (59.9% v 67.4%; P = 0.01) and biological agents (16.8% v 23.7%; P = 0.007) than city patients, though numerically these differences were small. When an oxaliplatin combination was prescribed, the oral prodrug of 5-fluorouracil, capecitabine, was used more frequently in rural patients than city patients (22.9% v 8.4%; P < 0.001). Only 21 rural patients (5.8%), and no city patients, received their first dose of first-line chemotherapy in a rural chemotherapy centre.

Non-chemotherapy

Adoption of any of the non-chemotherapy treatment modalities did not differ significantly by place of residence (Box 3). Of note, there was no significant difference in rates of hepatic metastasectomy between city and rural cohorts (13.7% v 11.5%; P = 0.17). Pulmonary metastasectomy rates were numerically higher in city patients (3.2% v 2.1%; P = 0.10), but total numbers were small.

Survival

Among all patients, the median OS was 14.6 months for city patients and 14.9 months for rural patients (P = 0.18) (Box 4, A). Among patients receiving chemotherapy (with or without metastasectomy), the median OS was 21.5 months for city patients and 22.0 months for rural patients (P = 0.48) (Box 4, B). For patients undergoing liver metastasectomy, the median OS was 67.3 months for city patients and was not reached in rural patients (P = 0.61) (Box 4, C).

Discussion

Our results demonstrate that rural patients with mCRC in SA receive comparable treatment and have equivalent survival to their metropolitan counterparts. In particular, patients in rural areas are treated with equivalent rates of potentially curative metastasectomy and chemotherapy, two key determinants of length of survival. These are the first Australian data specifically analysing rates of chemotherapy in rural patients with mCRC, and they suggest the excess colon cancer mortality seen in rural patients relates to factors other than access to treatment in the metastatic setting.

While rates of chemotherapy use did not differ significantly between the cohorts across any line of treatment, rural patients received less first-line combination chemotherapy and fewer first-line biological agents, and made greater use of capecitabine, than city patients.

First-line combination chemotherapy with intravenous infusional 5-fluorouracil, folinic acid and oxaliplatin (FOLFOX) has equivalent efficacy to oral capecitabine and oxaliplatin (XELOX).22 The choice between the two regimens is based on differing toxicities and practical considerations. FOLFOX requires a central venous catheter (CVC) and a second visit to a chemotherapy day centre every fortnight for ambulatory pump disconnection. XELOX has the advantages of single 3-weekly clinic visits and no CVC, but compliance with twice-daily chemotherapy tablets and potentially higher rates of symptomatic toxicity (hand–foot syndrome and diarrhoea) are limitations. The higher use of XELOX among rural patients reflects the relative practical benefits of this regimen where travel distances and access to nursing staff trained in CVC management are important considerations. The potential for toxicity of XELOX requires careful patient education and system approaches to enable early recognition and intervention in the event of severe toxicity among rural, often isolated, patients. Early follow-up telephone calls by a nurse practitioner or telemedicine consultations are potential strategies to provide this important aspect of care to rural patients.23,24

We observed a small but significant reduction in the rate of biological agents used in first-line therapy for rural patients, mostly due to reduced bevacizumab prescribing. It is possible clinicians were reluctant to “intensify” therapy in rural patients due to a lack of supervision or access to health care, particularly given risks of haemorrhage. It is also possible this small difference reflects a chance finding. The pattern of bevacizumab prescribing has evolved over the period captured in the registry, and an updated analysis of patients diagnosed since 2010 may provide further insights.

The equivalent rate of attempted curative metastasectomy in rural mCRC patients compared with city patients is reassuring, given this approach provides the only option for long-term survival in mCRC. The survival curves of patients undergoing liver metastasectomy showed a survival plateau at 5 years of 50% or greater for both city and rural patients (Box 4, C). This compares favourably with other modern surgical case series, with reported 5-year survival of 32%–47% after liver resection.25

Delivery of specialised health care services for rural Australians requires policymakers to carefully balance the merits of a centralised versus a decentralised system, with unique consideration for each region. For example, no regional centres in SA have a population sufficient to support a full-time resident medical oncologist and are instead serviced by a visiting (fly-in fly-out) oncologist. Limited infrastructure and staff training have also largely prevented widespread administration of chemotherapy in regional centres. Highlighting this point, we found that only 5.8% of rural patients receiving chemotherapy received their first cycle in a rural treatment centre. The SA Statewide Cancer Control Plan 2011–2015 lists the establishment of regional cancer services and chemotherapy centres as a key future direction to optimise care for rural cancer patients.18 Unfortunately, no publications have assessed outcomes of rural patients with mCRC treated in other regions of Australia, particularly in the eastern states where regional oncology services are common. While our analysis supports equivalent survival outcomes for rural patients treated within SA’s largely centralised service, the practical, social and economic advantages of regional cancer centres remain an important consideration not captured in our study. Given this, we consider that our findings highlight the positive outcomes achieved through high-quality, specialised care, rather than suggest that current regional services in Australia should also adopt a centralised approach.

As our analysis dichotomised patients into city and rural cohorts, it does not provide outcome information based on the degree of remoteness. Despite this limitation, chemotherapy and surgical treatment were almost entirely delivered in Adelaide, and thus our analysis appropriately distinguishes those patients who had to travel to access oncological care. The possibility of inadequate registry ascertainment of rural cases of mCRC also poses a possible limitation. However, we are confident this is not a source of bias, as the registry collects information from all histopathology reports in SA, which are processed centrally in Adelaide. An important limitation of our study is that we report only on mCRC, and stage I–III disease is not included. The impact of treatment differences in early-stage CRC (eg, quality and timeliness of surgery, use of adjuvant chemotherapy) on overall survival of patients with mCRC cannot be determined in this analysis. Reassuringly, however, about two-thirds of mCRC cases in both cohorts were synchronous (ie, no prior early-stage disease), suggesting this is unlikely to limit our conclusions. Further, the equivalent rates of synchronous diagnosis in rural and urban patients may suggest there was no major delay in diagnosis of rural patients.

Although higher cancer incidence and poorer outcomes have been consistently demonstrated for rural cancer patients in Australia, we found equivalent treatment patterns and survival for rural patients diagnosed with mCRC in SA since 2006 compared with their metropolitan counterparts. This confirms that optimal treatment of rural patients results in outcomes equivalent to those of metropolitan patients, irrespective of disadvantage. Further, it suggests previously demonstrated disparate outcomes may be due to factors such as higher incidence of CRC as a result of the burden of risk factors and potentially reduced screening participation, rather than treatment factors once mCRC has been diagnosed. Targeting these factors is likely to provide the greatest impact on reducing the excess cancer burden for rural Australians.

1 Patient characteristics, by city versus rural residence (n = 2289)*

| Characteristic | City | Rural | P† |
| --- | --- | --- | --- |
| No. (%) of patients | 1665 (72.7%) | 624 (27.3%) | |
| Median age (range), years | 73 (17–105) | 72 (31–100) | 0.11 |
| Sex: male | 893 (53.6%) | 391 (62.7%) | < 0.001 |
| Sex: female | 772 (46.4%) | 233 (37.3%) | |
| Primary site: colon | 1260 (75.7%) | 446 (71.5%) | 0.04 |
| Primary site: rectum | 405 (24.3%) | 178 (28.5%) | |
| Synchronous disease | 1070 (64.3%) | 407 (65.2%) | 0.67 |
| Metastases: liver only | 665 (39.9%) | 226 (36.2%) | 0.10 |
| Metastases: lung only | 128 (7.7%) | 45 (7.2%) | 0.70 |
| Metastases: liver and lung only | 178 (10.7%) | 65 (10.4%) | 0.85 |
| Metastases: all other sites | 694 (41.7%) | 290 (46.5%) | 0.13 |
| > 3 metastatic sites | 138 (8.2%) | 54 (8.7%) | 0.38 |
| KRAS testing | 243 (14.6%) | 87 (13.9%) | 0.77 |
| KRAS wild-type | 145 (59.7%) | 52 (59.8%) | 0.96 |
| Clinical trial participation | 154 (9.2%) | 44 (7.1%) | 0.10 |

KRAS = Kirsten rat sarcoma viral oncogene homolog. * Data are number (%) of patients unless otherwise indicated. † P values calculated using χ2 tests.

2 Frequency of first-line, second-line and third-line chemotherapy, and regimens, by city versus rural residence

| Regimen | First-line: City | First-line: Rural | P | Second-line: City | Second-line: Rural | P | Third-line: City | Third-line: Rural | P |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total | 933 (56.0%) | 364 (58.3%) | 0.32 | 217 (23.3%)* | 82 (22.5%)* | 0.78 | 94 (10.1%)* | 34 (9.3%)* | 0.69 |
| Single-agent chemotherapy | 271 (29.0%) | 118 (32.4%) | 0.23 | 58 (26.7%) | 18 (22.0%) | 0.40 | 21 (22.3%) | 7 (20.6%) | 0.83 |
| Capecitabine | 202 | 82 | 0.30 | 24 | 4 | | 8 | 0 | |
| 5-FU | 58 | 31 | 0.29 | 3 | 3 | | 4 | 1 | |
| Irinotecan | 11 | 3 | | 31 | 11 | | 9 | 6 | |
| Oxaliplatin | 0 | 2 | | | | | | | |
| Combination chemotherapy | 629 (67.4%) | 218 (59.9%) | 0.01 | 115 (53.0%) | 44 (53.7%) | 0.92 | 49 (52.1%) | 17 (50.0%) | 0.83 |
| FOLFOX | 491 | 146 | 0.001 | 21 | 8 | | 14 | 2 | |
| XELOX | 53 | 50 | < 0.001 | 15 | 4 | | 8 | 2 | |
| FOLFIRI | 76 | 18 | 0.12 | 62 | 26 | | 15 | 7 | |
| XELIRI | 1 | 0 | | 1 | 2 | | 2 | 2 | |
| MMC–5-FU or capecitabine | 8 | 4 | | 16 | 4 | | 10 | 4 | |
| Other† | 33 (3.5%) | 28 (7.7%) | | 44 (20.3%) | 20 (24.4%) | | 24 (25.5%) | 10 (29.4%) | |
| Biological agent | 221 (23.7%) | 61 (16.8%) | 0.007 | 97 (44.7%) | 30 (36.6%) | 0.21 | 72 (76.6%) | 34 (100%) | 0.003 |
| Bevacizumab | 185 | 52 | | 60 | 22 | | 16 | 14 | |
| EGFR mAb | 15 | 5 | | 26 | 8 | | 52 | 19 | |
| Other | 21 | 4 | | 11 | 1 | | 4 | 1 | |

5-FU = 5-fluorouracil. FOLFOX = folinic acid–5-FU–oxaliplatin. XELOX = capecitabine–oxaliplatin. FOLFIRI = folinic acid–5-FU–irinotecan. XELIRI = capecitabine–irinotecan. MMC = mitomycin C. EGFR mAb = epidermal growth factor receptor monoclonal antibody. * Total rates of second-line and third-line chemotherapy use are expressed as a percentage of patients who received any chemotherapy. † Includes use of raltitrexed and MMC (as single agent and combination).

3 Frequency of non-chemotherapy treatments, by city versus rural residence

| Treatment | City (n = 1665) | Rural (n = 624) | P |
| --- | --- | --- | --- |
| Lung resection | 53 (3.2%) | 13 (2.1%) | 0.10 |
| Hepatic resection | 228 (13.7%) | 72 (11.5%) | 0.17 |
| Surgery* | 858 (51.5%) | 345 (55.3%) | 0.11 |
| Ablation | 12 (0.7%) | 3 (0.5%) | 0.53 |
| Selective internal radiation therapy | 10 (0.6%) | 8 (1.3%) | 0.10 |
| Radiotherapy | 299 (18.0%) | 132 (21.2%) | 0.08 |

* Includes resection of colorectal primary cancer.

4 Overall survival (OS) in city versus rural patients

A bowel cancer screening plan at last

More lives will be saved by fully implementing the National Bowel Cancer Screening Program in 2020

The 2014–15 federal Budget included an announcement of $95.9 million for the long-awaited full implementation of the National Bowel Cancer Screening Program (NBCSP) by 1 July 2020.1 From that date, all Australians aged 50 to 74 years will finally be invited to screen for bowel cancer every 2 years with a faecal occult blood test (FOBT).

The announcement included a plan to incrementally expand the program, currently offered to people aged 50, 55, 60 and 65 years. The program will include 70-year-olds (through a previous funding commitment in 2012) and 74-year-olds from July 2015; people turning 64 and 72 years from 2016; and those aged 54, 58 and 68 years from 2017. The four remaining age groups (52, 56, 62 and 66 years) will be included from 2018 to 2020.1

The rationale is consistent with results from a study by Cenin and colleagues published in this issue of the Journal, which prioritised age groups according to the mortality-reduction benefit that can be expected from FOBT screening.2 Benefit is derived from prioritising screening according to age-based risk and closing gaps in the existing age cohort to shift from 5-yearly to biennial screening.3

A final implementation plan for the NBCSP has been a long time coming. The program was introduced in August 2006 with the mail-out of FOBT kits to people turning 55 and 65 years. While sporadic funding increases in the interim have been welcomed, there have also been unacceptable delays and ongoing concerns. For example, the 2012–13 Budget provided a much-needed $50 million to expand the NBCSP; however, the final implementation date was set as 2034.4 Cancer Council Australia therefore believed it was critical to provide evidence of the enormous potential benefits of completing the program by an acceptable date of 2020.

Cenin and colleagues used the MISCAN (microsimulation screening analysis)-Colon model to examine mortality gains with full implementation of the NBCSP by 2035 compared with full implementation by 2020,2 the year recommended by Cancer Council Australia in its 2013 election priorities. The model estimated that full implementation by 2020 would prevent an additional 35 000 bowel cancer deaths (100% more than with 2035 implementation) over the following four decades.

Now that the future of the NBCSP is assured, it is essential to engage with general practitioners and other health care professionals to improve participation and facilitate continuous improvement in service delivery.

The most recent data available show that, of people invited to participate in the NBCSP between July 2012 and June 2013, only 33.5% did so.5 This is an unacceptably low rate. However, it was not unexpected, given the low awareness among Australians about bowel cancer,6 the novelty of population screening for men and the lack of targeted communication about the NBCSP. A large-scale communications campaign, during program expansion and after full implementation, will be needed to improve participation rates if the NBCSP is to fulfil its potential to reduce bowel cancer mortality.

Since the inception of the NBCSP, GPs have been identified as critical partners. The government has sought to promote GP involvement in the NBCSP through GP representation on relevant committees, and through engagement with the Royal Australian College of General Practitioners (RACGP). While general practice resources such as the RACGP “red book” recommend FOBT screening for 50–74-year-olds, it will become increasingly important to consult closely with the primary care sector and provide support to GPs to facilitate their role in the expanded NBCSP.

GPs are well placed to promote the use of FOBT as the recommended screening tool for average-risk people currently outside the NBCSP. This would help reduce the strain on colonoscopy services. More than 500 000 colonoscopies are conducted annually in Australia.7 While there is no national dataset on how many of these are performed on asymptomatic, average-risk patients, it is thought that a significant number are done as first-line screening.

Currently, 7.5% of FOBTs completed through the NBCSP return a positive result.5 Of those patients, around 70% present for colonoscopy. Of these, one in 32 are diagnosed with a confirmed or suspected cancer and one in 17 are diagnosed with advanced adenoma.5 FOBT is therefore a valuable tool for prioritising the use of colonoscopy for patients who are at higher than average bowel cancer risk or are symptomatic.
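The yield implied by these figures can be made concrete with back-of-envelope arithmetic (a sketch using the rates quoted above; the per-100 000 scale and rounding are ours):

```python
# Approximate diagnostic yield per 100 000 completed FOBTs, using the NBCSP
# figures quoted above: 7.5% positivity, ~70% colonoscopy follow-up, and
# 1 in 32 cancers and 1 in 17 advanced adenomas among those scoped.
completed = 100_000
positives = completed * 0.075            # ~7 500 positive results
colonoscopies = positives * 0.70         # ~5 250 follow-up colonoscopies
cancers = colonoscopies / 32             # ~164 confirmed or suspected cancers
advanced_adenomas = colonoscopies / 17   # ~309 advanced adenomas
```

On these assumptions, roughly one in every 200 people completing an FOBT ends up with a cancer or advanced adenoma diagnosis, which is the prioritisation case made in the text.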

Throughout the NBCSP’s expansion, there has also been discussion about virtual colonoscopy, flexible sigmoidoscopy and plasma DNA testing as alternative screening tools. There is no evidence to suggest virtual colonoscopy would be a feasible alternative to FOBT.8 Although flexible sigmoidoscopy has been shown to be effective in randomised trials,9 it is significantly more expensive than FOBT and questions remain about its acceptability. DNA biomarker tests using plasma and faecal samples are also available; however, they are unsuitable for screening, as they have significantly lower sensitivity than FOBT for advanced adenoma and for stage A cancer.10-12

Importantly, South Australian data have shown that twice the number of stage A cancers were diagnosed in people invited to participate in the NBCSP compared with people who were not and who had presented with a symptom.13

The NBCSP’s potential to prevent a total of 70 000 Australian bowel cancer deaths over the next four decades is compelling.

Hip fracture: the case for a funded national registry

Let’s implement what we know and avoid deaths from hip fracture

The value of orthogeriatric care for hip fracture patients has been known for years, and a recent summary of international evidence has acknowledged the benefits.1 The NHS in England considered this so important that it offers serious financial incentives for hospitals to achieve an evidence-based standard of care — a “best-practice tariff” rewards hospitals that achieve the following key quality criteria:

  • surgery within 36 hours
  • shared care by surgeon and geriatrician
  • care protocol agreed to by geriatrician, surgeon and anaesthetist
  • assessment by geriatrician within 72 hours of admission
  • preoperative and postoperative abbreviated mental test score assessment
  • geriatrician-led multidisciplinary rehabilitation
  • secondary prevention of falls
  • bone health assessment.2

This incentive, together with the United Kingdom’s long established National Hip Fracture Database, has enabled monitoring of care and tracking of definite improvements.3 Hospitals are identified in the UK audit, so the poor performers cannot hide; this provides additional incentive to get things right.

Orthogeriatric care is not particularly complex. Like much of geriatric medicine, it is about doing a number of fairly simple things well.1 Geriatric assessment will help identify easily reversible problems before surgery (eg, electrolyte abnormalities, drug errors, fluid balance). Early surgery is safe and is the best way of relieving the severe pain of a hip fracture. The main driver of the best-practice tariff — Keith Willett, Professor of Orthopaedic Trauma Surgery, University of Oxford — has said: “I don’t believe the sun should set twice on a hip fracture” (personal communication). Early mobilisation with multidisciplinary care and good secondary prevention are key interventions after surgery.

In this issue of the Journal, Zeltzer and colleagues describe their investigation of the effects of orthogeriatric care in New South Wales. Their data suggest that there is unacceptable clinical variation.4 They found a statistically significant and clinically important difference in median adjusted 30-day mortality rate between 14 hospitals with an orthogeriatric service (6.2%) and 23 without (8.4%). Data from the Bureau of Health Information in NSW have also revealed important clinical variation between hospitals.5

While these data can tell us which hospitals have problems, only more detailed process data, such as the variables captured in a prospective clinical register, can help tell us why there is variation. Such data can then be used to implement change and improve care. Zeltzer et al suggest that the new Australian and New Zealand Hip Fracture Registry (http://www.anzhfr.org) will help improve hip fracture care. It is highly likely that, if the Australian states and territories funded this register and made registration a requirement for activity-based funding, benefits similar to those seen in the UK could be achieved. This would contribute to a healthier old age.6 The stroke community, through the Australian Stroke Coalition (http://australianstrokecoalition.com.au), is moving in the same direction, as care for stroke patients has remarkable similarities to care for hip fracture patients: an acute intervention that needs timely administration (thrombolysis), organised multidisciplinary care (stroke units) and good secondary prevention.

If a rich country like Australia struggles to implement effective care, what hope is there for the Asia–Pacific region? The global health challenge is enormous, with over 400 000 people dying from falls each year.7 Hip fracture rates in China are about to soar because of demographic change. The number of people aged over 80 years in China will increase from the current 8 million to some 100 million by 2050.8 It will be a medical disaster for low- and middle-income countries to adopt some aspects of hip fracture care (expensive prostheses and surgery) without the other essentials (orthogeriatric care). The global challenge is to find the right incentives, training, health care services and funding to implement affordable effective health care. Orthogeriatric care in these countries is not an impossible dream as these services depend on people, rather than expensive technology.

I recommend that the managers and clinicians in those 23 NSW hospitals without orthogeriatric services now reorganise their services so the next 5000 patients with hip fracture who arrive at their emergency departments in the next 2 years receive a higher standard of care, have a lower risk of dying, and have a higher chance of better quality of life.

The key challenge of 21st century medicine is finding and implementing affordable health care, not only in low- and middle-income countries but also in Australia.