
Self-poisoning by older Australians: a cohort study

The known: Self-poisoning is less common among older people, but the numerous medicines they often use provide a ready source of toxins. Further, multiple comorbidities may exacerbate their toxicity and hinder recovery.

The new: Most self-poisoning by older people was intentional, but the proportion of unintentional poisonings increased with age. Hospital length of stay, rates of intensive care unit admission and cardiovascular adverse effects, and mortality were higher among older patients.

The implications: As our population ages, self-poisoning by older people is likely to be an increasing problem. Although self-poisoning is associated with higher morbidity and mortality than in younger patients, the risk of a fatal outcome is low when patients are treated in specialist toxicology units.

As our population ages, self-poisoning and the associated morbidity are likely to be a growing problem. Self-poisoning is a burden on the health system and is a risk factor for subsequent suicide.1 Drug overdose is less common among older people than in younger adults,2 but is associated with higher morbidity and mortality.3-5 Distinct age differences in the nature and severity of self-poisoning have been reported.5 Stressors such as failing health, the death of a spouse, family discord, and loneliness may contribute to poisoning in older people.6 They often have several comorbidities and consequently take numerous medications, providing a ready source of toxins for self-poisoning. Further, multiple comorbidities and frailty may exacerbate the toxicity of these agents and hamper recovery from self-poisoning. Hopelessness and suicidality frequently increase with age, and depression is strongly associated with suicidality in older people. There is also a relationship between medical and psychiatric comorbidities and suicide by older people.7

While poisoning generally occurs in the context of deliberate self-harm or drug misuse, declining cognitive function can also be associated with unintentional overdose in older people.5 A recent report on self-poisoning in Australia8 provided only limited information about drug overdoses in older people, while an earlier, small study of deliberate self-poisoning by mature Australians included data only to July 1998.9

We examined the epidemiology and severity of self-poisoning by older people in a large regional centre in eastern Australia over a 26-year period, including in-hospital morbidity and mortality, and changes over time in the medications most commonly involved in self-poisoning. We compared these data with those for overdoses in a younger population.

Methods

We undertook a retrospective review of prospectively collected data for people presenting to the Hunter Area Toxicology Service (HATS) after self-poisoning during the 26-year period January 1987 – December 2012. Since 1987, HATS has provided a comprehensive 24-hour toxicology treatment service for a population of about 500 000 people. HATS currently has direct clinical responsibility for all adult poisoning patients in all hospitals in the greater Newcastle region, and provides a tertiary referral service to Maitland and the Hunter Valley.

HATS routinely records data for patients who present to hospital (even if the poisoning is uncomplicated) in a purpose-built database.10 A structured data collection form is used by HATS to prospectively capture information about patient demographics (age, sex), the drugs ingested, co-ingested substances, previous suicide attempts, whether the overdose was intentional or unintentional, management (including intensive care unit [ICU] admission), and complications of poisoning (hypotension, arrhythmias, ventilation requirement, death).11 At discharge, further information is collected, including hospital length of stay (LOS), and psychiatric and substance misuse diagnoses. Data are routinely entered into a fully relational Microsoft Access database distinct from the hospital's main medical record system.

Data for all patients aged 65 years or more who presented following self-poisoning were extracted, analysed and compared with data for patients less than 65 years of age.

Statistical analysis

Continuous variables are reported as medians and interquartile ranges (IQRs) or ranges, and dichotomous variables as percentages. The statistical significance of differences in continuous variables was assessed in Mann–Whitney U tests, and of differences in dichotomous variables in χ2 tests (with Yates correction) or Fisher exact tests. P < 0.05 was deemed statistically significant. All analyses were performed and graphics generated in GraphPad Prism 6.0h (GraphPad Software).
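As an illustrative sketch, the same comparisons can be reproduced with SciPy. The 2 × 2 counts below are taken from the Results (sex distribution by cohort); the length-of-stay samples and the small Fisher exact table are hypothetical values invented for demonstration only.

```python
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu

# Dichotomous variable: chi-squared test with Yates continuity correction.
# Counts from the Results: 344/626 women in the older cohort,
# 10 258/16 650 women in the younger cohort.
older = [344, 626 - 344]            # women, men (>= 65 years)
younger = [10258, 16650 - 10258]    # women, men (< 65 years)
chi2, p, dof, expected = chi2_contingency([older, younger], correction=True)
# p is below 0.001, matching the reported P < 0.001

# Continuous variable: Mann-Whitney U test (hypothetical LOS samples, hours).
los_older = [34, 16, 75, 46, 26, 50, 40]
los_younger = [16, 9, 25, 12, 18, 14, 10]
u_stat, p_u = mannwhitneyu(los_older, los_younger, alternative="two-sided")

# Small expected counts: Fisher exact test (hypothetical 2x2 table).
odds_ratio, p_fisher = fisher_exact([[3, 1], [1, 7]])
```

The Yates correction (`correction=True`, SciPy's default for 2 × 2 tables) slightly reduces the χ2 statistic, so the corrected P value is marginally more conservative than the uncorrected one.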

Ethics approval

The Hunter New England Human Research Ethics Committee has previously granted an exemption from formal ethics approval for analysing data from the HATS database and patient information for research purposes.

Results

There were 17 276 admissions for self-poisoning over the 26-year period 1987–2012; 626 patients (3.6%) were aged 65 years or more and 16 650 (96.4%) were less than 65 years old. The older cohort included 344 women (55%), the younger group 10 258 women (62%; P < 0.001).

Changes in the epidemiology of the admissions of older patients over the 26-year period were analysed by dividing it into one 6-year and four 5-year segments. The proportion of patients admitted to hospital for self-poisoning who were 65 or older (3–4%) was relatively constant across the entire period. The median age of the older patients was 73 years (IQR, 68–79 years); that of the younger patients was 31 years (IQR, 23–41 years). There was a steady decline in the number of admissions for overdoses with increasing age (Box 1). Five hundred admissions in the older cohort (80%) and 14 837 in the younger cohort (89%) involved deliberate self-poisoning (P < 0.001). While the absolute number of self-poisonings decreased with age, the proportion of unintentional and iatrogenic poisoning admissions among patients over 65 increased (65–74 years, 15%; 75–84 years, 25%; 85–97 years, 34%; P < 0.001).

The median hospital LOS for the older patients across the entire period was 34 hours (IQR, 16–75 h); for the younger cohort, 16 hours (IQR, 9–25 h; P < 0.001). There was a progressive decline in LOS for the older cohort over the 26 years, from 46 hours (IQR, 23–86 h) in 1987–1992 to 26 hours (IQR, 14–35 h) in 2008–2012; in the younger cohort LOS was relatively constant across time (Box 2, A). The fall in LOS for older patients did not appear to be related to any change in self-poisoning rates with a particular class of drugs (data not shown).

In the older cohort, 133 people (21.2%) were admitted to an ICU, compared with 1976 of the younger cohort (11.9%; P < 0.001). The proportion of older patients admitted to an ICU declined over the 26-year period from a peak of 26% in 1993–1997 to 14% in 2008–2012; there was also a decline for the younger group (Box 2, B). The decline in the proportion of older patients who were ventilated was similar to that for those admitted to an ICU (Box 2, C). There were 24 deaths (3.8%) in the older cohort and 93 (0.6%) in the younger cohort (P < 0.001). Mortality decreased over time in the older cohort, from 8% to 1%, but remained relatively constant (0–1%) in the younger cohort (Box 2, D). The most common drug/toxin groups involved in fatal self-poisoning in the older cohort were opioids (5 of 24 deaths) and organophosphates (3 of 24; Appendix 1).

The incidence of cardiovascular adverse effects was higher in the older cohort. Hypotension was present in 65 older patients (10.4%) and 813 younger patients (4.9%); 59 patients aged 65 or more (9.4%) had arrhythmias, compared with 217 patients under 65 (1.3%). There were no differences between the two groups in the incidence of delirium, serotonin toxicity, or seizures.

There were 356 single-drug ingestions (56.9% of admissions) in the older cohort, compared with 7009 (42.1%) in the younger cohort (P < 0.001), and 28 ingestions of multiple drugs (ie, more than five; 4.5%) in the older cohort compared with 250 (1.5%) in the younger cohort (P < 0.001; Appendix 2). Benzodiazepines were the drug class most commonly ingested by older patients (24.2%), followed by paracetamol (8.1%) and alcohol (7.3%); in contrast, alcohol was the most common drug ingested by younger patients (16.2%), followed by benzodiazepines (15.6%) and paracetamol (14.0%; Box 3). The proportion of benzodiazepine ingestions among older patients decreased over the 26-year period from a high of 35% in 1993–1997 to 15% in 2008–2012 (Box 4, A). The proportion of cardiovascular drug ingestions (Box 4, B) increased threefold, from 4% to 11%, with about one-third of poisonings unintentional or iatrogenic. In contrast, only 2% of toxic benzodiazepine ingestions were unintentional or iatrogenic. The proportions of tricyclic antidepressant and first generation antipsychotic ingestions fell across the study period, with a corresponding increase in those of newer antidepressants and second generation antipsychotics (Box 4, C). The overall proportion of ingestions involving antidepressants or antipsychotics was unchanged over the study period, accounting for 12% of admissions. The proportion of poisonings with analgesic drugs (paracetamol, opioids or salicylates) increased by about 50% (from 10% to 16%); paracetamol accounted for 60% of toxic analgesic drug ingestions (Box 4, D), but there was a much greater increase in the proportion of poisonings with morphine and oxycodone. Only four admissions of older patients (0.6%) involved recreational drugs, compared with 1306 admissions (7.8%) in the younger cohort (P < 0.001).

A history of previous suicide attempts, psychiatric illness, hospital admission for mental health problems, or drug or alcohol misuse was respectively identified for 198 (32%), 241 (38%), 174 (28%) and 147 (23%) admitted older patients. Each of these proportions was significantly smaller (P < 0.001) than for patients under 65 (Box 5).

Discussion

For people aged 65 years or more admitted to hospital for self-poisoning, the average LOS was twice as long, admission to an ICU more likely, the incidence of hypotension and arrhythmia significantly higher, and mortality greater than for younger patients. Hospital LOS for older patients steadily declined over the 26-year period, and this decrease was associated with a similar decline in the rates of ICU admission, ventilation, and mortality. This may reflect changes in management over time, with an increasing emphasis on reducing LOS, as well as improvements in services that aid older patients during the discharge process. A reduction in the proportions of self-poisonings with more toxic agents, such as tricyclic antidepressants and conventional antipsychotics, may have also contributed to the decline in LOS, although the number of admissions of older patients for self-poisoning with these drugs was small. Further, the reduced number of more toxic ingestions by the younger cohort did not affect their mean LOS over the study period (data not shown).

The proportion of patients admitted to hospital for self-poisoning who were at least 65 years old in our study (3–4%) was lower than the proportion of older people in the Australian population (11–14%) during the study period. Lower rates of admission of people over 65 years of age for poisoning with recreational drugs and the lower prevalence of psychiatric co-factors and alcohol and drug misuse probably explain this difference. Although the proportion of the population who were 65 or older increased during the study period, the proportion of self-poisonings involving older people remained relatively constant. This is reassuring, but underscores the importance of specialist toxicology services.

Among those aged 65 or more, the proportion of unintentional or iatrogenic poisoning admissions increased with age; the proportion was more than twice as great among the oldest patients as in the 65–74-year age group. In a profile of calls to a large American poisons information centre, therapeutic errors resulting in self-poisoning were increasingly prevalent among older people, rising from 14.5% in 50–54-year-old people to 25.3% in those aged 70 years or more.2 Psychiatric illness is associated with suicide at all ages,7 and 38% of older patients in our study had a history of psychiatric illness. Other risk factors for suicide include depression, alcohol misuse, prior suicide attempts, higher age, being male, living alone, and bereavement (especially among men).12 While in our study a history of psychiatric illness, a prior suicide attempt, and alcohol or drug misuse were all more frequent in patients under 65, one-third of older patients also had a history of attempted suicide, and almost one-quarter reported alcohol or drug misuse. Social isolation and loneliness, family discord, and financial trouble are also risk factors for suicide.6

In our study, opioids were the drugs most commonly associated with fatal self-poisoning by older patients. The rising proportion of opioid overdoses in this group is worrying, and may reflect the increasing use of these agents, particularly for the treatment of non-malignant pain. While paracetamol was the second most commonly ingested drug in poisonings, the proportion has remained constant over time, and there were no fatalities, probably because a highly effective antidote, N-acetylcysteine, is available.13

Benzodiazepines were the drugs most frequently implicated in self-poisonings by older patients in our study, consistent with other reports. A Canadian study of 2079 people aged 65 or more found that benzodiazepines, opioids, other analgesics and antipyretics, antidepressants, and sedative hypnotics were the drug classes most frequently used in suicide attempts.5 In a Swedish study, a benzodiazepine was implicated in 39% of drug poisoning suicides by older people, and this proportion was rising despite a reduction in prescription sales.14 Both the prescription and ingestion of benzodiazepines have been reported to be associated with self-poisoning by older Australians.9 Benzodiazepines may exacerbate undiagnosed depression and impair impulse control in some individuals, leading to suicide attempts. The decline in the proportion of admissions for self-poisoning with benzodiazepines over the study period is therefore reassuring.

Self-poisoning with cardiovascular drugs by older people increased threefold over the study period, and probably contributed to the higher incidence of hypotension than in the younger cohort. In an American study of calls to a poisons information centre, there was a relatively high number of self-poisonings with β-blockers and calcium channel antagonists by older callers.2

The number of drugs taken by older patients appears to have a bimodal distribution, with the proportions of admissions for ingesting one drug or more than five drugs both being higher than among younger patients. The higher frequency of toxic ingestions of more than five drugs may reflect the increased use of Webster-pak and dosette boxes by older patients.

The data in our study have inherent limitations. The HATS database does not capture the number of people who died outside hospital, nor those with less severe poisonings; that is, people who visited their primary care physician rather than presenting to an emergency department. Other poisoning studies in Newcastle have also not included patients treated only by their local medical officer or in private hospitals.15,16 Further, despite the use of a prospective data collection form, retrospective review of medical records is often required to complement prospectively collected data. However, this is rarely required for the minimum dataset we analysed; almost all data were recorded at admission. Finally, the HATS database does not capture medical comorbidities, so that we were unable to correlate these with the outcomes we reported. The key strengths of our study include the fact that the longitudinal data were gathered over an extended period, and that core data fields were consistently recorded.

Education strategies for preventing poisoning have traditionally focused on children, but the morbidity and mortality for this age group is extraordinarily low.2 Although less common in older people, self-poisoning can be a highly significant clinical event. Suicidal intent is more common, as is a lethal outcome. Some prevention efforts may be better directed to protecting our expanding population of older citizens. The importance of potentially remediable factors, such as depression and rates of benzodiazepine prescribing, should not be overlooked. The overall low mortality among older people presenting to hospital after self-poisoning reflects the standard of care received by these patients.

In summary, self-poisoning by older people is likely to be an increasing problem as our population ages. Self-poisoning by older people is associated with higher morbidity and mortality than in younger patients, and unintentional self-poisoning is also more common. The steady decrease in LOS over the 26-year period and the declines in the rates of ICU admission and death are encouraging. Despite the higher overall rate of completed suicide by older people, our data indicate that the risk of a fatal outcome following self-poisoning is low when the patient is treated in a specialist toxicology unit.

Box 1 –
Number of admissions for self-poisoning, greater Newcastle region, 1987–2012, by 5-year age bands*


* Age group labels indicate the starting age for each band; eg, 15 years = 15 to less than 20 years of age.

Box 2 –
Median hospital length of stay (A), proportion of admissions to intensive care units (B), proportion of patients requiring mechanical ventilation (C), and in-hospital mortality (D) for patients admitted to hospital for self-poisoning, greater Newcastle region, 1987–2012

Box 3 –
Types of drugs most frequently ingested by patients admitted to hospital for self-poisoning, greater Newcastle region, 1987–2012, by age cohort*

Drug class/name | Patients ≥ 65 years | Patients < 65 years
--------------- | ------------------- | -------------------
Total number of patients | 626 | 16 650
Total number of ingested substances | 1198 | 33 205
Benzodiazepines | 290 (24.2%) | 5180 (15.6%)
Paracetamol | 97 (8.1%) | 4633 (14.0%)
Opioids | 37 (3.1%) | 1221 (3.7%)
Salicylates | 24 (2.0%) | 335 (1.0%)
Alcohol | 87 (7.3%) | 5374 (16.2%)
Tricyclic antidepressants | 54 (4.5%) | 1332 (4.0%)
Selective serotonin re-uptake inhibitors | 33 (2.8%) | 1650 (5.0%)
Serotonin/noradrenaline re-uptake inhibitors | 13 (1.1%) | 767 (2.3%)
Antidepressants (other) | 15 (1.3%) | 428 (1.3%)
Angiotensin 2 receptor blockers/angiotensin converting enzyme inhibitors | 42 (3.5%) | 178 (0.5%)
β-Blockers | 30 (2.5%) | 259 (0.8%)
Calcium channel blockers | 28 (2.3%) | 111 (0.3%)
Digitalis glycosides | 14 (1.2%) | 20 (0.1%)
Vasodilators | 13 (1.1%) | 154 (0.5%)
Anticonvulsants | 39 (3.3%) | 1475 (4.5%)
Antipsychotics (typical) | 35 (2.9%) | 1200 (3.6%)
Antipsychotics (atypical) | 22 (1.9%) | 1673 (5.0%)
Lithium | 22 (1.8%) | 228 (0.7%)
Antihistamines | 16 (1.3%) | 727 (2.2%)
Proton pump inhibitors | 15 (1.3%) | 129 (0.4%)
Statins | 14 (1.2%) | 50 (0.2%)
Non-steroidal anti-inflammatory drugs | 13 (1.1%) | 1091 (3.3%)
Other drugs | 85 (7.1%) | 1356 (4.1%)
Nitrates | 14 (1.2%) | 22 (0.1%)
Other non-therapeutic substances | 33 (2.8%) | 516 (1.6%)


* Three drug groups frequently implicated in self-poisoning by people under 65 years of age were rarely involved in self-poisoning by older patients: amphetamines (1.9% of admissions of people under 65), antibiotics (1.1%), and anticholinergic agents (1%).

Box 4 –
Major drug classes implicated in admissions for self-poisoning of people aged 65 years or more, greater Newcastle region, 1987–2012. (A) Alcohol and benzodiazepines; (B) cardiovascular drugs; (C) antidepressant and antipsychotic drugs; (D) analgesics

Box 5 –
The proportions of patients admitted to hospital for self-poisoning, greater Newcastle region, 1987–2012, with a history of previous suicide attempt, psychiatric illness, admission to hospital for a mental health problem, or substance misuse, by age cohort

Management of adverse events related to new cancer immunotherapy (immune checkpoint inhibitors)

Important therapeutic advances have been made in the field of cancer immunotherapy in recent years. In particular, immune checkpoint inhibitors (ICIs) have revolutionised the treatment landscape of advanced cancer over the past 5 years, with 15–20% of patients achieving long term disease control beyond 5 years.1 ICIs are monoclonal antibodies which block cell surface molecules involved in the regulation of T cell activation, including cytotoxic T lymphocyte antigen 4 (CTLA-4), programmed cell death protein 1 (PD-1) and its ligand (PD-L1). In normal homoeostasis, these inhibitory molecules are involved in preventing excessive inflammation and autoimmunity.2 However, in the tumour microenvironment, these molecules are overexpressed and promote immune tolerance rather than tumour destruction.3 Blockade of these molecules can restore the appropriate anti-tumour response and potentially improve patient survival (Box 1). Other inhibitory and activating molecules are also involved in balancing T cell regulation, some of which are undergoing further research as therapeutic targets.2-4

ICIs with Pharmaceutical Benefits Scheme funding for use in Australia currently include ipilimumab (an anti-CTLA-4 antibody), nivolumab and pembrolizumab (anti-PD-1 antibodies) for advanced malignant melanoma. Besides these, a number of other ICIs are undergoing evaluation in clinical trials across Australia and globally. These include durvalumab, atezolizumab and avelumab (anti-PD-L1 antibodies) and tremelimumab (an anti-CTLA-4 antibody). ICIs have been most utilised in the management of advanced melanoma, non-small cell lung cancer and metastatic renal cell carcinoma. However, they have shown some benefit in a range of other malignancies including Hodgkin lymphoma, gastric cancer and prostate cancer.5 Therefore, the number of patients receiving ICIs is likely to increase in the coming years.

Immune-related adverse events (irAEs) are a direct consequence of impaired self-tolerance from loss of T cell inhibition and altered immune regulation. They encompass a distinctive range of autoimmune toxicities, which differ significantly from the side effects of cytotoxic chemotherapy. Any organ can potentially be involved, with dermatologic, gastrointestinal, hepatic or endocrine toxicity occurring most frequently.5,6 The incidence of the most common irAEs is presented in Box 2.7-14

While prescription of ICIs may be limited to oncology specialists, knowledge of ICIs and their toxicities is relevant to a wide range of health care professionals who may encounter patients experiencing such complications.

This article describes the most common and clinically important irAEs that occur with ICIs and summarises the available evidence on management.

Literature search

A PubMed search was performed for articles published until 30 April 2016 using key terms “immune checkpoint inhibitor”, “CTLA-4”, “PD-1”, “PD-L1”, “ipilimumab”, “pembrolizumab” and “nivolumab”, and “immune-related adverse event”, “toxicity” and “side effects”. Product information for ipilimumab, pembrolizumab and nivolumab was accessed online through the manufacturers’ websites.

General management of irAEs

Most irAEs are generally manageable if identified and treated promptly. However, some patients can experience life-threatening toxicity leading to morbidity and mortality. The importance of patient education in the management of irAEs cannot be overemphasised. Detailed education and written information should be provided to all patients along with a medical alert card highlighting the name of the drug and who to contact in case of emergency. Patients should be advised to promptly report any potential toxicity as delay in the initiation of treatment of irAEs can lead to serious consequences. Patients should be managed at, or in close consultation with, centres familiar with the use of ICIs and treatment of irAEs with clearly defined management algorithms, especially to support junior medical staff who might not be familiar with these drugs. Specialist oncology nurses should be involved in patient education and maintain regular contact to monitor for side effects. Safety checklists should also be followed during routine medical oncology clinic reviews, bearing in mind the toxicity profile of these agents.

The time to onset of irAEs varies depending on the ICI and the organ affected (Box 3). With ipilimumab, most irAEs occur during the initial induction phase, although toxicity can develop even after treatment has been completed.5 Dermatological irAEs typically emerge first, followed by gastrointestinal, hepatic and then endocrine side effects.6,15,16 The median time to onset tends to be later with anti-PD-1 agents, and the range of time over which irAEs occur is much longer than for ipilimumab.17,18

Algorithms to guide management of common irAEs related to ipilimumab were developed by the manufacturer. These algorithms can be accessed online.19 They have been widely adopted and similar algorithms have been applied to other ICIs;17,20 however, no prospective trials have been performed to determine optimal management regimens. Some institutions, including the Fiona Stanley Hospital in Perth (Immune-related adverse events associated with use of ICIs: clinical guideline, unpublished internal document), have developed their own multidisciplinary guidelines to ensure a standardised approach to treatment of irAEs.21

Treatment depends on irAE type and severity, with the grading system outlined in Common Terminology Criteria for Adverse Events version 4.022 used to accurately and objectively gauge severity (Box 4).

Oral or intravenous (IV) corticosteroids are frequently used in the management of irAEs, depending upon the grade of toxicity. Most irAEs can be managed with early detection and prompt initiation of high dose steroids. Steroid tapering should be gradual over 2–4 weeks once symptoms are improving. Before resuming ICIs, irAE severity should return to grade 1 or have resolved entirely. Other immunosuppressive or immunomodulatory therapies may be indicated for severe or steroid-refractory cases. Prophylactic antibiotics to prevent opportunistic infections should be considered during immunosuppression, including prolonged systemic steroid therapy.5,21

ICIs are usually withheld while irAEs are treated, and permanently discontinued in severe cases. For mild or moderate irAEs, the decision to reintroduce ICIs requires careful consideration of the risks and benefits. The ongoing need for additional immunosuppression has been considered a contraindication to reintroducing ICIs.15,17,21 However, the actual risk of developing recurrence of irAEs in this situation is not known. It may be reasonable to consider further cautious use of ICIs if irAEs are controlled with low dose immunosuppression, depending on the balance of risks and benefits for individual patients.
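The general pattern described above (continue and monitor at low grades, withhold for moderate toxicity, permanently discontinue with high dose steroids for severe toxicity, and escalate immunosuppression when steroid-refractory) can be sketched as a simple decision function. This is an illustrative simplification only, not a clinical tool; the function name and action strings are our own, and real management follows the published algorithms and specialist input.

```python
def ici_action(ctcae_grade: int, steroid_refractory: bool = False) -> str:
    """Simplified sketch of the grade-based irAE approach described in the
    text (illustrative only -- not clinical guidance)."""
    if ctcae_grade <= 1:
        return "continue ICI; monitor"
    if ctcae_grade == 2:
        return "withhold ICI; consider corticosteroids; resume once grade <= 1"
    # Grades 3-4: severe or life-threatening toxicity.
    if steroid_refractory:
        return "discontinue ICI; escalate immunosuppression with specialist input"
    return "discontinue ICI; high dose corticosteroids, taper over 2-4 weeks"
```

Real algorithms are organ-specific (Box 5) and the CTCAE v4.0 grade definitions differ by toxicity, so a single threshold function like this only captures the broad shape of the approach.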

Dermatological toxicity

Skin irAEs generally develop within a few weeks of commencing treatment, although delayed onset of rash has also been reported.23 Pruritus and rash are commonly reported with all ICIs (Box 2).24,25 The typical rash is maculopapular and mild, although severe cutaneous reactions such as Stevens–Johnson syndrome or toxic epidermal necrolysis, drug rash with eosinophilia and systemic symptoms, and pyoderma gangrenosum-like ulceration have been reported with ipilimumab.24,26 The management of rash is outlined in Box 5.

Vitiligo has been reported in patients treated for melanoma but not in other cancers.24,25 Vitiligo is considered to be potentially predictive of durable response to ICIs.27

Mild, localised pruritus usually responds to the combination of oral antihistamines and topical corticosteroids. Intense or widespread pruritus may require systemic corticosteroids (prednisolone 0.5–1 mg/kg/day). Mirtazapine or γ-aminobutyric acid agonists, such as gabapentin or pregabalin, have been suggested for intractable pruritus.24

Gastrointestinal toxicity

Diarrhoea occurs frequently with ICIs, with the highest incidence reported with combination therapy with ipilimumab and nivolumab (Box 2).8 Diarrhoea may occur with colitis, with signs and symptoms including blood or mucus in the stool and abdominal pain.5 Cases of small bowel perforation, ischaemic gastritis and pancreatitis have also been reported.26

Stool microscopy and culture should be performed to exclude infectious causes of diarrhoea.28 Computed tomography (CT) scans can detect features of colitis,29 and lower gastrointestinal endoscopy with colonic biopsy may help confirm or exclude colitis.30 Severe colitis is uncommon but can be complicated by large-bowel obstruction or perforation. Therefore, urgent and thorough assessment of diarrhoea is required for patients receiving ICIs.

Suggested management of diarrhoea and colitis is described in Box 5. Infliximab (5 mg/kg IV) can be effective in steroid-refractory cases and additional doses may be given at 2 weeks and 6 weeks if symptoms persist or recur.28,30 Tacrolimus or mycophenolate mofetil have also been suggested for refractory disease.5,30 Surgical intervention might be required in selected cases.

Hepatotoxicity

Immune-mediated hepatitis causing abnormal liver function tests occurs quite frequently with ICIs, particularly with combination therapy (Box 2). Fulminant hepatitis causing liver failure is rare.30 Raised levels of alanine aminotransferase or aspartate aminotransferase, with or without raised bilirubin levels, may be detected in asymptomatic patients. Therefore, routine liver function testing should precede each ICI dose. Other causes of liver dysfunction should be considered, particularly viral hepatitis, hepatotoxic drugs, alcohol or liver metastases. The reported radiological features by magnetic resonance imaging, CT scan or ultrasonography are varied, but imaging can be helpful to exclude other pathology.31 Liver biopsy may also be necessary to exclude alternative diagnoses.30

Management of hepatitis is outlined in Box 5. Severe, steroid-refractory hepatitis may respond to additional immunosuppressive therapy (eg, mycophenolate mofetil 500–1000 mg twice daily).30 Successful use of anti-thymocyte globulin has been reported in at least two cases.32,33 Infliximab is not recommended owing to potential hepatotoxicity.6

Endocrinopathies

The most frequent endocrinopathies related to ICIs are thyroid dysfunction and hypophysitis (Box 2). Rare cases of primary adrenal insufficiency and type 1 diabetes mellitus have also been reported. In contrast to other irAEs, endocrinopathies are usually irreversible and require long term hormone replacement.

Transient, asymptomatic elevation of thyroid-stimulating hormone occurs in some patients and may progress to hypothyroidism. Hypothyroidism is more commonly associated with anti-PD-1 therapy compared with ipilimumab.7,9 Hyperthyroidism is less common and may be due to transient thyroiditis preceding hypothyroidism. Thyroid dysfunction management is summarised in Box 5.

While sporadic autoimmune hypophysitis is very rare, hypophysitis with hypopituitarism is an important endocrine complication of ICI use, particularly ipilimumab.34 Magnetic resonance imaging scans demonstrate typical changes of diffuse pituitary enlargement, and hormone assessment may reveal hypoadrenalism, hypothyroidism and hypogonadism.35 Asymptomatic hypophysitis requires hormone replacement as directed by an endocrinologist. However, if symptoms such as headache or visual disturbance are present, high dose steroid treatment is recommended (prednisolone 1–2 mg/kg/day or IV equivalent) tapered over 2–4 weeks once improving, with continuation of long term hormone replacement required in most cases.35

Primary or secondary adrenal insufficiency can present as an adrenal crisis.6 This is an emergency and requires immediate treatment with IV corticosteroids (eg, hydrocortisone or methylprednisolone), along with IV fluids and supportive measures.6,19 Short term high dose steroids are continued until stable, and then weaned down to a physiological replacement dose.

Pulmonary toxicity

Pneumonitis is uncommon with ICIs (Box 2), although the incidence is higher with anti-PD-1 therapy for lung cancer (3–5%) than for melanoma (1–2%).5 Symptoms include dry cough and dyspnoea. CT imaging shows ground glass or nodular lung infiltrates, and pulmonary function tests can be useful.36 Infection must be excluded, which may require bronchoscopy with bronchoalveolar lavage. Pneumonitis management is described in Box 5.

Rheumatological toxicity

Arthralgia and myalgia are relatively frequent with all ICIs, but are predominantly mild or moderately severe (Box 2). Cases of polyarticular inflammatory arthritis, myositis and vasculitis have been reported, but these are rare.5,26 Mild arthralgia or myalgia can be managed with simple analgesia (paracetamol or non-steroidal anti-inflammatory drugs), while moderate symptoms may require medium dose steroids (prednisolone 10–20 mg/day).5 Severe symptoms need high dose steroids (prednisolone 1 mg/kg/day) with rheumatology consultation to consider additional immunosuppressive therapy.

Neurological toxicity

Neurological irAEs are uncommon (Box 2), although high grade events have been reported rarely, including myasthenia gravis, chronic inflammatory demyelinating polyneuropathy, transverse myelitis and Guillain-Barré syndrome.26,37,38 ICIs should be withheld and neurological consultation obtained in such cases. Corticosteroids have been used effectively for ICI-related myasthenia gravis and Guillain-Barré syndrome, although intravenous immunoglobulin or plasmapheresis may be necessary in steroid-refractory cases.5,37

Peripheral neuropathies have been reported, which may be sensory, motor or mixed.6,19 These are often transient and usually mild. Management of peripheral neuropathy is described in Box 5.

Cases of aseptic meningitis, cranial nerve palsy and posterior reversible encephalopathy syndrome have also been described.36

Nephrotoxicity

Acute kidney injury (AKI) with elevated serum creatinine level is uncommon with ICIs (Box 2). Cases of interstitial nephritis, granulomatous nephritis and lupus-like glomerulonephritis have been reported.36,39 ICIs can be continued with weekly creatinine monitoring for grade 1 AKI (creatinine levels 1.5 to 2 times baseline). For grade 2 AKI (creatinine 2 to 3 times baseline), ICI therapy should be withheld and steroids commenced (prednisolone 0.5–1 mg/kg/day). Grade 3 (creatinine levels > 3 times baseline) or grade 4 (life-threatening consequences or dialysis indicated) AKI requires high dose steroids (prednisolone 1–2 mg/kg/day or IV equivalent) and discontinuation of ICI therapy. Nephrology consultation and renal biopsy may be necessary to guide treatment.
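The creatinine-based cut-offs above reduce to a simple threshold lookup. A minimal sketch in Python (the function name and the handling of exact boundary values are my own; grade 4 is defined clinically, not by creatinine alone):

```python
def aki_grade(creatinine, baseline):
    """Grade acute kidney injury from the rise in serum creatinine over baseline.

    Thresholds as described in the text: grade 1 = 1.5 to 2 times baseline,
    grade 2 = 2 to 3 times, grade 3 = more than 3 times. Returns 0 if the
    creatinine level does not meet the grade 1 threshold.
    """
    ratio = creatinine / baseline
    if ratio > 3:
        return 3
    if ratio > 2:
        return 2
    if ratio >= 1.5:
        return 1
    return 0
```

For example, a creatinine of 180 μmol/L against a baseline of 100 μmol/L would fall in grade 1 (continue ICI with weekly monitoring), whereas 250 μmol/L would be grade 2 (withhold ICI, commence steroids).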

Discussion

ICIs have improved the prognosis of patients with advanced cancer, with unprecedented survival rates in some cancers, such as advanced melanoma. However, the associated burden of toxicity is substantial. The incidence and spectrum of irAEs seen with expanded use appear to be similar to those recorded in the initial clinical trials.40-42 Some earlier studies suggested that the development of irAEs may be associated with better clinical response.43 However, this has not been a consistent finding, as immune-related toxicity and a positive clinical response can occur independently.3,5

Additional costs to the health care system are incurred from treatment-related toxicity and unplanned hospital admissions. Data on hospital admission rates related to irAEs are still limited; however, recent data from patients receiving combination anti-CTLA-4 and anti-PD-1 therapy showed that almost half required hospital admission for management of irAEs.44 Hospital admissions may be prolonged in some cases, although data are not available on the median duration of such admissions.

It may be possible to reduce these additional costs through early recognition and treatment of irAEs. Biomarkers to stratify irAE risk before treatment, or to predict irAEs before clinical disease emerges on treatment, would be valuable but are currently lacking. In the absence of such biomarkers, careful monitoring of clinical and laboratory signs of emerging irAEs is needed.

While algorithms similar to those outlined in this article are widely used, evidence from prospective studies would be valuable to optimise management. Further research is also required into the use of ICIs in patients with existing autoimmune disorders, as such patients have traditionally been excluded from clinical trials. Evidence from case reports and case series suggests that while some patients do tolerate ICIs, others experience flares of their underlying disease.5,45,46

The decision whether to resume ICI therapy after an irAE can be difficult, as the risk of further serious toxicity must be weighed against the potential for sustained remission. Most patients have a poor prognosis if untreated and are often willing to accept the risk of treatment-related adverse events.47 A high proportion of patients are unable to complete therapy because of irAEs, particularly with combination CTLA-4 and PD-1 inhibitor therapy.44 However, there is evidence that the overall survival of patients who discontinue ICIs because of irAEs is no worse than that of patients who complete therapy.48

Use of anti-PD-1 therapy has been reported in patients with previous severe irAEs related to ipilimumab.46,49 Recurrence of the previous irAE appears to be rare in this situation; however, new irAEs may still occur.

A trial of planned sequential anti-CTLA-4 therapy followed by anti-PD-1 therapy, and the reverse sequence, has also been reported.50 Nivolumab followed by ipilimumab produced better responses but greater toxicity than the reverse sequence. In this trial, toxicity with one ICI did not predict the risk of toxicity with the subsequent ICI. The frequency and kinetics of irAEs with sequential ICI therapy may differ from those with single-agent therapy.51

Early symptoms of toxicity can be non-specific; therefore, patients may initially present to general practitioners or to local hospitals that have less familiarity with ICIs and irAEs. Even within tertiary care hospitals, doctors practising outside of medical oncology often lack awareness of these drugs and their potential toxicities. However, owing to the diverse manifestations of irAEs, many different specialties will encounter patients with such problems. Strategies that prompt health care providers to consider the possibility of irAEs, such as medical alert cards or electronic alerts, should be part of routine management, along with education of patients and their carers to recognise signs of toxicity.21

Individual institutions should consider developing modified guidelines that incorporate local practices and expertise. Improving general awareness of ICI toxicities and establishing a standardised, multidisciplinary approach to the treatment of irAEs may reduce the associated morbidity and mortality. Anticipating and preparing for rare but potentially serious complications is important. For example, access to high cost drugs for off-label indications may be difficult in some settings; therefore, forward planning with the hospital’s drug and therapeutics committee can expedite access to such drugs for urgent treatment of irAEs.

The development of ICIs has certainly strengthened the arsenal available to medical oncologists in the fight against several advanced cancers. Optimising management of irAEs will limit the collateral damage and hopefully further improve disease outcomes and quality of life for patients.

Box 1 –
Mechanism of action of immune checkpoint inhibitors


A. Activation of tumour-specific cytotoxic T cells first requires interaction with the appropriate antigen. The tumour antigen is displayed by the major histocompatibility complex (MHC) on the surface of antigen-presenting cells (APC), which can then bind to the T cell receptor (TCR). Full T cell activation also requires a co-stimulatory signal, which is provided by B7 molecules on the APC and binding to CD28 molecules on the T cell. B. Following T cell activation, inhibitory receptors such as cytotoxic T lymphocyte antigen 4 (CTLA-4) and programmed cell death protein 1 (PD-1) are expressed on the T cell surface to regulate the immune response. CTLA-4 binds to B7 molecules with higher affinity than CD28 molecules and transmits an inhibitory signal to the T cell, leading to inactivation. PD-1 binds to its ligand PD-L1, which may be expressed by tumour cells. An inhibitory signal is also transmitted via PD-1, which causes T cell inactivation and prevents the immune response against the tumour. C. These inhibitory signals can be blocked using therapeutic monoclonal antibodies (known as immune checkpoint inhibitors). Anti-CTLA-4 antibodies prevent B7–CTLA-4 binding, and allow the co-stimulatory signal via B7–CD28 binding to be restored. Anti-PD-1 and anti-PD-L1 antibodies disrupt the inhibitory signal to the T cell via PD-1. The result in all cases is restoration of T cell activation and the tumour-specific immune response.

Box 2 –
Incidence of immune-related adverse events associated with immune checkpoint inhibitors7-14

| Immune-related adverse event | Ipilimumab7-9 | Ipilimumab–nivolumab8,9 | Nivolumab9-12 | Pembrolizumab7,13,14 |
|---|---|---|---|---|
| Dermatological | | | | |
| Rash | 15–33% (0–2%) | 28–41% (3–5%) | 4–26% (0–1%) | 10–15% (< 1%) |
| Pruritus | 25–35% (0–2%) | 33–35% (1–2%) | 6–19% (0–1%) | 11–14% (0) |
| Vitiligo | 2–9% (0) | 7–11% (0) | 7–11% (0) | 9–11% (0) |
| Gastrointestinal | | | | |
| Diarrhoea | 23–37% (3–11%) | 44–45% (9–11%) | 8–19% (0–3%) | 8–17% (1–3%) |
| Colitis | 8–13% (7–9%) | 12–23% (7–8%) | 1% (< 1%) | 1–4% (1–3%) |
| Hepatitis | 1–4% (0–2%) | 22–30% (11–19%) | 1–6% (0–3%) | 1–3% (0–2%) |
| Endocrine | | | | |
| Hypothyroidism | 2–15% (0) | 15–16% (< 1%) | 4–9% (0) | 8–10% (< 1%) |
| Hyperthyroidism | 1–2% (< 1%) | 10% (1%) | 2–4% (< 1%) | 2–4% (0) |
| Hypophysitis | 2–7% (2–4%) | 8–12% (2%) | < 1% (< 1%) | < 1% (< 1%) |
| Pneumonitis | 0–4% (0–2%) | 6–11% (1–2%) | 1–5% (0–3%) | 0–5% (0–2%) |
| Rheumatological | | | | |
| Myalgia | 2–13% (< 1%) | 10% (0) | 2–6% (0–1%) | 2–7% (< 1%) |
| Arthralgia | 5–9% (< 1%) | 11% (< 1%) | 5–8% (0) | 9–12% (< 1%) |
| Arthritis | 0 (0) | nr (nr) | nr (nr) | 0–2% (0) |
| Neurological | | | | |
| Headache | 2–8% (< 1%) | 3–10% (0–1%) | 4–7% (0) | 2–3% (0) |
| Paraesthesia | 1% (< 1%) | nr (nr) | 2% (0) | < 1% (0) |
| Renal | 0–3% (< 1%) | 3–6% (1–2%) | 1–2% (< 1%) | < 1% (0) |
| Haematological | | | | |
| Anaemia | < 1% (< 1%) | nr (nr) | 2–4% (1%) | 1–3% (0–1%) |

All-grade incidence is shown, with grade 3 or 4 incidence in parentheses. nr = not recorded.

Box 3 –
Time to onset of immune-related adverse events with different immune checkpoint inhibitors

Box 4 –
Common Terminology Criteria for Adverse Events version 4.0 (CTCAE)*23

Grade

Definition


1

Mild; or
Asymptomatic or mild symptoms; or
Clinical or diagnostic observations only; or
Intervention not indicated

2

Moderate; or
Minimal, local or non-invasive intervention indicated; or
Limiting age-appropriate instrumental activities of daily living†

3

Severe or medically significant but not immediately life threatening; or
Hospitalisation or prolongation of hospitalisation indicated; or
Disabling; or
Limiting self-care activities of daily living‡

4

Life-threatening consequences; or
Urgent intervention indicated

5

Death related to adverse event


* CTCAE displays grades 1 through 5 with unique clinical descriptions of severity for each individual adverse event based on this general guideline. † Instrumental activities of daily living refer to preparing meals, shopping for groceries or clothes, using the telephone, managing money, etc. ‡ Self-care activities of daily living refer to bathing, dressing and undressing, feeding self, using the toilet, taking medications, and not bedridden.

Box 5 –
Management of the most common immune-related adverse events (irAEs)

irAE

Grade 1 (G1, mild)

Grade 2 (G2, moderate)

Grade 3 (G3, severe)

Grade 4 (G4, life threatening)


Rash

< 10% BSA:

  • topical corticosteroids
  • continue ICI

10–30% BSA:

  • as per G1
  • if persists or worsens, commence steroids (prednisolone 0.5–1 mg/kg/day); once improving, wean ≥ 1 month
  • delay ICI until G1 and steroid dose < 10 mg
  • if persists > 2 weeks, consider dermatology consultation and skin biopsy

> 30% BSA:

  • skin biopsy and dermatology consultation
  • commence systemic steroids (prednisolone 1 mg/kg/day or IV equivalent); once improving, wean ≥ 1 month
  • withhold ICI until G1

Life-threatening consequences; urgent intervention indicated:

  • systemic steroids (prednisolone 1–2 mg/kg/day or IV equivalent)
  • permanently cease ICI

Diarrhoea or colitis

< 4 bowel actions/day over baseline:

  • symptomatic treatment (oral fluid and electrolyte replacement, anti-diarrhoeal agents)
  • continue ICI

4–6 bowel actions/day over baseline; abdominal pain, mucus or blood in stool:

  • as per G1 if well
  • withhold ICI
  • if persists > 5 days, commence steroids (prednisolone 0.5 mg/kg/day or IV equivalent) until G1 then wean ≥ 1 month
  • consider gastroenterology consultation and endoscopy
  • if persists or worsens despite steroids for 3–5 days, treat as G3

≥ 7 bowel actions/day over baseline; severe abdominal pain, peritoneal signs:

  • IV fluid and electrolyte replacement
  • cease ICI
  • gastroenterology consultation and endoscopy
  • systemic steroids (prednisolone 1–2 mg/kg/day or IV equivalent) until G1 then wean ≥ 1 month
  • if persists after 3 days, consider infliximab 5 mg/kg (C/I in sepsis or perforation)
  • once improving, wean steroids ≥ 2 months

Life-threatening consequences; urgent intervention indicated:

  • as per G3
  • permanently cease ICI
  • surgical intervention may be required

Hepatitis

AST/ALT up to 3 times ULN and/or total BILI up to 1.5 times ULN:

  • screen for alternative causes
  • routine monitoring of LFTs
  • continue ICI

AST/ALT 3–5 times ULN and/or total BILI 1.5–3 times ULN:

  • withhold ICI
  • monitor LFTs every 2–3 days
  • if persists > 5 days or worsens, commence steroids (methylprednisolone 0.5–1 mg/kg/day or oral equivalent) until G1 then wean ≥ 1 month

AST/ALT 5–20 times ULN and/or total BILI 3–10 times ULN:

  • cease ICI
  • commence steroids (IV methylprednisolone 0.5–1 mg/kg/day or oral equivalent) until G1 then wean ≥ 1 month
  • if no improvement after 3–5 days or worsens, add other immunosuppression (MMF)

AST/ALT > 20 times ULN and/or total BILI > 10 times ULN:

  • permanently cease ICI
  • as per G3, except steroid dose (methylprednisolone 2 mg/kg/day IV)

Thyroid dysfunction

Asymptomatic, intervention not indicated:

  • monitor TFTs
  • continue ICI

Symptomatic; therapy indicated:

  • withhold ICI
  • endocrinology consultation
  • Hypothyroidism:
  • thyroxine replacement
  • Hyperthyroidism:
  • β-blocker for symptom control
  • consider carbimazole or steroids

Severe symptoms; hospitalisation indicated:

  • withhold ICI
  • as per G2, plus commence steroids (prednisolone 1–2 mg/kg/day or IV equivalent); continue until G1 then wean ≥ 1 month

Life-threatening consequences; urgent intervention indicated:

  • as per G3

Pneumonitis

Asymptomatic; intervention not indicated:

  • consider delaying ICI while investigating
  • monitor symptoms closely every 2–3 days
  • respiratory and ID consultations
  • consider steroids (prednisolone 1 mg/kg/day) if other causes excluded
  • re-image after 3 weeks

Symptomatic; intervention indicated:

  • withhold ICI
  • monitor symptoms daily (consider hospital admission)
  • respiratory and ID consultations
  • commence steroids (prednisolone 1 mg/kg/day or IV equivalent), wean ≥ 1 month once improving, continue until resolved
  • if not improving after 2 weeks, treat as G3

Severe symptoms; oxygen indicated:

  • permanently cease ICI
  • hospital admission
  • respiratory and ID consultations
  • high dose steroids (methylprednisolone 2–4 mg/kg/day IV)
  • prophylactic antibiotics
  • if improving, wean steroids ≥ 6 weeks
  • if not improving after 48 hours, add other immunosuppression (IVIG, infliximab, MMF, CYC)

Life-threatening respiratory compromise; urgent intervention indicated:

  • as per G3
  • may require admission to intensive care unit

Neurological

Asymptomatic or mild symptoms:

  • continue ICI
  • monitor regularly

Moderate symptoms:

  • withhold ICI
  • neurology consultation
  • commence steroids (prednisolone 0.5–1 mg/kg/day)
  • resume ICI when returned to baseline

Severe symptoms:

  • cease ICI
  • neurology consultation
  • commence steroids (methylprednisolone 1–2 mg/kg/day IV or oral equivalent); wean ≥ 1 month once improving
  • if worsens, consider IVIG, plasmapheresis or other immunosuppression

Life-threatening symptoms:

  • as per G3

ALT = alanine aminotransferase. AST = aspartate aminotransferase. BILI = bilirubin. BSA = body surface area. C/I = contraindicated. CYC = cyclophosphamide. ICI = immune checkpoint inhibitor. ID = infectious diseases. IV = intravenous. IVIG = intravenous immunoglobulin. LFTs = liver function tests. MMF = mycophenolate mofetil. TFTs = thyroid function tests. ULN = upper limit of normal.
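The liver function thresholds in the hepatitis row of Box 5 can be read as a worst-of-two lookup over AST/ALT and bilirubin expressed as multiples of the upper limit of normal. A sketch under that reading (function names and the treatment of exact boundary values are assumptions):

```python
def hepatitis_grade(alt_x_uln, bili_x_uln):
    """Return the irAE grade implied by AST/ALT and bilirubin levels,
    each given as a multiple of the upper limit of normal (ULN).

    Cut-offs as in Box 5: ALT up to 3x / BILI up to 1.5x -> G1;
    ALT 3-5x / BILI 1.5-3x -> G2; ALT 5-20x / BILI 3-10x -> G3;
    ALT > 20x / BILI > 10x -> G4. The overall grade is the worse of the two.
    """
    def grade(value, cuts):
        # cuts holds the upper bounds for grades 1..3; above the last -> grade 4
        for g, bound in enumerate(cuts, start=1):
            if value <= bound:
                return g
        return 4

    return max(grade(alt_x_uln, (3, 5, 20)),
               grade(bili_x_uln, (1.5, 3, 10)))
```

Taking the worse of the two components reflects the "and/or" wording in Box 5: an ALT only twice the ULN with a bilirubin four times the ULN would still be managed as grade 3.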

The possible risks of proton pump inhibitors

These drugs have revolutionised the management of gastrointestinal diseases, but their long term use may have risks

Proton pump inhibitors (PPIs) suppress gastric acid secretion by selectively inhibiting the enzyme hydrogen–potassium adenosine triphosphatase, located in gastric parietal cells. These drugs superseded H2-receptor antagonists as first-line acid suppressants in the 1980s, and their potent effect has revolutionised the management of common upper gastrointestinal (GI) disorders, including gastro-oesophageal reflux disease, peptic ulcer disease, and functional dyspepsia. These drugs are also widely used as part of regimens designed to eradicate Helicobacter pylori infection, and as prophylaxis against the deleterious effects of non-steroidal anti-inflammatory drugs on the GI tract.

PPIs are among the most commonly administered medications worldwide. In recent years, there has been a marked increase in prescribing PPIs,1 concurrent with an overall reduction in their cost with the advent of inexpensive generic formulations. This reduction in cost is likely to have contributed to injudicious overprescribing of PPIs, with up to 60% of primary care physicians making no attempt to reduce patients’ doses over time, and almost 50% of patients receiving long term PPI therapy having no clear indication for its continuation.2 Despite their well documented benefits, the use of PPIs may be associated with an increased risk of certain GI and non-GI conditions.

Gastrointestinal risks of proton pump inhibitor use

Commensal bacteria are thought to influence and maintain metabolic and immune pathways in the gut. Manipulation of GI microbiota has been identified as a potential target for therapeutic intervention in the management of several GI disorders, including irritable bowel syndrome and inflammatory bowel disease, by means of faecal microbial transplantation or the use of prebiotics, probiotics, synbiotics or antibiotics. Changes in the faecal microbiome have also been identified in PPI users, with a reduction in the α-diversity of bacterial communities observed in the gut,3 presumably secondary to the alteration in pH of the intestinal luminal contents. This reduction in bacterial diversity may be associated with an increased risk of developing GI infections. A meta-analysis of observational studies reported that the odds of PPI use were increased among patients with Clostridium difficile-associated diarrhoea (odds ratio [OR], 1.96; 95% CI, 1.28–3.00) and other enteric infections, including with Salmonella and Campylobacter species (OR, 3.33; 95% CI, 1.84–6.02).4
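For context, the odds ratios and confidence intervals quoted in such meta-analyses derive from 2×2 exposure–outcome tables. A toy illustration using the standard Woolf (log-OR) interval and invented counts, not data from the cited studies:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-OR) 95% confidence interval
    from a 2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: 40 of 100 PPI users and 25 of 100 non-users
# with the outcome of interest.
or_, lower, upper = odds_ratio_ci(40, 60, 25, 75)
```

With these made-up counts the point estimate is 2.0; an association is suggested only when the entire interval lies above 1, as in the meta-analytic results quoted above.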

Aside from any potential association with enteric infection, diarrhoea is a common side effect of PPI use. The cause of this is uncertain, although a case–control study noted increased odds of microscopic colitis (OR, 3.37; 95% CI, 2.77–4.09),5 and, in a meta-analysis, small intestinal bacterial overgrowth (SIBO) was found to be more common in PPI users (OR, 2.28; 95% CI, 1.24–4.21).6 Given that both of these conditions present with diarrhoea, which is a side effect of PPIs, it may be that this association is the result of detection bias, as individuals using PPIs are more likely to be investigated to rule out microscopic colitis or SIBO.

The hypergastrinaemia documented in long term PPI users has led to concerns about the possibility of enterochromaffin-like cell hyperplasia being associated with PPIs, and therefore a theoretical increase in the risk of neuroendocrine tumour development, although this has not been substantiated.7 Observational studies also report an increase in the incidence of gastric fundic gland polyps (FGPs) in patients using PPIs long term. Although these polyps are benign, this has raised the possibility that long term PPI use could lead to an increased risk of gastric cancer. A recent systematic review and meta-analysis investigating the association between PPI use and the incidence of FGPs suggested a statistically significant increase in the odds of developing such polyps (OR, 2.45; 95% CI, 1.24–4.83), with a longer duration of exposure further increasing this risk.8 In the same meta-analysis, the risk of PPI-associated gastric cancer was examined in a small number of studies of heterogeneous design, and a small increase in the risk of gastric cancer was observed (risk ratio [RR], 1.43; 95% CI, 1.23–1.66). However, follow-up in these studies extended to a maximum of 36 months, a timeframe one would consider too short to develop gastric cancer de novo. This suggests that the association may be the result of protopathic bias, or reverse causation, with PPI therapy more likely to be initiated in individuals presenting with dyspepsia, which may herald early gastric cancer.

The anti-secretory properties of PPIs can also lead to the development of vitamin B12 and magnesium deficiencies. Gastric acid is required to release vitamin B12 from dietary protein so that it can be absorbed, and the risk of vitamin B12 deficiency may therefore be increased in people with achlorhydria. In a retrospective case–control study investigating risk factors for the development of vitamin B12 deficiency, PPI exposure for more than 2 years was associated with a significant increase in the odds of vitamin B12 deficiency (OR, 1.65; 95% CI, 1.58–1.73), with a dose–response relationship observed.9 Similarly, the intestinal absorption of magnesium is adversely affected by an increase in luminal pH. A meta-analysis of studies examining this question also suggested increased odds of hypomagnesaemia with PPI use (OR, 1.78; 95% CI, 1.08–2.92).10 The United States Food and Drug Administration recommends monitoring of serum magnesium levels before commencing PPI therapy and periodically thereafter if long term use is likely to be required.

Non-gastrointestinal risks of proton pump inhibitor use

Conflicting evidence regarding the association between PPI use and the risk of bone fracture has been reported. Potential pathophysiological mechanisms include impaired intestinal calcium absorption and alterations in bone metabolism, mediated by inhibition of osteoclast proton pumps. A meta-analysis reported increased odds in patients taking PPIs of hip fracture (OR, 1.25; 95% CI, 1.14–1.37) or vertebral fracture (OR, 1.50; 95% CI, 1.32–1.72).11 However, no dose–response relationship was observed and significant heterogeneity was noted. Given the observational nature of the included studies, it is likely that these findings are secondary to unmeasured confounding, with PPI users being more likely to have a comorbid condition, such as obesity or tobacco smoking, that is associated with both PPI use and an increased risk of fracture.

The association between PPI use and community-acquired pneumonia (CAP) has been investigated, again with conflicting results from the numerous observational studies. A meta-analysis of these studies indicated an increase in the odds of developing CAP with short term PPI use (OR, 1.92; 95% CI, 1.40–2.63), but not with prolonged use (OR, 1.11; 95% CI, 0.90–1.38).12 This lack of a dose–response relationship, the absence of any plausible pathophysiological mechanism to explain the association, and the significant heterogeneity of the included studies cast doubt on this association, again suggesting prescription channelling may have influenced the study findings.

The most common side effect of antiplatelet medication is GI haemorrhage secondary to peptic ulcer disease, so PPIs are prescribed prophylactically for gastro-protection in many patients with cardiovascular disease. A reduction in the in vitro antiplatelet effect of clopidogrel when used in conjunction with omeprazole led to concerns about a possible increase in cardiac event rates when these drugs were co-prescribed after percutaneous coronary intervention.13 However, a randomised controlled trial (RCT) comparing dual antiplatelet therapy plus omeprazole with dual antiplatelet therapy plus placebo found no difference in cardiac event rates between the groups (hazard ratio [HR], 0.99; 95% CI, 0.68–1.44), while overt upper GI bleeding was significantly reduced with omeprazole (HR, 0.13; 95% CI, 0.03–0.56).14

The incidence of dementia is increasing. A recent cohort study suggested that PPI use may be a risk factor for the onset of dementia in patients aged over 75 years, after adjusting for potential cardiovascular, cerebrovascular and pharmacological confounders (HR, 1.44; 95% CI, 1.36–1.52).15 However, morbidity and mortality associated with adverse events, including GI bleeding, were not assessed, so the effect of unmeasured confounding could not be accounted for; moreover, in a case–control study, PPIs appeared to protect against dementia (OR, 0.93; 95% CI, 0.90–0.97).16 Despite this, it is plausible that PPIs could increase the risk of dementia, as there is evidence that PPIs cross the blood–brain barrier and may promote the production, or impair the degradation, of amyloid.17 Another potential mechanism could be the effect of PPIs on vitamin B12 absorption, as discussed above.

Conclusion

Although PPIs have revolutionised the management of several upper GI disorders, their use may be associated with intestinal and extra-intestinal complications. Most studies that have reported these risks are observational and retrospective in nature. In the only RCT that has examined one of these associations, no increase in the risk of cardiac events was observed with PPIs. Unlike RCTs, observational studies cannot account for unmeasured bias and confounding, and they provide evidence only for the presence or absence of an association, not for a causal link between PPI use and the risk of such complications. Moreover, the increases in risk observed in many of these studies, although statistically significant, are unlikely to be large enough to be clinically relevant, and are probably explained by a combination of residual confounding, prescription channelling and protopathic bias. Patients should therefore be informed that the benefits of PPIs outweigh any potential deleterious effects. Nevertheless, like any other drug, PPIs should be prescribed judiciously, with a clear indication and regular review of the appropriateness of continued long term use to minimise these theoretical risks. No routine monitoring is currently recommended other than periodic measurement of serum magnesium levels if long term use is required.

Medicines to treat side effects of other medicines? Sometimes less is more beneficial

About two-thirds of people who are 75 years and older take five medicines or more. This is known as “polypharmacy”. Many of these people are on more than ten medicines a day (hyperpolypharmacy).

It can be appropriate to prescribe multiple medicines for someone with complex or multiple illnesses if there is evidence of benefit and harms are minimised. But we know that taking multiple medicines strongly increases the risk of unwanted side effects such as drowsiness, dizziness, confusion, falls and injuries, and even hospitalisation.

Older people may be taking medicines that are not working or no longer needed; medicines may have been prescribed to treat the side effects of other medicines (prescribing cascade); other treatment options may be more suitable; or they may have difficulty taking the medicines.

Reducing these inappropriate and unnecessary medicines in older people is one of the most important challenges of modern medicine.

For example, let’s consider Robert, a 60-year-old man who starts taking blood-pressure-lowering medicine to reduce his chance of having a heart attack or stroke. He tolerates this well for many years and continues taking the medicine, as his doctor told him he needs to take it for the rest of his life.

By the age of 80 he is on ten medicines for various conditions (including arthritis, reflux and sleeping problems) and starts to get dizzy spells. On occasions the dizziness has led to a fall, and his concerns about falling have made him less independent.

What can be done in Robert’s case? Reviews of studies in which medicines are selectively and carefully stopped (also called “deprescribing”) show that specific types of medicines, such as blood-pressure-lowering medicines, can be reduced without causing harmful withdrawal effects. In the case of medicines such as antidepressants or sleeping tablets, reduction can even reduce the risk of falls and improve cognition.

However, discussions between doctor and patient about coming off medicines are not easy and there is little guidance on how to do it. We are used to talking about why medicines need to be started, but less familiar with stopping or reducing the doses of medicines.

The evidence for the benefit of many medicines is less clear for older patients as randomised controlled trials generally study younger populations with no other illnesses. This means, especially for older people, that the balance of potential benefits versus harms depends on what is important to the individual patient.

Robert’s priority might be living independently by avoiding dizziness and reducing his risk of having a fall. He therefore may prefer to reduce his blood-pressure medicine. His friend James may be more concerned about avoiding death or disability from a heart attack or stroke. He therefore may prefer to continue his medication and accept he may experience dizzy spells as a result.

Biases in the way we think often influence decisions on medicine. Patients may not realise change is an option, doctors may incorrectly assume patients always want to stay on their medicines, and older people may experience cognitive changes that make it more challenging for them to be involved in an informed decision. But all of these issues can be overcome with good communication.

Using Robert’s situation as an example, here are four steps to ensure an informed and shared decision about deprescribing.

1. You have options

Continuing, reducing the dose, or discontinuing blood-pressure medicine should all be identified as options to manage the problem of dizziness.

2. Discuss the harms and benefits

The likelihood of preventing a heart attack or stroke by continuing blood-pressure medicine versus preventing dizziness or falls by reducing or stopping the medicine should be discussed. This should take into account other medicines taken and the strength of the evidence for the relevant age group.

3. What does the patient want?

Robert’s concerns about reduced independence after a fall now, versus death or disability from a heart attack or stroke in the future, should be discussed. This includes talking about possible trade-offs between quality of life in the short term and life expectancy in the long term.

4. Make a decision

A decision about maintaining, reducing or stopping blood-pressure medicine should be made by Robert together with his doctor, carer and/or family, depending on who Robert wants to be involved. The decision made now can be changed later on.

If patients have any concerns about their medicines, experience troublesome symptoms or think a medicine may no longer be needed, they should talk to their GP or pharmacist about reviewing their medicines and discussing the potential for reducing or stopping. More information about medicines and older people is available on the NPS MedicineWise website.

Deprescribing, or carefully ceasing medicines, is not taking away care; it is a positive strategy that reduces avoidable harmful effects and can improve quality of life.

 

Jesse Jansen, Senior Research Fellow, Sydney School of Public Health, University of Sydney; Andrew McLachlan, Professor of Pharmacy (Aged Care), University of Sydney; Carissa Bonner, Postdoctoral Research Fellow, University of Sydney, and Vasi Naganathan, Associate Professor of Geriatric Medicine, University of Sydney

This article was originally published on The Conversation. Read the original article.

Other doctorportal blogs

 

What are the top 10 drugs used in Australia?

For the first time in 20 years, statins have not topped the list of the most costly drugs for the Australian government to fund.

Australian Prescriber has released the top 10 drugs used in Australia as well as the top 10 by cost to government. The figures are based on PBS and RPBS prescriptions.

Atorvastatin dropped out of the top 10 by cost to government; however, it still topped the lists for daily dose and prescription counts.

The most expensive drug for the government is adalimumab, a monoclonal antibody indicated for the treatment of rheumatoid arthritis, juvenile idiopathic arthritis, psoriatic arthritis, ankylosing spondylitis, Crohn’s disease, ulcerative colitis, psoriasis and hidradenitis suppurativa. It cost the government $311,616,305 for 176,062 prescriptions from July 2014 to June 2015.


Two injectable drugs to treat age-related macular degeneration appeared for the first time. Aflibercept came in at third most expensive for the government, costing nearly $193 million for 123,123 prescriptions, and ranibizumab was fourth most expensive, costing nearly $180 million for 116,311 prescriptions.
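As a rough check on these figures, the average government cost per prescription can be computed directly from the totals quoted above (a sketch only; the aflibercept and ranibizumab totals are the approximate "nearly $X million" figures from the text, not exact values):

```python
# Approximate average government cost per prescription, July 2014 - June 2015,
# using the total cost and prescription counts quoted in the text above.
drugs = {
    "adalimumab":  (311_616_305, 176_062),
    "aflibercept": (193_000_000, 123_123),   # "nearly $193 million"
    "ranibizumab": (180_000_000, 116_311),   # "nearly $180 million"
}

for name, (total_cost, scripts) in drugs.items():
    # Integer division of dollars by prescription count, rounded to whole dollars.
    print(f"{name}: ~${total_cost / scripts:,.0f} per prescription")
```

All three work out to roughly $1,500–$1,800 per prescription, which illustrates why a modest prescription count can still dominate the cost list.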

The most prescribed list included two statins (atorvastatin at number 1 and rosuvastatin at number 3), proton-pump inhibitors (esomeprazole at number 2 and pantoprazole at number 5), an analgesic (paracetamol at number 4) and a type 2 diabetes medicine (metformin at number 6).

See the full list at Australian Prescriber.


Guideline for the diagnosis and management of hypertension in adults — 2016

Blood pressure (BP) is an important common modifiable risk factor for cardiovascular disease. In 2014–15, 6 million adult Australians were hypertensive (BP ≥ 140/90 mmHg) or were taking BP-lowering medication.1 Hypertension is more common in those with lower household incomes and in regional areas of Australia (http://heartfoundation.org.au/about-us/what-we-do/heart-disease-in-australia/high-blood-pressure-statistics). Many Australians have untreated hypertension, including a significant proportion of Aboriginal and Torres Strait Islander people.1

Cardiovascular diseases are associated with a high level of health care expenditure.2 Controlled BP is associated with lower risks of stroke, coronary heart disease, chronic kidney disease, heart failure and death. Small reductions in BP (1–2 mmHg) are known to markedly reduce population cardiovascular morbidity and mortality.3,4

Method

The National Blood Pressure and Vascular Disease Advisory Committee, an expert committee of the National Heart Foundation of Australia, has updated the Guide to management of hypertension 2008: assessing and managing raised blood pressure in adults (last updated in 2010)5 to equip health professionals across the Australian health care system, especially those within primary care and community services, with the latest evidence to prevent, detect and manage hypertension.

International hypertension guidelines6–8 were examined to identify key areas for review. Review questions were developed using the patient problem or population, intervention, comparison and outcome(s) (PICO) framework.9 Systematic literature searches (2010–2014) of MEDLINE, Embase, CINAHL and the Cochrane Library were conducted by an external organisation, and the resulting evidence summaries informed the updated clinical recommendations. The committee also reviewed additional key literature relevant to the PICO framework up to December 2015.

Recommendations were based on high quality studies, with priority given to large systematic reviews and randomised controlled trials, and consideration of other studies where appropriate. Public consultation occurred during the development of the updated guideline. The 2016 update includes the level of evidence and strength of recommendation in accordance with National Health and Medical Research Council standards10 and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology.11 No level of evidence has been included where there was no direct evidence for a recommendation that the guideline developers agreed clearly outweighed any potential for harm.

Most of the major recommendations from the guideline are outlined below, together with background information and explanation, particularly in areas of change in practice. Key changes from the previous guideline are listed in Box 1. The full Heart Foundation Guideline for the diagnosis and management of hypertension in adults – 2016 is available at http://heartfoundation.org.au/for-professionals/clinical-information/hypertension. The full guideline contains additional recommendations in the areas of antiplatelet therapy, suspected BP variability, and initiating treatment using combination therapy compared with monotherapy.

Recommendations

Definition and classification of hypertension

Elevated BP is an established risk factor for cardiovascular disease. The relationship between BP level and cardiovascular risk is continuous, therefore the distinction between normotension and hypertension is arbitrary.12,13 Cut-off values are used for diagnosis and management decisions but vary between international guidelines. Current values for categorisation of clinic BP in Australian adults are outlined in Box 2.

Management of patients with hypertension should also consider absolute cardiovascular disease risk (where eligible for assessment) and/or evidence of end-organ damage. Several tools exist to estimate absolute cardiovascular disease risk. The National Vascular Disease Prevention Alliance developed a calculator for the Australian population, which can be found at http://www.cvdcheck.org.au.

Treatment strategies for individuals at high risk of a cardiovascular event may differ from those at low absolute cardiovascular disease risk despite similar BP readings. It is important to note that the absolute risk calculator has been developed using clinic BP, rather than ambulatory, automated office or home BP measures.

Some people are not suitable for an absolute risk assessment, including younger patients with uncomplicated hypertension and those with conditions that identify them as already at high risk.14

Blood pressure measurement

A comprehensive assessment of BP should be based on multiple measurements taken on several separate occasions. A variety of methods are available, each providing different but often complementary information. Methods include clinic BP, 24-hour ambulatory and home BP monitoring (Box 3).

Most clinical studies demonstrating effectiveness and benefits of treating hypertension have used clinic BP. Clinic, home and ambulatory BP all predict the risk of a cardiovascular event; however, home and ambulatory blood pressure measures are stronger predictors of adverse cardiovascular outcomes (Box 4).15,16

Automated office BP measurement involves taking repeated blood pressure measurements using an automated device with the clinician out of the room.17,18 This technique generally yields lower readings than conventional clinic BP and has been shown to have a good correlation with out-of-clinic measures.

The British Hypertension Society provides a list of validated BP monitoring devices.19 Use of validated and regularly maintained non-mercury devices is recommended as mercury sphygmomanometers are being phased out for occupational health and safety and environmental reasons.

Treatment thresholds

Although the benefits of lowering BP in patients with significantly elevated BP have been well established, the benefit of initiating drug therapy in patients with lower BP with or without comorbidities has been less certain. A meta-analysis of patients with uncomplicated mild hypertension (systolic BP range, 140–159 mmHg) indicated beneficial cardiovascular effects with reductions in stroke, cardiovascular death and all-cause mortality, through treatment with BP-lowering therapy.20 Corresponding relative reductions in 5-year cardiovascular disease risk were similar for all levels of baseline BP.21

Decisions to initiate drug treatment at less severe levels of BP elevations should consider a patient’s absolute cardiovascular disease risk and/or evidence of end-organ damage together with accurate blood pressure readings.

Treatment targets

Optimal blood pressure treatment targets have been debated extensively. There is emerging evidence demonstrating the benefits of treating to optimal BP, particularly among patients at high cardiovascular risk.17,20

The recent Systolic Blood Pressure Intervention Trial investigated the effect of targeting a higher systolic BP level (< 140 mmHg) compared with a lower level (< 120 mmHg) in people over the age of 50 years who were identified as having a cardiovascular 10-year risk of at least 20%.17 Many had prior cardiovascular events or mild to moderate renal impairment and most were already on BP-lowering therapy at the commencement of the study. Patients with diabetes, cardiac failure, severe renal impairment or previous stroke were excluded. The method of measurement was automated office BP,18 a technique that generally yields lower readings than conventional clinic BP. Patients treated to the lower target achieved a mean systolic BP of 121.4 mmHg and had significantly fewer cardiovascular events and lower all-cause mortality compared with the other treatment group, which achieved a mean systolic level of 136.2 mmHg. Older patients (> 75 years) benefited equally from the lower target BP. However, treatment-related adverse events increased in the more intensively treated patients, with more frequent hypotension, syncopal episodes, acute kidney injury and electrolyte abnormalities.

The selection of a BP target should be based on an informed, shared decision-making process between patient and doctor (or health care provider), considering the benefits and harms and reviewed on an ongoing basis.

Recommendations for treatment strategies and treatment targets for patients with hypertension are set out in Box 5.

Box 1 –
Key changes from previous guideline

  • Use of validated non-mercury sphygmomanometers that are regularly maintained is recommended for blood pressure (BP) measurement.
  • Out-of-clinic BP using home or 24-hour ambulatory measurement is a stronger predictor of outcome than clinic BP measurement.
  • Automated office blood pressure (AOBP) provides similar measures to home and ambulatory BP, and results are generally lower than those from conventional clinic BP measurement.
  • BP-lowering therapy is beneficial (reduced stroke, cardiovascular death and all-cause mortality) for patients with uncomplicated mild hypertension (systolic BP, 140–159 mmHg).
  • For patients with at least moderate cardiovascular risk (10-year risk ≥ 20%), lower BP targets of < 120 mmHg systolic (using AOBP) provide benefit with some increase in treatment-related adverse effects.
  • Selection of a BP target should be based on informed, shared decision making between patients and health care providers considering the benefits and harms, and reviewed on an ongoing basis.

Box 2 –
Classification of clinic blood pressure in adults

Diagnostic category*: systolic (mmHg) / diastolic (mmHg)

  • Optimal: < 120 and < 80
  • Normal: 120–129 and/or 80–84
  • High-normal: 130–139 and/or 85–89
  • Grade 1 (mild) hypertension: 140–159 and/or 90–99
  • Grade 2 (moderate) hypertension: 160–179 and/or 100–109
  • Grade 3 (severe) hypertension: ≥ 180 and/or ≥ 110
  • Isolated systolic hypertension: > 140 and < 90


Reproduced with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults — 2016. Melbourne: NHFA, 2016. * When a patient’s systolic and diastolic blood pressure levels fall into different categories, the higher diagnostic category and recommended actions apply.
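The categorisation in Box 2, including the footnote rule that the higher of the systolic and diastolic categories applies, amounts to a simple decision procedure. A minimal illustrative sketch (category names and cut-offs taken directly from Box 2; not a clinical tool):

```python
def classify_clinic_bp(systolic: int, diastolic: int) -> str:
    """Classify clinic BP per Box 2. When systolic and diastolic fall into
    different categories, the higher diagnostic category applies."""
    # Bands ordered lowest to highest; each tuple holds the lower bound
    # (systolic, diastolic) of the band.
    bands = [
        ("Optimal", 0, 0),
        ("Normal", 120, 80),
        ("High-normal", 130, 85),
        ("Grade 1 (mild) hypertension", 140, 90),
        ("Grade 2 (moderate) hypertension", 160, 100),
        ("Grade 3 (severe) hypertension", 180, 110),
    ]
    category = bands[0][0]
    for name, sys_lo, dia_lo in bands:
        # Either reading reaching a band's lower bound lifts the category.
        if systolic >= sys_lo or diastolic >= dia_lo:
            category = name
    # Separate row in Box 2: high systolic with normal diastolic.
    if systolic > 140 and diastolic < 90:
        return "Isolated systolic hypertension"
    return category
```

For example, `classify_clinic_bp(135, 92)` returns `"Grade 1 (mild) hypertension"`: the systolic value alone is only high-normal, but the diastolic value pushes the patient into the higher category, as the footnote requires.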

Box 3 –
Criteria for diagnosis of hypertension using different methods of measurement

Method of measurement: systolic (mmHg) / diastolic (mmHg)

  • Clinic: ≥ 140 and/or ≥ 90
  • ABPM daytime (awake): ≥ 135 and/or ≥ 85
  • ABPM night-time (asleep): ≥ 120 and/or ≥ 70
  • ABPM over 24 hours: ≥ 130 and/or ≥ 80
  • HBPM: ≥ 135 and/or ≥ 85


Reproduced with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults — 2016. Melbourne: NHFA, 2016. ABPM = ambulatory blood pressure monitoring. HBPM = home blood pressure monitoring.
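Because the diagnostic cut-offs in Box 3 differ by measurement method, they can be expressed as a simple lookup. This is an illustrative sketch using the thresholds copied from Box 3 (the method labels are shorthand chosen here, not from the guideline):

```python
# (systolic, diastolic) diagnostic thresholds in mmHg, per Box 3.
# Hypertension is diagnosed if either value meets its threshold ("and/or").
THRESHOLDS = {
    "clinic":         (140, 90),
    "abpm_daytime":   (135, 85),
    "abpm_nighttime": (120, 70),
    "abpm_24h":       (130, 80),
    "hbpm":           (135, 85),
}

def meets_diagnostic_threshold(method: str, systolic: int, diastolic: int) -> bool:
    sys_cut, dia_cut = THRESHOLDS[method]
    return systolic >= sys_cut or diastolic >= dia_cut
```

Note how the same reading can cross the threshold for one method but not another: 136/80 mmHg meets the home-monitoring criterion (≥ 135 systolic) while falling short of the clinic criterion.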

Box 4 –
Recommendations for monitoring blood pressure (BP) in patients with hypertension or suspected hypertension

Method of measuring BP, with grade of recommendation* and level of evidence†

  • If clinic BP is ≥ 140/90 mmHg or hypertension is suspected, ambulatory and/or home monitoring should be offered to confirm the BP level (grade: strong; level: I).
  • Clinic BP measures are recommended for use in absolute cardiovascular risk calculators. If home or ambulatory BP measures are used in absolute cardiovascular disease risk calculators, risk may be inappropriately underestimated (grade: strong; level: –).
  • Procedures for ambulatory BP monitoring should be adequately explained to patients. Those undertaking home measurements require appropriate training under qualified supervision (grade: strong; level: I).
  • Finger and/or wrist BP measuring devices are not recommended (grade: strong; level: –).

Reproduced with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults — 2016. Melbourne: NHFA, 2016. * Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology.11 † National Health and Medical Research Council standards;10 no level of evidence included where there was no direct evidence for a recommendation that the guideline developers agreed clearly outweighed any potential for harm.

Box 5 – Recommendations for treatment strategies and treatment targets for patients with hypertension, with grade of recommendation and level of evidence*

A healthy lifestyle, including not smoking, eating a nutritious diet and getting regular adequate exercise, is recommended for all Australians, whether or not they have hypertension.

  • Lifestyle advice is recommended for all patients (grade: strong; level: –).
  • For patients at low absolute cardiovascular disease risk (5-year risk, < 10%) with persistent blood pressure (BP) ≥ 160/100 mmHg, antihypertensive therapy should be started (grade: strong; level: I).
  • For patients at moderate absolute cardiovascular disease risk (5-year risk, 10–15%) with persistent systolic BP ≥ 140 mmHg and/or diastolic ≥ 90 mmHg, antihypertensive therapy should be started (grade: strong; level: I).
  • Once a decision to treat has been made, patients with uncomplicated hypertension should be treated to a target of < 140/90 mmHg or lower if tolerated (grade: strong; level: I).
  • In selected high cardiovascular risk populations where a more intense treatment can be considered, aiming for a target of < 120 mmHg systolic BP can improve cardiovascular outcomes (grade: strong; level: II).
  • In selected high cardiovascular risk populations where a treatment is being targeted to < 120 mmHg systolic BP, close follow-up of patients is recommended to identify treatment-related adverse effects including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: II).
  • In patients with uncomplicated hypertension, angiotensin-converting enzyme (ACE) inhibitors or angiotensin-receptor blockers (ARBs), calcium channel blockers and thiazide diuretics are all suitable first-line antihypertensive drugs, either as monotherapy or in some combinations unless contraindicated (grade: strong; level: I).
  • The balance between efficacy and safety is less favourable for β-blockers than other first-line antihypertensive drugs. Thus β-blockers should not be offered as a first-line drug therapy for patients with hypertension that is not complicated by other conditions (grade: strong; level: I).
  • ACE inhibitors and ARBs are not recommended in combination due to an increased risk of adverse effects (grade: strong; level: I).
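The two treatment-initiation thresholds above (low absolute risk: persistent BP ≥ 160/100 mmHg; moderate absolute risk: systolic ≥ 140 and/or diastolic ≥ 90 mmHg) can be sketched as a decision function. This is an illustrative reading of those two recommendations only; "≥ 160/100" is interpreted here as either value meeting its threshold, and this is in no way a substitute for clinical judgement or absolute-risk assessment:

```python
def start_antihypertensive(risk_band: str, systolic: int, diastolic: int) -> bool:
    """Apply the two Box 5 initiation thresholds (persistent BP assumed).

    risk_band: 'low' (5-year absolute risk < 10%) or
               'moderate' (5-year absolute risk 10-15%).
    """
    if risk_band == "low":
        # Low absolute risk: start therapy at persistent BP >= 160/100 mmHg.
        return systolic >= 160 or diastolic >= 100
    if risk_band == "moderate":
        # Moderate absolute risk: start at systolic >= 140 and/or diastolic >= 90.
        return systolic >= 140 or diastolic >= 90
    raise ValueError("risk_band must be 'low' or 'moderate'")
```

The contrast is the point: a reading of 150/95 mmHg triggers treatment for a moderate-risk patient but not, on these two recommendations alone, for a low-risk one.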

Treatment-resistant hypertension

Treatment-resistant hypertension is defined as a systolic BP ≥ 140 mmHg in a patient who is taking three or more antihypertensive medications, including a diuretic, at optimal tolerated doses. Contributing factors may include variable compliance, white coat hypertension or secondary causes of hypertension. Few drug therapies specifically target resistant hypertension. Renal denervation is currently being investigated as a treatment option for this condition; however, to date, it has not been found to be effective in the most rigorous study conducted.22

  • Optimal medical management (with a focus on treatment adherence and excluding secondary causes) is recommended (grade: strong; level: II).
  • Percutaneous transluminal radiofrequency sympathetic denervation of the renal artery is currently not recommended for the clinical management of resistant hypertension or lower grades of hypertension (grade: weak; level: II).

Patients with hypertension and selected comorbidities

Stroke and transient ischaemic attack:

  • For patients with a history of transient ischaemic attacks or stroke, antihypertensive therapy is recommended to reduce overall cardiovascular risk (grade: strong; level: I).
  • For patients with a history of transient ischaemic attacks or stroke, any of the first-line antihypertensive drugs that effectively reduce BP are recommended (grade: strong; level: I).
  • For patients with hypertension and a history of transient ischaemic attacks or stroke, a BP target of < 140/90 mmHg is recommended (grade: strong; level: I).

Chronic kidney disease:

Most classes of BP-lowering drugs have a similar effect in reducing cardiovascular events and all-cause mortality in patients with chronic kidney disease (CKD). When treating with diuretics, the choice should be dependent on both the stage of CKD and the extracellular fluid volume overload in the patient. Detailed recommendations on how to manage patients with CKD are available.23

  • In patients with hypertension and CKD, any of the first-line antihypertensive drugs that effectively reduce BP are recommended (grade: strong; level: I).
  • When treating hypertension in patients with CKD in the presence of microalbuminuria or macroalbuminuria, an ARB or ACE inhibitor should be considered as first-line therapy (grade: strong; level: I).
  • In patients with CKD, antihypertensive therapy should be started in those with BP consistently > 140/90 mmHg and treated to a target of < 140/90 mmHg (grade: strong; level: I).
  • Dual renin-angiotensin system blockade is not recommended in patients with CKD (grade: strong; level: I).
  • For patients with CKD, aiming towards a systolic BP < 120 mmHg has shown benefit, where well tolerated (grade: strong; level: II).
  • In people with CKD, where treatment is being targeted to less than 120 mmHg systolic BP, close follow-up of patients is recommended to identify treatment-related adverse effects, including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: I).
  • In patients with CKD, aldosterone antagonists should be used with caution in view of the uncertain balance of risks versus benefits (grade: weak; level: –).

Diabetes:

  • Antihypertensive therapy is strongly recommended in patients with diabetes and systolic BP ≥ 140 mmHg (grade: strong; level: I).
  • In patients with diabetes and hypertension, any of the first-line antihypertensive drugs that effectively lower BP are recommended (grade: strong; level: I).
  • In patients with diabetes and hypertension, a BP target of < 140/90 mmHg is recommended (grade: strong; level: I).
  • A systolic BP target of < 120 mmHg may be considered for patients with diabetes in whom prevention of stroke is prioritised (grade: weak; level: –).
  • In patients with diabetes, where treatment is being targeted to < 120 mmHg systolic BP, close follow-up of patients is recommended to identify treatment-related adverse effects including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: –).

Myocardial infarction:

  • For patients with a history of myocardial infarction, ACE inhibitors and β-blockers are recommended for the treatment of hypertension and secondary prevention (grade: strong; level: II).
  • β-Blockers or calcium channel blockers are recommended for symptomatic patients with angina (grade: strong; level: II).

Chronic heart failure:

  • In patients with chronic heart failure, ACE inhibitors and selected β-blockers are recommended (grade: strong; level: II).
  • ARBs are recommended in patients who do not tolerate ACE inhibitors (grade: strong; level: I).

Peripheral arterial disease:

  • In patients with peripheral arterial disease, treating hypertension is recommended to reduce cardiovascular disease risk (grade: strong; level: –).
  • In patients with hypertension and peripheral arterial disease, any of the first-line antihypertensive drugs that effectively reduce BP are recommended (grade: weak; level: –).
  • In patients with hypertension and peripheral arterial disease, reducing BP to a target of < 140/90 mmHg should be considered and treatment guided by effective management of other symptoms and contraindications (grade: strong; level: –).

Older people:

  • Any of the first-line antihypertensive drugs that effectively reduce BP can be used in older patients with hypertension (grade: strong; level: I).
  • When starting treatment in older patients, drugs should be commenced at the lowest dose and titrated slowly as adverse effects increase with age (grade: strong; level: –).
  • For patients > 75 years of age, aiming towards a systolic BP of < 120 mmHg has shown benefit, where well tolerated, unless there is concomitant diabetes (grade: strong; level: II).
  • In older people whose treatment is being targeted to < 120 mmHg systolic BP, close follow-up is recommended to identify treatment-related adverse effects including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: II).
  • Clinical judgement should be used to assess benefit of treatment against risk of adverse effects in all older patients with lower grades of hypertension (grade: strong; level: –).

Adapted with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults – 2016. Melbourne: NHFA, 2016. * Grade of recommendation based on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology;11 level of evidence according to the National Health and Medical Research Council standards10 — no level of evidence included where there was no direct evidence for a recommendation that the guideline developers agreed clearly outweighed any potential for harm.

The importance of supportive care for patients with cancer

Supportive care encompasses managing symptoms and side effects before, during and after cancer treatment

Supportive care involves the management of the symptoms of cancer and the side effects of treatment across the whole course of cancer, from diagnosis through treatment to survivorship and end-of-life care. Cancer requires multimodality treatment, including chemotherapy, immunotherapy, radiotherapy and surgery, each of which can be used to palliate symptoms but may also have adverse effects that need managing. Likewise, supportive care needs multidisciplinary team support, including cancer specialist clinicians, nurses, pharmacists and allied health practitioners in addition to palliative care practitioners.

Breakthroughs in cancer treatment are widely heralded but less prominence is given to the significant advances made in supportive care. With a strapline of “supportive care makes excellent cancer care possible”, the Multinational Association of Supportive Care in Cancer (MASCC) will celebrate 25 years of promoting these advances at its scientific meeting in Adelaide in June 2016. Good supportive care enables the delivery of optimal therapy and improves a patient’s quality of life. Where has the most progress been made?

Broadening the spectrum of treatable side effects

Febrile neutropenia can be a life-threatening side effect of chemotherapy if patients develop a bacterial infection when their neutrophil levels are low. This usually occurs at mid-cycle, 10–15 days after chemotherapy. On recovery, subsequent chemotherapy doses could be reduced; however, this would diminish their efficacy when given with curative intent. The advent of granulocyte colony-stimulating factor, longer-acting formulations and recombinant stem cell factor, given with chemotherapy, has significantly decreased the severity and duration of neutropenia, thereby reducing the chance of infection or hastening recovery.1 With this strategy, the dose intensity of chemotherapy can be maintained.

A side effect that can have a profound impact on the quality of life of a patient being treated for cancer is nausea and vomiting. The introduction of cisplatin was associated with severe nausea and vomiting that responded poorly to standard antiemetics such as metoclopramide. The vomiting was brought under control by the discovery of new antiemetic drug classes. The 5-hydroxytryptamine 3 receptor antagonists (5-HT3 RAs) help alleviate vomiting in the first 24 hours after chemotherapy. The neurokinin 1 receptor antagonists (NK1 RAs) are more effective against the delayed emesis that occurs between 24 hours and 5 days after chemotherapy. When these antiemetics are combined with dexamethasone, vomiting can be controlled in up to 80% of patients.2 Long-acting 5-HT3 RAs are already in use, and long-acting NK1 RAs are being trialled but have yet to find their place in antiemetic combinations. The MASCC antiemetic guidelines are updated regularly to keep up with these advances.3 A further challenge is that nausea is not as well controlled as vomiting by these newer agents. Phenothiazines (eg, chlorpromazine) and atypical antipsychotics (eg, olanzapine) that inhibit multiple receptors may be a more appropriate choice for nausea. Other investigations suggest that what is reported as nausea may be a symptom cluster in which multiple symptoms need to be treated.4

Pain control is one of the starkest examples of disparities in supportive cancer care, highlighted by the restricted availability of morphine in many low-income countries. International programs such as the Global Access to Pain Relief Initiative aim to address this problem, but much more needs to be done. A recent finding that low-dose morphine was superior to other weak opioids for moderate cancer-related pain will be helpful, given the relatively low cost of morphine.5

Therapy-induced mucosal injury to the gastrointestinal tract remains a problem, particularly from new targeted cancer therapies such as tyrosine kinase inhibitors and vascular endothelial growth factor receptor inhibitors. However, the lack of effective treatment has stimulated many studies of the mechanisms of mucositis, the inflammatory response to injury, and the cytokines that mediate that response. These will direct future treatments. Guidelines for the management of oral and gastrointestinal mucosal injury help standardise practice and keep clinicians current on new initiatives.6 Advances in enteral nutrition are also relevant to this area. The study of nutrition and the alleviation of cancer cachexia is a continuing field for supportive care research.

A largely non-pharmacological approach is being advocated for cancer-related fatigue, which is a lack of energy to carry out routine daily activities and is very commonly associated with both cancer and its treatment. There are many possible causes which need to be identified and corrected. Some patients may be anaemic or have hypothyroidism. Some may have nutrition issues leading to anorexia and weight loss. Others may be depressed or have difficulty sleeping. In addition, a lack of exercise has been found to contribute to this symptom, and there is increasing evidence of the impact of exercise for alleviating fatigue.

For some side effects, there is conflicting evidence over optimal management. An example is the role of morphine in dyspnoea associated with cancer. It is now apparent that the most appropriate routes of administration of morphine are oral and intravenous rather than nebulised.7 Broader medical research is required for other toxicities. For example, the prevention and treatment of skeletal complications coincidentally paralleled research on osteoporosis and the development of bisphosphonates and a monoclonal antibody that binds to the receptor activator of nuclear factor kappa-B ligand (RANKL).8 These treatments can be used effectively in patients with metastatic bone disease to decrease pain and reduce fractures. Other side effects are emerging as prime targets for further research. A good example is the need to control the impact of treatment on the neurological system, where there has been little progress in prevention and management options remain limited.

Psychosocial and spiritual wellbeing

In terms of the impact on patients’ quality of life, the management of the psychosocial sequelae of cancer is at least as important as dealing with the physical symptoms. Moreover, these symptoms extend from diagnosis through to the survivorship phase. The issue here is not so much the treatment of symptoms such as anxiety and depression, but whether the symptoms are recognised. The challenge is to predict patients at risk and employ preventive strategies.

Aside from psychosocial distress, spiritual wellbeing has been found to have an independent impact on quality of life.9 Spirituality is about patients seeking meaning in their experiences and trying to achieve a sense of peace; it is not just a matter of faith. However, as a MASCC survey found, even among clinicians who focus on supportive care, this area presents a challenge. It is important to patients that their treatment team recognises their existential distress.10

New challenges

The largest of the new challenges in supportive care comes from the transition from conventional chemotherapy, with its well known side effects, to the newer targeted therapies with their range of different toxicities. Taking the example of the new immune checkpoint inhibitors, such as anti-cytotoxic T lymphocyte-associated protein 4 (CTLA-4) and programmed death 1 (PD1) antibodies, these agents have a range of adverse effects including skin toxicities, gastrointestinal side effects and oral complications, as well as immune-related infections. Their mechanisms require definition, and preventive or management strategies need to be developed and implemented.

Skin toxicities associated with cytotoxic chemotherapy include extravasation injury, alopecia, photosensitivity, graft-versus-host reactions, and rarer rashes and syndromes such as hand–foot syndrome. Toxicity to nails and hair may also occur. The toxicities are more severe with targeted therapies. For example, the papulopustular reaction requires treatment in a third of patients who receive epidermal growth factor receptor inhibitors (EGFRi). Guidelines have been produced for the prevention and management of this type of severe skin reaction, and a tool, the MASCC EGFR Inhibitor Skin Toxicity Tool (MESTT), has been developed to help record the skin adverse events associated with EGFRi therapy.11

Proto-oncogene BRAF inhibitors are associated with a spectrum of skin eruptions, including seborrhoeic dermatitis, keratosis pilaris eruptions, hyperkeratotic plantar papules similar to those seen with sorafenib (a protein tyrosine kinase inhibitor) and, perhaps paradoxically for a drug used to treat melanoma, squamous cell skin cancers. The newer PD1 inhibitors have been associated with a maculopapular rash in up to 40% of patients that often requires treatment with steroids.12

A new “toxicity” that has emerged with the advent of personalised medicine — the use of the molecular profile of a patient’s cancer to make treatment decisions — has been called financial toxicity. The high prices of the targeted therapies and the tests to identify targets are often an unintended economic harm of therapy, and a cause of financial distress to patients and their relatives. It is important to ascertain which patients are at risk and scales have been published to assess the financial burden of cancer care based on the ability to pay, as determined by income stream and liquid assets.13 There are other scoring systems, such as the Comprehensive Score for Financial Toxicity, that can be used to measure this toxicity of treatment to identify which patients most need support.14 The problem that needs resolution here is determining the value of a drug, based on patient outcome, and then better aligning the cost of the drug to that value.

There are many aspects of supportive care that make a good quality of life in patients with cancer possible.

A decade of Australian methotrexate dosing errors

Methotrexate is a synthetic folic acid analogue used for its antineoplastic and immunomodulating properties. It competitively inhibits folic acid reductase, decreasing tetrahydrofolic acid production and inhibiting DNA synthesis. Low dose methotrexate (administered weekly in doses of 7.5–25 mg) is indicated for rheumatoid arthritis, psoriasis and inflammatory bowel disease.1

The unusual dosing schedule of low dose methotrexate is associated with a risk that it will be prescribed, dispensed or administered daily instead of weekly. Used appropriately, methotrexate is considered safe and efficacious; accidental daily dosing, however, can be lethal. Higher or more frequent doses can result in gastro-intestinal mucosal ulceration, hepatotoxicity, myelosuppression, sepsis and death.2 Indeed, there are several reports in the literature of serious morbidity and mortality linked with methotrexate medication errors.3–7 A study of medication errors reported to the United States Food and Drug Administration over 4 years identified more than 100 methotrexate dosing errors (25 deaths), of which 37% were attributed to the prescriber, 20% to the patient, 19% to dispensing, and 18% to administration by a health care professional.8,9

Current efforts to reduce the likelihood of these errors include guidelines recommending that a specific day of the week be nominated for taking methotrexate.10 Additional care in counselling is recommended to ensure that patients are aware of the dangers of taking extra methotrexate and of the signs of methotrexate toxicity.4 In Australia, oral methotrexate is available in packs of 2.5 mg × 30, 10 mg × 15, and 10 mg × 50 tablets. In 2008, the 15-tablet pack was introduced to reduce the risk of toxicity, with the 50-tablet pack being placed on a restricted benefit listing for patients prescribed more than 20 mg per week.

Although overseas data have been published in the form of case reports3,6 and reviews of adverse event databases,9 Australian data on methotrexate medication errors are lacking. In this article, we describe cases of methotrexate medication errors resulting in death reported to the National Coronial Information System (NCIS), summarise reports involving methotrexate documented in the Therapeutic Goods Administration Database of Adverse Event Notifications (TGA DAEN), and describe methotrexate medication errors reported to Australian Poisons Information Centres (PICs).

Methods

This study investigated medication errors recorded in the NCIS, TGA DAEN and PIC datasets. For the purposes of our study, “medication error” was defined as an incident occurring anywhere in the medication process, including prescribing, dispensing or administration. For the error to be included in our study, methotrexate must have been taken by the patient on 3 or more consecutive days.

Data were collected from the NCIS to identify deaths linked with methotrexate medication errors. The NCIS database has a record of reportable deaths from July 2000 onwards for all states except Queensland, for which data are available from January 2001. This database was searched on 4 July 2015 for closed cases from the period 2000–2014, searching for methotrexate in the “cause of death” fields, and by searching for deaths caused by antineoplastic agents in “complications of health care” as “mechanism/object”. A keyword search of attached documentation (findings, autopsy reports) was not performed. Results were manually reviewed for inclusion.

Data were obtained from the open access TGA DAEN for methotrexate adverse events reported from January 2004 to December 2014. Cases coded as “accidental overdose”, “drug administration error”, “drug dispensing error”, “inappropriate schedule of drug administration”, “medication error”, or “overdose” were extracted and manually reviewed for inclusion.

There are four PICs in Australia that together provide around-the-clock poisoning advice to health care professionals and members of the public across Australia. We retrospectively reviewed the New South Wales, Victorian, Western Australian and Queensland PIC databases. South Australia, the Northern Territory, the Australian Capital Territory and Tasmania do not have PICs, but calls from these states are diverted to the New South Wales and Western Australian Poisons Centres. Data from the Victorian PIC were available from May 2005, and from the Queensland PIC from January 2005; other PIC databases were searched for 1 January 2004 onwards, with cases included if they occurred on or before 31 December 2015. Methotrexate cases were manually reviewed for inclusion.

Methotrexate dispensing data from January 2004 to August 2015 were obtained from the Pharmaceutical Benefits Schedule Item Reports website (http://medicarestatistics.humanservices.gov.au/statistics/pbs_item.jsp). Item numbers 1622J, 1623K and 2272N were included, corresponding to the oral methotrexate preparations available in Australia. The price of the 10 mg × 50 tablet pack is above the Pharmaceutical Benefits Scheme (PBS) co-payment threshold, while those of the other pack sizes are below it, and the PBS dataset did not capture items under the co-payment threshold until April 2012. Data on the dispensing of the 10 mg × 15 and 2.5 mg × 30 tablet packs during the study period are therefore available only for concession card holders; that is, using PBS data for the entire population would overestimate the proportion of scripts dispensed for the 10 mg × 50 pack size. We consequently restricted the study population to concessional beneficiaries to better reflect use of the medicine.11

We used medians and interquartile ranges (IQRs) to describe the data, and performed statistical analyses with Excel (Microsoft) and SPSS 22 (IBM).
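The descriptive statistics reported here (medians and IQRs) can be reproduced with standard library functions. A minimal sketch follows; the dose values are hypothetical illustrations, not data from this study.

```python
# Median and interquartile range (IQR), the descriptive statistics used above.
# The dose values below are hypothetical, for illustration only.
import statistics

doses_mg = [2.5, 10, 10, 10, 15, 15, 20, 60]  # hypothetical daily doses

median_dose = statistics.median(doses_mg)

# quantiles(n=4) returns the three quartile cut points; the IQR spans Q1 to Q3.
q1, _, q3 = statistics.quantiles(doses_mg, n=4, method="inclusive")

print(f"median, {median_dose} mg; IQR, {q1}-{q3} mg")
# median, 12.5 mg; IQR, 10.0-16.25 mg
```

Note that the "inclusive" method interpolates between observed values, so the quartiles need not coincide with any single recorded dose.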

Ethics

Ethics approval for the use of NCIS data was granted by the Victorian Justice Human Research Ethics Committee (approval number CF/12/19007); ethics approval for the use of PIC data was granted by the Sydney Children’s Hospitals Network Human Research Ethics Committee (approval number LNR-2011-04-06).

Results

We identified 22 instances in the NCIS dataset where methotrexate was listed as a cause of death, including 12 with documented bone marrow suppression. Dosing errors were recorded in seven cases (five men, two women); methotrexate had been taken for between 3 and 10 consecutive days. One further deceased patient took more than the prescribed dose of methotrexate (based on tablets remaining and dispensing date), but was not included because consecutive daily dosing was not conclusively established. Abnormal blood cell counts were documented for all seven deaths linked with dosing errors (median age, 78 years; range, 66–87 years). Reasons for the errors included dosette packaging errors by pharmacists (three cases), prescribing error (one), mistaking methotrexate for another medication (one), dosing error by carer (one), and prescriber–patient miscommunication (one). Causes of death without a documented dosing error included alveolar damage or pulmonary fibrosis (five cases), pneumonia (three), sepsis (three), pancytopenia (one), chronic liver disease (one) and gastro-intestinal haemorrhage (one).

The TGA DAEN included 16 reports of methotrexate-related adverse events meeting our search criteria, including five deaths. These were reviewed for inclusion, and unintended daily dosing was documented in ten cases (median age, 58 years; IQR, 42–74 years; range, 41–85 years; eight women), including two deaths (two women, aged 71 and 83 years).

The PIC dataset contained 92 cases of methotrexate-related medication error meeting our inclusion criteria. Between 2005 and 2013, the annual number of events was fairly stable (four to nine cases per year; Box 1), but it increased during 2014–2015 (16 and 13 cases respectively). We compared PIC exposures with prescribing and dispensing patterns, as increased supply might explain an increase in medication errors. The number of methotrexate concessional scripts dispensed during 2005–2015 is shown in Box 1. Most methotrexate was dispensed in 10 mg × 50 tablet packs; dispensing of this pack size increased steadily during the study period, while that of 2.5 mg tablets declined. Dispensing of the smaller pack size (10 mg × 15), introduced in 2008, has grown, but has not reached that of the larger pack, which still accounted for 47% of scripts (and 79% of 10 mg doses) in 2014.

In the PIC dataset, exact ages were recorded in 51% of cases (median age, 65 years; IQR, 52–77 years; range, 28–91 years); at least 18 patients were over 75 years of age. Fifty-five of the 92 patients were women; sex was not recorded in seven cases. Call records documented a range of symptoms, including stomatitis, vomiting, reduced blood cell count and fever. The median number of consecutive days for which methotrexate was taken was 5 (IQR, 4–9 days; range, 3–180 days); the distribution was skewed (Box 2), with 20 cases involving methotrexate taken for 3 consecutive days. The median daily dose taken was 10 mg (IQR, 10–15 mg; range, 2.5–60 mg). Box 3 summarises data on the doses and durations of administration in medication errors reported to PICs.

Where documented, reasons for errors in the PIC dataset included mistaking methotrexate for another medication (11 cases), often folic acid (six cases) or prednisone (four); carer or nursing home error (five); methotrexate being newly prescribed for the patient (five); dosette packing errors by the pharmacy (four); misunderstanding instructions given by the doctor or pharmacist (two); the patient believing it would improve efficacy (two); prescribing error (one); and dispensing or labelling error (one).

There was little overlap in the cases recorded in the three datasets. One DAEN case was a match for one in the NCIS dataset, and there was one possible match of a DAEN case with PIC data. Two of the NCIS deaths were also recorded in PIC databases.

Discussion

This study examined methotrexate dosing errors captured by a range of reporting systems: seven deaths in the NCIS dataset (2000–2014), ten cases in the TGA DAEN (including two deaths; 2004–2014), and 92 PIC cases (2004–2015). These datasets had little overlap, with 91 unique reports of methotrexate medication error identified across the three datasets for 2004–2014. Although these events are relatively rare, they can have serious consequences, and all are preventable. Serious toxicity (including death) was noted after as few as 3 consecutive days of methotrexate administration.

This study highlights the benefits of searching both the coronial and TGA datasets, as there was only one case common to these two datasets (with the NCIS capturing more cases than the TGA). Deaths are reported to the coroner according to legislation that defines a “reportable death”, and there is no requirement to report deaths caused by adverse drug events. Indeed, a recent study by our group found that there is little cross-reporting of drug-related deaths by the TGA and the coroner (unpublished data). This raises the question of whether there might be more deaths related to methotrexate dosing errors that are not reported to either body. Similarly, although our PIC dataset includes all Australian poisons calls, not all methotrexate medication errors result in a call to a PIC. Methotrexate medication errors causing toxicity and death may thus be more common than our study suggests.

Further limitations of our study include the delayed release of findings by the coroner. At the time of data extraction, case closure rates for 2013 and 2014 averaged about 75% and 50% respectively. Our study may therefore underestimate the numbers of deaths, especially those occurring during 2013–2014. The TGA dataset documents occasions of methotrexate dosing errors, but the DAEN does not establish causality, so that the deaths recorded in the TGA DAEN represent associations with the medication (rather than causal links). The PIC dataset has limitations, including non-standardised methods for coding calls and the fact that it lacks outcome information (Australian PICs do not routinely conduct follow-up calls). As the PBS dataset did not capture items under the co-payment threshold prior to 2012, we analysed only dispensing data for the concessional population. Although this provides an indication of prescribing trends, it may not be generalisable to the entire population.

The variability in the dose and duration of methotrexate administration prior to a toxic reaction is notable. The NCIS database showed that taking methotrexate for 3 consecutive days can be fatal, yet a small proportion of PIC patients took the drug daily for weeks before presenting to hospital. Such diversity of response could be caused by the marked variability in genes involved in methotrexate absorption, transport, metabolism and excretion.2,12 Variability in renal function and hydration could also affect methotrexate clearance.2 The median age of patients in the NCIS dataset was more than 10 years higher than that in the PIC dataset, suggesting that older age may be a risk factor for death related to methotrexate dosing errors.

These data revealed a worrying increase in methotrexate medication errors in the PIC dataset for 2014–2015, despite the mentioned efforts to reduce the incidence of these events. It is difficult to explain this increase, but the risk of methotrexate medication error may be increasing as the population ages. Older people may be at increased risk because of a range of problems that includes confusion, memory difficulties, and age-related decline in visual acuity.

This study indicates that ongoing harm is occurring as the result of methotrexate errors. More needs to be done by the manufacturers, the TGA and health professionals to reduce these risks and to improve the harm–benefit balance of weekly methotrexate. One possibility would be to adjust the packaging. A further reduction in pack size may be warranted; for example, each box of the Rheumatrex Dose Pack8 in the United States contains doses for 4 weeks only (similar to the manner in which the weekly dosed bisphosphonates are packaged). Current methotrexate pack sizes in Australia can exceed a year’s supply, depending on the prescribed dose. Although supply of the largest methotrexate pack was changed to a restricted benefit in 2008, this did not result in a reduction in the number of scripts dispensed (Box 1). In addition, uptake of the new, smaller pack has been slow (Box 1). More must therefore be done to discourage prescribing of unnecessarily large quantities of methotrexate.

Further, because it is recommended that folate be co-prescribed with methotrexate, folate and methotrexate could be packaged together in a manner similar to that for oral contraceptives/sugar pills or combination calcium/vitamin D with bisphosphonates. This would be particularly useful given that one of the reasons for methotrexate medication errors we identified was confusion of the medication with folate. The limitations of this approach include the lack of national consensus on the ideal regimen for folate supplementation, with insufficient evidence to justify strongly recommending a specific dose.13 This approach would also require an industry partner to develop such a product,8 and with current prices there may be a limited return on this investment.

Formulating methotrexate as a distinctively coloured tablet could reduce the risks of medication errors by pharmacists, pharmacy technicians and patients; the documented confusion of methotrexate with folic acid tablets is probably related to both being small yellow tablets. Further packaging changes could be made, including clear labelling of the box with a statement such as “Warning: this medicine is usually taken weekly. It can be harmful if taken daily.” Similar labelling changes have been recommended in Australia14 and elsewhere,15 and the results of this study suggest that more needs to be done to mandate sponsors (the companies supplying methotrexate in Australia) to enact these changes.

As some of the dosing error events can be attributed to prescribing or dispensing errors, warnings in prescribing and dispensing software could be improved. Prescribing software could include a pop-up alert when methotrexate is prescribed daily, with the manual entry of an oncological indication needed to override the warning.16 Dispensing software could include alerts if methotrexate is being dispensed too frequently. We identified at least three fatalities caused by daily methotrexate included in pharmacy-filled dosette boxes. Education of pharmacists and their assistants could be improved to increase vigilance and checking of dosette packs containing methotrexate. Further, patients who are prescribed methotrexate for the first time are at particular risk, and extra care in counselling these patients is needed. This includes providing clear verbal and written instructions about dosage.

In conclusion, our study found that methotrexate medication errors, some resulting in death, are still occurring despite a number of safety initiatives. The increase in the number of events during 2014–2015 is particularly concerning. Methotrexate use is likely to continue increasing as Australia’s population ages, so that additional measures are needed to prevent these errors. We have outlined some potential strategies, including altering the packaging, improving education, and including alerts in prescribing and dispensing software.

Box 1 –
Methotrexate medication error events reported to Australian Poisons Information Centres, and the quantity of each methotrexate pack size dispensed to concessional patients through the Pharmaceutical Benefits Scheme, 2005–2015

Box 2 –
Number of consecutive days for which methotrexate was administered in events reported to Australian Poisons Information Centres, 2004–2015*


* Data from the Victorian Poisons Information Centre were available from May 2005, and from the Queensland Poisons Information Centre from January 2005.

Box 3 –
Dose and duration of 84 methotrexate medication errors reported to Australian Poisons Information Centres, 2004–2015*


Each point represents a unique event; eight of the 92 reported events are not included because the amount of methotrexate taken each day was ambiguous or not included in the call record. * Data from the Victorian Poisons Information Centre were available from May 2005, and from the Queensland Poisons Information Centre from January 2005.

The unfulfilled promise of the antidepressant medications

We need more effective treatments for depression, because current treatments avert less than half of the considerable burden caused by the illness.1 Antidepressants are the most commonly used treatment, taken each day by 10% of adult Australians, a rate that has more than doubled since 2000 and is among the highest in the world.2 Two broad forces have been argued to have driven this trend in Australia, as in other economically developed countries. The first was the broadening of the diagnostic concept of depression with publication of the Diagnostic and statistical manual of mental disorders, third edition (DSM-III) in 1980. Previously, depressive illness was considered to have two subtypes: a “neurotic” illness that responded to psychological therapies, and a rarer melancholic depression that had a biological cause and responded to medications. With the DSM-III, the distinction was dropped and the categories were collapsed into the broader “major depressive disorder”. The second was the release shortly afterwards of the first selective serotonin reuptake inhibitors (SSRIs) — the short-lived zimelidine in 1982, and then fluoxetine in 1986 — and the ensuing cultural phenomenon that encouraged us to think of depression as resulting from a chemical imbalance that could be corrected with medication. The evidence that depression is caused by a serotonin deficiency is, however, inconclusive and contested.3

The use of antidepressants has continued to rise despite accumulating evidence that they are not as effective as was previously thought. Recent meta-analyses show a modest overall effect size of about 0.3 (although it is larger in severe than in mild depression).4 This overall effect size, while modest, is similar to those of other treatments in medicine: of similar magnitude, for example, to that of corticosteroids for chronic obstructive pulmonary disease.5 Earlier studies had reported much larger effect sizes for the medications, driven in part by the influence of the pharmaceutical industry on the selective publishing of positive results and the substitution of outcome measures to report ambiguous findings as positive.6 Revelations of these publication strategies have done significant damage to the reputation of the medications and of the pharmaceutical companies that make and market them.7
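The "effect size" quoted here is a standardised mean difference (Cohen's d): the difference between the group means divided by their pooled standard deviation. A minimal sketch, using hypothetical trial numbers rather than figures from the meta-analyses cited:

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardised mean difference between two groups (Cohen's d)."""
    # Pooled standard deviation, weighting each group's variance by its
    # degrees of freedom.
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Hypothetical end-of-trial depression scores (lower = better):
# placebo arm mean 14 (SD 6), drug arm mean 12 (SD 6), 100 patients per arm.
d = cohens_d(14, 6, 100, 12, 6, 100)
print(round(d, 2))  # 0.33 -- a "modest" effect of the order discussed above
```

On this scale, an effect size of 0.3 means the average treated patient improves by less than a third of a standard deviation more than the average placebo patient.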

Antidepressant use in children and adolescents has increased over the past two decades in the same way as it has for the population in general,8 and, similarly, meta-analyses of their use in this group have shown smaller effect sizes than had previously been reported.9 Reanalysis of previously published trial data has shown how they were manipulated to inflate the effectiveness of the medications — by substituting pre-specified outcome measures with those more favourable to the trial medications.10 However, just as significantly, meta-analytic approaches have confirmed what had previously been suspected clinically: that antidepressants can induce an increase in suicidal thoughts and behaviours (although not completed suicides) in some young patients.9 A recent study reported that antidepressants can also cause an increase in aggressive behaviour in children and adolescents.11 Both problems are likely to be mediated by the capacity of antidepressants to cause increased agitation in some young people. Although these problems are not common (the number needed to treat for antidepressants in youth depression is 10, while the number needed to harm, in terms of increased suicidal thoughts and behaviours, is 112),9 they should be considered when assessing the potential benefits and risks of using these medications in young patients.

The increasing power of placebos

Much effort has gone into delineating the reasons for the apparent decline in the effectiveness of antidepressants. There are likely to be multiple reasons, including the unearthing of unpublished negative trial results for inclusion in meta-analyses, diluting the positive outcomes of published studies,6 and the inclusion of more “real world” patients (those with comorbid conditions and clinical complexity) in effectiveness trials. Perhaps the culprit given most attention, however, is the increasing rate of response to placebo, which is particularly high in young people. The proportion of patients responding to placebo has increased steadily over the past two decades, narrowing the gap between the responses to medication and placebo.12 The placebo response is a complicated phenomenon. In part, it is driven by a positive expectation bias, but it also illustrates the statistical concept of regression to the mean, whereby patients enrolled when their symptoms are at their most severe tend to improve over time irrespective of treatment.
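Regression to the mean can be illustrated with a small simulation (all parameters hypothetical): when patients enter a trial only if a noisy symptom score exceeds a cut-off, their follow-up scores fall on average even when no treatment is given.

```python
import random

random.seed(0)
TRUE_MEAN, BETWEEN_SD, NOISE_SD, CUTOFF = 20, 4, 4, 24  # hypothetical scale

baseline_scores, followup_scores = [], []
for _ in range(100_000):
    severity = random.gauss(TRUE_MEAN, BETWEEN_SD)    # stable underlying severity
    baseline = severity + random.gauss(0, NOISE_SD)   # noisy baseline measurement
    if baseline >= CUTOFF:                            # trial entry criterion
        baseline_scores.append(baseline)
        # Untreated follow-up: same severity, fresh measurement noise.
        followup_scores.append(severity + random.gauss(0, NOISE_SD))

mean = lambda xs: sum(xs) / len(xs)
print(f"baseline {mean(baseline_scores):.1f} -> follow-up {mean(followup_scores):.1f}")
```

Because enrolment selects, in part, for unusually high measurement noise at baseline, the mean follow-up score is reliably lower than the mean baseline score, mimicking a "response" in both arms of a trial.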

Why should these properties of the placebo be becoming more powerful? It is not clear. One hypothesis is that since the broadening of the diagnostic criteria for depression in the DSM-III, patients with less severe symptoms have been enrolled in treatment trials, and such patients are more susceptible to the placebo response. While there is some evidence for this,4 an analysis of severity cut-offs for study entry showed that where these were higher (ie, when depression had to be more severe for patients to be included), the placebo response rate was even greater.13 An alternative explanation is that patients in more recent trials have had a greater expectation that they will get better with medication: the placebo response rate is greatest in trials when the chance of receiving placebo is low (ie, in multi-arm trials), and lowest in two-arm trials when the chance is high, lending weight to this theory.14 The factors behind the increasing rate of response to placebo, and consequent decreasing effectiveness of medications, are evidently complicated.

Modest effect sizes are not confined to antidepressants

Despite these concerns, antidepressant medications are effective, even if only modestly so. Other treatments for depression are also effective, although the most studied of these — the psychotherapies — also have evidence of declining effectiveness in more recently published trials.15 Two particular psychotherapies have the most favourable evidence: cognitive behavioural therapy (CBT) and interpersonal psychotherapy (IPT). Both are structured, time-limited therapies that directly address the core features of depression. While both psychotherapies are effective, meta-analyses have shown that early studies reported inflated effect sizes.16,17 The reasons for this are clearer for psychotherapies than for medications. Many psychotherapy trials, especially those conducted earlier, adopted low-quality methods that were biased towards overestimating the interventions’ effects. Many therapy trials enrolled non-clinical participants (previously undiagnosed patients who scored above a threshold on a rating instrument), used non-active control conditions (eg, patients on a waiting list), analysed only participants who had completed treatment (rather than using the more rigorous intention-to-treat principle), or did not use blinded assessors.16,17 The effect size of high-quality psychotherapy trials (d = 0.2) is, consequently, less than a third of the effect size for low-quality trials (d = 0.7), and similar in magnitude to the effect size for antidepressants.17

There has been a recent focus on exercise and diet as potential interventions. While it is clear that exercise and healthy eating are associated with good mental health, it is less clear that they are effective interventions for depression.18,19 One reason for this is that adherence to exercise and diet plans is often insufficient to produce improvement,19 and, even when they are adhered to, the effect sizes of such non-specific interventions are unlikely to be large. While there is as yet insufficient evidence that exercise and dietary interventions are effective as stand-alone treatments, they are still worth pursuing as adjunctive treatments — and the evidence suggests that clinicians do not recommend them often enough.20

Combined treatments

The modest effect sizes for depression treatments — and there are no well-studied treatments for depression that have large effect sizes — suggest that combining treatments might provide the best outcomes for patients. The combination of psychotherapy and medication is more effective than either alone. In adults, the effect of combined treatment compared with placebo is about twice that of medication only compared with placebo.21 Combined treatment also seems to be more effective in children and adolescents,22 although there have been fewer studies in these groups. The effects of psychotherapy and medication appear to operate independently of each other,21 providing a good rationale for their combination.

Despite the evidence of superior effectiveness for combined treatments, recent reports suggest that psychotherapy is being offered less rather than more often, at least in the United States. In the decade from 1998 to 2007, the percentage of adult patients with depression who were treated with psychotherapy declined from 54% to 43%.23 A similar decline was noted in children and adolescents, although more recent evidence suggests that this has been reversed with the increasing concerns about the safety of medications.8 In Australia, while we have clear evidence that the rate of antidepressant use is increasing, we lack comparable data for the use of psychotherapy. There are some promising signs that it is becoming easier to access psychotherapy. The federal government’s Better Access to Mental Health Care scheme was introduced in 2006, and allows general practitioners to refer depressed patients to qualified therapists for up to ten sessions of Medicare-funded treatment. It has led to significant uptake and is helping to reverse the trend,24 albeit with demographic distortions in the groups who access the scheme. The uptake of psychotherapy is disproportionately higher in wealthier suburbs, and lowest in outer suburban and regional communities, where rates of depression are highest.24 And although the gender gap is narrowing, use of psychotherapy is still disproportionately higher among women.25 While there is evidence that access to therapy is improving, we are yet to see whether this translates into a reduction in the prevalence of depression.

An unfortunate nexus has developed between the diagnosis of depression of any severity and the reflexive prescription of medications as monotherapy, for which the medical profession must accept some responsibility. There is a long tradition of medical psychotherapy — important psychotherapies were developed by medical practitioners such as Sigmund Freud, Aaron Beck (CBT), and Gerald Klerman (IPT) — that seems to be in decline. Fewer doctors now have the expertise to deliver psychotherapy, the teaching of which has been de-emphasised in psychiatry training,26 and psychotherapy is now largely the domain of psychologists, social workers, and other health professionals. This appears to have had the effect of encouraging psychiatrists and other doctors to consider medication, which is their area of expertise, rather than psychotherapy as the first-line treatment for depression.

Future directions

The pharmaceutical industry has scaled back investment in developing new drugs for mental illnesses, mainly because of so many development failures,27 and it is unlikely that we will see new medications with substantially greater effectiveness in the coming years. The psychotherapies too have their limitations, and while they can be made more available, it is unlikely that new forms of psychotherapy will be developed that will have substantially greater effectiveness than existing therapies.

Some psychiatrists and researchers argue that reinstating melancholia as an illness, separate from neurotic depression, provides a solution to refining treatments.28 They argue that melancholic depression shows a distinct and selective response to antidepressant medications. Differentiating the illness subtypes, however, was never as clear in practice as some now argue. The Australian psychiatrist Sir Aubrey Lewis pointed out in 1934 that the separation between the two was arbitrary — “a setting up of types or ideal forms, a concession to the requirements of convenient thinking in categories”29 — with most patients showing aspects of both. The belief that melancholia responds much better to medications has also not been reliably confirmed. A recent large study could find no difference in medication response between those with and without melancholic symptoms.30

Major depressive disorder is undoubtedly a heterogeneous disorder, and clearer distinctions between subtypes would make it easier to target treatments, but there is little at present to guide us as to how best to make such divisions. There is, however, a significant research effort aimed at characterising treatment biomarkers — genetic, brain imaging and neuropsychological parameters that might predict a patient’s response to particular treatments. With no likelihood that significantly better treatments for depression will emerge in the near future, better targeting of existing treatments towards patients who are most likely to respond to them is probably our best hope for improving treatment outcomes.

Treatment recommendations

While recent evidence might have tempered the initial enthusiasm for antidepressants, these agents still have a role in treating depression. Some patients show particularly strong responses to the medications31 (although we are not reliably able to predict who they will be), and there is good evidence that antidepressants are effective in preventing relapse of depression.32 The task for medical practitioners now is to place antidepressant medications in an overall treatment framework.

All patients should be offered psychotherapy where it is available, and medication should be considered if

  • the depression is of at least moderate severity;

  • psychotherapy is refused; or

  • psychotherapy has not been effective.

When medications are prescribed, they should be used in a way that maximises their chance of effectiveness. The dose should be increased if there has been no improvement after 4 to 6 weeks. The medication should be changed if there is still no improvement after a further 6 weeks. Usually, the medication should be changed to another within the same class in the first instance (eg, from an SSRI to an alternative SSRI), and to an antidepressant of an alternative class (eg, from an SSRI to a serotonin-noradrenaline reuptake inhibitor, such as venlafaxine) if a second change is required. If this strategy is ineffective, more expert guidance is indicated: this might include considering augmentation strategies, such as lithium, or the use of neurostimulation (electroconvulsive therapy and transcranial magnetic stimulation).33 At all stages, therapy with ineffective medications should be ceased and unnecessary polypharmacy avoided. Alongside these treatment strategies, we should continue to recommend and encourage good eating and exercise, both of which are likely to help engender a healthy mind and a healthy body.

Biological models of mental illness: implications for therapy development

Systems approaches are needed to recognise the complexity of the biological bases of psychiatric disease

The bases of mental disorders can best be understood as a complex interplay between biological, psychological, social and lifestyle factors: a classic bio-psycho-social-lifestyle model. There are undoubtedly some disorders where a biological model alone is more appropriate — this applies particularly to the psychotic disorders — but even in such cases it must be acknowledged that these illnesses are strongly influenced by psychosocial and lifestyle factors. What makes a biological understanding of mental illnesses necessary, however, is that it opens the way for the development of rational treatments. This has been the quest since antiquity, with treatments predicated on the putative underlying biological causes: purging and bleeding patients to correct imbalances in the humours to treat melancholia, which was attributed to an excess of black bile, or removing sources of focal infection, such as the teeth, tonsils and even the colon, that were once regarded as causing mental disorders. While these models perhaps now seem far-fetched, they were not entirely implausible when one considers contemporary neuro-endocrine and neuro-inflammatory models of mental illness.

Biological explanations of mental disorders gained momentum in the early 1950s through a series of fortuitous discoveries in psychopharmacology coupled with “reverse engineering”. Firstly, chlorpromazine, initially synthesised as an antihistamine, unexpectedly alleviated hallucinations and other symptoms of schizophrenia. Similarly, it was noticed that iproniazid, originally used to treat tuberculosis, made some lucky patients inappropriately happy; imipramine, considered to be another antihistamine and structurally similar to chlorpromazine, was also found to have antidepressant activity.

As chemical neuroanatomy advanced from the late 1950s, models of mental illness were developed. For example, the dopamine hypothesis of schizophrenia was based, in part, on the discovery that chlorpromazine inhibited dopaminergic transmission, leading to further drugs being developed that blocked dopamine D2 receptors. It was subsequently found that imipramine and monoamine oxidase inhibitors with antidepressant properties also modified catecholaminergic transmission, and depression was consequently seen as the result of reduced catecholamine levels in the brain. This concept was refined by incorporating serotonin (5-HT) and the complex regulation of multiple monoamine receptor types into the model, and the development of agents that specifically targeted serotonin transmission, the selective serotonin re-uptake inhibitors (SSRIs). This focus on the monoamines and other neurotransmitters led to new me-too medications, the key elements of which remained blocking D2 receptors in schizophrenia and stimulating serotonin and noradrenaline receptors in depressive and anxiety disorders. The robust antipsychotic properties of clozapine, despite its being only a weak D2 receptor antagonist, spurred exploration of other neurotransmitters implicated in schizophrenia, including roles for 5-HT2A and 5-HT2C receptors, leading to the development of further second generation antipsychotics.

The winding road to new therapies

While these neurotransmitter hypotheses are of great heuristic value, they do not sufficiently explain mental disorders, nor does targeting these transmitter systems completely ameliorate their symptoms. Further, more recent findings have implicated several other pathways.

So how do we develop the next generation of therapies for people with psychiatric disorders? The traditional route was to identify a singular molecular focus, which could then be targeted. While this approach has intrinsic scientific rigour, is tightly hypothesis-driven, and has mechanistic appeal, it has not been a fruitful approach for developing truly novel therapies. It is also problematic for mental disorders, for which there is no clearly identified final common functional pathway, and where the patterns of biomarker abnormalities in seemingly different conditions overlap to a significant degree. This problem is partially the product of existing diagnostic systems, such as the Diagnostic and statistical manual of mental disorders (DSM), which define phenotypes on the basis of phenomenology rather than biological sub-categories and therefore cannot accurately delineate biologically distinct conditions. To overcome the problem, the National Institute of Mental Health (United States) has adopted a shift in emphasis in its Research Domain Criteria (RDoC) from diagnosis to symptom domains.1 These symptom domains — negative valence, positive valence, cognitive function, arousal, and social process systems — are mapped against the underlying genes, molecules, neural circuits and neurophysiology that putatively underpin them. While this new approach is a welcome injection of novel thinking, it matches clinical needs poorly, and we are yet to see positive outcomes from its application. There are nevertheless hopes that it will help elucidate brain processes and models of mental illness that can be used to identify therapeutic targets.

One can also adopt systems-based philosophies that are more broadly directed at networks involved in the pathophysiology of mental illness.2 This approach is gaining traction with the identification of a number of non-monoaminergic systems and processes that have been implicated in the pathogenesis of several psychiatric disorders, including inflammation, dysregulated oxidative signalling, neurogenesis, apoptosis and mitochondrial dysfunction. These functionally interacting cellular pathways combine in contributing to the dysregulation of systems and networks in the genesis of many non-communicable disorders, which are rarely the outcome of an abnormality in a single element. The gut microbiome has recently been identified as a critical system in its own right, with profound impacts on immune regulation and other systems. As it can be influenced by diet, it is correspondingly being seen as a plastic target in the development of novel therapies.3

A number of studies have examined such systems, and this approach remains one of the more promising avenues for therapeutic development. It must be noted that many of the known drivers of psychopathology, as varied as stress, poor diet, smoking, physical inactivity, sleep disturbance and vitamin D insufficiency, have common effects upon inflammation and oxidative stress, for example. While such drivers of psychopathology are deemed to be lifestyle factors, they exert their effects through their impact on recognised biological systems. By the same token, seemingly psychological risk factors, such as early trauma, can lead to biological changes through epigenetic effects on the reactivity of the hypothalamic–pituitary–adrenal axis and the activity of the immune system.

Putting it all together

An integrative model incorporating lifestyle and risk variants, operative neurobiological pathways, and the impact of those pathways on brain structure and function is thus beginning to emerge. The systems biology approach emphasises the fact that these are not siloed systems and that they interact intimately in complex and sometimes unpredictable ways. To capture the underlying pathophysiology of such multisystem dysfunction, novel molecular techniques, including the “omics” platforms, are needed, buttressed by big data analytic techniques. Researchers have leveraged these systems approaches to productively target inflammation, and a number of leads are being followed up, including the therapeutic benefits of agents such as celecoxib, aspirin, statins, minocycline, and antibodies to immune factors (such as tumour necrosis factor [TNF-α]).4 Given the ubiquity of oxidative stress in neuropsychiatric disorders, the first generation of investigations of therapies that modulate redox biology have been promising.5 In addition, the first studies targeting mitochondrial dysfunction are underway. Disruptions of the circadian system are also found in mental disorders, and novel treatments for re-synchronising these systems (bright light, melatonin receptor agonists) offer new approaches to therapy.6

A key element underpinning biological models of mental illness is the genetic component. High hopes of identifying a single gene for specific psychiatric disorders have evaporated with the advent of molecular genetics. In the disorders with the greatest heritability, numerous single nucleotide polymorphisms (SNPs) have been identified, each of which alone has only a very weak effect. More than 100 SNPs associated with schizophrenia have been identified by genome-wide association studies. A genetic relationship with the major histocompatibility complex locus has been described, and complement component 4 (C4) alleles that affect the expression of C4A and C4B proteins have recently been associated with schizophrenia;7 functionally, these findings may offer an explanation for the loss of cortical grey matter in people with schizophrenia. They also concur with much earlier biomarker findings that implicated abnormal C4 expression in the pathophysiology of depression.8 This illustrates that bottom-up biomarker and top-down genetic approaches can complement each other in clarifying the pathogenesis of psychiatric disorders. Additionally, in silico approaches have been used in hypothesis generation for detecting potential therapeutic agents.9

However, it needs to be stressed that almost all current therapies arose by exploiting serendipitous clinical findings, and it makes sense not to abandon this avenue of drug discovery.10 It remains crucial that clinical acuity and research platforms such as epidemiology continue to be utilised, to facilitate the detection of unexpected associations between treatment and clinical disease burden. A clear understanding of biological models of psychiatric dysfunction and how they interact and complement each other, using the full array of neuroscientific approaches available, remains essential for developing more effective therapies for disabling mental disorders. But this needs to be an iterative and bi-directional process, with back translation of clinical findings to reverse engineer neurobiology, historically a fruitful avenue for uncovering the underlying pathophysiology of these disorders.